Is the stabilizer of an irreducible subvariety of an abelian variety irreducible?

Let $A$ be a (semi-)abelian variety over an algebraically closed field $K$, and $X$ be a closed irreducible subvariety. Can $X$ have a non-trivial finite stabilizer? By stabilizer, I mean the closed subgroup $S \subset A$ such that $X$ is stable under translation by any point of $S$. This question is equivalent to the one in the title, since one can always take the quotient by the neutral component of $S$ to reduce to a finite group.

Tags: ag.algebraic-geometry, abelian-varieties

Comment (ulrich, May 15 '13): No. Let $f: A \to B$ be a separable isogeny and $D$ a smooth ample divisor in $B$. Then $f^{-1}(D)$ is irreducible if $\dim(X) > 1$ (since it is smooth and connected) and is stabilized by the kernel of $f$.

Accepted answer: No. Take a curve in its Jacobian and pull it back by multiplication by some $n$. The resulting pullback is a curve invariant under the $n$-torsion.
The Voltage Waveform Vin(t) of Figure P8.34a Drives the Circuit of Figure P8.34b

Please solve question 34 step by step. Thank you.

Transcribed problem statement: The voltage waveform vin(t) of Figure P8.34a drives the circuit of Figure P8.34b. The voltage-controlled switch S1 closes when the capacitor voltage vC(t) goes positive and opens when it goes negative. Compute the voltage vC(t) across the capacitor. Assume that vin(t) has been at -10 V for t < 0 for a very long time. Hint: use the elapsed-time formula as needed.

Subject: Electrical Engineering
Friendswood Geometry Tutor

Find a Friendswood Geometry Tutor

...I have experience tutoring in phonics, reading, reading comprehension and math, including elementary, pre-algebra, algebra & geometry. I possess a special talent for making learning fun by utilizing creative ways to which each student can relate. I also have experience tutoring for the Texas St...
12 subjects: including geometry, reading, English, biology

...I got a 31 on my own ACT math test. To be a teacher, I had to take the Praxis II exam, on which I scored in the top 15%. On the GRE, the test I needed to take for my masters, I scored nearly perfect. If you give me a chance, I know that I can help your child find ways to get a better score on their tests.
20 subjects: including geometry, Spanish, algebra 1, algebra 2

...I have 3 to 4 years of experience in mathematics. I have many interests in mathematics, ranging from differential geometry to ordinary differential equations.
16 subjects: including geometry, calculus, algebra 1, algebra 2

...I've tutored students in English, reading, and writing. I enjoy teaching a great deal and am well versed in teaching in different styles to fit the student. I like to focus on building a student's confidence and understanding over memorization.
34 subjects: including geometry, chemistry, reading, English

...I am a loving and patient Christian mom of three children. I have 20 years of experience teaching algebra and Chinese in elementary and middle school in Taiwan and the USA. I also have two years of experience teaching Chinese phonics at Evergreen Chinese school.
12 subjects: including geometry, reading, Chinese, algebra 1
A New Approach to Power-Model Regression of Corrosion Penetration Data (STP1137)

Authors: R. H. McCuen, Professor, University of Maryland, College Park, MD; P. Albrecht, Professor, University of Maryland, College Park, MD; J. Cheng, Professor, Nanjing Institute of Chemical Technology, Nanjing

Pages: 31. Published: Jan 1992

Corrosion penetration data have traditionally been fit with the power model using a log transformation of both the exposure time and the penetration. The transformation has several disadvantages: (1) the resulting model gives biased estimates of the penetration, (2) the sum of the squares of the errors in penetration is not minimized even though the sum of the squares of the logarithmic errors is minimized, and (3) the logarithmic transformation results in greater emphasis being placed on the penetration at the shorter exposure times. An alternative method of fitting power models is discussed. The numerical calibration method is shown to produce better-fitting and more rational power models than is provided by a logarithmic transformation. Unbiased power models can also be fitted. The fitting procedure is more complex than that of the log-transform method. The two fitting methods were compared using 32 sets of corrosion penetration data. The average standard error ratio for the numerical procedure was 58 percent of that for the logarithmic analyses, which suggests that the numerical procedure consistently produced better penetration estimates at the largest measured exposure time. Thus, more accurate 75-year projections of penetration should be expected with the numerical model. Analyses of the records suggest that record lengths of at least 10 years are necessary to produce accurate model coefficients.

Keywords: Atmospheric corrosion, penetration, structures, weathering steel, power model, numerical optimization

Paper ID: STP19754S. Committee/Subcommittee: G01.04. DOI: 10.1520/STP19754S
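The contrast between the log-transform fit and a direct numerical fit of the power model can be illustrated with a short sketch. The synthetic data, constants, and the damped Gauss-Newton refinement below are illustrative assumptions, not the paper's actual data or calibration procedure.

```python
import numpy as np

# Illustrative sketch: fit the power model y = a * t^b to synthetic
# corrosion-penetration data, first by the classical log-transform linear
# fit, then by a damped Gauss-Newton refinement that minimizes the
# untransformed sum of squared errors (SSE).
rng = np.random.default_rng(0)
t = np.array([1.0, 2.0, 4.0, 8.0, 16.0, 32.0])
y = 30.0 * t**0.55 * (1.0 + 0.05 * rng.standard_normal(t.size))

def sse(a, b):
    return float(np.sum((y - a * t**b) ** 2))

# Method 1: log-transform fit (minimizes errors of log(y), not of y)
b_log, ln_a = np.polyfit(np.log(t), np.log(y), 1)
a_log = np.exp(ln_a)

# Method 2: numerical fit of the untransformed model, started from Method 1
a_n, b_n = a_log, b_log
for _ in range(100):
    f = a_n * t**b_n
    J = np.column_stack([t**b_n, f * np.log(t)])   # Jacobian d f / d(a, b)
    da, db = np.linalg.lstsq(J, y - f, rcond=None)[0]
    step = 1.0
    while step > 1e-8 and sse(a_n + step * da, b_n + step * db) >= sse(a_n, b_n):
        step *= 0.5                                # accept only SSE decreases
    if step <= 1e-8:
        break
    a_n, b_n = a_n + step * da, b_n + step * db

print(sse(a_n, b_n) <= sse(a_log, b_log))          # -> True
```

By construction the damped iteration never increases the SSE, so the numerical fit is at least as good as the log-transform fit in the untransformed error measure, which is the abstract's central point.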
general question

October 20th 2010, 03:07 AM

Q. For any acute triangle, all its angles have measures in integral degrees. The smallest angle has measure 1/5 of the measure of the greatest angle. Find the measures of the angles of the triangle.

October 20th 2010, 05:05 PM

A nice problem that combines geometry and algebra. Let x be the smallest angle; then the angles are x, 5x, and 180 - 6x. Saying that each of these angles is acute gives you two inequalities (well, three, but the third one trivially follows from one of the others). In my calculations, these inequalities have two integer solutions. However, in one of them, 5x is not the greatest angle. This leaves one solution.

October 20th 2010, 07:10 PM

You can simply "think out loud" to solve this one: since the triangle is acute, all angles < 90. Since the angles are integers, the highest is at most 89. But since the smallest is 1/5 of the highest, the highest must be divisible by 5, so highest = 85, and the lowest is 1/5(85) = 17. Let's see: 180 - 85 - 17 = 78. Yes!
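The "two integer solutions, one rejected" reasoning above can be checked by brute force; this small script is an illustration, not part of the original thread.

```python
# Brute-force check of the argument above: with x the smallest angle, the
# angles are x, 5x and 180 - 6x; keep only integer acute triangles whose
# smallest angle really is one fifth of the greatest.
solutions = []
for x in range(1, 30):                 # 180 - 6x > 0 forces x < 30
    angles = sorted([x, 5 * x, 180 - 6 * x])
    acute = 0 < angles[0] and angles[2] < 90
    if acute and 5 * angles[0] == angles[2]:
        solutions.append(angles)

print(solutions)   # -> [[17, 78, 85]]
```

The candidate x = 16 gives the acute triangle (16, 80, 84), but there the greatest angle is 84, not 5x, which is exactly the rejected solution mentioned in the first reply.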
a little linear algebra with a normal distribution

March 9th 2011, 03:52 PM

Let Y = <Y1, Y2, Y3> and let v = <1, 2, -3>. Suppose Yi ~ Norm(5, 2). What is the distribution of v dot Y? I know that the dot product of the two vectors is Y1 + 2Y2 - 3Y3, but I don't understand how to tie the normal distribution into this result.

March 9th 2011, 04:42 PM, mr fantastic

Get the distribution of the random variable $X = Y_1 + 2Y_2 - 3Y_3$. I assume you know how to do this?

March 9th 2011, 06:44 PM

Umm, no, I'm not completely sure. It's been a while.

March 9th 2011, 07:49 PM, mr fantastic

Here is a link for getting the sum: Sum of normally distributed random variables - Wikipedia, the free encyclopedia. Getting the difference is very similar (the means subtract but the variances still add).

March 9th 2011, 08:28 PM

Oh yeah!!! This is coming back to me. So here's what I got: Norm(0, sqrt(56)). I got the standard deviation by doing sqrt(4 + 4*4 + 9*4). Does that look/sound right?
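Assuming, as the poster does, that Norm(5, 2) means mean 5 and standard deviation 2 and that the Yi are independent, the closed-form answer can be verified with a quick simulation (an illustrative script, not from the thread):

```python
import numpy as np

# v . Y for independent Y_i ~ Normal(mu=5, sd=2) and v = (1, 2, -3):
# a linear combination of independent normals is normal with
#   mean = (sum of v_i) * mu   and   variance = (sum of v_i^2) * sd^2.
v = np.array([1.0, 2.0, -3.0])
mu, sd = 5.0, 2.0
mean = v.sum() * mu          # (1 + 2 - 3) * 5 = 0
var = (v**2).sum() * sd**2   # (1 + 4 + 9) * 4 = 56
print(mean, var)             # -> 0.0 56.0

# Monte Carlo sanity check of Norm(0, sqrt(56))
rng = np.random.default_rng(42)
samples = rng.normal(mu, sd, size=(200_000, 3)) @ v
print(round(samples.mean(), 1), round(samples.var(), 0))
```

This confirms the final post: the distribution is Norm(0, sqrt(56)).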
Pig, mail # user - Compute Product of Values in Bag
Sergey Goder, 2013-05-02, 23:41

I was wondering if there is a way to compute the product of all the values in a bag, much like the built-in function SUM does currently. For reference, I am currently implementing a multinomial naive Bayes classifier and need to compute the product of probabilities. I am trying to avoid using a UDF because the requirements of the project prohibit me from using a secondary language such as Python, JavaScript, etc.
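One UDF-free approach (not from the thread, and assuming strictly positive values such as probabilities) is the identity prod(x) = EXP(SUM(LOG(x))), which maps onto Pig's built-in LOG, SUM and EXP functions. The Python sketch below just demonstrates the identity.

```python
import math

# prod(x) = exp(sum(log(x))) for strictly positive x. In Pig this becomes
# a FOREACH with EXP(SUM(LOG(value))) over the grouped bag, avoiding a
# product UDF; summing logs also avoids underflow for long products of
# small probabilities, which matters for naive Bayes scores.
probs = [0.2, 0.5, 0.1, 0.4]

direct = 1.0
for p in probs:
    direct *= p

via_logs = math.exp(sum(math.log(p) for p in probs))
print(abs(direct - via_logs) < 1e-12)   # -> True
```

For classification it is often better to skip the final EXP and compare the log-scores directly.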
Journal of the Brazilian Society of Mechanical Sciences and Engineering
Print version ISSN 1678-5878. J. Braz. Soc. Mech. Sci. & Eng. vol. 34, no. 3, Rio de Janeiro, July/Sept. 2012

TECHNICAL PAPERS - AEROSPACE ENGINEERING

A study of convective flux schemes for aerospace flows

Enda Dimitri V. Bigarella (Embraer S.A., Av. Brigadeiro Faria Lima, 2170, São José dos Campos, 12227-901 SP, Brazil, enda.bigarella@gmail.com)
João Luiz F. Azevedo (Instituto de Aeronáutica e Espaço, Departamento de Ciência e Tecnologia Aeroespacial, DCTA/IAE/ALA, São José dos Campos, 12228-903 SP, Brazil, joaoluiz.azevedo@gmail.com)

Abstract

This paper presents the effects of some convective flux computation schemes on boundary-layer and shocked flow solutions. Second-order accurate centered and upwind convective flux computation schemes are discussed. The centered Jameson scheme, plus explicitly added artificial dissipation terms, is considered. Three artificial dissipation models, namely a scalar and a matrix version of a switched model, and the CUSP scheme, are available. Some implementation options regarding these methods are proposed and addressed in the paper. For the upwind option, the Roe flux-difference splitting scheme is used. The CUSP and Roe schemes require property reconstructions to achieve second-order accuracy in space. A multidimensional limited MUSCL interpolation method is used to perform property reconstruction. Extended multidimensional limiter formulation and implementation are here proposed and verified. Theoretical flow solutions are used in order to provide a representative testbed for the current study. It is observed that explicitly added artificial dissipation terms of the centered scheme may nonphysically modify the numerical solution, whereas upwind schemes seem to better represent the flow structure.
Keywords: CFD, numerical flux schemes, compressible viscous flows, compressible inviscid flows

The paper reports recent improvements on a finite volume method for 3-D unstructured meshes developed by the CFD group at Instituto de Aeronáutica e Espaço (IAE). Flow phenomena typical of aerospace applications are usually associated with transonic and supersonic shock waves and high-Reynolds-number boundary layers. The correct computation of such flow phenomena is of paramount importance for the representativeness of numerical simulations at high Mach- and Reynolds-number flight conditions, since they are decisive for the final aerodynamic data used for engineering purposes. The numerical modeling of these flow features, through flux computation schemes, must be representative of the physics of these phenomena, as well as numerically adequate in terms of robustness and cost. In light of that, the paper addresses several flux computation schemes suitable for the typical aerospace applications of IAE.

Second-order accurate centered (Jameson, Schmidt and Turkel, 1981) and upwind flux-difference splitting (Roe, 1981) schemes are considered here. In the centered case, explicit addition of artificial dissipation terms is required to control nonlinear instabilities in the numerical solution. For the computation of these terms in the current work, both the scalar and the matrix versions of a switched second- and fourth-difference scheme are considered (Mavriplis, 1990; Turkel and Vatsa, 1994). The Convective-Upwind Split-Pressure (CUSP) artificial dissipation model (Jameson, 1995a; Jameson, 1995b) is also considered in the centered scheme case. Some implementation options are proposed and discussed in the paper, in terms of computational effort and numerical solution quality. The CUSP and the Roe upwind schemes require special treatment of properties at the control-volume faces to achieve 2nd-order accuracy in space.
The multidimensional, limited, MUSCL (van Leer, 1979) reconstruction scheme of Barth and Jespersen (1989) is adopted here. This limiter formulation is addressed, and an extension of this formulation is proposed and assessed in the paper. A computationally cheap and robust integration of the limited MUSCL-reconstructed schemes is also proposed, which allows for large computational resource savings while maintaining the expected level of accuracy.

Inviscid flows for a 1-D shock tube and the Boeing A4 supercritical airfoil (Nishimura, 1992) configurations are considered in order to assess the flux computation schemes for shock-wave capturing. A mesh refinement study is performed for the airfoil case in order to assess the dependency of the numerical schemes on grid density and topology. Subsonic laminar flows over a flat plate address the effects of the numerical flux schemes on boundary-layer flows. It is known that flux schemes may influence such flow solutions, as reported by Swanson, Radespiel and Turkel (1998), Zingg et al. (1999), Allmaras (2002), and Bigarella (2002). The present group attributes such problems to nonphysical behavior of centered flux schemes, more precisely in the explicitly added artificial dissipation model, as reported in Bigarella, Moreira and Azevedo (2004). The present paper shows conclusive results that corroborate this assertion. Mesh density and topology are also addressed for such a test case. Generally, improved accuracy is obtained with the new flux computation schemes.

This section presents the motivation for the current effort. The next section presents a brief discussion of the theoretical and numerical formulations embedded in the current numerical tool. Detailed discussion of the centered schemes here considered is performed in the third section. Similar discussion is performed for the upwind and the reconstruction schemes in the fourth section. The fifth section presents the discussion of the obtained numerical results.
The last section closes the work with concluding remarks from the current effort.

Nomenclature

a = speed of sound
C = convective operator
CFL = Courant-Friedrichs-Lewy number
Cp = pressure coefficient
D = artificial dissipation operator
d = artificial dissipation term
e = total energy per unit volume
e_i = internal energy
nf = number of faces that compose a control volume
p = static pressure
P_e = inviscid flux vector
P_v = viscous flux vector
Pr = Prandtl number
q = heat flux vector
Q = vector of conserved properties
Re = Reynolds number
S = |S| = face area
u, v, w = Cartesian velocity components
v = Cartesian velocity vector
V = viscous operator
x, y, z = Cartesian coordinates

Greek Symbols

α = angle of attack
Δt = time step
Φ = gradient ratio for limiter computation
γ = ratio of specific heats
µ = dynamic viscosity coefficient
ψ = control volume limiter
ρ = density
τ = viscous stress tensor

Subscripts and Superscripts

∞ = freestream property
i, m = grid control volume indices
k = face index
ℓ = laminar property
L, R = interface left and right properties
t = turbulent property
* = dimensional property
n = time instant

Theoretical and Numerical Formulations

The flows of interest in the present context are modeled by the 3-D compressible Reynolds-averaged Navier-Stokes (RANS) equations, written in dimensionless form and assuming a perfect gas, as

$$\frac{\partial Q}{\partial t} + \nabla \cdot \left( \vec{P}_e - \vec{P}_v \right) = 0, \qquad Q = \left\{ \rho, \; \rho u, \; \rho v, \; \rho w, \; e \right\}^T. \qquad (1)$$

The inviscid and viscous flux vectors are given as

$$\vec{P}_e = \left\{ \begin{array}{c} \rho \vec{v} \\ \rho u \vec{v} + p\, \hat{i}_x \\ \rho v \vec{v} + p\, \hat{i}_y \\ \rho w \vec{v} + p\, \hat{i}_z \\ (e + p)\, \vec{v} \end{array} \right\}, \qquad \vec{P}_v = \frac{M_\infty}{Re} \left\{ \begin{array}{c} 0 \\ \tau_{xj}\, \hat{i}_j \\ \tau_{yj}\, \hat{i}_j \\ \tau_{zj}\, \hat{i}_j \\ \beta_j\, \hat{i}_j \end{array} \right\}. \qquad (2)$$

The shear-stress tensor is defined by

$$\tau_{ij} = \mu \left( \frac{\partial u_i}{\partial x_j} + \frac{\partial u_j}{\partial x_i} - \frac{2}{3}\, \frac{\partial u_k}{\partial x_k}\, \delta_{ij} \right), \qquad (3)$$

where u_i represents the Cartesian velocity components, and x_i represents the Cartesian coordinates. The viscous force work and heat transfer term, β_i, is defined as β_i = τ_ij u_j - q_i, where the heat transfer component is defined as

$$q_j = -\frac{\gamma\, \mu}{Pr}\, \frac{\partial e_i}{\partial x_j}, \qquad (4)$$

in which e_i denotes the internal energy. The molecular dynamic viscosity coefficient is computed by the Sutherland law (Anderson, 1991). The dimensionless pressure can be calculated from the perfect gas equation of state, $p = (\gamma - 1)\left[ e - \frac{1}{2}\,\rho\left( u^2 + v^2 + w^2 \right) \right]$. This set of equations is solved according to a finite volume formulation (Scalabrin, 2002).
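The two closure relations just mentioned can be sketched as follows, in dimensional form for clarity; the air constants in the Sutherland law are standard textbook values and are assumptions here, not taken from the paper.

```python
import numpy as np

GAMMA = 1.4  # ratio of specific heats for air

def pressure(q):
    """Perfect-gas pressure from q = (rho, rho*u, rho*v, rho*w, e):
    p = (gamma - 1) * (e - 0.5 * rho * |v|^2)."""
    rho, mx, my, mz, e = q
    return (GAMMA - 1.0) * (e - 0.5 * (mx**2 + my**2 + mz**2) / rho)

def sutherland(T, mu_ref=1.716e-5, T_ref=273.15, S=110.4):
    """Sutherland law for the dynamic viscosity of air, T in kelvin."""
    return mu_ref * (T / T_ref)**1.5 * (T_ref + S) / (T + S)

# Gas at rest: the pressure reduces to (gamma - 1) * e
q_rest = np.array([1.225, 0.0, 0.0, 0.0, 101325.0 / (GAMMA - 1.0)])
print(pressure(q_rest))
print(sutherland(273.15))   # -> 1.716e-05 (recovers the reference viscosity)
```

Note that for a gas at rest the kinetic-energy term vanishes and the routine returns exactly the static pressure used to build the state.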
Flow equations are integrated in time by a fully explicit, 2nd-order accurate, 5-stage, Runge-Kutta time-stepping scheme. An agglomeration full-multigrid scheme (FMG) is included in order to achieve better convergence rates for the simulations. More details on the theoretical and numerical formulations can be found in Bigarella, Basso and Azevedo (2004), and Bigarella and Azevedo (2005).

Centered Spatial Discretization Schemes

Centered schemes require the explicit addition of artificial dissipation terms in order to control nonlinear instabilities that may arise in the flow simulation. Several models to compute the artificial terms are included in the present numerical formulation. A description of the available models is presented in the forthcoming subsections.

Mavriplis scalar switched model (MAVR)

The centered spatial discretization of the convective fluxes, C_i, in this scheme is proposed by Jameson, Schmidt and Turkel (1981). The convective operator is calculated as the sum of the inviscid fluxes on the faces of the i-th volume as

$$C_i = \sum_{k=1}^{nf} \vec{P}_e\!\left( \frac{Q_i + Q_m}{2} \right) \cdot \vec{S}_k, \qquad (5)$$

where Q_i and Q_m are the conserved properties in the i-th and m-th cells, respectively, that share the k-th face. The artificial dissipation operator is built by a blend of undivided Laplacian and bi-harmonic operators. In regions of high property gradients, the bi-harmonic operator is turned off in order to avoid oscillations. In smooth regions, the undivided Laplacian operator is turned off in order to maintain 2nd-order accuracy. A numerical pressure sensor is responsible for this switching between the operators. The expression for the artificial dissipation operator is given by

$$D_i = \sum_{k=1}^{nb} \left[ \epsilon^{(2)}_k \left( Q_m - Q_i \right) - \epsilon^{(4)}_k \left( \nabla^2 Q_m - \nabla^2 Q_i \right) \right] \frac{A_m + A_i}{2}, \qquad (6)$$

where m represents the neighbor of the i-th element, attached to the k-th face, and nb is the total number of neighbors of the i-th control volume. Furthermore,

$$\epsilon^{(2)}_k = K_2 \max\left( \nu_i, \nu_m \right), \qquad \epsilon^{(4)}_k = \max\left[ 0, \; K_4 - \epsilon^{(2)}_k \right], \qquad \nu_i = \frac{\left| \sum_{m=1}^{nb} \left( p_m - p_i \right) \right|}{\sum_{m=1}^{nb} \left( p_m + p_i \right)}, \qquad (7)$$

where ∇²Q_i = Σ_m (Q_m - Q_i) is the undivided Laplacian and ν is the numerical pressure sensor. In this work, K_2 and K_4 are assumed equal to 1/4 and 3/256, respectively, as recommended by Jameson, Schmidt and Turkel (1981).
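A 1-D sketch of this switched dissipation helps see the sensor logic; this is an illustration, not the paper's unstructured-grid implementation, and the interior-loop indexing and boundary handling are simplifications.

```python
import numpy as np

# 1-D sketch of the switched second/fourth-difference dissipation: a
# pressure-based sensor turns on the second-difference term near shocks
# and leaves only the fourth-difference (background) term in smooth regions.
K2, K4 = 1.0 / 4.0, 3.0 / 256.0

def switched_dissipation(q, p):
    """q: conserved variable array, p: pressure array (cell-centered)."""
    nu = np.zeros_like(p)
    # sensor: |p_{i+1} - 2 p_i + p_{i-1}| / (p_{i+1} + 2 p_i + p_{i-1})
    nu[1:-1] = np.abs(p[2:] - 2.0 * p[1:-1] + p[:-2]) / (p[2:] + 2.0 * p[1:-1] + p[:-2])
    d = np.zeros_like(q)
    for i in range(2, len(q) - 2):
        eps2 = K2 * max(nu[i - 1], nu[i], nu[i + 1])
        eps4 = max(0.0, K4 - eps2)
        # second difference minus fourth difference, written at cell i
        d[i] = (eps2 * (q[i + 1] - 2.0 * q[i] + q[i - 1])
                - eps4 * (q[i + 2] - 4.0 * q[i + 1] + 6.0 * q[i]
                          - 4.0 * q[i - 1] + q[i - 2]))
    return d

x = np.linspace(0.0, 1.0, 41)
p_smooth = 1.0 + 0.01 * x                 # smooth: sensor ~ 0, dissipation ~ 0
p_shock = np.where(x < 0.5, 1.0, 2.0)     # jump: sensor activates eps2
d_smooth = switched_dissipation(p_smooth, p_smooth)
d_shock = switched_dissipation(p_shock, p_shock)
print(np.abs(d_smooth).max() < 1e-10, np.abs(d_shock).max() > 1e-3)  # -> True True
```

The smooth linear field produces essentially zero dissipation, while the jump activates the second-difference term and simultaneously switches the fourth-difference term off, exactly the behavior described above.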
The A_i matrix coefficient in Eq. (6) is replaced by a scalar coefficient (Mavriplis, 1988; Mavriplis, 1990) defined as

$$A_i = \sum_{k=1}^{nf} \left( \left| \vec{v} \cdot \vec{S} \right| + a \left| \vec{S} \right| \right)_k. \qquad (8)$$

This formulation is constructed so as to obtain steady-state solutions which are independent of the time step (Azevedo, 1992). In the multistage Runge-Kutta time integration previously described, the artificial dissipation operator is calculated only on the first, third and fifth stages for viscous flow simulations. For inviscid calculations, the artificial dissipation operator is calculated in the first and in the second stages only. This approach guarantees the accuracy of the numerical solution while reducing computational costs per iteration (Jameson, Schmidt and Turkel, 1981). Furthermore, the MAVR model has also been integrated into the multigrid framework. In order to achieve lower computational costs for the multigrid cycles, only the first-order artificial dissipation model is used in the coarser mesh levels. This operation is achieved by not computing the bi-harmonic term in Eq. (6) and by setting ε^(2) ← ε^(2) + ε^(4) in these levels.

Matrix switched model (MATD)

The formulation for the matrix model (MATD) is similar to the previously described one for the MAVR model, except for the definition of the A_i terms. In this case, the flux Jacobian matrices, as defined in Turkel and Vatsa (1994), are used instead of the scalar term inside the summation in Eq. (8). The A_i term, re-interpreted for the present cell-centered, face-based finite-volume framework, can be written as

$$A_i = \sum_{k=1}^{nf} \left| A_k \right| \left| \vec{S}_k \right|, \qquad (9)$$

where |A_k| is the scaled Jacobian matrix of Turkel and Vatsa (1994), built with the face definitions of Eqs. (10) to (13); the k subscript is dropped in those definitions in order to avoid overloading the formulation nomenclature. In the definitions, v_n is the normal velocity component, computed as v_n = v · n, where the unit area vector is defined as n = S/|S|.
Furthermore, in these expressions, V_n limits the eigenvalues associated with the nonlinear characteristic fields whereas V_l provides a similar limiter for the linear characteristic fields. Such limiters are used near stagnation and/or sonic lines, where the eigenvalues approach zero, in order to avoid zero artificial dissipation. The values recommended for these limiters by Turkel and Vatsa (1994), V_n = 0.25 and V_l = 0.025, are used in the present effort. In these definitions, H = (e + p)/ρ is the total enthalpy, and the k-th subscript, which indicates a variable computed in the face, has been eliminated in order to avoid overloading the equations with symbols.

In the finite difference context in which the matrix-based artificial dissipation model is originally presented (Turkel and Vatsa, 1994), its numerical implementation is very attractive due to the advantageous form of the |A_k| matrix in terms of vector multiplications, Eqs. (10)-(13). Written in this way, the final dissipation vector is directly computed through vector multiplications, rather than by computing and storing the complete matrix coefficient. Thus, in Turkel and Vatsa (1994), this dissipation model only requires up to 20% more computational cost per iteration and much less memory overhead, while providing upwind-like solutions for shock-wave flows. The artificial dissipation model in the current context is scaled with the use of integrated coefficients, such as the scalar coefficient shown in Eq. (8). Therefore, the advantage of having the artificial dissipation contribution computed directly by the product of the |A_k| matrix and a difference of conserved properties, which uses the form given in Eq. (10) by Turkel and Vatsa (1994), is destroyed by the need to perform the surface integral of the matrix coefficient shown in Eq. (9). Hence, one has to actually form the |A_k| matrix in the present finite volume context.
This is the straightforward extension of the scalar option to the matrix one, here termed MATD_sf. The finite difference-like option, named MATD_fd, in which the attractive form of the scaling matrix is used, can be readily obtained by replacing the (A_m + A_i) coefficient in Eq. (6) by the |A_k| |S_k| scaling matrix. Another option, in which the advantageous form of the scaling matrix is kept while still using an integrated coefficient, though in a nonconservative fashion, can also be obtained, here termed MATD_nc. In this option, the matrix coefficient computed in the face is directly used in the summation, which allows for the use of the faster vector products, and the surface integral is obtained through the area of the faces that compose the i-th cell. The three previous matrix-based artificial dissipation forms are addressed in the present work.

In order to approximate the MATD artificial dissipation terms to an upwind scheme behavior in the vicinity of shock-wave regions, the recommended value for the K_2 constant is K_2 = 1/2 (Turkel and Vatsa, 1994). Furthermore, it has been observed, during the application of this method along with the multigrid scheme in highly stretched grids, that it may be beneficial to increase the K_4 value to K_4 = 1/64 (Bigarella and Azevedo, 2005).

Convective Upwind and Split Pressure Scheme (CUSP)

The Jameson CUSP model (Jameson, 1995a; Jameson, 1995b; Swanson, Radespiel and Turkel, 1998) is inspired by earlier work on flux-vector splitting methods. It is based on a splitting of the flux function into convective and pressure contributions. In some sense, the pressure terms contribute to the acoustic waves while the velocity terms contribute to convective waves, which makes it reasonable to treat these flux terms differently. Previously, the scalar and matrix-valued artificial dissipation terms have been constructed considering differences in the conserved property arrays.
For the CUSP model, the artificial dissipation terms are, instead, chosen as a linear combination of the conserved property array and the flux vectors. The second-order accurate CUSP artificial dissipation term is re-interpreted for the present cell-centered, face-based finite volume framework. In the corresponding definitions, M_n = v_n/a is the Mach number in the face normal direction, and ε_CUSP is a threshold control value introduced in order to avoid zero artificial dissipation near stagnation lines. The L and R subscripts represent reconstructed neighboring properties of the k-th face. The definitions of such properties are presented in the forthcoming section, which discusses the MUSCL reconstruction scheme. In the above scheme definitions, the k-th subscript, which indicates a variable computed in the face, has been eliminated in order to avoid overloading the equations with symbols. It is important to remark here that face properties are computed using the Roe average procedure (Roe, 1981; Swanson, Radespiel and Turkel, 1998).

The centered spatial discretization of the convective fluxes, C_i, in this scheme, for the present context, is defined as

$$C_i = \sum_{k=1}^{nf} \vec{P}_e\!\left( \frac{Q_L + Q_R}{2} \right) \cdot \vec{S}_k, \qquad (17)$$

which means that reconstructed properties are also used to build the convective fluxes in the CUSP scheme, here termed the CUSP_rec scheme. This does not seem to be the approach chosen by other CUSP users (Jameson, 1995a; Jameson, 1995b; Swanson, Radespiel and Turkel, 1998; Zingg et al., 1999). In these references, the respective authors apparently define the convective flux operator similarly to the one presented in Eq. (5), that is, reconstructed properties are only used to build the dissipation terms and a constant property distribution is assumed to build the convective terms. This approach is named CUSP_ctt in the present context and it is compared to the here-proposed fully reconstructed approach, as defined in Eq. (17).
Upwind Spatial Discretization Scheme

Upwind Roe flux-difference splitting scheme (fROE)

General definition of the scheme. The upwind discretization in the present context is performed by the Roe (1981) flux-difference splitting method. In the present context, the fROE inviscid numerical flux in the k-th face can be written as

$$F_k = \vec{P}_e\!\left( \frac{Q_L + Q_R}{2} \right) \cdot \vec{S}_k - \frac{1}{2} \left| A_{ROE} \right| \left( Q_R - Q_L \right) \left| \vec{S}_k \right|, \qquad (18)$$

where the Roe matrix, |A_ROE|, acts on the property jump at the interface through its eigensystem,

$$\left| A_{ROE} \right| \left( Q_R - Q_L \right) = \sum_{j=1}^{5} \left| \lambda_j \right| \delta_j\, \vec{r}_j, \qquad (19)$$

computed in the k-th face normal direction, defined as n = S/|S|. The authors observe that this form of computing the central difference portion of the Roe flux is slightly different from the standard calculation shown in Roe (1981). In the present case, the authors are computing the flux of the averaged conserved property vector, whereas Roe (1981) calculates the average of the fluxes themselves in the original reference. In the present formulation, |λ_j| represents the magnitude of the eigenvalues associated with the Euler equations, given as

$$\lambda_{1,2,3} = v_n, \qquad \lambda_4 = v_n + a, \qquad \lambda_5 = v_n - a. \qquad (20)$$

Similarly, r_j represents the associated right eigenvectors, in which Θ_1 = 0.5 v · v. The δ_j terms represent the projections of the property jumps at the interface over the system eigenvectors, defined as the elements of

$$\delta = L\, \Delta Q, \qquad (22)$$

where Δ() represents the corresponding property jump at the interface. Moreover, the left eigenvectors are the rows of the L matrix, which are defined with Θ_2 = (γ - 1)/a², Θ_3 = Θ_2/2 and Θ_4 = Θ_1 Θ_2. In the above definitions, the k-th subscript, which indicates a variable computed in the face, is eliminated in order to avoid overloading the equations with symbols.

In the classical form in which the fROE scheme is presented, such as in Eq. (18), the underlying argument is the numerical flux concept, as also found in other upwind scheme examples (Azevedo, Figueira da Silva and Strauss, 2010; Steger and Warming, 1981). Therefore, each time the numerical flux is built, the inherent numerical dissipation is also evaluated. In an explicit Runge-Kutta-type multistage scheme, this fact means that the Roe matrix defined in Eq. (19) is computed in all stages.
The present authors rather understand the fROE scheme as the sum of a centered convective flux, defined as in Eq. (17), and an upwind-biased numerical dissipation contribution, that is given by

$$d_k^{ROE} = -\frac{1}{2} \left| \vec{S}_k \right| \sum_{j=1}^{5} \left| \lambda_j \right| \delta_j\, \vec{r}_j. \qquad (24)$$

Therefore, the attractive, cheaper, alternate computation of the numerical dissipation in the multistage scheme, as already used for the switched artificial dissipation schemes, can also be extended to the upwind flux computation. A detailed comparison between the classical implementation, named ROE_cla, and the alternate multistage option, termed ROE_alt, is further assessed in the present work. An analysis of numerical solution quality and computational costs is also performed.

Roe averaging. Similarly to the CUSP scheme, properties in the volume faces are computed using the Roe (1981) average procedure. The conserved properties in the faces are defined such that the flux in that face can be represented by a parameter vector, resulting in P = P(w) and Q = Q(w), where w is the parameter vector. This parameter vector is chosen in Roe (1981) as

$$w = \sqrt{\rho} \left\{ 1, \; u, \; v, \; w, \; H \right\}^T. \qquad (25)$$

This definition allows the exact solution of the problem proposed by Roe (1981), in the form of Eq. (19). Conserved properties in the k-th face are obtained through the previous parameter vector definition, in which w_j is a component of the parameter vector, w, and q_j is a component of the conserved property vector, Q.

Stability and robustness enhancement. Similarly to the MATD artificial dissipation scheme, the eigenvalues for the Roe scheme, Eq. (20), can be clipped to avoid zero artificial dissipation near stagnation points or sonic speed regions. In the Roe scheme case, the eigenvalues are smoothly clipped to the ε_ROE threshold value such as

$$\left| \lambda_j \right| = \left\{ \begin{array}{ll} \left| \lambda_j \right|, & \bar{\lambda} \geq \epsilon_{ROE}, \\ \dfrac{\bar{\lambda}^2 + \epsilon_{ROE}^2}{2\, \epsilon_{ROE}}\, a, & \bar{\lambda} < \epsilon_{ROE}, \end{array} \right. \qquad (27)$$

with λ̄ = |λ_j|/a. The threshold value is entered by the user, and it is usually set around ε_ROE ≈ 0.05. For more complex geometries, mainly with bad cells in the mesh, robustness is enhanced with ε_ROE ≈ 0.15.
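A 1-D sketch of the scheme can be written as follows. It is illustrative only: it uses the standard average-of-fluxes central part rather than the flux-of-averages variant described above, and a simplified smooth eigenvalue clipping.

```python
import numpy as np

GAMMA = 1.4

def euler_flux(q):
    """1-D Euler flux of q = (rho, rho*u, e)."""
    rho, m, e = q
    u = m / rho
    p = (GAMMA - 1.0) * (e - 0.5 * rho * u * u)
    return np.array([m, m * u + p, (e + p) * u])

def roe_flux_1d(qL, qR, eps=0.05):
    """Roe flux-difference splitting with smooth clipping of small eigenvalues."""
    rhoL, mL, eL = qL
    rhoR, mR, eR = qR
    uL, uR = mL / rhoL, mR / rhoR
    pL = (GAMMA - 1.0) * (eL - 0.5 * rhoL * uL**2)
    pR = (GAMMA - 1.0) * (eR - 0.5 * rhoR * uR**2)
    HL, HR = (eL + pL) / rhoL, (eR + pR) / rhoR
    # Roe averages built from the parameter vector w = sqrt(rho) * (1, u, H)
    wL, wR = np.sqrt(rhoL), np.sqrt(rhoR)
    u = (wL * uL + wR * uR) / (wL + wR)
    H = (wL * HL + wR * HR) / (wL + wR)
    a = np.sqrt((GAMMA - 1.0) * (H - 0.5 * u * u))
    rho = wL * wR
    # wave strengths: projections of the jump on the left eigenvectors
    drho, du, dp = rhoR - rhoL, uR - uL, pR - pL
    delta = np.array([(dp - rho * a * du) / (2.0 * a * a),
                      drho - dp / (a * a),
                      (dp + rho * a * du) / (2.0 * a * a)])
    lam = np.abs(np.array([u - a, u, u + a]))
    # smooth clipping (entropy fix) of eigenvalues below eps * a
    small = lam < eps * a
    lam[small] = (lam[small]**2 + (eps * a)**2) / (2.0 * eps * a)
    # right eigenvectors (one per row)
    r = np.array([[1.0, u - a, H - u * a],
                  [1.0, u, 0.5 * u * u],
                  [1.0, u + a, H + u * a]])
    dissipation = (lam * delta) @ r
    return 0.5 * (euler_flux(qL) + euler_flux(qR)) - 0.5 * dissipation

# Uniform state: the dissipation vanishes and the exact flux is recovered
q = np.array([1.0, 50.0, 2.5e5])
print(np.allclose(roe_flux_1d(q, q), euler_flux(q)))   # -> True
```

The uniform-state check is the consistency property of any flux-difference splitting: with zero jumps all wave strengths vanish and the numerical flux reduces to the exact one.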
MUSCL reconstruction

To achieve 2nd-order accuracy in space for the CUSP and f ROE schemes, linear distributions of properties are assumed in each cell in order to compute the left and right states at the face. Such states are represented by the L and R subscripts, respectively, in the CUSP and f ROE definitions. The linear reconstruction of properties is achieved through the van Leer (1979) MUSCL scheme, in which the property at the interface is obtained through a limited extrapolation using the cell properties and their gradients. In order to perform such a reconstruction at any point inside the control cell, the following expression is used for a generic element, q, of the conserved variable vector, Q, in Eq. (1):

q(x, y, z) = q_i + ∇q_i · Δr⃗ ,

where (x, y, z) is a generic point in the i-th cell; q_i is the discrete value of the generic property q in the i-th cell, which is attributed to the cell centroid; ∇q_i is the gradient of property q in that cell; and Δr⃗ is the distance vector from the cell centroid to the point. Gradients are computed with the aid of the gradient theorem (Swanson and Radespiel, 1991), in which derivatives are converted into surface integrals over the cell faces, such as

∂q/∂x ≈ (1/V_i) ∫_{S_i} q ( n̂ · î_x ) dS ,

where î_x represents the unit vector in the x direction, n̂ is the outward face unit normal, and V_i and S_i are the i-th cell volume and external face area, respectively. In the present work, the control volume, V_i, used to perform the gradient computation is chosen to be the i-th cell itself. This approach yields a formulation that is identical to the one used for the calculation of the RANS viscous terms. This procedure differs from the method proposed by Barth and Jespersen (1989), in which an extended control volume is assumed, but it is simpler and similar results are achieved (Azevedo, Figueira da Silva and Strauss, 2010).
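The gradient-theorem computation described above amounts to summing face values weighted by the outward face normals and areas. A minimal cell-wise sketch (function and argument names are illustrative, not taken from the code described in the paper):

```python
def green_gauss_gradient(face_values, face_normals, face_areas, volume):
    """Cell-centered gradient from the gradient (Green-Gauss) theorem:
    grad(q) ~ (1/V) * sum_k q_k * n_k * S_k over the cell faces,
    with n_k the outward unit normal and S_k the face area."""
    gx = gy = gz = 0.0
    for q, (nx, ny, nz), s in zip(face_values, face_normals, face_areas):
        gx += q * nx * s
        gy += q * ny * s
        gz += q * nz * s
    return (gx / volume, gy / volume, gz / volume)
```

For a linear field the formula is exact on a closed cell; for instance, q = x sampled at the face centroids of a unit cube recovers the gradient (1, 0, 0).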
Therefore, the expressions for the reconstructed properties at the k-th face can be written as

Q_L = Q_i + ψ_i ∇Q_i · Δr⃗_ki ,   Q_R = Q_m + ψ_m ∇Q_m · Δr⃗_km ,

where ∇Q_i and ∇Q_m are the gradients computed for the i-th cell and its neighboring m-th cell, respectively; ψ_i and ψ_m represent the limiters in these cells; and Δr⃗_ki and Δr⃗_km are the distance vectors from the i-th and m-th cell centroids, respectively, to the k-th face centroid. The right-hand side cell, represented by the m subscript in the previous definitions, can be either an internal or a ghost cell. If the gradients are correctly set in the ghost cells, this formulation directly allows for reconstruction at the boundary faces in the same fashion as at internal faces. This procedure guarantees high discretization accuracy at the boundary faces as well as at internal faces. The 1st-order CUSP or f ROE schemes can be readily obtained by setting the limiter value to zero in Eq. (30). This operation is equivalent to writing Q_L = Q_i and Q_R = Q_m in the previous formulation. The integration of MUSCL-reconstructed schemes with the multigrid framework is simply accomplished by computing the 2nd-order scheme in the finest grid level and the 1st-order one in the other, coarser levels. This approach guarantees lower computational costs for the multigrid cycles while maintaining adequate accuracy for the solution at the finest mesh level.

The limiter options that are available in the present context are the minmod, superbee and van Albada limiters (Hirsch, 1991). The respective 1-D definitions for these limiters are

ψ_minmod(Φ) = max( 0, min(1, Φ) ) ,
ψ_superbee(Φ) = max( 0, min(2Φ, 1), min(Φ, 2) ) ,
ψ_vanAlbada(Φ) = (Φ^2 + Φ) / (Φ^2 + 1) ,

and the respective function plots are shown in Fig. 1. The total variation diminishing (TVD) region (van Leer, 1979; Barth and Jespersen, 1989) is bounded between the minmod and the superbee curves.
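These 1-D limiter functions can be coded directly as functions of the gradient ratio Φ. The expressions below are the standard textbook forms (see, e.g., Hirsch, 1991), written here as a sketch:

```python
def minmod(phi):
    return max(0.0, min(1.0, phi))

def superbee(phi):
    return max(0.0, min(2.0 * phi, 1.0), min(phi, 2.0))

def van_albada(phi):
    # continuously differentiable for phi > 0; inactive for negative ratios
    return max(0.0, (phi * phi + phi) / (phi * phi + 1.0))
```

For any given Φ, minmod is the most diffusive and superbee the least, with van Albada in between, mirroring the TVD-region bounds mentioned above.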
In the previous equations, Φ is defined as the ratio between the gradients of adjacent control volumes at the interface which, in a 1-D finite-difference context, yields the ratio of consecutive solution differences across the interface. One should observe that the minmod and superbee limiters require the evaluation of maximum and minimum functions, which characterizes these limiters as nondifferentiable. The van Albada limiter, on the other hand, is continuously differentiable. This aspect is discussed further in the forthcoming paragraphs.

Limiter formulations

In a similar sense as discussed for the f ROE upwind scheme, the usual way of computing limiters is to perform such a calculation every time the numerical flux is updated. The limiter computation, though, is a very expensive task, amounting to more than half of an iteration's computational effort in the present context. Therefore, the idea of freezing the limiter, along with the dissipation operator, at some stages of the multistage time-stepping scheme seems very attractive in terms of possible computational savings. This possibility is proposed and addressed in terms of numerical solution quality and computational resource usage in the present work. Limiter computation options are discussed in the forthcoming subsections.

Barth and Jespersen multidimensional limiter implementation (MUSCL[BJ]). In this method, the extrapolated property at the k-th face of the i-th cell is bounded by the maximum and minimum values over the i-th cell centroid and its neighbor cell centroids (Barth and Jespersen, 1989). This TVD interpretation can be mathematically written as

q_i^min ≤ (q_i)_k ≤ q_i^max .

The Barth and Jespersen (1989) limiter computation in the i-th cell is initiated by collecting the minimum, q_i^min, and maximum, q_i^max, values of the generic q variable over the i-th cell and its neighboring cell centroids. A limiter is then computed at each j-th vertex of the control volume as

ψ_j = min( 1, (q_i^max - q_i)/Δ_j )  if Δ_j > 0 ,
ψ_j = min( 1, (q_i^min - q_i)/Δ_j )  if Δ_j < 0 ,
ψ_j = 1  if Δ_j = 0 ,

with Δ_j = (q_i)_j - q_i. The j-th vertex property is extrapolated from the i-th cell centroid with the aid of Eq.
(28), such that (q_i)_j = q_i + ∇q_i · Δr⃗_ji, where Δr⃗_ji is the distance vector from the i-th cell centroid to the j-th vertex. Barth and Jespersen (1989) argue that the use of the property at the cell vertices gives the best estimate of the solution gradient in the cell. The limiter value for the i-th control volume, ψ_i, is finally obtained as the minimum value of the limiters computed for the vertices. The control volume limiter, ψ_i, is eventually used to obtain the limited reconstructed property at the face, as shown in Eq. (30).

General multidimensional limiter implementation (MUSCL[ge]). The current extension of the 1-D limiters to the multidimensional case is originally based on the work of Barth and Jespersen (1989). Moreover, Azevedo, Figueira da Silva and Strauss (2010) also present some insights into this effort in a 2-D case. The present work, however, presents a further extension of the methodology of Azevedo, Figueira da Silva and Strauss (2010), aimed at allowing the user the choice of any desired limiter formulation. The Barth and Jespersen (1989) proposal is a complete limiter implementation in itself, and it has some advantages as well as disadvantages. One of these disadvantages is that it is not a continuous limiter. This aspect is discussed further in the present work. In order to allow for a general multidimensional limiter implementation, a further extension to the work of Barth and Jespersen (1989) is here proposed. The difficulty in implementing a TVD method in a multidimensional unstructured scheme is related to how to define the gradient ratio, Φ. The definition of Φ in Eq. (32) is suitable for a finite-difference context. Nevertheless, if one considers Eq. (32) as the ratio of the central- to the upwind-difference of q, both evaluated at the interface i + 1/2, and also considers the bounding definition for the property at the face in Eq. (33), then a generalization of Eq.
(32) to the k-th face of the i-th cell of an unstructured grid can be obtained, as shown in Eq. (37), where the bounding values q_i^max and q_i^min are defined in Eq. (34) and the extrapolated property at the face, (q_i)_k, is given by

(q_i)_k = q_i + ∇q_i · Δr⃗_ki .

In the previous definitions, Δr⃗_mi is the distance vector from the i-th cell centroid to its m-th neighbor cell centroid, and Δr⃗_ki is the distance vector from the i-th cell centroid to the k-th face centroid. In this formulation, considering a quasi-uniform grid, in which |Δr⃗_mi| ≈ 2|Δr⃗_ki|, the numerator of Φ in Eq. (37) can be rewritten in terms of the maximum and minimum properties, in the sense of Eq. (34), though obtained at the centroids of the faces that compose the i-th control volume. These variables are mathematically defined in Eq. (40), where the property at the faces, q_faces, is the arithmetic average of the properties in the neighboring cells, as in Eq. (5). The multidimensional gradient ratio for an unstructured grid face is finally obtained as in Eq. (41).

We now take the already presented Barth and Jespersen (1989) limiter formulation, though defined for the k-th cell face rather than the originally described cell vertex situation, and compare it with the previous generic gradient ratio definition. With the aid of Eq. (39), it can be concluded that the Barth and Jespersen limiter can be rewritten in terms of the previous generic gradient ratio, with den, num^+ and num^- as previously defined in Eq. (42). From this result, it can be observed that the Barth and Jespersen limiter recasts the superbee limiter in the 0 < Φ < 1 region, for a 1-D case. Similar conclusions have already been presented in the literature, as in Bruner (1996). The advantage of the gradient ratio definition in Eqs. (41) and (42) is that it can be directly used in any other limiter definition, such as the ones presented in Eq. (31). It can also be used to recast the original Barth and Jespersen limiter formulation, as previously discussed, with a slight modification though.
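The vertex-based Barth and Jespersen procedure discussed above, i.e., bounding each extrapolated value by the min/max over the cell and its neighbors and keeping the most restrictive factor, can be sketched as follows (a minimal sketch; names are illustrative):

```python
def barth_jespersen(q_i, q_neighbors, q_extrapolated):
    """Cell limiter value: for every extrapolated point value, compute
    the largest factor in [0, 1] that keeps it within the min/max of
    the cell value and its neighbors; return the minimum over points."""
    q_max = max(q_i, *q_neighbors)
    q_min = min(q_i, *q_neighbors)
    psi = 1.0
    for q_ext in q_extrapolated:
        d = q_ext - q_i
        if d > 0.0:
            psi_j = min(1.0, (q_max - q_i) / d)
        elif d < 0.0:
            psi_j = min(1.0, (q_min - q_i) / d)
        else:
            psi_j = 1.0
        psi = min(psi, psi_j)
    return psi
```

An extrapolation overshooting the neighbor range gets scaled back (ψ < 1), while values already inside the range leave the reconstruction untouched (ψ = 1).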
As also discussed, the original Barth and Jespersen limiter uses extrapolated properties at the nodes to build the gradient ratio, while extrapolated properties at the faces are preferred in the current implementation. Considering the Barth and Jespersen vertex choice in the current gradient ratio definition (Eq. (41)), it can be observed that the mesh intervals |Δr⃗_ji| and |Δr⃗_ki| cannot be cancelled as in Eq. (39). Moreover, as exemplified in Fig. 2, the distance ratio, |Δr⃗_ki|/|Δr⃗_ji|, is lower than one, which results in an implemented gradient ratio that is smaller than the correct one. This difference yields smaller limiter values, which can be interpreted as an undesired increase of diffusivity in the limiter implementation. This issue can be avoided with the use of extrapolated face properties, as proposed in Eqs. (41) and (42).

Thus, the complete definition of the current multidimensional limiter can finally be presented. The computation of the limiter in the i-th cell is initiated by collecting the minimum, q_i^min, and the maximum, q_i^max, values of the generic q variable at the centroids of the faces that compose this i-th cell, according to Eq. (40). The k-th face generic property, q_k, for instance, is defined as q_k = (q_i + q_m)/2, where the m-th cell shares the k-th face with the i-th cell. For each k-th face centroid, the property (q_i)_k = q(x_k, y_k, z_k) at that centroid is extrapolated as in Eq. (38). The gradient ratio necessary to compute the limiter value is obtained through Eq. (41). A limiter is computed at each face of the control volume. The limiter value for the i-th control volume is finally obtained as the minimum value of the limiters computed for the faces.

Smooth multidimensional limiter implementation. The nondifferentiable aspect of the minmod, superbee and Barth and Jespersen limiters poses some numerical difficulties in their utilization for practical numerical simulations.
Their discontinuous formulation allows for limit cycles that hamper the convergence of upwind inviscid and viscous flow simulations to steady state (Venkatakrishnan, 1995). Furthermore, such limiters are also insensitive to the relative magnitudes of the neighboring gradients. This problem can be found in shock wave regions, where nondifferentiable limiters may present oscillations, or even in apparently smooth regions, such as farfield regions, where such limiters may respond to random machine-level noise. One option to work around this problem is to freeze the limiter after some code iterations or some residue drop, but this technique does not always seem to work and it is highly problem dependent (Venkatakrishnan, 1995). Such characteristics may also inhibit its application in an actual production environment because of the need for user input in setting the limiter freezing operation for the simulation of interest. Another option is to use differentiable (or continuous) limiters instead of the ones that require maximum and minimum functions. Some examples can be found in Venkatakrishnan (1995), for instance. In that work, the continuous limiters are also augmented with a control parameter to drive the smoothness of the limiter in small-amplitude oscillation regions, and also to allow for a smooth transition from the limiting to the nonlimiting state. In that formulation (Venkatakrishnan, 1995), this limiter control is made grid-dependent in order to sensitize the limiter to the local gradients relative to the local grid size, therefore eliminating small-extrema oscillations. Although such a control actually allows for machine-zero convergence, it seems to pose a trade-off between convergence and obtaining monotone (oscillation-free) steady-state solutions. The option chosen in the present work is to remove the grid dependence of the limiter control and to add, instead, a constant threshold value.
The van Albada limiter, rewritten to include such a modification, is given by Eq. (45), where ε_LIM is the constant limiter control, chosen as ε_LIM = 10^-4 in the present work. This option has proved appropriate for all aerospace cases considered by the present and other development groups (see, for instance, Oliveira, 1999), always allowing machine-zero steady-state convergence for monotone numerical solutions. These aspects are further analyzed in the results of the present work.

Results and Discussion

The flux computation schemes presented in the previous sections are applied to inviscid and viscous flows about typical aerospace configurations. First and foremost, the actual order of accuracy of the discretization scheme is assessed. The influence of the numerical schemes on shock-wave resolution is then addressed with a 1-D shock-tube problem and a transonic inviscid flow about a typical supercritical airfoil. Boundary layer flows are also addressed, for subsonic laminar flows about a flat plate configuration, with Reynolds number Re = 10^5 and Mach number M_∞ = 0.254.

Discretization order of accuracy

The current method for assessing the discretization order of accuracy is based on the verification methodology presented by Roache (1998) and on the discretization order of accuracy estimation procedure from Baker (2005). In the current methodology, a source term carrying information of a generically prescribed solution for the RANS equations is explicitly added to the RHS operator in order to drive the numerical solution to the prescribed one (Roache, 1998). The difference between the converged computational solution and the original one is taken as a measure of the accuracy of the method, as well as a confirmation of the correctness of the implementation (Baker, 2005; Bigarella, 2002). For this verification effort, the chosen physical domain is a hexahedral block with unit sides.
Several grid configurations are used for the simulations, including different numbers of grid points and different control volume types. The following sets of grids are used:

1. Uniformly spaced hexahedral meshes with 25 × 25 × 25, 50 × 50 × 50 and 75 × 75 × 75 points;
2. Two isotropic tetrahedral meshes with 25 × 25 × 25 and 50 × 50 × 50 control points along the domain edges.

With the chosen computational meshes, the authors attempt to address the behavior of the numerical code with respect to grid characteristics such as refinement and topology. This evaluation is performed for the four flux schemes available in the current code, namely the MAVR, MATD, CUSP and f ROE schemes. For the CUSP and f ROE schemes, the van Albada limiter is chosen. In this work, 2nd-order-accurate approximations for the convective flux computations are available. Hence, the numerical error of the method as a function of the mesh spacing can be written as

E(Δx) = C Δx^2 ,

where Δx, in the current study, is taken as the arithmetic average of the cubic root of the cell volumes, for each of the previously described grids. If one takes the logarithm of both sides of Eq. (46), that equation can be rewritten as

log E = log C + 2 log Δx .

Hence, the logarithm of the theoretical error of the method has a slope of two when plotted against the logarithm of the grid spacing. The actual spatial accuracy of the method, however, may be different from that presented in Eq. (47). The actual error can be written for a general case as

log E = log C + α log Δx ,

where α is the slope of the actual spatial accuracy curve that is attained with the implemented scheme. The error is here taken as the RMS value of the difference between the prescribed and numerical density fields. The resulting slopes are collected in Table 1. From these results, it can be observed that all flux schemes sustain the nominal 2nd-order accuracy for the uniformly-spaced hexahedral meshes. The order of accuracy deteriorates for the tetrahedral meshes.
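Following the error model above, the observed slope α can be extracted from the error and spacing of two grid levels. A minimal helper (illustrative names):

```python
import math

def observed_order(err_coarse, err_fine, h_coarse, h_fine):
    """Observed spatial order of accuracy, alpha, from E = C * h**alpha
    evaluated on two grids: alpha = log(E1/E2) / log(h1/h2)."""
    return math.log(err_coarse / err_fine) / math.log(h_coarse / h_fine)
```

Halving the mesh spacing should cut the error fourfold for a nominally 2nd-order scheme.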
The switched artificial dissipation schemes (MAVR and MATD) present larger accuracy losses than their MUSCL-reconstructed counterparts (CUSP and f ROE). The latter schemes present less mesh topology dependency, which is an indication of the increased robustness that one would like to have under highly-demanding, or even inadequate, mesh cells. In Mavriplis (1997), it is argued that routine upwind schemes, such as the ones here used, are commonly applied in "a quasi-onedimensional fashion normal to control-volume faces". Although the reconstruction and limiter formulations used in this paper are truly multidimensional, the background upwind flux schemes are not, and they "may misinterpret flow features not aligned with control volume interfaces" (Mavriplis, 1997). The current authors attribute the loss of accuracy observed in the tetrahedral grid cases analyzed before to this misalignment behavior. The dependency of the discretization accuracy on mesh cell type and topology is acknowledged in the literature, as discussed by Mavriplis (1997), Deconinck, Roe and Struijs (1993), Sidilkover (1994), Peroomian and Chakravarthy (1997), Deconinck and Degrez (1999), Jawahar and Kamath (2000), and Drikakis (2003). These references, in particular, present efforts towards the development of numerical schemes that are less sensitive to mesh topology, and that present native multidimensional "cell-transparent" behavior. This is certainly an interesting development to be brought to the current code context, but it is beyond the scope of the present work.

1-D shock tube

Computations of 1-D shock-tube inviscid flow cases are considered, and the numerical results are compared to the analytical solution for this problem. For the numerical simulations, an equivalent 3-D grid composed of a line of 500 hexahedra is used. The initial dimensionless conditions prescribe a discontinuity between the left and right halves of the shock tube. A dimensionless time step of Δt = 10^-5 is used for this transient solution, and the forthcoming plots are taken at t = 0.1.
MATD results. The three possible implementation forms of the MATD artificial dissipation method are here assessed. The MATD[sf] option is the straightforward extension of the nominal finite-volume scalar artificial dissipation (MAVR) to a matrix version. The advantageous implementation form found in a finite-difference context cannot be used in this case due to the necessity of performing a surface integral of the scaling matrix. Another option, namely MATD[fd], uses this attractive finite-difference-like implementation form, which unfortunately is not in accordance with the current finite volume artificial dissipation framework formulation. This option is only considered here to verify this previous assertion. Finally, a mixed version that allows for a surface-integrated scaling matrix, though in a nonconservative form, with the same advantageous finite-difference-like matrix implementation, here termed MATD[nc], is suggested. Dimensionless pressure and density distributions along the tube longitudinal axis are presented in Fig. 3. One can clearly observe in this figure that all MATD options allow for pre- and post-discontinuity oscillations to build up. The less correct MATD[fd] option presents much larger oscillations, whereas the MATD[nc] option presents the lowest levels of oscillation. All options, however, correctly follow the analytical result trends. In terms of computational resource usage, the finite-difference-like implementation does present advantages over the finite volume form. The MATD[fd] and MATD[nc] options require about 30% less computational time than the MATD[sf] formulation for this test case.

CUSP results. The CUSP scheme implementation options are here addressed.
The CUSP[ctt] formulation (Jameson, 1995a; Jameson, 1995b; Swanson, Radespiel and Turkel, 1998) uses constant property distributions in the cell to build the centered convective fluxes, whereas the CUSP[rec] option proposed in the current work uses reconstructed properties at the faces to compute such fluxes. The dissipation formulation is identical between both options, and a value of 0.3 is chosen for the CUSP constant. The van Albada limiter is used here in the reconstruction process within the proposed multidimensional limiter implementation. The limiter computation is only performed at alternate stages of the Runge-Kutta time step. Swanson, Radespiel and Turkel (1998) argue that the original CUSP scheme formulation, here termed CUSP[ctt], does not nominally provide oscillation-free shocked-flow results. The current authors believe this behavior is due to the computational form of the centered convective fluxes, which uses constant properties in the cells in the original formulation, and that the use of reconstructed properties at the faces to build such fluxes may overcome this limitation. These arguments are corroborated by the pressure and density results presented in Fig. 4. Both CUSP implementation options compare very well with the analytical solution. It can be observed in Fig. 4 that the original CUSP[ctt] formulation does allow oscillations to build up near discontinuities, while the proposed CUSP[rec] option prevents such undesired behavior. Furthermore, the latter option also exhibits a crisper representation of the high-pressure-side expansion region. Finally, the CUSP[rec] implementation requires less than 10% more computational time than the original formulation to compute the current test case.

f ROE results.
The classical numerical flux implementation of the Roe flux scheme (f ROE[cla]) is compared to the cheaper implementation proposed here (f ROE[alt]), which uses the concept of a centered convective flux plus upwind artificial dissipation terms computed at alternate stages of the Runge-Kutta time marching procedure. Limiter settings are exactly the same as those used for the previous CUSP scheme simulations. Pressure and density distributions for the Roe scheme are presented in Fig. 5. The numerical solutions compare very well with the analytical one, and no oscillations near discontinuities can be found in the numerical solution. It is interesting to observe that no differences between the f ROE[cla] and f ROE[alt] options can be observed. The f ROE[cla] scheme, however, is about twice as expensive as the f ROE[alt] implementation proposed in the present paper.

MUSCL results. The original Barth and Jespersen multidimensional limiter (MUSCL[BJ]) is compared to the generic multidimensional implementation (MUSCL[ge]) proposed in the present work. The minmod, van Albada and superbee limiters are considered in order to demonstrate the capability of the current multidimensional reconstruction scheme to handle various limiter types. For this study, the f ROE[alt] scheme is used with limiter computations at alternate stages of the Runge-Kutta scheme. Pressure and density results for the previous limiter options are shown in Fig. 6. One can clearly observe in this figure that the correct solution is obtained in all cases, which demonstrates that the current multidimensional reconstruction scheme does allow for the use of various limiter formulations. In general, no oscillatory behavior can be observed in the numerical results, regardless of the limiter formulation used. The comparison with the analytical solution is also very good, with the superbee limiter presenting crisper discontinuities, as expected, due to its less diffusive formulation.
As already discussed, the Barth and Jespersen limiter recasts the superbee limiter in the 0 < Φ < 1 range, in the 1-D case. This is confirmed in Fig. 6, since the former limiter results are virtually identical to the latter ones. The van Albada limiter results lie between those of the more diffusive minmod and the less diffusive superbee limiters, as expected. It should be remarked here that its augmented smoothness cannot be demonstrated in this transient case, since it is a feature designed for steady problems.

2-D supercritical airfoil

Similar analyses, as performed for the 1-D shock tube, are now considered for a multidimensional case. Transonic inviscid flows about the Boeing A4 supercritical airfoil (Nishimura, 1992) are chosen for such analyses. A C-type grid with 100 × 24 cells over the profile and along the normal direction, respectively, is considered. A view of this configuration can be found in Fig. 11 for another grid used in further studies in the paper. The farfield extends to 20 chords away from the profile. The freestream Mach number is M_∞ = 0.768 and the angle of attack is α = 1.4 deg. In the present simulations, three grid levels in a "V" cycle, with one iteration before and after property restrictions and prolongations, are used in the multigrid method. The CFL number for all flux computation schemes is set to CFL = 1.25. The numerical schemes are evaluated on a multidimensional shocked flow in order to assess their capability of correctly solving such test cases. Moreover, numerical results are not compared to experimental ones in this case because viscous terms and turbulence modeling are not included in the present calculations.

MATD results. The three possible implementation forms of the MATD artificial dissipation method, namely MATD[sf], MATD[fd] and MATD[nc], are here assessed. Pressure coefficient distributions and the residue histories are presented in Fig. 7. It can be observed in Fig.
7(a) that the three Cp distributions present differences. The MATD[sf] results seem to present a larger amount of dissipation. The MATD[fd] option presents several oscillations near the shock wave discontinuity. This observation corroborates the assertion that the present switched artificial dissipation model is calibrated to receive only surface-integrated coefficients, which is not the case for the MATD[fd] option. The MATD[nc] formulation avoids such an oscillation problem while allowing less dissipative results at lower cost, i.e., approximately 30% cheaper than the MATD[sf] option. Residue histories show that both options which include some surface integration in the definition of the scaling terms of the artificial dissipation, namely MATD[sf] and MATD[nc], converge well for this case, while the MATD[fd] formulation presents convergence stall due to the oscillatory behavior of the solution.

CUSP results. The CUSP[ctt] and CUSP[rec] scheme options are here addressed. Numerical settings are taken similarly to the previous 1-D shock-tube case. Cp distributions and residue histories for these cases are presented in Fig. 8. As already observed in the 1-D case, the original CUSP[ctt] implementation allows for oscillations to build up in the solution. This undesired behavior is avoided with the use of reconstructed properties at the faces to compute the convective flux terms. The convergence of the CUSP[rec] option seems to be more robust than that of the CUSP[ctt] implementation, mainly because of the lack of oscillatory structures in the numerical solution.

f ROE results. The classical numerical flux implementation of the Roe flux scheme (f ROE[cla]) is compared to the proposed, computationally cheaper, f ROE[alt] implementation. Numerical settings similar to those used in the 1-D shock-tube case are also considered for the present study.
As already expected, no large differences can be observed between the two solutions, in terms of both the numerical resolution of the flow properties, i.e., the airfoil pressure coefficient in this particular case, and the residue histories, as shown in Fig. 9. The f ROE[alt] option, however, converges in almost half the computational time used by the classical f ROE[cla] implementation. The reader should observe that Fig. 9(b) shows the residue histories as a function of multigrid cycles; the f ROE[alt] option, however, costs almost half the computational time of the f ROE[cla] option per multigrid cycle. These results, once again, show that the same quality of numerical solution and convergence rate can be obtained with the proposed implementation method at much lower computational resource usage.

MUSCL results. The original Barth and Jespersen multidimensional limiter (MUSCL[BJ]) is compared to the generic multidimensional implementation (MUSCL[ge]) proposed here. The minmod, van Albada and superbee limiters are considered within the generic multidimensional reconstruction scheme. Numerical settings similar to those used in the 1-D shock-tube case are also considered for the present study. Cp distributions and residue histories for these cases are presented in Fig. 10. No oscillatory behavior can be observed in any of the presented numerical solutions. The solutions with the superbee and the Barth and Jespersen limiters present crisper discontinuities, as already observed in the shock-tube case. As also already observed, the van Albada limiter results lie between those of the minmod and superbee limiters. It is interesting to observe in the residue histories in Fig. 10(b) that the minmod, Barth and Jespersen, and superbee limiters present residue stall. As already discussed, this behavior is due to their discontinuous formulation, which involves the evaluation of maximum and minimum functions.
The continuous van Albada option, on the contrary, allows for automatic residue convergence, that is, convergence without the need for user inputs such as limiter freezing.

Grid refinement study. The previous 2-D airfoil case is revisited for a mesh refinement study. In these analyses, the MATD model stands for the MATD[nc] option; the CUSP model is actually the CUSP[rec] option with the van Albada limiter computed at alternate stages of the Runge-Kutta time stepping scheme; and the f ROE scheme represents the f ROE[alt] implementation with the same limiter settings as for CUSP. Three C-type grids, with 100 × 24, 150 × 40 and 255 × 64 cells over the profile and along the normal direction, respectively, are used. A view of the grid with 255 × 64 cells can be found in Fig. 11. Pressure coefficient distributions over the profile, obtained with the previously discussed flux computation schemes, are presented in Fig. 11(b) for the 255 × 64-cell computational grid. One can observe in this figure that all numerical schemes yield results that are very similar to each other, computing a crisp shock-wave discontinuity and comparable overall pressure distributions. These computational results can, therefore, be considered as a reasonable reference solution for further comparisons in the paper. It should also be remarked here that the numerical results are not compared to experimental results because an inviscid approximation is considered for the numerical simulations, which is not representative of the actual turbulent viscous wind-tunnel flow. The main interest here is the behavior of the numerical schemes at computing shocked flows on successively refined computational grids.

Pressure coefficient distributions over the profile obtained with different meshes and flux computation schemes are presented in Fig. 12. In this figure, one can observe that the MAVR scheme presents considerable variations in the results as the grid is refined.
Moreover, the shock wave position also varies considerably with grid refinement. More consistent results can be obtained with the MATD model. Differences among the solutions are much smaller in this case and the shock wave position presents fewer changes with grid refinement. The CUSP and f ROE schemes present even more consistent results, and the variations in the numerical solution with grid refinement are much less pronounced in these cases. The numerical results obtained with both models are quite similar to each other, with the CUSP scheme presenting slightly better results.

Subsonic flat plate

The present effort has been strongly motivated by an anomaly found in previous simulations of subsonic flat-plate boundary layers, more precisely, in the bend of the boundary layer profile (Strauss, 2001). Further studies associated this issue with the explicitly added artificial dissipation terms of the centered flux computation scheme, as reported by Bigarella (2002). A dependency of the numerical solution on the computational mesh topology and refinement has also been observed. Although Bigarella (2002) reports this problem in a different context, namely a finite difference code, the same issue can also be found with the present finite volume formulation (Bigarella, Moreira and Azevedo, 2004). Moreover, Bigarella, Moreira and Azevedo (2004) also discuss a detailed analysis of mesh topology for such boundary layer flows. Such work has shown that an adequate mesh topology for boundary layer flows should respect certain characteristics, which are described in the next paragraphs.

The corresponding mesh generator places a user-provided number of computational cells inside the boundary layer. These cells are evenly spaced along the wall-normal direction, and they extend to a user-defined height, η[max], given in terms of the Blasius-transformed coordinate, defined as η = (y/x)√(Re[x]).
This grid construction allows the user to keep a constant number of points inside the boundary layer along the flat plate length. In the actual implementation, however, in order to avoid numerical difficulties near the plate leading edge, this assertion is valid only for the last three quarters of the flat plate length. This specific grid construction requires knowledge of the flow Reynolds number, which should be correctly provided by the user. The plate length is fixed as one and the grid extends two lengths upstream of the plate leading edge, and one length along the normal direction. Outside the boundary layer, an automatic exponential growth guarantees the normal direction length extension and a sufficiently low number of control volumes. One quarter of the number of points specified by the user for the longitudinal direction is placed in the two-length space ahead of the plate, and the remaining points are placed along the plate longitudinal direction. These points are clustered near the flat plate leading edge in order to account for the larger gradients that are expected in this region.

Hence, subsonic laminar flows about a flat plate configuration, with Reynolds number Re = 10^5 and Mach number M[∞] = 0.254, are addressed. Three consecutively refined grids are generated for this flow case. For the present study, different numbers of cells inside the boundary layer, namely 10, 20 and 40 cells, are considered, with 30 cells outside the boundary layer. The user-defined boundary layer height in terms of the Blasius-transformed coordinate is η[max] = 6. All grids have 81 points along the longitudinal direction. A view of the grid with 20 points inside the boundary layer can be found in Fig. 13.

Figure 14 presents boundary layer results obtained with the previously described computational grids. The flux schemes considered in these analyses are the same used in the previous 2-D airfoil subsection.
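The wall-normal point placement just described (even spacing inside the boundary layer, automatic exponential growth outside, last node at the domain edge) can be sketched as below. Function and parameter names, and the stretching ratio, are illustrative assumptions, not the paper's actual generator:

```python
def wall_normal_nodes(y_bl, n_bl, n_out, y_max=1.0, ratio=1.2):
    """Sketch of the wall-normal node distribution described in the text:
    n_bl evenly spaced cells up to the boundary-layer edge y_bl, then n_out
    cells with geometric (exponential) growth, rescaled so the last node
    lands exactly on the domain height y_max. The stretching ratio is a
    hypothetical choice; the paper only says the growth is exponential."""
    dy = y_bl / n_bl
    nodes = [i * dy for i in range(n_bl + 1)]      # even spacing inside BL
    steps = [dy * ratio ** (k + 1) for k in range(n_out)]
    scale = (y_max - y_bl) / sum(steps)            # hit y_max exactly
    for s in steps:
        nodes.append(nodes[-1] + s * scale)
    return nodes
```

For the 20-cells-in-the-boundary-layer grid of the text, something like this would be called once per longitudinal station, with y_bl chosen so that the Blasius-transformed coordinate reaches η[max] = 6 at the boundary-layer edge.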
Centered- and upwind-scheme results have been considered in this figure, and they are compared to the theoretical Blasius solution. It is interesting to observe that the upwind f ROE scheme, as well as the centered MATD and CUSP models, guarantee the correct solution with all tested grid configurations. The MAVR centered scheme presents an anomaly in the bend of the boundary layer profile for the grids with a smaller number of points in the boundary layer, as also verified by Bigarella, Moreira and Azevedo (2004). The oscillation, nevertheless, decreases with the increasing number of points inside the boundary layer. It is only with the 40-point grid configuration that the correct solution can be obtained with the MAVR model.

Concluding Remarks

The paper presents results obtained with a finite volume code developed to solve the RANS equations over aerospace configurations. Several flux computation schemes are considered in the paper. The convective fluxes can be computed by either a centered scheme plus explicitly added artificial dissipation terms, or the Roe upwind scheme. For the centered scheme, three artificial dissipation models are addressed, namely a scalar and a matrix version of a switched model, and the CUSP scheme. Multidimensional interpolation is used in order to achieve second-order accuracy for schemes that require property reconstruction. An extension to the work of Barth and Jespersen is proposed and evaluated in the paper. Such extension aims at decreasing the level of dissipation added by the original limiter formulation, which has been verified in the presented results. As a byproduct of such effort, various limiter formulations can also be used within the multidimensional unstructured code structure. A smooth limiter option is also proposed and used to achieve machine-zero convergence of monotone numerical solutions without user interference.
Several formulation and implementation approaches for such methods are proposed and assessed in the paper in order to enhance robustness, numerical accuracy and computational efficiency of the numerical tool for aerospace flow cases. Comparisons of numerical boundary layers for a zero-pressure gradient flat plate laminar flow with the corresponding theoretical Blasius solution show the level of accuracy that can be obtained with the present formulation. It is observed that the scalar artificial dissipation model presents a very large dependency on the grid density. For this model, about 40 cells inside the boundary layer are required to correctly solve the boundary layer flow. The matrix artificial dissipation model, as well as the CUSP and the Roe schemes, require only 10 points to achieve the same level of accuracy. The grid-independent converged solutions, for all methods, are very close to the theoretical Blasius solution. The code is also able to correctly solve more complex flows, such as the transonic flow about a typical supercritical airfoil. The ability of the flux computation schemes to calculate shock waves in the solution is assessed in the present study, in particular with regard to the dependency on grid density. It is observed that more consistent solutions can be obtained with the Roe and CUSP schemes, for which only small variations with grid refinement are verified. The scalar artificial dissipation model is not so effective in these analyses, and a considerable dependency of the numerical solution on the grid configuration is observed. The matrix version of the switched artificial dissipation model presents more consistent results than its scalar counterpart. The numerical schemes proposed in the paper compose a set of methods for accurately solving complex flow phenomena typical of aerospace flow applications. Numerical robustness, accuracy and efficiency could be obtained with the proposed implementation options.
The schemes and the experience acquired in the present study have advanced the capability of simulating the transonic and supersonic viscous flows of interest to IAE, which motivated the current effort.

Acknowledgements

The authors would like to acknowledge Conselho Nacional de Desenvolvimento Científico e Tecnológico, CNPq, which partially supported the work under the Research Grant No. 312064/2006-3. The authors also acknowledge Dr. P. Batten, of Metacomp Technologies, for his insights on the development of the limiter formulations here presented. The authors are further indebted to Fundação de Amparo à Pesquisa do Estado de São Paulo, FAPESP, which also partially supported the present development under Project No. 2004/16064-9.

References

Allmaras, S., 2002, "Contamination of Laminar Boundary Layers by Artificial Dissipation in Navier-Stokes Solutions", Proceedings of the Conference on Numerical Methods in Fluid Dynamics, Reading, UK.
Anderson, J.D., Jr., 1991, "Fundamentals of Aerodynamics", 2nd Edition, McGraw-Hill International Editions, New York, NY, USA, Chapter 15, p. 647.
Azevedo, J.L.F., 1992, "On the Development of Unstructured Grid Finite Volume Solvers for High Speed Flows", Report NT-075-ASE-N/92, Instituto de Aeronáutica e Espaço, São José dos Campos, SP, Brazil.
Azevedo, J.L.F., Figueira da Silva, L.F., and Strauss, D., 2010, "Order of Accuracy Study of Unstructured Grid Finite Volume Upwind Schemes", Journal of the Brazilian Society of Mechanical Sciences and Engineering, Vol. 32, No. 1, Jan.-Mar. 2010, pp. 78-93.
Baker, T.J., 2005, "On the Relationship between Mesh Refinement and Solution Accuracy", AIAA Paper No. 2005-4875, Proceedings of the 17th AIAA Computational Fluid Dynamics Conference, Toronto, Ontario, Canada.
Barth, T.J., and Jespersen, D.C., 1989, "The Design and Application of Upwind Schemes on Unstructured Meshes", AIAA Paper No. 89-0366, 27th AIAA Aerospace Sciences Meeting, Reno, NV, USA.
Bigarella, E.D.V., 2002, "Three-Dimensional Turbulent Flow Simulations over Aerospace Configurations", Master Thesis, Instituto Tecnológico de Aeronáutica, São José dos Campos, SP, Brazil, 175 p.
Bigarella, E.D.V., and Azevedo, J.L.F., 2005, "A Study of Convective Flux Computation Schemes for Aerodynamic Flows", AIAA Paper No. 2005-0633, Proceedings of the 43rd AIAA Aerospace Sciences Meeting and Exhibit, Reno, NV, USA.
Bigarella, E.D.V., Basso, E., and Azevedo, J.L.F., 2004, "Centered and Upwind Multigrid Turbulent Flow Simulations with Applications to Launch Vehicles", AIAA Paper No. 2004-5384, Proceedings of the 22nd AIAA Applied Aerodynamics Conference and Exhibit, Providence, RI, USA.
Bigarella, E.D.V., Moreira, F.C., and Azevedo, J.L.F., 2004, "On The Effect of Convective Flux Computation Schemes on Boundary Layer Flows", Proceedings of the 10th Brazilian Congress of Thermal Sciences - ENCIT 2004, Paper No. CIT04-0531, Rio de Janeiro, RJ, Brazil.
Bruner, C.W.S., 1996, "Parallelization of the Euler Equations on Unstructured Grids", Ph.D. Thesis, Virginia Polytechnic Institute and State University, Blacksburg, VA, USA.
Deconinck, H., and Degrez, G., 1999, "Multidimensional Upwind Residual Distribution Schemes and Applications", 2nd International Symposium on Finite Volumes for Complex Applications, VKI Report 199941, Duisburg, Germany.
Deconinck, H., Roe, P.L., and Struijs, R., 1993, "A Multidimensional Generalisation of Roe's Flux Difference Splitter for the Euler Equations", Computers & Fluids, Vol. 22, No. 2-3, pp. 215-222.
Drikakis, D., 2003, "Advances in Turbulent Flow Computations Using High-Resolution Methods", Progress in Aerospace Sciences, Vol. 39, No. 6-7, pp. 405-424.
Hirsch, C., 1991, "Numerical Computation of Internal and External Flows. 2. Computational Methods for Inviscid and Viscous Flows", Wiley, Chichester, UK, Chapter 21, pp. 493-589.
Jameson, A., 1995a, "Analysis and Design of Numerical Schemes for Gas Dynamics 1. Artificial Diffusion, Upwind Biasing, Limiters and Their Effect on Accuracy and Multigrid Convergence", International Journal of Computational Fluid Dynamics, Vol. 4, pp. 171-218.
Jameson, A., 1995b, "Analysis and Design of Numerical Schemes for Gas Dynamics 2. Artificial Diffusion and Discrete Shock Structure", International Journal of Computational Fluid Dynamics, Vol. 5, pp. 1-38.
Jameson, A., Schmidt, W., and Turkel, E., 1981, "Numerical Solution of the Euler Equations by Finite Volume Methods Using Runge-Kutta Time-Stepping Schemes", AIAA Paper No. 81-1259, 14th AIAA Fluid and Plasma Dynamics Conference, Palo Alto, CA, USA.
Jawahar, P., and Kamath, H., 2000, "A High-Resolution Procedure for Euler and Navier-Stokes Computations on Unstructured Grids", Journal of Computational Physics, Vol. 164, No. 1, pp. 165-203.
Mavriplis, D.J., 1988, "Multigrid Solution of the Two-Dimensional Euler Equations on Unstructured Triangular Meshes", AIAA Journal, Vol. 26, No. 7, pp. 824-831.
Mavriplis, D.J., 1990, "Accurate Multigrid Solution of the Euler Equations on Unstructured and Adaptive Meshes", AIAA Journal, Vol. 28, No. 2, pp. 213-221.
Mavriplis, D.J., 1997, "Unstructured Grid Techniques", Annual Review in Fluid Mechanics, Vol. 29, pp. 473-514.
Nishimura, Y., 1992, "Wind Tunnel Investigations on a Full Span 2-D Airfoil Model in the IAR 1.5m Wind Tunnel", BCAC and IAR Collaborative Work Program, NRC Report LTR-HA-5X5/0205.
Oliveira, G.L., 1999, "Analyse Numérique de l'Effet du Défilement des Sillages liés aux Interactions Rotor-Stator Turbomachines", Ph.D. Thesis, Ecole Centrale de Lyon, Laboratoire de Mécanique des Fluides et d'Acoustique, UMR 5509, Lyon, France.
Peroomian, O., and Chakravarthy, S., 1997, "A 'Grid-Transparent' Methodology for CFD", AIAA Paper No. 97-0724, 35th AIAA Aerospace Sciences Meeting and Exhibit, Reno, NV, USA.
Roache, P.J., 1998, "Verification and Validation in Computational Science and Engineering", Hermosa Publishers, Albuquerque, NM, USA.
Roe, P.L., 1981, "Approximate Riemann Solvers, Parameter Vectors, and Difference Schemes", Journal of Computational Physics, Vol. 43, No. 2, pp. 357-372.
Scalabrin, L.C., 2002, "Numerical Simulation of Three-Dimensional Flows over Aerospace Configurations", Master Thesis, Instituto Tecnológico de Aeronáutica, São José dos Campos, SP, Brazil, 181 p.
Sidilkover, D., 1994, "A Genuinely Multidimensional Upwind Scheme and Efficient Multigrid Solver for the Compressible Euler Equations", ICASE Report No. 94-84, NASA Langley Research Center, Hampton, VA, USA.
Steger, J.L., and Warming, R.F., 1981, "Flux Vector Splitting of the Inviscid Gasdynamic Equations with Application to Finite Difference Methods", Journal of Computational Physics, Vol. 40, No. 2, pp. 263-293.
Strauss, D., 2001, "An Unstructured Grid Approach to the Solution of Axisymmetric Launch Vehicle Flows", Master Thesis, Instituto Tecnológico de Aeronáutica, São José dos Campos, SP, Brazil, 127 p.
Swanson, R.C., and Radespiel, R., 1991, "Cell Centered and Cell Vertex Multigrid Schemes for the Navier-Stokes Equations", AIAA Journal, Vol. 29, No. 5, pp. 697-703.
Swanson, R.C., Radespiel, R., and Turkel, E., 1998, "On Some Numerical Dissipation Schemes", Journal of Computational Physics, Vol. 147, No. 2, pp. 518-544.
Turkel, E., and Vatsa, V.N., 1994, "Effect of Artificial Viscosity on Three-Dimensional Flow Solutions", AIAA Journal, Vol. 32, No. 1, pp. 39-45.
van Leer, B., 1979, "Towards the Ultimate Conservative Difference Scheme. V. A Second-Order Sequel to Godunov's Method", Journal of Computational Physics, Vol. 32, No. 1, pp. 101-136.
Venkatakrishnan, V., 1995, "Convergence to Steady State Solutions of the Euler Equations on Unstructured Grids with Limiters", Journal of Computational Physics, Vol. 118, No. 1, pp. 120-130.
Zingg, D.W., De Rango, S., Nemec, M., and Pulliam, T.H., 1999, "Comparison of Several Spatial Discretizations for the Navier-Stokes Equations", AIAA Paper No. 99-3260, Proceedings of the 14th AIAA Computational Fluid Dynamics Conference, Norfolk, VA, USA.

Paper received 27 July 2009
Paper accepted 29 November 2011
Technical Editor: Eduardo Belo
Beating a Dead Horse, pt. 3

So far we have considered several different means of adding OBA and SLG together to measure offensive productivity. Now we will look at other means of combining OBA and SLG, namely multiplication.

OBA*SLG (OTS) is a relationship that has been known in sabermetrics for many years. It has been independently developed at least three times. The first was by Earnshaw Cook, who in Percentage Baseball and the Computer presented a model that was essentially OBA*SLG. Then Dick Cramer developed his Batter's Run Average, which was published in SABR's Baseball Research Journal at some point in the 1970s; BRA actually was defined as OBA*SLG. Cramer later developed this into Batter's Win Average, which was based around a model of BRA*PA. The third is the most famous, the Runs Created formula of Bill James, originally written as (H + W)*TB/(AB + W). Given the assumptions about calculating OBA that I have been making throughout this series, RC is precisely equal to OBA*SLG*AB.

I am not going to go too in-depth in discussing the flaws of these methods, because I and many others have noted the flaws of Runs Created elsewhere. Briefly, RC attempts to model run scoring, but fails to take into account many logical properties that such a formula should have (Base Runs addresses several of these problems, but while less flawed, is also not perfect). (Basic) RC underweights walks and overweights all types of hits but particularly extra base hits. As a team scoring model, it is inappropriate for application to individual players, and because of the design shortcomings, can boomerang out of control for extreme environments.

Another OTS-based method comes from David Smyth, who has advocated the use of OTS*34 as a quick estimator for runs/game. You may look at that and wonder how multiplying by a constant helps anything. First, we have to recognize what estimated unit OTS is expressed in. As you can see, RC = OTS*AB; so OTS is an estimate of runs/at bat.
So Smyth's formula converts runs/at bat to runs/game, and is essentially assuming 34 at bats/game. The average major league game does have something around 34 at bats (for the data I have used throughout this series, the average is within .15 of 34). Of course, we know that the number of at bats any team gets will depend on their own BA and OBA. By holding AB/game constant, the formula is attempting to counterbalance the fact that OBA*SLG goes overboard in predicting the run creation rate of good teams. When you apply the R/G estimate to actual major league teams by assuming 25.2 outs/game, the RMSE is 25.25 (I used OBA*SLG*1.36*(AB - H)). This is actually more accurate than Basic RC (OBA*SLG*AB), which comes in at 26.09, and presumably the benefits are greater when applied to extreme players.

Another twist on the idea of multiplying OBA and SLG is a method posted by "dq" on the Inside the Book blog. In it, OBA and SLG are each raised to powers, with the SLG power being dependent on OBA. This is designed to alleviate the issues caused by applying OBA*SLG to extremely high offense situations. OTSE, for "Onbase Time Slugging, Exponential", is defined as:

OTSE = PA*OBA^.85*SLG^(1-OBA/2)*.652

For the data we've been working with here, a multiplier of .668 will be used, since our OBA does not include HB or SF. This will still throw off the entire equation a bit, though, since the OBA version we are using is not the same as the one it was designed to work with.

Throughout this series, I have not discussed what happens at extreme levels of performance too much. Do not think for a minute that this is because I feel that the extremes are unimportant; I am always concerned about theoretical accuracy in addition to empirical accuracy. However, in the case of the cruder metrics being considered (OPS, OPS+, OTS, etc.), we can see their flaws even at normal levels of offensive performance. It is the better constructed metrics for which a more thorough investigation is warranted.
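Since OTS is runs per at bat, the estimators above chain together directly from a raw batting line. The sketch below computes Basic RC, Smyth's OTS*34 rate, and OTSE with the .668 multiplier; the team line is made up for illustration, and the finite-difference check at the end is my own way of probing OTSE's implied event weights, not the post's derivation:

```python
def ots_estimators(singles, doubles, triples, homers, walks, ab):
    """OBA*SLG-family run estimators from the post, using its simplified
    OBA = (H + W)/(AB + W) convention and the .668 OTSE multiplier quoted
    for that OBA version."""
    h = singles + doubles + triples + homers
    tb = singles + 2 * doubles + 3 * triples + 4 * homers
    pa = ab + walks
    oba = (h + walks) / pa
    slg = tb / ab
    ots = oba * slg                      # estimated runs per at bat
    return {
        "RC": ots * ab,                  # Bill James Basic RC (runs)
        "R/G": ots * 34,                 # Smyth's quick runs/game estimate
        "OTSE": pa * oba ** 0.85 * slg ** (1 - oba / 2) * 0.668,
    }

def otse_delta(event_index, line):
    """Marginal OTSE change from one extra event. Indices follow the tuple
    (1B, 2B, 3B, HR, W, AB); bumping index 5 alone models an extra out."""
    base = ots_estimators(*line)["OTSE"]
    bumped = list(line)
    bumped[event_index] += 1
    if event_index < 4:                  # a hit also consumes an at bat
        bumped[5] += 1
    return ots_estimators(*bumped)["OTSE"] - base

# Illustrative (made-up) team season line: 950 1B, 280 2B, 30 3B, 140 HR,
# 550 W, 5500 AB.
line = (950, 280, 30, 140, 550, 5500)
est = ots_estimators(*line)
weights = [otse_delta(i, line) for i in range(6)]
```

By construction the R/G figure is just RC/AB times 34, and the finite-difference weights should show the ordering 1B < 2B < 3B < HR with a negative value for the out, mirroring the linear-weight behavior discussed in the post.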
Of course, for a formula like OTSE, it is at the extremes where it shows its superiority. As shown in the link above, OTSE matches our expectations for linear weights in extreme environments better than the more simple OBA/SLG combinations. However, when used with average teams using OBA = (H + W)/(AB + W), it leaves a little bit to be desired on the linear weight level:

LW = .52S + .78D + 1.05T + 1.31HR + .36W - .103(AB - H)

The big issue is that homers are underweighted by around a tenth of a run. However, OTSE is a lot more robust than other OBA/SLG combinations. With that being said, though, is it really worth it? The appeal of OBA/SLG combinations is rooted in the idea that the two stats are readily available. But when you have to resort to two non-linear operations, I think that any claim of "simplicity" is dead on arrival. If you do not consider OBA and SLG to be known, and have to figure them and then plug into OTSE, it is far, far more complex than Base Runs. Furthermore, OTSE is not applicable to individual batters for the same reasons that multiplicative run estimators like RC and BsR are not. So you may as well just figure Base Runs for the team and be done with it. I fail to see any practical application for which you would want to use OTSE.

I did not give the OTSE linear weight formula before the results that it generates, because I was hoping that someone would still be around to read those. The formula for the OTSE weights will scare everyone that's left off:

LW = .668*((OBA*PA*.81*SLG^(-.19)*dSLG + SLG^.81*(OBA*p + PA*dOBA))

5 comments:

1. david smyth February 12, 2008 at 8:03 PM

Analysis of why OTS*34 works as well as it does. As Tango would say, it works 'by accident'. I've also noticed casually that the 34 number seems to be gradually decreasing, to 33 point something currently. I suppose this reflects primarily fielding improvement, and secondarily more conservative baserunning.

2.
David, I don't know if you'll see this comment, but I was wondering if you'd like to write an article about your Base Wins methodology. I think that your thoughts about the theoretical value of runs and outs being equal to the reciprocal of their frequencies per game are interesting. Most of the approaches for run-win conversions use the marginal value of runs and thus BsW is unique and worthy of more exposure. If you wanted to write an article, I'd be happy to post it here, or it could be for Tango's wiki, etc.

re: OTS*34, another factor could be the increasing frequency of hit batters, if one uses the full version of OBA in the formula. The RC relationship OBA*SLG*AB does not include HB, so even though there are more at bats/game than in the past, the HB factor is working in the other direction. And sorry for taking so long to acknowledge your comment.

3. The whole point of OTSE is that it is not linear. Scoring is a combination of getting on base (OBP) and advancing the batters (SLG). You would use it because it is most accurate.

4. It's the most accurate only among measures in which you handcuff yourself by only using OBA and SLG. You and I will probably not agree, which is fine, but my position is the same now as it was when I wrote the above. The OTSE equation is not simple, it's not directly applicable to players, and it's not the most accurate if you're allowed to consider inputs other than OBA and SLG.

5.
{"url":"http://walksaber.blogspot.com/2008/02/beating-dead-horse-pt-3.html","timestamp":"2014-04-18T10:36:09Z","content_type":null,"content_length":"109176","record_id":"<urn:uuid:741ff127-4bd2-4935-965d-e9d0810f9d35>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00596-ip-10-147-4-33.ec2.internal.warc.gz"}
Theuwissen, Solid-State Imaging with Charge-Coupled Devices

Cited by 2 (0 self): The reproduction of an edge and a high frequency bar pattern is examined for image sensors employing two different color sampling technologies: Bayer RGB color filter array, and Foveon X3 solid state full color. Simulations correlate well with actual images captured using sensors representing both technologies. Color aliasing artifacts in the Bayer mosaic case depend on whether an anti-aliasing optical lowpass filter is used, and are severe without such a filter. For both the edge image and the bar pattern, the Foveon X3 direct image sensor generates few or no color aliasing artifacts associated with sampling.

Cited by 1 (1 self): Most of today's video and digital cameras use CCD image sensors, where the electric charge collected by the photodetector array during exposure time is serially shifted out of the sensor chip, resulting in slow readout speed and high power consumption. Recently developed CMOS image sensors, by comparison, are read out non-destructively and in a manner similar to a digital memory and can thus be operated at very high frame rates.
A CMOS image sensor can also be integrated with other camera functions on the same chip, ultimately leading to a single-chip digital camera with very compact size, low power consumption and additional functionality. CMOS image sensors, however, generally suffer from lower dynamic range than CCDs due to their high read noise and non-uniformity. Moreover, as sensor design follows CMOS technology scaling, well capacity will continue to decrease, eventually resulting in unacceptably low SNR.

2013: "... the sensor raw data ..."

IEEE Journal of Solid-State Circuits, 2001: Temporal noise sets the fundamental limit on image sensor performance, especially under low illumination and in video applications. In a CCD image sensor, temporal noise is primarily due to the photodetector shot noise and the output amplifier thermal and 1/f noise. CMOS image sensors suffer from higher noise than CCDs due to the additional pixel and column amplifier transistor thermal and 1/f noise. Noise analysis is further complicated by the time-varying circuit models, the fact that the reset transistor operates in subthreshold during reset, and the nonlinearity of the charge to voltage conversion, which is becoming more pronounced as CMOS technology scales. The paper presents a detailed and rigorous analysis of temporal noise due to thermal and shot noise sources in CMOS active pixel sensor (APS) that takes into consideration these complicating factors. Performing time-domain analysis, instead of the more traditional frequency-domain analysis, we find that the reset noise power du...
Research partially supported under the Programmable Digital Camera Project by Agilent, Canon, HP, Interval Research, and Kodak. Keywords: 1/f noise, phase noise, nonstationary noise model, time domain noise analysis, CMOS image sensors, periodically switched circuits, ring oscillators.
Abstract—Analysis of 1/f noise in MOSFET circuits is typically performed in the frequency domain using the standard stationary 1/f noise model. Recent experimental results, however, have shown that the estimates using this model can be quite inaccurate, especially for switched circuits. In the case of a periodically switched transistor, measured 1/f noise power spectral density (psd) was shown to be significantly lower than the estimate using the standard 1/f noise model. For a ring oscillator, measured 1/f-induced phase noise psd was shown to be significantly lower than the estimate using the standard 1/f noise model. For a source follower reset circuit, measured 1/f noise power was also shown to be lower than the estimate using the standard 1/f model. In analyzing noise in the follower reset circuit using frequency-domain analysis, a low cutoff frequency that is inversely proportional to the circuit on-time is assumed. The choice of this low cutoff frequency is quite arbitrary and can cause significant inaccuracy in estimating noise power. Moreover, during reset, the circuit is not in steady state, and thus frequency-domain analysis does not apply. This paper proposes a nonstationary extension of the standard 1/f noise model, which allows us to analyze 1/f noise in switched MOSFET circuits more accurately. Using our model, we analyze noise for the three aforementioned switched circuit examples and obtain results that are consistent with the reported measurements. Index Terms—1/f noise, CMOS image sensor, nonstationary noise model, periodically switched circuits, phase noise, ring oscillator, time-domain noise analysis.
An Application of Monte-Carlo-Based Sensitivity Analysis on the Overlap in Discriminant Analysis
Journal of Applied Mathematics, Volume 2012 (2012), Article ID 315868, 14 pages
Research Article
Department of Mathematics, Science and Research Branch, Islamic Azad University, Tehran, Iran
Received 22 June 2012; Revised 21 September 2012; Accepted 25 September 2012
Academic Editor: George Jaiani
Copyright © 2012 S. Razmyan and F. Hosseinzadeh Lotfi. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Discriminant analysis (DA) estimates a discriminant function by minimizing group misclassification in order to predict the group membership of newly sampled data. A major source of misclassification in DA is the overlapping of groups. The uncertainty in the input variables and model parameters needs to be properly characterized in decision making. This study combines DEA-DA with a sensitivity analysis approach to assess the influence of banks' variables on the overall variance in the overlap in DA, in order to determine which variables are most significant. A Monte-Carlo-based sensitivity analysis is used to compute the set of first-order sensitivity indices of the variables, estimating the contribution of each uncertain variable. The results show that the uncertainties in the loans granted and the different deposit variables are more significant in decision making than uncertainties in the other banks' variables.
1. Introduction
The classification problem of assigning observations to one of several different groups plays an important role in decision making.
When observations are restricted to one of two groups, binary classification has wide applicability in business environments. Discriminant analysis (DA) is a classification method that can distinguish the group membership of a new observation. A group of observations whose memberships have already been identified is used to estimate a discriminant function under some criterion, such as the minimization of misclassification. A new sample is then classified into one of the groups based on the estimated function [1]. Mangasarian [2] showed that linear programming (LP) could be used to determine separating hyperplanes, that is, to construct a linear discriminant function when two sets of observations are linearly separable. Freed and Glover [3] and Hand [4] proposed LP methods for generating a linear discriminant function when the sets of observations are not necessarily linearly separable, using objectives such as minimization of the sum of deviations (MSD) or maximization of the minimum deviation (MMD) of misclassified observations from the separating hyperplane. Models based on the goal programming (GP) extension of LP have since been proposed under various criteria, such as minimizing the maximum deviation, maximizing the minimum deviation, minimizing the sum of interior deviations, minimizing the sum of deviations, minimizing the number of misclassified observations, minimizing external deviations, maximizing internal deviations, maximizing the ratio of internal to external deviations, and hybrid models, each with its own advantages and deficiencies [5–9]. In DA, LP and other mathematical programming (MP) based approaches are nonparametric and more flexible than statistical methods [6, 10]. Retzlaff-Roberts [11, 12] and Tofallis [13] proposed the use of a DEA-ratio model for DA.
Sueyoshi [14], using a data envelopment analysis (DEA) additive model, described a goal programming formulation of DA in which the proposed model is more directly linked to minimizing the sum of deviations from the separating hyperplane; this method was named DEA-DA to distinguish it from other DA and DEA approaches. The original GP version of DEA-DA could not deal with negative data, so Sueyoshi [15] extended DEA-DA to overcome this deficiency. This approach was designed to minimize the total distance of misclassified observations and was formulated as a two-stage GP problem. The number of misclassifications can, however, also serve as the measure of misclassification, with binary variables indicating whether observations are correctly or incorrectly classified. Bajgier and Hill [16] proposed a mixed integer programming (MIP) model that included the number of misclassifications in the objective function for the two-group discriminant problem. Gehrlein [17] and Wilson [18] introduced MIP approaches for minimizing the number of misclassified observations in multigroup problems. Chang and Kuo [19] proposed a procedure based on the benchmarking model of DEA to solve two-group problems. Sueyoshi [20] reformulated DEA-DA as a MIP problem to minimize the total number of misclassified observations. When an overlap between two groups is not a serious problem, dropping the first stage of the two-stage MIP approach simplifies the estimation process [21]. Sensitivity analysis provides an understanding of how the model output is affected by changes in the inputs; it can therefore help increase confidence in the model and its predictions. Sensitivity analysis can be used to decide whether input estimates are sufficiently precise to give reliable predictions, or to identify model parameters that can be eliminated. Two classes of sensitivity analysis are commonly distinguished [22].
Local Sensitivity Analysis. Studies how small variations of the inputs around a given value change the value of the output. This approach is practical when the input-output relationship can be assumed linear around a baseline.
Global Sensitivity Analysis. Takes into account the entire variation range of the inputs and aims to apportion the output uncertainty among the inputs' uncertainties. It quantifies the output uncertainty due to the uncertainty in the input parameters, with the inputs typically described by probability distribution functions that cover the factors' ranges of existence. Local methods are less helpful when sensitivity analysis is used to compare the effect of various factors on the output, as in this case the relative uncertainty of each input should be weighted. A global sensitivity analysis technique thus incorporates the influence of the whole range of variation and the form of the probability density function of each input. Variance-based methods can be considered quantitative methods for global sensitivity analysis. In this study, the Sobol' decomposition in the framework of Monte Carlo simulations (MCS) [22], which belongs to the family of quantitative methods for global sensitivity analysis, is applied to study the effect of the variability in DA due to the uncertainty in the variables. The results of the sensitivity analysis can determine which of the variables have a more dominant influence on the uncertainty in the model output. This paper is organized as follows: Section 2 briefly introduces the DEA-DA model; Section 3 describes the sensitivity analysis based on a Monte-Carlo simulation; Section 4 contains an example; and the conclusion is provided in Section 5.
2. Data Envelopment Analysis-Discriminant Analysis (DEA-DA)
The two-stage MIP approach [20] is used in this study to describe DEA-DA.
We considered two groups ( and ) for which the sum of the two groups has observations . Each observation has independent factors , denoted by . It is necessary to identify the group membership of each observation before the computation. In the two-stage approach, the computation process consists of classification and overlap identification, followed by handling of the overlap. The first stage is formulated as follows [20]. Stage 1. Here "" indicates a discriminant score for group classification and "" indicates the size of an overlap between the two groups. Let (=) and and be an optimal solution of model (2.1). Then, the original data set () is classified into the following subsets , where Then, we determine that observations in belong to and the observations of belong to because their location is identified from model (2.1). The two subsets and consist of the observations that have not yet been classified in the first stage. Stage 2. If , then the existence of an overlap is identified in the first stage. In this stage, we reclassify all of the observations belonging to the overlap because the group membership of these observations is still undetermined. The second stage is reformulated as follows [20]: Here, the binary variable () counts the number of observations classified incorrectly. The objective function minimizes the number of such misclassifications. The weight () identifies the relative importance between and in terms of the number of observations. In the presented model (2.3), it is necessary to prescribe a large number () and a small number (). The equation indicates that some pairs avoid the occurrence of and . After obtaining an optimal solution on and , the second stage classifies observations in the overlap as follows: if , then the th observation belongs to , or if , then it belongs to . Thus, all of the observations in are classified into or at the end of the second stage. 3.
Sensitivity Analysis Based on Monte-Carlo Simulation (MCS)
Sensitivity analysis was created to deal with uncertainties in the input variables and model parameters [22]. The results of a sensitivity analysis can determine which of the input parameters have a more dominant influence on the uncertainty in the model output [23]. A variance-based sensitivity analysis, which addresses the inverse problem of attributing the output variance to uncertainty in the input, quantifies the contribution that each input factor makes to the variance in the output quantity of interest. A global sensitivity analysis of complex numerical models can be performed by calculating variance-based importance measures of the input variables, such as the Sobol' indices. These indices are calculated by evaluating a multidimensional integral using a Monte-Carlo technique. This approach allows analyzing the influence of different variables and their subsets, the structure of , and so forth. It is assumed that a mathematical model having input parameters gathered in an input vector with a joint probability density function (pdf) can be presented as a model function: where . Because the variables are affected by several kinds of heterogeneous uncertainties that reflect imperfect knowledge of the system, it is assumed that the input variables are independent and that the probability density function is known, even if they are not actually random variables. The Sobol' sensitivity method explores the multidimensional space of the unknown input parameters with a certain number of MC samples. The sensitivity indices are generated by a decomposition of the model function in an -dimensional factor space into summands of increasing dimensionality [22]: where the constant is the mean value of the function, and the integral of each summand over any of its independent variables is zero.
Due to this property, the summands are orthogonal to each other in the following form: The sensitivity index, , represents the fractional contribution of a given factor to the variance in a given output variable, . To calculate the sensitivity indices, the total variance, , in the model output, , is apportioned to all of the input factors, ,…, , as follows: By integrating the square of (2.2) and with (2.3), it is possible to decompose the total variance (3.1) as follows [24]: where , and so on. is referred to as the variance of the conditional expectation and is the variance, over all of the values of , of the expectation of given that has a fixed value. This is an intuitive measure of the sensitivity of to a factor , as it measures the amount by which varies with the value of whilst averaging over the remaining factors. Following the above definition for the partial variances, the sensitivity indices are defined as Higher order indices can be calculated with a similar approach. With regard to (3.2), the decomposition of the sensitivity indices can be written in the following form: The Sobol' indices are usually computed with a MC simulation. The mean value and the total and partial variances can be estimated from samples as follows [22]: In these equations, is a sampled variable in , and the superscripts (1) and (2) indicate that two different samples are generated and mixed.
4. Illustrative Examples
Classification methods are widely used in economics and finance. They are useful for classifying sectors into groups based on their performance and for predicting the group memberships of new firms. Most researchers use classification methods to classify firms based on performance assessment. DA is the classification method used in this study. The purpose of the first stage in DEA-DA is to determine whether there is an overlap between the two groups. The existence of an overlap is the main source of misclassification in DA.
By identifying the overlap between two groups, it is possible to increase the number of observations classified correctly. If there is no overlap, any DA method may produce an almost perfect classification. However, if there is an overlap, an additional computation process is needed to deal with it [20]. There is thus a tradeoff between computational effort/time and a high level of classification capability. Misclassification can result as a consequence of an intersection between two groups. Many researchers have proposed approaches that try to exploit the identification of the minimized overlap of two groups for risk management on the classification problem [19, 20, 25, 26]. Given the importance of the banking sector for the whole economy in general and for the financial system in particular, in this section we present an application of the sensitivity analysis to the overlap, , on data from a commercial bank of Iran. This assertion is illustrated numerically, in two different examples, for bank branches that have more than 20 and more than 30 personnel, respectively. If we wish to take into account the inherent randomness in what the criteria might experience, we have to bring a stochastic characterization into play. The stochastic efficiency assessment of banking branches normally requires performing a set of analyses on DMUs with a suite of variables as criteria. First, we use the additive model to discriminate banking branches. Most models need to examine both a DEA efficiency score and slacks, or an efficiency score measured by them, depending on input-based or output-based measurement. The additive model [27] aggregates input-oriented and output-oriented measures to produce the efficiency score. Consequently, the efficiency status is more easily determined by the additive model than by the radial model. In the two examples, the real data sets consist of 78 and 18 banking branches, respectively.
This study selects 31 and 8 branches as inefficient branches, and 47 and 10 branches as efficient branches, in examples 1 and 2, respectively, as documented in Tables 1 and 2. For determining the classifications based on the additive model, three variables (personnel, payable interest, and non-performing loans) are considered as inputs. Nine variables (loans granted, long-term deposit, current deposit, non-benefit deposit, short-term deposit, received interest, and received fee) are assumed as outputs. Then, for the sensitivity analysis in DEA-DA, each observation is modeled as a random parameter as follows [28]: where and are the mean value and the coefficient of variation (COV) of the random parameter, respectively, and is a generated random parameter with zero mean that is used in the MC simulation. The determination of the bank branches' parameters carries a high degree of uncertainty, and the specification of these parameters can involve a significant degree of expert judgment. Additionally, the COV of these variables plays an important role in the variation of the efficiency. Here, the COV of all of the parameters is assumed to equal 0.05. To compute the sensitivity indices, the Sobol' sampling scheme has been used. Sobol' sampling vectors are quasi-random sequences: sequences of points that have no intrinsic random properties. In this study, a sensitivity analysis is applied to assess the influence of the banks' variables on the overlap. After one hundred estimates with a sample size of 5000, convergence was seen in the first-order Sobol' indices derived by Sobol' sampling of the uniform criteria spaces for the different banks' variables. The sensitivity indices, , are depicted in Figures 1 and 2. These figures present the comparison of the first-order indices of the banks' variables. The total fraction of the variance captured by the first-order functions is approximately 99%.
This indicates that, for this problem, higher order contributions to the Sobol' series are relatively small. The overall variance in the banks' efficiency is affected by the variances in each of the random variables. Figure 1 indicates that 54% and 33% of the overall variance in the overlap in DEA-DA is attributable to the variance in loans granted and in the different deposits, respectively, while the personnel, received interest, fee, and non-performing loans variables have little effect. Figure 2 likewise indicates that uncertainties in loans granted and in the different deposits are the dominant variables in the overlap in DEA-DA.
5. Conclusions
Due to the inherent complexity and randomness of the data in DEA and problems involving unpredictable or stochastic variables, a probabilistic analysis may be the most rational method of analysis. In a probabilistic-based approach, the results open the door to understanding the appropriate estimation of the deciding variables in DEA. For the overlap in DA, the analytical results show that the loans granted and the different deposit variables are the main sources of uncertainty, while the other variables have a relatively small effect. The main advantage of the sensitivity analysis approach used here is that it provides a quantified evaluation of the influence of individual variables in DA, and the results may be used for decision making.
The authors would like to thank the anonymous reviewers for comments which helped to improve the paper.
References
1. T. Sueyoshi and M. Goto, “Can R&D expenditure avoid corporate bankruptcy? Comparison between Japanese machinery and electric equipment industries using DEA-discriminant analysis,” European Journal of Operational Research, vol. 196, no. 1, pp. 289–311, 2009.
2. O. L. Mangasarian, “Linear and nonlinear separation of patterns by linear programming,” Operations Research, vol. 13, pp. 444–452, 1965.
3. N. Freed and F.
Glover, “A linear programming approach to the discriminant problem,” Decision Sciences, vol. 12, pp. 68–74, 1981.
4. D. J. Hand, Discrimination and Classification, John Wiley & Sons, Chichester, UK, 1981.
5. N. Freed and F. Glover, “Simple but powerful goal programming models for discriminant problems,” European Journal of Operational Research, vol. 7, no. 1, pp. 44–60, 1981.
6. N. Freed and F. Glover, “Evaluating alternative linear programming models to solve the two-group discriminant problem,” Decision Sciences, vol. 17, pp. 151–162, 1986.
7. W. J. Banks and P. L. Abad, “An efficient optimal solution algorithm for the classification problem,” Decision Sciences, vol. 22, pp. 1008–1023, 1991.
8. F. Glover, “Improved linear programming models for discriminant analysis,” Decision Sciences, vol. 21, pp. 771–785, 1990.
9. D. L. Retzlaff-Roberts, “A ratio model for discriminant analysis using linear programming,” European Journal of Operational Research, vol. 94, no. 1, pp. 112–121, 1996.
10. E. A. Joachimsthaler and A. Stam, “Four approaches to the classification problem in discriminant analysis: an experimental study,” Decision Sciences, vol. 19, pp. 322–333, 1988.
11. D. L. Retzlaff-Roberts, “Relating discriminant analysis and data envelopment analysis to one another,” Computers and Operations Research, vol. 23, no. 4, pp. 311–322, 1996.
12. D. L. Retzlaff-Roberts, “A ratio model for discriminant analysis using linear programming,” European Journal of Operational Research, vol. 94, pp. 112–121, 1996.
13. C. Tofallis, “Improving discernment in DEA using profiling,” Omega, vol. 24, no. 3, pp. 361–364, 1996.
14. T. Sueyoshi, “DEA-discriminant analysis in the view of goal programming,” European Journal of Operational Research, vol. 115, no. 3, pp. 564–582, 1999.
15. T. Sueyoshi, “Extended DEA-discriminant analysis,” European Journal of Operational Research, vol. 131, no. 2, pp. 324–351, 2001.
16. S. M. Bajgier and A. V. Hill, “An experimental comparison of statistical and linear programming approaches to the discriminant problem,” Decision Sciences, vol. 13, pp. 604–618, 1982.
17. W. V. Gehrlein, “General mathematical programming formulations for the statistical classification problem,” Operations Research Letters, vol. 5, no. 6, pp. 299–304, 1986.
18. J. M. Wilson, “Integer programming formulations of statistical classification problems,” Omega, vol. 24, no. 6, pp. 681–688, 1996.
19. D. S. Chang and Y. C. Kuo, “An approach for the two-group discriminant analysis: an application of DEA,” Mathematical and Computer Modelling, vol. 47, no. 9-10, pp. 970–981, 2008.
20. T. Sueyoshi, “Mixed integer programming approach of extended DEA-discriminant analysis,” European Journal of Operational Research, vol. 152, no. 1, pp. 45–55, 2004.
21. T. Sueyoshi, “Financial ratio analysis of the electric power industry,” Asia-Pacific Journal of Operational Research, vol. 22, pp. 349–376, 2005.
22. A. Saltelli, K. Chan, and E. M. Scott, Sensitivity Analysis, Wiley Series in Probability and Statistics, John Wiley & Sons, Chichester, UK, 2000.
23. A. Saltelli, M. Ratto, T. Andres, F. Campolongo, J. Cariboni, and F. Gatelli, Global Sensitivity Analysis: Guiding the Worth of Scientific Models, John Wiley and Sons, New York, NY, USA, 2007.
24. I. M. Sobol', “Sensitivity estimates for nonlinear mathematical models,” Mathematical Modeling and Computational Experiment, vol. 1, no. 4, pp. 496–515, 1993.
25. J. J.
Glen, “A comparison of standard and two-stage mathematical programming discriminant analysis methods,” European Journal of Operational Research, vol. 171, no. 2, pp. 496–515, 2006.
26. P. C. Pendharkar, “A hybrid radial basis function and data envelopment analysis neural network for classification,” Computers & Operations Research, vol. 38, no. 1, pp. 256–266, 2011.
27. A. Charnes, W. W. Cooper, B. Golany, L. Seiford, and J. Stutz, “Foundations of data envelopment analysis for Pareto-Koopmans efficient empirical production functions,” Journal of Econometrics, vol. 30, no. 1-2, pp. 91–107, 1985.
28. A. Yazdani and T. Takada, “Probabilistic study of the influence of ground motion variables on response spectra,” Structural Engineering & Mechanics, vol. 39, 2011.
Three point charges lie along a straight line as shown in the figure below, where µC, and µC. The separation distances are = 3.00 cm and = 2.00 cm. Calculate the magnitude and direction of the net electric force on each of the charges.
Best answer
To calculate the net electric force on each of the charges, this is what you do. For Part A, the force acting on charge 1 is F1 = F21 + F31 (in other words, the force acting on charge 1 equals the force of charge 2 on charge 1 plus the force of charge 3 on charge 1). After establishing this, you use the Coulomb force equation, F = (ke * q1 * q2) / r^2, where ke = 8.988*10^9 N*m^2/C^2 and r is the distance between the two charges. Do not forget to first convert the units of the charge and distance to the proper SI units of coulombs and meters. The direction of each pairwise force can be determined from the diagram: if the two charges have the same sign, the force is repulsive and points away from the other charge; if they have opposite signs, the force is attractive and points toward the other charge.
HELP....mexCallMATLAB output assignment to variable??? I've spent a week on the following problem and I can't for the life of me figure it out! I'm going to be as brief as possible with the code and chop out irrelevant lines but it should be clear as to my problem. For starters, I'm using Matlab in combination with C, which communicates via mex files. Without further ado...
void mexFunction(int nlhs, mxArray *plhs[], int nrhs, const mxArray *prhs[])
static double *U;
plhs[4] = mxCreateNumericArray(2,dims,mxDOUBLE_CLASS,mxREAL);
U = (double*)mxGetPr(plhs[4]);
/* C code which solves for "U" based on a number of other input variables*/
/* C code which solves for "U" based on a number of other input variables*/
After execution, everything works fine and I have the value for the derivative of "U". I then wanted to compare solvers so I'm swapping out the "solve(U)" for a Matlab function which I call via "mexCallMATLAB". Here is where I get lost (Again I removed irrelevant variables)
void mexFunction(int nlhs, mxArray *plhs[], int nrhs, const mxArray *prhs[])
static double *U;
plhs[4] = mxCreateNumericArray(2,dims,mxDOUBLE_CLASS,mxREAL);
U = (double*)mxGetPr(plhs[4]);
/* Call MATLAB solver */
mxArray *Uin[8],*Uout[2];
Uin[0] = mxCreateNumericArray(2,dims,mxSINGLE_CLASS,mxREAL);
memcpy(mxGetPr(Uin[0]),(some variable),m*n*sizeof(float));
yes there are 8 inputs...I just removed for simplicity
I then check the results of "Uout" with the following:
Everything works out great, but the "C" code that later calls on the variable "U" to find its derivative does not work.
Is there an easier way where I can assign the output of "my_matlab_solver" directly to "U" without having to make a mxArray for the output? This whole MEX thing seems a lot more complicated than it needs to be. Thanks for your help!
EDIT: I'm going to be very blunt. I have a variable called "U" which is defined as follows:
U = (double*)mxGetPr(plhs[4]);
How do I assign the output from this mexCallMATLAB to "U"? I tried this:
plhs[4] = Uout[0];
but it does not work. Don't worry about the rest of the code, it's fine. My problem is strictly related to assigning the output of mexCallMATLAB to a variable defined as "U" is above.
5 Comments
@James Tursa- That is not the issue. Everything works fine there. See my edit above to help zero in on the problem. Thanks for your help!
Yep ... I was misled by the mxGetPr (returns a double *) and failed to notice the mxSINGLE_CLASS in the construction.
You cannot and should not assign the output of the Matlab call to U. The output of the Matlab call is an mxArray variable and U is a pointer to a double array. Perhaps you want to obtain another pointer to the data of the output. It seems like the concept of the mxArray variables and the pointers to their contents is not clear to you. It is hard for me to give advice on the code, because the code is posted in parts and distributed over several comments and sections. I do not think that the rest of the code is fine, because the confusion with the mxArray plhs[4] and pointers to data appears repeatedly.
3 Answers
Edited by James Tursa on 26 May 2013
Accepted answer
Try this. It works for me with some simple test code for FILLAB, my_matlab_solver, and plot_variable. I added some comments where you have memory leaks and type issues, and put in some code for variable class and size checks.
I added some comments where you have memory leaks and type issues, and put in some code for variable class and size checks. /* Did you include string.h? (for memcpy) */ void mexFunction(int nlhs, mxArray *plhs[], int nrhs, const mxArray *prhs[]) static double *A,*B; mwSize dims[2] = {5,10}; // Use mwSize here, not int. static double *U,*V; mxArray *APPLE[1]; mxArray *Uin[2],*Uout[2]; if( nlhs < 2 ) { mexErrMsgTxt("Not enough outputs."); A = mxGetPr(mxCreateNumericArray(2,dims,mxDOUBLE_CLASS,mxREAL)); // MEMORY LEAK! B = mxGetPr(mxCreateNumericArray(2,dims,mxDOUBLE_CLASS,mxREAL)); // MEMORY LEAK! // In the above two lines, you lose the pointers to the mxArrays! Bad practice! /* C code which fills A and B */ Uin[0] = mxCreateNumericArray(2,dims,mxDOUBLE_CLASS,mxREAL); Uin[1] = mxCreateNumericArray(2,dims,mxDOUBLE_CLASS,mxREAL); Uout[0] = mxCreateNumericArray(2,dims,mxDOUBLE_CLASS,mxREAL); // MEMORY LEAK! Uout[1] = mxCreateNumericArray(2,dims,mxDOUBLE_CLASS,mxREAL); // MEMORY LEAK! // In the above two lines, the mxArrays created are lost by the subsequent // call to mexCallMATLAB, which OVERWRITES the pointers in Uout[0] and Uout[1]. // This is not good programming practice! Just DELETE these two lines entirely! mexCallMATLAB(2,Uout,2,Uin,"my_matlab_solver"); // This CREATES new outputs in Uout. // Note, you could use plhs in the 2nd argument above and skipped the following // two lines assigning Uout to plhs. plhs[0] = Uout[0]; plhs[1] = Uout[1]; // Need nlhs >= 2 in order for this to work. U = mxGetPr(plhs[0]); V = mxGetPr(plhs[1]); // Need nlhs >= 2 in order for this to work. 
    APPLE[0] = mxCreateNumericArray(2,dims,mxDOUBLE_CLASS,mxREAL);
    if( !mxIsDouble(plhs[1]) || mxIsSparse(plhs[1]) || mxGetNumberOfElements(plhs[1]) < 50 ) {
        mexErrMsgTxt("plhs[1] is not full double class with 50 or more elements");
    }
    if( !mxIsDouble(plhs[0]) || mxIsSparse(plhs[0]) || mxGetNumberOfElements(plhs[0]) < 50 ) {
        mexErrMsgTxt("plhs[0] is not full double class with 50 or more elements");
    }
    memcpy(mxGetPr(APPLE[0]),V,5*10*sizeof(double));
    plot_variable(APPLE);
    /*APPLE[0] = plhs[1]; */
    plot_variable(APPLE);
    // Note, you can just use APPLE in the 3rd argument above. No need to use &APPLE[0].
}
1 Comment
OK...I found out my problem! Your insight to check the following helped me understand:
if( !mxIsDouble(Uout[1]) || mxIsSparse(Uout[1]) || mxGetNumberOfElements(Uout[1]) < 50 ) {
    mexErrMsgTxt("Uout[1] is not full double class with 50 or more elements");
Here was the problem: I was originally casting everything as a float or a mxSINGLE_CLASS, which worked fine up until I tried to assign another pointer to the output of the mexCallMATLAB, in this case "U" and "V". But in the call mexCallMATLAB to "my_matlab_solver", the output variables I was assigning to "Uout" (within the function) were actually doubles. This is why the following two statements weren't working:
U = mxGetPr(Uout[0]);
V = mxGetPr(Uout[1]);
I then went into the actual matlab script called "my_matlab_solver" and cast the outputs to a single, to be consistent with the rest of the code, and it worked. NOTE to self...be sure the matlab script uses the same casting as the C code! While this is apparently trivial now, it was not so before. Thanks for all your help James!!!!
My problem is strictly related to assigning the output of mexCallMATLAB to a variable defined as "U" above.

If Uout[0] is indeed assigned by the mexCallMATLAB call, then the plhs[4] = Uout[0] line should have worked, assuming that you do the U = mxGetPr(plhs[4]) line after assigning plhs[4]. In other words, this order should work:

plhs[4] = Uout[0];
U = mxGetPr(plhs[4]);

And, as Jan has already pointed out, you need to delete a couple of lines related to your first plhs[4] = etc to avoid a memory leak.

5 Comments

void mexFunction(int nlhs, mxArray *plhs[], int nrhs, const mxArray *prhs[])
{
    static double *A,*B;
    int dims[2] = {5,10};
    A = mxGetPr(mxCreateNumericArray(2,dims,mxDOUBLE_CLASS,mxREAL));
    B = mxGetPr(mxCreateNumericArray(2,dims,mxDOUBLE_CLASS,mxREAL));
    /* C code which fills A and B */
    mxArray *Uin[2],*Uout[2];
    Uin[0] = mxCreateNumericArray(2,dims,mxDOUBLE_CLASS,mxREAL);
    Uin[1] = mxCreateNumericArray(2,dims,mxDOUBLE_CLASS,mxREAL);
    Uout[0] = mxCreateNumericArray(2,dims,mxDOUBLE_CLASS,mxREAL);
    Uout[1] = mxCreateNumericArray(2,dims,mxDOUBLE_CLASS,mxREAL);
    mexCallMATLAB(2,Uout,2,Uin,"my_matlab_solver");
    plhs[0] = Uout[0];
    plhs[1] = Uout[1];
    static double *U,*V;
    U = mxGetPr(plhs[0]);
    V = mxGetPr(plhs[1]);
    mxArray *APPLE[1];
    APPLE[0] = mxCreateNumericArray(2,dims,mxDOUBLE_CLASS,mxREAL);
    memcpy(mxGetPr(APPLE[0]),V,5*10*sizeof(double));
    /*APPLE[0] = plhs[1]; */
}

In the code above, the plot I get is incorrect. If I interchange these two lines towards the end to look like this:

APPLE[0] = plhs[1];

the plot comes out correct. Something with the "memcpy"? What is the difference between these two:

V = mxGetPr(plhs[1]);
APPLE[0] = plhs[1];

The difference between the two above is as follows:

> V = mxGetPr(plhs[1]);

This gets the data pointer from the mxArray plhs[1], which must have a valid mxArray in it or the routine will probably crash MATLAB. This data pointer value is assigned to the variable V. I.e., this statement is just a pointer value assignment to a variable. The data itself is not examined or copied.
> memcpy(mxGetPr(APPLE[0]),V,5*10*sizeof(double));

This copies 50 double values from V into the data memory of the mxArray APPLE[0]. V must point to valid double memory of at least 50 elements and APPLE[0] must already be created to have at least 50 double elements of data or MATLAB will crash.

> APPLE[0] = plhs[1];

This copies the pointer value of plhs[1] into the pointer value of APPLE[0]. It doesn't do anything more. Nothing is checked, no attempt is made to examine the underlying mxArray, and no data is examined or copied. The statement is simply the assignment of a pointer value to a variable.

I do not understand what you want to achieve:

plhs[4] = mxCreateNumericArray(2,dims,mxDOUBLE_CLASS,mxREAL);
U = (double*)mxGetPr(plhs[4]);

Now U is the pointer to the data of the 5th output argument.

plhs[4] = Uout[0];

Now the 5th output is overwritten, which means a memory leak (which is caught automatically when the mex function is left, fortunately). But what does >>assign "Uout[0]" to "U"<< mean now? Do you want to get the pointer to the data of Uout[0] again?

U = mxGetPr(Uout[0]);

Btw., mxGetPr returns a double * already, such that you do not have to cast it.

2 Comments

U = (double*)mxGetPr(plhs[4]);

is a pointer to the 5th output argument of the function. Originally I would pass that argument into a C function and solve for U. The variable U could then be passed into other functions and everything works fine. Now I am replacing the C function that solves for U with a mexCallMATLAB to solve for U. I need to assign the output from the mexCallMATLAB to the variable U, which is defined in the first line of this comment. Not sure how to do that?

No, U is not a pointer to the 5th output. It points to the data of the 5th output, and this is an important difference.
Bell's theorem

From Wikiquote

Bell's theorem is a no-go theorem famous for drawing an important line in the sand between quantum mechanics (QM) and the world as we know it classically.

• Bell's theorem is the most profound discovery of science.
  - Henry P. Stapp, "Bell's Theorem and World Process", Nuovo Cimento, Vol. 29B, No. 2, p. 270 (1975).
• The gist of Bell's theorem is this: no local model of reality can explain the results of a particular experiment.
  - Nick Herbert, Quantum Reality - Beyond The New Physics, Chapter 11, The Einstein-Podolsky-Rosen Paradox, p. 199
• Bell himself managed to devise such a proof which rejects all models of reality possessing the property of "locality". This proof has since become known as Bell's theorem. It asserts that no local model of reality can underlie the quantum facts. Bell's theorem says that reality must be non-local.
  - Nick Herbert, Quantum Reality - Beyond The New Physics, Chapter 12, Bell's Interconnectedness Theorem, p. 212
• Physicists continue to debate whether Bell's theorem is airtight or not. However, the real question is not whether Bell can prove beyond doubt that reality is non-local, but whether the world is in fact non-local.
  - Nick Herbert, Quantum Reality - Beyond The New Physics, Chapter 13, The Future Of Quantum Reality, p. 238
• There's an interesting scientific principle that a wrong answer can be much more stimulating to the field than just sort of finding the answer that's in the back of the book. A wrong result gets people excited. Worried. Obviously, you don't really want that to be happening—it's OK for a theorist to come up with a speculative new theory that gets shot down, but experimentalists are supposed to be very careful and their error limits are supposed to be realistic.
Unfortunately, with this experiment, whenever you're looking for a stronger correlation, any kind of systematic error you can imagine typically weakens it and moves it toward the hidden-variable range. It was a hard experiment. In those days, at any rate, with the kind of equipment I had, and ... well, what can I say? I screwed up.
  - Michael Horne (physicist), as quoted by Louisa Gilder, in The Age of Entanglement, Vintage Books, 2008, p. 286. Quote regarding his wrong experimental results that implied that quantum mechanics yielded the wrong prediction regarding Bell's theorem.
• No physical theory of local hidden variables can ever reproduce all of the predictions of quantum mechanics.
  - C.B. Parker (1994). McGraw Hill Encyclopaedia of Physics (2nd ed.). McGraw Hill. p. 542. ISBN 0-07-051400-3.

External links

• Bell's Theorem by Abner Shimony (2004) in the Stanford Encyclopedia of Philosophy.
Calculus Joke

In high school, my friend Gabe showed this to the Calculus teacher, Mr. Sherry:

[image: the joke integral, ∫ d(evil) = evil]

Mr. Sherry glared for a moment, and then silently made the following change…

[image: the same integral with "+ C" appended]

18 thoughts on "Calculus Joke"

1. This reminds me of my favorite math joke.
Person A: What is the integral of one over cabin d cabin?
Person B: Log cabin! Ha ha ha!
Person A: No, actually it's a houseboat. You forgot to add the C.
□ Ooh, I like that one. Unlike the "devil" one, it actually makes punny use of the constant of integration. I spent a few minutes trying to think of another, but this is sadly my least-feeble:
Q: Why does cookie monster prefer indefinite integrals to definite ones?
A: Because +C is for +Cookie, and that's good enough for him.
□ Genuine lol!
2. A friend sent me the following "proof":
girls = time * money
and… time = money, so…
girls = (money)^2
and since… money = root(evil),
girls = (root(evil))^2
so… girls = evil.
I saw an error in his proof, and sent the following correction:
girls = time + money
time = money, so…
girls = 2(money)
and since… money = root(evil),
girls = 2(root(evil))
so… (1/2)girls = root(evil), for non-negative values of evil (and I'm pretty sure that evil is negative)
□ Hmm! Clever. Who says math isn't relevant to teenagers' lives? I wonder, though, whether either proof withstands unit analysis. After all, if girls = time * money, then girls are measured in "hour-dollars," which is an unusual unit. But if girls = time + money, then we've got to find a common unit for time and money, which seems equally difficult.
☆ Well, since time = money, then they could both be in dollars… or hours. And it still kinda works.
○ Mmm, true. There's something very American about measuring time in those units. ("C'mon, I've been waiting in this line for almost $30…")
3. Ah, math humor.
4. Three math professors walk into a bar. Two of them complain to each other that the level of education has come down so much lately.
The third wants to play a joke on them, excuses himself, walks to a waitress and says to her: "When you come to my table, and I ask you a question, please answer 'one third x to the third'". A few minutes later, the third professor asks for the bill, and says to the waitress: "Oh, by the way, do you know the integral of x squared?". She answers "one third x to the third". The two professors agree that, perhaps, the general base of knowledge is not so bad after all. The three professors exit the bar, and the waitress mumbles under her breath "…plus a constant…"
□ That's an awesome one.
□ Proof that math professors cheat on tests!
5. Ah… Devil and the C word :D
6. Hey Ben! I saw this joke, and thought it an incredible coincidence, because my math teacher was also named Mr. Sherry. After some investigating, though, I heard that you went to the Commonwealth school (I myself am class of 2012) – so this Sherry is the same Rob Sherry that I knew during my time there! (Our Gabes, however, were different.) Hi! I love your blog! I certainly hope you've shown it to Mr. Sherry. He's always been an amazing teacher, but I'm sure he can still empathize with a lot of this.
□ Hey Yonadav, thanks for reading! Always nice to hear from another commie. I haven't talked to Sherry since starting the blog, but I'll see if I can find his email address. I always loved his classes – it was a wild stroke of educational good fortune that I got to have him as a teacher 3 out of 4 years at Commonwealth.
7. I don't really get it, can you explain?
□ Probably easiest to just take an intro calculus course! (But just in case: Gabe's original joke depends on the fact that the integral of "d[variable]" is "[variable]" and he's chosen the rather silly variable name "evil." And Sherry's follow-up depends on the fact that students (including Gabe, here) always forget to add +C, which is necessary for this kind of indefinite integral.)
☆ and it's a deep blue C…
8.
it was so amazing, but only for mathematics students! do you accept me?
[SciPy-user] Problems with odeint

Fernando Perez fperez.net at gmail.com
Thu Jun 15 04:07:23 CDT 2006

On 6/9/06, Víctor Martínez-Moll <victor.martinez at uib.es> wrote:
> Hi all,
> I've been a SciLab user for some time and I'm evaluating SciPy as a
> development tool.
> The first thing I tried is to solve a simple second order differential
> equation using odeint(). The problem is that depending on the function I
> want to integrate I get nice results, but for most of them I get simply
> nothing or nonsense answers. It is not a problem of the function having a
> strange behaviour or having singularity points. For example if I try to
> solve:
> d2y/dt2 = 1-sin(y)
> either I get nothing or wrong solutions (the best thing I got was
> setting: hmin=0.01, atol=.001), while if I do about the same procedure in
> SciLab I get a nice and smooth set of curves. The strangest thing is
> that if I use exactly the same procedure to solve:
> d2y/dt2 = 1-y
> then I get the right solution, which seems to indicate that I'm doing
> the right thing (although of course I know I'm not, because I do not
> believe that odeint is not able to solve such a silly thing).
> I've only checked it with the last enthon distribution I found:
> enthon-python2.4-1.0.0.beta2.exe
> The simple procedure I wrote in Python and its equivalent in SciLab that
> does the right thing are:

I'm sorry, but I get basically the same results with both. I'm attaching a slightly modified version of the python script you wrote, which prints some debug information. On my system, this information reads:

In [7]: run odebug
numerix flag : Numeric
mpl version: 0.87.3
scipy version: 0.5.0.1940

I've attached a png of the resulting plot. I've never used scilab before, but just brute-pasting your example into a scilab window and exporting the resulting plot gave me the attached file.
From what I can see, both results look more or less consistent (I haven't done numerical accuracy checks, I'm just looking at the
-------------- next part --------------
A non-text attachment was scrubbed...
Name: odebug.py
Type: text/x-python
Size: 528 bytes
Desc: not available
Url : http://www.scipy.net/pipermail/scipy-user/attachments/20060615/901fcc95/odebug-0001.py
-------------- next part --------------
A non-text attachment was scrubbed...
Name: odeplot_scipy.png
Type: image/png
Size: 35304 bytes
Desc: not available
Url : http://www.scipy.net/pipermail/scipy-user/attachments/20060615/901fcc95/odeplot_scipy-0001.png
-------------- next part --------------
A non-text attachment was scrubbed...
Name: odeplot_scilab.png
Type: image/png
Size: 3438 bytes
Desc: not available
Url : http://www.scipy.net/pipermail/scipy-user/attachments/20060615/901fcc95/odeplot_scilab-0001.png
More information about the SciPy-user mailing list
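The attached odebug.py script is not reproduced in this archive, but a minimal sketch of the setup being discussed might look like the following; the time grid, initial conditions, and variable names here are assumptions, not the original script:

```python
import numpy as np
from scipy.integrate import odeint

def rhs(y, t):
    # Rewrite d2y/dt2 = 1 - sin(y) as a first-order system:
    # y[0] is the position, y[1] its time derivative.
    return [y[1], 1.0 - np.sin(y[0])]

t = np.linspace(0.0, 10.0, 201)
sol = odeint(rhs, [0.0, 0.0], t)  # columns: position, velocity
```

Since 1 - sin(y) is never negative, the velocity column should be non-decreasing along the trajectory; a run that returns nothing or a wildly oscillating curve would point to the kind of failure described in the original message.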
in particular in a sentence Example sentences for in particular Americans, in particular, are considered exceptional if they know another language. Natural selection had fallen out of favor, in particular over the matter of animal coloration. Americans have long been used to buying food products, meat in particular, in sterilized anonymous conditions. The vocabulary, in particular, is notably dissimilar. And the first two essays, in particular, struck a note of challenge to all the popular critics of the day. In particular, there was fraud and cozenage in the law, injustice and oppression. In particular, panic has returned to the credit markets, yet no new rescue plan is in sight. New science and technology are revolutionizing marine archaeology in particular. Students can identify and research local forest issues-in particular, threats to forests, such as fires and insect infestation. These days in particular, it's good to live in the adventure capital of the world. In particular, eutherians have fewer molars than metatherians. Scientists have reported similar webs in other parts of the world, the tropics in particular. In particular, think before driving with the windows open or using a car-top cargo box. Human migration affects the distribution of population and the size of the population in particular areas. Coal and nuclear power plants, in particular, draw many times more water. Cattle, in particular, have an importance that goes beyond meat production. In other ways, the gathering disappointed, in particular by the absence of some important stakeholders. They wondered in particular if the mere presence of a canine in the office might make people collaborate more effectively. In particular, they said that they were getting more energy out of the process than they put into it. But now the attention has switched from the developing to the developed world, and in particular the euro zone. 
In particular, they supply liquidity and price information that makes futures markets more efficient. In particular, sudden changes in its height made the job of flying safely above it a tricky one. Metals in particular experienced some of the biggest moves in decades. They found that the patterns of activity elicited by faces, in particular, were different in literate and illiterate brains. Two things in particular have compromised the accountants' vision of what is true and fair. In particular, they disrupt the immune systems of animals such as fish, cause hormonal imbalances and promote tumours. In particular, they do so in those regions of the brain that control behaviour. The latter in particular has been accused of stealing newspapers' content and undermining their attempts to charge for it. In particular, researchers have long debated whether the apes fight for land, or for females. In particular, they are working on a group of chemicals called chalcogenides. The turbulent economy has shaken lucrative news-stand sales in particular. In particular, earning a degree and marrying before having children can help someone climb to a higher rung. He has looked, in particular, at the interactions between zoo-held apes and local birds and small mammals. Investment bankers in particular have been sounding brighter, thanks to a healthy start to the year. In particular, the politicians decided against an outright ban on the use of great apes. In particular, they looked at the sizes of bones of the toes and heel. Also, think about the specificities of the department in particular. Host organizations, in particular, say they are better able to use volunteers with language skills. Your experience, in particular, does not speak for itself. For large universities in particular, the stakes are pretty high. Two terms in particular get college lawyers nervous. What concerns me are the comments from one of her students in particular. 
In particular, the iconic yellow-bordered cover shots that opened our eyes to new corners of the world. But one in particular is poised to rupture sooner than later. Three digital search topics in particular are converging in interesting, and foreboding, ways. In particular, they have contributed much to the development of sound boards. In particular, many investors placed their hopes in cellulosic ethanol. Several features in particular help to make endospores resistant to environmental stress. In particular it will examine the role-predicted to be a critical one-that water plays in the process. The face, in particular, appears to play a big role. In particular, they help muscle cells respond to insulin. There are many examples of experiments that were searching for something in particular and found something completely contrary. In particular, any virtual gravitons emitted by the singularity will be trapped inside that singularity. In particular, they tracked the positions of transposable elements. In particular they hardly ever had people come in to their house for social reasons, they were basically excluded. Intestinal worm eggs also are found on raw vegetables ascarides in particular. As there are situations where this behavior is acceptable, boxing and similar sports in particular. In particular, he cautions parents about the possible effects of cell phone radiation on children. Hence, the combination of over-eating and living a sedentary lifestyle is not a winning strategy for mammals in particular. Large companies, in particular, are increasing head count for environmental and sustainability roles. In particular, gas prices have come down slightly since peaking in the spring. Professors in particular are drawn from a rather narrow segment of the population. But more than mere solace is to be gained from reading good stories-short stories in particular. In particular, they need to stop taking secular taxpayer money. 
In particular, the consequences of going wrong-and all these systems go wrong sometimes-are rarely considered. What they really seem to have in mind is punishment-in particular, physical punishment. In this case, in particular, the white south made a giant mistake. He'd been sitting there, doing nothing in particular. It is there only because someone in particular particularly liked it. Fresh herbs, in particular, are my biggest weakness. Veronica, in particular, was full of ideas for the magazines. The talent agencies in particular are feeling the financial squeeze as jobs diminish along with packaging fees. In the hacker scene, in particular, there are quite a few extreme characters. Apparently there is still ample room for error-five errors, in particular. When you're stammering, in particular, your eyes close. In particular, two different mechanisms have been proposed: imitation and emulation. In particular, he examined a duct through which the silk flows before exiting the spider. In particular, three new techniques under development could be the answer should another pandemic occur. Axons in particular can travel spectacular distances to reach astonishingly precise targets. In particular, they focus on genes related to immune function and pigmentation. Children, in particular, grew up as perpetual lawbreakers. In particular, it can be a feedstock to make ammonia and urea, which are used to manufacture fertilizer. In particular, the old wood shows less variation in density within growth rings, researchers say. In particular, they looked at enamel, the tough covering that caps the teeth of humans and other vertebrates. With lantern consciousness you are vividly aware of everything without being focused on any one thing in particular. In particular, the quality of the layered thin films used to make organic transistors often varies at the molecular level. 
In particular, she says, they're wary of products that would be difficult to recall should they prove defective.
In particular, they have the potential to change the way consumers interact with their televisions.
The paper does not in particular aim on a test of string theory.
In particular, a lot of the information generated in real-time relates to advertising.
In particular, streaming audio and video tasks can hog microprocessor resources.
They can be depressed if nothing in particular is happening.
The longevity in particular doesn't work for people.
In particular, this quick electrical disconnect method precludes any thermal management plumbing to the battery.
Quite right: he left out electronics in general and semiconductors in particular.
The problem is to determine the nature of the interaction between individuals and in particular, who influences whom.
Primates in particular are able to do a lot of the mental tasks that are essential to grasping language.
And for smaller museums in particular, it may not be a problem they can afford to solve.
There is one photograph in particular where she is looking straight at the camera and she's got shoulder-length hair and a fringe.
When it comes to prostate cancer in particular, tomatoes may yet offer some health benefits.
They can take place anywhere or nowhere in particular.
Still, a couple of scenes in particular are deeply moving.
For small amounts in particular, carrying around a few dollar bills is no problem.
The court may dispense with their use in particular cases.
Famous quotes containing the word in particular
In New York—whose subway trains in particular have been "tattooed" with a brio and an energy to put our o...
We have to give ourselves—men in particular—permission to really be with and get to know our children. ...
Re: st: Going through each observation of a variable

Notice: On March 31, it was announced that Statalist is moving from an email list to a forum. The old list will shut down at the end of May, and its replacement, statalist.org, is already up and running.

From: David Kantor <kantor.d@att.net>
To: statalist@hsphsun2.harvard.edu
Subject: Re: st: Going through each observation of a variable
Date: Fri, 07 Jun 2013 12:09:06 -0400

At 10:52 AM 6/7/2013, Derya wrote:

Dear Statalist users,
I am a newbie in Stata programming and I am stuck on something that is probably very simple - but I could not find the answer on the list server. I would like to compute the mean and standard deviation of an expression (P1*os1+P2*os2) that is computed for prices under different scenarios. I have 500 scenarios, so the variables price1 and price2 have 500 observations. I came up with the program below, which works fine, but it chooses random values of the price1 and price2 variables. I would like the program to go through each observation of price1 and price2 one by one. How would I do that? Any help will be appreciated! Thanks a lot!

gen k=0
gen wmean=0
gen wsum=0
gen wsqdev=0
gen wsd=0
forv k=1/500 {
gen r = uniform()
sort r
gen select =_n==1
scalar P1=price1
scalar P2=price2
drop r select
gen Y_`k'=P1*os1+P2*os2
replace k=`k'
replace wsum=wsum+Y_`k'
replace wmean=wsum/`k'
replace wsqdev=wsqdev+((wmean-Y_`k')^2)
replace wsd=sqrt(wsqdev/(k-1))
}

It is not clear what you are trying to do and why you are sorting on a random order 500 times. (And even without the random ordering, there is usually no need to step through the observations.)

Just a few minor points to start with:
You indicated that variables price1 and price2 have 500 observations. It is more accurate to say that your dataset has 500 observations.
The commands

scalar P1=price1
scalar P2=price2

take their values from the first observation.

The variable select is unused.

Putting those matters aside, is all you want just the mean and standard deviation of price1*os1+price2*os2? Then generate a variable containing that value, and summarize it:

gen newvariable = price1*os1+price2*os2
summ newvariable

* For searches and help try:
* http://www.stata.com/help.cgi?search
* http://www.stata.com/support/faqs/resources/statalist-faq/
* http://www.ats.ucla.edu/stat/stata/
Life on the lattice

In the previous post in this series parallelling our local discussion seminar on this review, we reminded ourselves of some basic ideas of Markov Chain Monte Carlo simulations. In this post, we are going to look at the Hybrid Monte Carlo algorithm.

To simulate lattice theories with dynamical fermions, one wants an exact algorithm that performs global updates, because local updates are not cheap if the action is not local (as is the case with the fermionic determinant), and which can take large steps through configuration space to avoid critical slowing down.

An algorithm satisfying these demands is Hybrid Monte Carlo (HMC). HMC is based on the idea of simulating a dynamical system with Hamiltonian H = 1/2 p² + S(q), where one introduces fictitious conjugate momenta p for the original configuration variables q, and treats the action as the potential of the fictitious dynamical system. If one now generates a Markov chain with fixed point distribution e^(-H(p,q)), then the distribution of q ignoring p (the "marginal distribution") is the desired e^(-S(q)).

To build such a Markov chain, one alternates two steps: Molecular Dynamics Monte Carlo (MDMC) and momentum refreshment.

MDMC is based on the fact that besides conserving the Hamiltonian, the time evolution of a Hamiltonian system preserves the phase space measure (by Liouville's theorem). So if at the end of a Hamiltonian trajectory of length τ we reverse the momentum, we get a mapping from (p,q) to (-p',q') and vice versa, thus obeying detailed balance:

e^(-H(p,q)) P((-p',q'),(p,q)) = e^(-H(-p',q')) P((p,q),(-p',q')),

ensuring the correct fixed-point distribution. Of course, we can't actually integrate Hamilton's equations exactly in general; instead, we are content with numerical integration with an integrator that preserves the phase space measure exactly (more about which presently), but only approximately conserves the Hamiltonian.
We make the algorithm exact nevertheless by adding a Metropolis step that accepts the new configuration with probability min(1, e^(-δH)), where δH is the change in the Hamiltonian under the numerical integration.

The Markov step of MDMC is of course totally degenerate: the transition probability is essentially a δ-distribution, since one can only get to one other configuration from any one configuration, and this relation is reciprocal. So while it does indeed satisfy detailed balance, this Markov step is hopelessly non-ergodic.

To make it ergodic without ruining detailed balance, we alternate between MDMC and momentum refreshment, where we redraw the fictitious momenta at random from a Gaussian distribution without regard to their present value or that of the configuration variables q:

P((p',q),(p,q)) ∝ e^(-1/2 p'²).

Obviously, this step will preserve the desired fixed-point distribution (which is after all simply Gaussian in the momenta). It is also obviously non-ergodic since it never changes the configuration variables q. However, it does allow large changes in the Hamiltonian and breaks the degeneracy of the MDMC step. While it is generally not possible to prove with any degree of rigour that the combination of MDMC and momentum refreshment is ergodic, intuitively and empirically this is indeed the case.

What remains to see to make this a practical algorithm is to find numerical integrators that exactly preserve the phase space measure. This order is fulfilled by symplectic integrators. The basic idea is to consider the time evolution operator

exp(τ d/dt) = exp(τ(-∂_q H ∂_p + ∂_p H ∂_q)) = exp(τh)

as the exponential of a differential operator on phase space. We can then decompose the latter as h = -∂_q H ∂_p + ∂_p H ∂_q = P+Q, where P = -∂_q H ∂_p and Q = ∂_p H ∂_q. Since ∂_q H = S'(q) and ∂_p H = p, we can immediately evaluate the action of e^(τP) and e^(τQ) on the state (p,q) by applying Taylor's theorem: e^(τQ)(p,q) = (p, q+τp), and e^(τP)(p,q) = (p-τS'(q), q).
Since each of these maps is simply a shear along one direction in phase space, they are clearly area preserving; so are all their powers and mutual products. In order to combine them into a suitable integrator, we need the Baker-Campbell-Hausdorff (BCH) formula. The BCH formula says that for two elements A,B of an associative algebra, the identity

log(e^A e^B) = A + (∫_0^1 ((x log x)/(x-1))|_{x = e^(ad A) e^(t ad B)} dt)(B)

holds, where (ad A)(B) = [A,B], and the exponential and logarithm are defined via their power series (around the identity in the case of the logarithm). Expanding the first few terms, one finds

log(e^A e^B) = A + B + 1/2 [A,B] + 1/12 [A-B,[A,B]] - 1/24 [B,[A,[A,B]]] + ...

Applying this to a symmetric product, one finds

log(e^(1/2 A) e^B e^(1/2 A)) = A + B - 1/24 [A+2B,[A,B]] + ...

where in both cases the dots denote fifth-order terms. We can then use this to build symmetric products (we want symmetric products to ensure reversibility) of e^(δτP) and e^(δτQ) that are equal to e^(δτh) up to some controlled error. The simplest example is

e^(δτ/2 P) e^(δτ Q) e^(δτ/2 P) = e^(δτ h) + O((δτ)³),

and more complex examples can be found that either reduce the order of the error (although doing so requires one to use negative time steps -δτ as well as positive ones) or minimize the error by splitting the force term P into pieces P_i that each get their own time step δτ_i to account for their different sizes. Next time we will hear more about how to apply all of this to simulations with dynamical fermions.
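The pieces described above fit together in a few lines of code. The following is a sketch, not taken from this post: it uses the simplest symmetric (leapfrog) integrator built from the two shear maps, and the step sizes and the one-dimensional toy action used to exercise it are illustrative assumptions:

```python
import numpy as np

def leapfrog(p, q, dSdq, dt, n_steps):
    # Symmetric product e^{dt/2 P} e^{dt Q} e^{dt/2 P}, composed n_steps times.
    # Each factor is an area-preserving shear, so the composite map is too.
    p = p - 0.5 * dt * dSdq(q)          # half step of P: p -> p - (dt/2) S'(q)
    for _ in range(n_steps - 1):
        q = q + dt * p                  # full step of Q: q -> q + dt p
        p = p - dt * dSdq(q)            # two consecutive half steps of P merged
    q = q + dt * p
    p = p - 0.5 * dt * dSdq(q)          # final half step of P
    return p, q

def hmc_step(q, S, dSdq, dt, n_steps):
    p = np.random.standard_normal()     # momentum refreshment: p ~ exp(-p^2/2)
    h_old = 0.5 * p**2 + S(q)
    p_new, q_new = leapfrog(p, q, dSdq, dt, n_steps)
    dh = 0.5 * p_new**2 + S(q_new) - h_old
    # Metropolis step: accept with probability min(1, exp(-dh)).
    if np.random.random() < np.exp(-dh):
        return q_new
    return q
```

Because the leapfrog composition is symmetric, rerunning it with the momentum reversed retraces the trajectory exactly (up to floating-point roundoff), which is the reversibility needed for detailed balance; the Hamiltonian itself is only conserved approximately, and that residual error is exactly what the Metropolis step corrects.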
Decadal predictions

It's worth pointing out that 'skill' is defined relative to climatology (i.e. do you do a better job at estimating temperature or rainfall anomalies than if you'd just assumed that the season would be just like the average of the last ten years, for instance). Some skill doesn't necessarily mean that the predictions are great – it simply means that they are slightly better than you could do before. We should also distinguish between skillful (in a statistical sense) and useful in a practical sense. An increase of a few percent in variance explained would show up as improved skill, but that is unlikely to be of good enough practical value to shift any policy decisions.

So given that we know roughly what we are looking for, what is needed for this to work? First of all, we need to know whether we have enough data to get a reasonable picture of the ocean state right now. This is actually quite hard since you'd like to have subsurface temperature and salinity data from a large part of the oceans. That gives you the large scale density field which is the dominant control on the ocean dynamics. Right now this is just about possible with the new Argo float array, but before about 2003, subsurface data in particular was much sparser outside a few well travelled corridors. Note that temperature data are not sufficient on their own for calculating changes in the ocean dynamics since they are often inversely correlated with salinity variations (when it is hot, it is often salty for instance), which reduces the impact on the density. Conceivably, if any skill in the prediction is simply related to surface temperature anomalies being advected around by the mean circulation, it could be useful to do temperature-only initializations, but one would have to be very wary of dynamical changes, and that would limit the usefulness of the approach to a couple of years perhaps.
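The notion of skill relative to climatology can be made concrete with a simple skill score. The sketch below is my own illustration (the function names and numbers are made up, not actual forecasts): it uses the common mean-squared-error skill score, 1 − MSE_forecast / MSE_climatology, which is positive whenever the forecast beats the climatological baseline:

```python
def mse(predictions, observations):
    """Mean squared error between two equal-length sequences."""
    return sum((p - o) ** 2 for p, o in zip(predictions, observations)) / len(observations)

def skill_score(forecast, climatology, observations):
    """MSE skill score relative to a climatological baseline.
    1.0 = perfect forecast, 0.0 = no better than climatology, < 0 = worse."""
    return 1.0 - mse(forecast, observations) / mse(climatology, observations)

if __name__ == "__main__":
    # Toy temperature anomalies (degrees C); climatology predicts zero anomaly.
    obs = [0.2, -0.1, 0.4, 0.3, -0.2]
    climatology = [0.0] * len(obs)
    forecast = [0.1, -0.2, 0.3, 0.2, -0.1]
    print(round(skill_score(forecast, climatology, obs), 3))
```

A score slightly above zero is exactly the "statistically skillful but practically marginal" situation described above.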
Next, given any particular distribution of initialization data, how should this be assimilated into the forecasting model? This is a real theoretical problem given that models all have systematic deviations from the real world. If you simply force a model temperature and salinity to look exactly like the observations, then you risk having any forecast dominated by model drift when you remove the assimilation. Think of an elastic band being pulled to the side by the 'observations', but having it snap back to its default state when you stop pulling. (A likely example of this is the 'coupling shock' phenomenon possibly seen in the Keenlyside et al simulations). A better way to do this is via anomaly forcing – that is, you only impose the differences from the climatology on the model. That is guaranteed to have less model drift, but at the expense of having the forecast potentially affected by systematic errors in, say, the position of the Gulf Stream. In both methods of course, the better the model, the less bad the problems. There is a good discussion of the Hadley Centre methods in Haines et al (2008) (no sub reqd.).

Assuming that you can come up with a reasonable methodology for the initialization, the next step is to understand the actual predictability of the system. For instance, given the inevitable uncertainties due to sparse coverage or short term variability, how fast do slightly differently initialized simulations diverge? (Note that we aren't talking about the exact path of the simulation, which will diverge as fast as weather forecasts – a couple of weeks – but the larger scale statistics of ocean anomalies). This appears to be a few years to a decade in 'perfect model' tests (where you try and predict how a particular model will behave using the same model but with an initialization that mimics what you'd have to do in the real world).
Finally, given that you can show that the model with its initialization scheme and available data has some predictability, you have to show that it gives a useful increase in the explained variance in any quantities that someone might care about. For instance, perfect predictability of the maximum overturning streamfunction might be scientifically interesting, but since it is not an observable quantity, it is mainly of academic interest. Much more useful is how any surface air temperature or rainfall predictions will be affected. This kind of analysis is only just starting to be done (since you needed all the other steps to work first).
OpenCV - Rotation (Deskewing)

Posted by Félix on October 4, 2011

In a previous article I presented how to compute the skew angle of a digitized text document by using the Probabilistic Hough Transform. In the last article I presented how to compute a bounding box using OpenCV; this method was also used to compute the skew angle, but with a reduced accuracy compared to the first method.

Test Set

We will be using the same small test set as before. The naming convention for those images is simple: the first letter stands for the sign of the angle (p for plus, m for minus) and the following number is the value of the angle. m8.jpg has therefore been rotated by an angle of -8 degrees.

Bounding Box

In this article I will assume we have computed the skew angle of each image with a good accuracy and we now want to rotate the text by this angle value. We therefore declare a function called deskew that takes as parameters the path to the image to process and the skew angle.

    void deskew(const char* filename, double angle)
    {
      cv::Mat img = cv::imread(filename, 0);
      cv::bitwise_not(img, img);

      std::vector<cv::Point> points;
      cv::Mat_<uchar>::iterator it = img.begin<uchar>();
      cv::Mat_<uchar>::iterator end = img.end<uchar>();
      for (; it != end; ++it)
        if (*it)
          points.push_back(it.pos());

      cv::RotatedRect box = cv::minAreaRect(cv::Mat(points));

This code is similar to the previous article: we load the image, invert black and white and compute the minimum bounding box. However, this time there is no preprocessing stage because we want the bounding box of the whole text. We compute the rotation matrix using the corresponding OpenCV function; we specify the center of the rotation (the center of our bounding box), the rotation angle (the skew angle) and the scale factor (none here).
    cv::Mat rot_mat = cv::getRotationMatrix2D(box.center, angle, 1);

Now that we have the rotation matrix, we can apply the geometric transformation using the function warpAffine:

    cv::Mat rotated;
    cv::warpAffine(img, rotated, rot_mat, img.size(), cv::INTER_CUBIC);

The 4th argument is the interpolation method. Interpolation is important in this situation: when applying the transformation matrix, some pixels in the destination image might have no predecessor from the source image (think of scaling with a factor 2). Those pixels have no defined value, and the role of interpolation is to fill those gaps by computing a value using the local neighborhood of this pixel. The quality of the output and the execution speed depend on the method chosen. The simplest (and fastest) interpolation method is INTER_NEAREST, but it yields awful results. There are four other interpolation methods: INTER_LINEAR, INTER_AREA, INTER_CUBIC and INTER_LANCZOS4. For our example those 4 methods yielded visually similar results. The rotated image using INTER_CUBIC (bicubic interpolation):

We should now crop the image in order to remove borders:

    cv::Size box_size = box.size;
    if (box.angle < -45.)
      std::swap(box_size.width, box_size.height);
    cv::Mat cropped;
    cv::getRectSubPix(rotated, box_size, box.center, cropped);

As mentioned in the previous article, if the skew angle is positive, the angle of the bounding box is below -45 degrees because the angle is given by taking as a reference a "vertical" rectangle, i.e. with the height greater than the width. Therefore, if the angle is positive, we swap height and width before calling the cropping function. Cropping is made using getRectSubPix; you must specify the input image, the size of the output image, the center of the rectangle and finally the output image. We use the original center because the center of a rotation is invariant through this transformation.
This function works at a sub-pixel accuracy (hence its name): the center of the rectangle can be a floating point value. The cropped image:

To better understand the problem we have with positive angles, here is what you would get without the correction:

We can immediately see that we just need to swap the height and the width of the rectangle. This is a small demo, so let's display the original image, the rotated image and the cropped image:

    cv::imshow("Original", img);
    cv::imshow("Rotated", rotated);
    cv::imshow("Cropped", cropped);

That's it! It's really simple to rotate an image with OpenCV!

Comments

Your posts on skew/deskew are amazing. Thank you!

I'm glad I could help!

Hey Felix! In the most recent version of OpenCV, the atomic variables that are CvArr and IplImage are mostly used in newer functions. When I tried to convert these data structures between them, I got runtime errors. For example, in the computing skew function, I had to do this:

    cv::Mat MATT;
    MATT = cvarrToMat(IMG, 0, 1, 0); // where IMG is an IplImage*

Everything looks fine, but just after the execution of this function I receive an error, and the application crashes.
Then I changed the whole thing like that:

    double Compute_Skew(IplImage* IMG)
    {
      double angle = 0.0;
      IplImage* dst = cvCreateImage( cvGetSize(IMG), 8, 1 );
      IplImage* color_dst = cvCreateImage( cvGetSize(IMG), 8, 3 );
      CvMemStorage* storage = cvCreateMemStorage(0);
      CvSeq* lines = 0;
      int i;
      cvCanny( IMG, dst, THRES0-25, THRES0+25, 3 );
      cann = cvCloneImage(dst);
      lines = cvHoughLines2( dst, 10 );
      for( i = 0; i < MIN(lines->total, 100); i++ )
      {
        float* line = (float*)cvGetSeqElem(lines, i);
        float rho = line[0];
        float theta = line[1];
        CvPoint pt1, pt2;
        double a = cos(theta), b = sin(theta);
        double x0 = a*rho, y0 = b*rho;
        pt1.x = cvRound(x0 + 1000*(-b));
        pt1.y = cvRound(y0 + 1000*(a));
        pt2.x = cvRound(x0 - 1000*(-b));
        pt2.y = cvRound(y0 - 1000*(a));
        cvLine( dst, pt1, pt2, CV_RGB(255,0,0), 3, 8 );
        angle += atan2((double)(pt2.y - pt1.y), (double)(pt2.x - pt1.x));
      }
      angle /= lines->total;
      return angle;
    }

Do you have these kinds of newer functions to do the deskewing or rotating? Do you suggest using the cvMat structs? Thank you.

Unfortunately I have never used the C API of OpenCV, so I cannot help you with the error you are encountering. But are you sure of the parameters you have used for 'cvarrToMat'? Have you tried using the default parameters? i.e. just 'cvarrToMat(IMG)'. Personally I prefer to use the C++ interface; I think it's easier to write code with it.

I am trying to find a program to deskew music scores, and specifically one that will do so without altering image size. I do not want recropping and am not worried about borders. I am very frustrated because since 2010 all deskew programs seem to want to crop. Can you help, or even write one? We would pay for one...

This is a good article. Very helpful. Is this method based on literature books or your own experiments?

There is a mistake in your code. In this expression:

    cv::Mat rot_mat = cv::getRotationMatrix2D(box.center, angle, 1);

angle is defined in degrees; your previous code (detect skew angle) returns the angle in radians.
Took me an hour to find out.
Generalizing the square theorem

Let $X$ and $Y$ be connected quasi-projective varieties over $\mathbf{C}$. Let $\mathcal{L}$ be an algebraic vector bundle over $X\times Y$. Let $p_2:X\times Y\rightarrow Y$ be the projection.

($\star$) Assume that for all $y_0\in Y$ one has that $\mathcal{L}|_{X\times \{y_0\}}$ is trivial.

In this case, the first (naive) intuition is that $\mathcal{L}\simeq p_2^*\mathcal{F}$ for some algebraic vector bundle $\mathcal{F}$ over $Y$.

Q1: Give an example where $(\star)$ is satisfied but where $\mathcal{L}$ is not the pullback of any algebraic vector bundle $\mathcal{F}$ over $Y$.

Q2: Under what (interesting) additional assumptions on $X$, $Y$ and $\mathcal{L}$ is it possible to conclude that $\mathcal{L}$ is the pullback of a vector bundle over $Y$? For example, if $X$ is complete then the square theorem says that $\mathcal{L}$ is the pullback of a vector bundle over $Y$.

ag.algebraic-geometry complex-geometry

1 Answer
Hi @Angelo, thanks for the example. I never played with reflexive $\mathcal{O}_X$-modules. They seem to enjoy nice properties like $j_*G=F$. Do you have a reference for this last property? – Hugo Chapdelaine Mar 22 '13 at 15:54 A good introduction to reflexive sheaves is "Stable reflexive sheaves", by Hartshorne, Mathematische Annalen (1980). – Angelo Mar 22 '13 at 19:39 thanks for the reference! – Hugo Chapdelaine Mar 23 '13 at 17:49 add comment Not the answer you're looking for? Browse other questions tagged ag.algebraic-geometry complex-geometry or ask your own question.
{"url":"http://mathoverflow.net/questions/125097/generalizing-the-square-theorem","timestamp":"2014-04-18T13:47:15Z","content_type":null,"content_length":"53734","record_id":"<urn:uuid:2e64ada5-2a66-4a29-b5f6-013474e688e5>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00110-ip-10-147-4-33.ec2.internal.warc.gz"}
by Christoph Hauert, Version 1.1, September 2005. The chess-board is the world; the pieces are the phenomena of the universe; the rules of the game are what we call the laws of Nature. The player on the other side is hidden from us. We know that his play is always fair, just, and patient. But also we know, to our cost, that he never overlooks a mistake, or makes the smallest allowance for ignorance. T. H. HUXLEY 1825–1895, Lay Sermons: A Liberal Education The phenomenon of cooperative interactions among animals has puzzled biologists since Darwin. Nevertheless, theoretical concepts to study cooperation appeared only a century later and originated in economics and political sciences rather than biology. John von Neumann and Oskar Morgenstern developed a mathematical framework termed Game Theory to describe interactions between individuals. This theory emerged in the wake of World War II and was mainly intended to provide a basis to prevent a nuclear holocaust. John Nash, working at the post-WWII US military think tank, the RAND Corporation, augmented the theory by developing and introducing the concept of equilibria, the so-called Nash equilibria: An equilibrium is reached as soon as no party can increase its profit by unilaterally deciding differently. Another generation later, John Maynard-Smith and George R. Price ingeniously related the economic concept of payoff functions with evolutionary fitness as the only relevant currency in evolution. Furthermore, Maynard-Smith refined the concept of Nash equilibria in an evolutionary context and introduced the notion of evolutionarily stable strategies (ESS). All ESS represent a subset of the Nash equilibria because an ESS applies only at the population level and adds stability requirements. A strategy is called evolutionary stable if a population of individuals homogenously playing this strategy is able to outperform and eliminate a small amount of any mutant strategy introduced into the population. 
These achievements mark the advent of an entirely new, approach to behavioral ecology where theoretical models and predictions inspired and continue to inspire numerous experiments and field studies. Figure 1: The forefathers of game theory and the theory of the evolution of cooperation. John von Neumann Oskar Morgenstern John Nash John Maynard-Smith The Prisoner's Dilemma is probably the most famous mathematical methaphor for modelling the evolution of cooperation. This section gives a brief introduction into the Prisoner's Dilemma and provides examples of the game dynamics in well-mixed populations with random interactions as well as structured populations with limited local interactions. A closely related game, which is also addressing the problem of cooperation but under slightly relaxed conditions, is called the Snowdrift game. This section briefly introduces the Snowdrift game and then, similarly to the Prisoner's dilemma, exemplifies the game dynamics in well-mixed populations as well as structured populations. The fascination of the Rock-Scissors-Paper game is not restricted to children but is equally thrilling for evolutionary biologists. The cyclic dominance of the three strategies can lead to very interesting dynamics both in well-mixed as well as structured populations. Selected publications on evolutionary game theory: • von Neumann, J. & Morgenstern, O. (1944) Theory of Games and Economic Behaviour, Princeton: Princeton University Press. • Nash, J. (1950) The bargaining problem, Econometrica 18 155-162. • Maynard-Smith, J. & Price, G. R. (1973) The logic of animal conflict, Nature 246 15-18. • Hofbauer, J. & Sigmund, K. (1998) Evolutionary Games and Population Dynamics, Cambridge: Cambridge University Press. • Sigmund, K. (1995) Games of Life, Harmondsworth, UK: Penguin • Axelrod, R. (1984) The Evolution of Cooperation, New York: Basic Books. 
The development of these pages would not have been possible without the encouragement and the insightful advice of Karl Sigmund. Financial support for the first version of these pages of the Swiss National Science Foundation is gratefully acknowledged.
{"url":"http://www.univie.ac.at/virtuallabs/Introduction/","timestamp":"2014-04-17T19:01:20Z","content_type":null,"content_length":"8136","record_id":"<urn:uuid:d4905f3f-0b36-4134-a368-249541bd935e>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00083-ip-10-147-4-33.ec2.internal.warc.gz"}
AIB Stock Price, EGARCH-M, and rgarch May 17, 2011 By timeseriesireland This post examines conditional heteroskedasticity models in the context of daily stock price data for Allied Irish Banks (AIB), specifically how to test for conditional heteroskedasticity in a series, how to approach model specification and estimation when time-varying volatility is present, and how to forecast with these models; all of this is done in R, with relevant R code provided at the bottom of the post. Now follows a necessary disclaimer: This is not intended to be financial advice, and should not in any way be interpreted as such; this is purely an academic post, and the subtle irony of analysing price data on a bank that no longer trades on the ISEQ index should not go unnoticed by the reader. I am not responsible for any losses, financial or otherwise, incurred by anyone who dives head-first into stock/options trading with no understanding and/or appreciation of the financial risks involved, especially with any leveraged investments/products. By continuing to read this post you agree to release me of any legal liability and responsibility for your actions based on the content of this post. Vanilla Black-Scholes-Merton option pricing formulae are based on underlying asset prices being driven by geometric Brownian motion (GBM), which implies log-normally distributed prices and normally distributed returns. The strict assumptions imposed by GBM are often a point of amusement by those looking in from outside of finance and economics, but it’s worth noting that the field does not end with this simple model. Forecasting implied volatility from pricing data is a point of interest in applied financial econometrics, and a class of models utilised for this purpose are conditional heteroskedasticity models. For those not familiar with this term, ‘heteroskedasticity’ simply means non-constant variance, where the case of constant variance called ‘homoskedasticity’. 
If a stock, $S$, follows a random walk (without drift), we can denote this process by $s_t = s_{t-1} + \epsilon_t$, where lower case letters denote logs and $\epsilon_t \sim IID(0, \sigma_{\epsilon}^ 2)$. This implications of this model are ubiquitious: our best guess of tomorrow’s stock price, $E_t(s_{t+1})$, is today’s price $s_t$, as $E_t (\epsilon_{t+1}) = 0$. If we take first-differences, $\ Delta s_t = \epsilon_t$, where the log change approximates the growth rate in the stock price, or the return, we can re-write as $r_t = \epsilon_t$, where $r_t$ is the return at time $t$. This implies that the return of an asset is an IID process with mean zero and variance $\sigma_{\epsilon}^2$. Tests of the random walk hypothesis come in various form, and also depend on how strict our assumptions are for the model, for example $\epsilon_t \sim IID(0, \sigma_{\epsilon}^2)$. Independent increments in the error term can be easily tested by means of Box-Ljung Q-statistics. Some other features of the random walk model without drift are worth noting: $s_{t-1} = s_{t-2} + \epsilon_{t-1}$, so $s_t = (s_{t-2} + \epsilon_{t-1}) + \epsilon_t$. Continuing with this process of backward-iteration, we see that $s_t = s_{0} + \sum_{j=1}^{t} \epsilon_{j}$. The mean is given by $E(s_t) = s_0$, as $\epsilon_t \sim (0,\sigma_{\epsilon}^2)$ by definition, and the variance is $V (s_t) = E[ s_t - E(s_t) ]^2$, or $V(s_t) = E [\sum_{j=1}^{t} \epsilon_{j}]^2 = t \sigma_{\epsilon}^2$. Moving to an empirical example, below is a graph of Allied Irish Banks’ (AIB) daily stock price, from the beginning of 2003 to the end of 2010: For those unfamiliar with this bank, it is essentially a government-owned entity of the Irish State, where government ownership occurred due to large losses on loans extended to property developers. Below is a graph of the log daily returns of the stock: The marked drop in the series occurred on January 19, 2009; an astonishing 88% fall over one trading day. 
Volatility clustering is evident in the series, with relatively higher volatility from the middle of 2008 onwards than in the comparatively tranquil 2003 to 2008 period. Our first task is to identify the correct (mean) model specification, here from the autoregressive moving average (ARMA) suite of models. An ARMA(p,q) model is given by

$r_t = \delta + \sum_{i = 1}^p \phi_i r_{t - i} + \epsilon_t + \sum_{j = 1}^q \theta_{j} \epsilon_{t - j}$,

where I've included a drift term, $\delta$, for maximum generality.

When dealing with time-varying volatility, Engle's autoregressive conditional heteroskedasticity (ARCH) model, and its generalised cousin, Bollerslev's generalised ARCH (GARCH), are staples of modern financial econometrics and are frequently deployed to forecast volatility in financial time-series. If we take the simplest form of the ARMA class, ARMA(0,0), $r_t = \epsilon_t$, we specify the error term as $\epsilon_t = u_t h_t$, $u_t \sim IID(0,1)$, $h_t^2 = \omega + \alpha_1 \epsilon_{t-1}^2$, where $h_t^2$ is the conditional variance of $\epsilon_t$. This is an ARCH(1) model, which easily generalises to ARCH(q):

$h_t^2 = \omega + \sum_{k=1}^q \alpha_k \epsilon_{t-k}^2$.

The GARCH(p,q) model is given by

$h_t^2 = \omega + \sum_{k=1}^q \alpha_k \epsilon_{t-k}^2 + \sum_{l=1}^p \beta_l h_{t-l}^2$,

where we have simply added autoregressive terms to the conditional variance process. Low-order GARCH(p,q) models have been shown to fit data better than high-order ARCH(q) models. The GARCH(1,1) specification, for example, is frequently used in finance, and one can find this model discussed in popular options textbooks, such as John Hull's Options, Futures, and Other Derivatives.

If ARCH effects are not present in a series, we would expect past squared values of the residuals to be insignificant covariates of $\epsilon_t^2$, today's squared residual.
Thus, a simple and popular test for ARCH effects is to, first, estimate an ARMA model, then take the residuals, $\widehat{\epsilon_{t}}$, where the circumflex denotes an estimate, square them and estimate an autoregressive model of order q for the squared residual series, including a constant.

To identify the appropriate ARMA model, the autocorrelation function (ACF) and partial autocorrelation function (PACF) of the log-returns series are given below. We see significant sample lags out to 50 lags in both the ACF and PACF, a rather unfortunate fact that will hinder estimation of a parsimonious model. (The erudite reader might point out that this is a candidate for fractional integration, with the series exhibiting so-called long memory, even after integer-order differencing.) It's arbitrary, really, what model is chosen here, due to the indefinite form of the ACF and PACF above. However, the oscillatory nature of the ACF and the positive-to-negative move from the first to second sample lag in the PACF are indicative of a pure AR model; though, again, any model can be justified given the ACF and PACF above. I have selected an AR(9) model, based on minimising information criteria and analysing the ACF of the resulting residuals.
An ‘F-test’ is automatically printed along with a linear regression in most (good) statistical packages, and R being an awesome statistical package simple use of the ‘lm’ function will suffice. A regression of the form $\widehat{\epsilon_t}^2 = \alpha_0 + \sum_{i = 1}^8 \alpha_i \widehat{\epsilon}_{t-i}^2 + error$ gives an F-statistic of 15.48, with 8 and 2048 degrees of freedom (a p-value of roughly zero), so we reject the null of no ARCH effects. The second test involves the Box-Ljung Q-statistics to test for independence in the squared residuals: we reject the null of independence at 8 lags, with a $\chi_{8}^2 = 170.2$, a p-value of roughly zero. The PACF of the squared residuals can be used to identify the order of an ARCH model, which is given below: This implies a high-order ARCH(q) model, with significant sample lags at 19 and beyond. Hence, the GARCH model is preferred to estimating an AR(9)-ARCH(19) model. Computationally, this may be a non-trivial exercise and convergence may not occur depending on the suite of algorithms used. However, identifying the order of a GARCH model is essentially a guess-and-go process, with GARCH(1,1), GARCH(1,2), GARCH (2,2) (and higher) being plausible specifications. One could use information criteria here to determine the correct model specification, though some authors do caution on the exact meaning of these for GARCH processes. Now that we have identified the presence of ARCH effects, and determined that GARCH is a preferable approach than pure ARCH, we proceed to estimate our ARMA-GARCH model. This can be easily achieved in the ‘fGarch’ package, which is part of the wider Rmetrics project. However, I will use ‘rgarch’, a relatively more flexible (beta) package available on R-Forge, that also allows for estimation of GARCH in mean models (GARCH-M) and asymmetric GARCH specifications. 
Whatever model one estimates, the standardised residuals produced by the estimated model should be white noise, if the model is correctly specified.

Before I estimate the model, it's interesting to note that the relatively more volatile period in the log-return series occurred when the stock price was falling. This can be incorporated by using a GARCH in mean, or GARCH-M, model, where the conditional variance appears in the mean equation; e.g.,

$r_t = \delta + \mu h_t^2 + \sum_{i = 1}^p \phi_i r_{t - i} + \epsilon_t + \sum_{j = 1}^q \theta_{j} \epsilon_{t - j}$,

$\epsilon_t = u_t h_t$, $u_t \sim IID(0,1)$,

$h_t^2 = \omega + \sum_{k=1}^q \alpha_k \epsilon_{t-k}^2 + \sum_{l=1}^p \beta_l h_{t-l}^2$.

The $\mu$ coefficient is interpreted as a risk premium parameter. Our a priori expectation is that this will be negative, i.e., as the return increases, volatility decreases, and vice versa. I've also chosen to use an exponential GARCH model, or EGARCH, to account for asymmetric effects in the return series by means of a parameter $\gamma$ that allows for a leverage effect in the lagged residuals in $h_t^2$, which can now be rewritten as:

$\ln(h_t^2) = \omega + \sum_{k=1}^q \alpha_k \frac{ | \epsilon_{t-k} | + \gamma_k \epsilon_{t-k}}{h_{t - k}} + \sum_{l=1}^p \beta_l \ln (h_{t-l}^2)$.

I estimate an AR(9)-EGARCH(4,2)-M model based on several factors. First, the stability of the model is undermined with higher-order specifications, noticeably so for any EGARCH(5,q). The coefficients and their respective standard errors are given in the tables below, the first table being the mean equation and the second being the variance equation. Nyblom's parameter stability test yields a joint test statistic of 3.3267, so we cannot reject the null hypothesis of parameter stability at the 10, 5 or 1% significance levels.
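To make the EGARCH recursion concrete before turning to the estimates, here is a small sketch (my own illustration with made-up parameter values, not the fitted model from this post) of an EGARCH(1,1) variance update; the term $\alpha(|z| + \gamma z)$, with $z = \epsilon_{t-1}/h_{t-1}$ the standardised shock, responds asymmetrically to positive and negative shocks of the same magnitude, which is how the leverage effect enters:

```python
import math

def egarch_update(log_h2_prev, eps_prev, omega, alpha, gamma, beta):
    """One step of the EGARCH(1,1) recursion:
    ln(h_t^2) = omega + alpha * (|z| + gamma * z) + beta * ln(h_{t-1}^2),
    where z = eps_{t-1} / h_{t-1} is the standardised shock."""
    h_prev = math.exp(0.5 * log_h2_prev)
    z = eps_prev / h_prev
    return omega + alpha * (abs(z) + gamma * z) + beta * log_h2_prev

if __name__ == "__main__":
    # Made-up illustrative parameters, not the fitted AR(9)-EGARCH(4,2)-M values.
    omega, alpha, gamma, beta = -0.1, 0.1, 0.4, 0.9
    up = egarch_update(0.0, 1.0, omega, alpha, gamma, beta)
    down = egarch_update(0.0, -1.0, omega, alpha, gamma, beta)
    print(up, down)  # the two shocks move ln(h^2) by different amounts
```

Because the recursion is written in logs, no positivity constraints on the parameters are needed, which is one practical attraction of EGARCH over plain GARCH.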
Mean Equation:

              Estimate     Standard Error
  $\delta$     0.001166     0.003648
  $\phi_1$     0.043452     0.038624
  $\phi_2$    -0.063144     0.023096
  $\phi_3$    -0.012754     0.025271
  $\phi_4$    -0.026951     0.024755
  $\phi_5$    -0.064749     0.024209
  $\phi_6$    -0.024585     0.048999
  $\phi_7$    -0.028065     0.021314
  $\phi_8$     0.020022     0.037065
  $\phi_9$     0.024686     0.020654
  $\mu$       -0.101332     0.176526

Variance equation:

              Estimate     Standard Error
  $\omega$    -0.097284     0.633358
  $\alpha_1$  -0.136455     0.035170
  $\alpha_2$  -0.087314     0.060027
  $\gamma_1$   0.373658     0.131616
  $\gamma_2$   0.407474     0.165395
  $\beta_1$   -0.541935     0.038899
  $\beta_2$    0.772521     0.003665
  $\beta_3$    0.585102     0.062753
  $\beta_4$    0.164900     0.020075

To analyse if the model is correctly specified, we first construct the standardised residuals, $\widetilde{\epsilon}_t$, where $\widetilde{\epsilon}_t = \epsilon_t/h_t$. If the standardised residuals are white noise, the Q-statistics should be statistically insignificant, which is the case for our model (see the first table below), so we continue under the assumption that the mean model is correctly specified. Similarly, a test for remaining ARCH effects can be conducted by testing the significance of Q-statistics on the squared standardised residuals. ARCH LM tests are significant for two lags at the 5% significance level, though only marginally so.

Q-Statistics on Standardized Residuals:

           Statistic    p-value
  Q(10)      8.361      0.5936
  Q(15)     11.348      0.7275
  Q(20)     20.644      0.4184

Q-Statistics on Standardized Squared Residuals:

           Statistic    p-value
  Q(10)     13.42       0.2013
  Q(15)     19.26       0.2024
  Q(20)     22.78       0.2996

Below is an empirical density plot of the standardised residuals, which isn't too far from what we would expect for correct model specification.

Below is the daily return series with 2 conditional standard deviations imposed on either side.

Forecasting is easily done with the rgarch package, using the 'ugarchforecast' function.
If one re-estimates the model leaving out the last ten observations, volatility forecasts can be evaluated with four mean loss functions: the mean square error (MSE); mean absolute deviation (MAD); QLIKE; and R2Log. I invite the reader to estimate another model of their choice and compare 10-day ahead forecasts with:

       MSE          MAD       QLIKE     R2Log
  0.00021026    0.009247    -4.0351    4.0524

One might prefer to use, say, weekly or monthly data rather than daily data, which is usually easier to work with, and higher-order models are rarely necessary with weekly and monthly frequencies.

R Code:

dates <- as.Date(a$Date, "%d/%m/%Y")

### Plot AIB stock price:
gg1.1<-ggplot(df1,aes(dates,AIB)) + xlab(NULL) + ylab("AIB Stock Price in Euro")
gg1.2<-gg1.1+geom_line(colour="darkblue") + opts(title="Daily AIB Stock Price")
png(filename = "aib1.png", width = 580, height = 400, units = "px", pointsize = 12, bg = "white")

### Plot AIB stock price returns:
gg2.1<-ggplot(df2,aes(dates2,ReturnsAIB)) + xlab(NULL) + ylab("Log Changes")
gg2.2<-gg2.1+geom_line(colour="darkred") + opts(title="Daily AIB Stock Price Return")
png(filename = "aib2.png", width = 580, height = 400, units = "px", pointsize = 12, bg = "white")

### ACFs and PACFs
png(filename = "acfpacf.png", width = 580, height = 700, units = "px", pointsize = 12, bg = "white")
acf(retAIB, main="ACF of AIB Log Returns", lag = 50)
pacf(retAIB, main="PACF of AIB Log Returns", lag = 50)

ar9<-arima(retAIB, order=c(9,0,0))
Box.test(ressq, lag = 8, type = "Ljung-Box")

png(filename = "pacfressq.png", width = 580, height = 350, units = "px", pointsize = 12, bg = "white")
pacf(ressq, main="PACF of Squared Residuals", lag = 30)

# Note that the GARCH order is reversed from what I have discussed above
specm1 <- ugarchspec(variance.model=list(model="eGARCH", garchOrder=c(2,4), submodel = NULL),
                     mean.model=list(armaOrder=c(9,0), include.mean=TRUE, garchInMean = TRUE))
fitm1 <- ugarchfit(data = retAIB2, spec = specm1)
fittedmodel <- fitm1@fit
gg3.1<-ggplot(df2,aes(dates2)) + xlab(NULL) + ylab("Log Changes")
gg3.2<-gg3.1+geom_line(aes(y = ReturnsAIB, colour="Log Returns")) + opts(title="Daily Log Return with 2 Conditional Standard Deviations")
gg3.3<-gg3.2 + geom_line(aes(y = sigma1*2, colour="2 S.D.")) + geom_line(aes(y = sigma1*-2, colour="2 S.D.")) + scale_colour_hue("Series:") + opts(legend.position=c(.18,0.8))
png(filename = "aib3.png", width = 580, height = 400, units = "px", pointsize = 12, bg = "white")

fitm2 <- ugarchfit(data = retAIB2, out.sample = 10, spec = specm1)
pred <- ugarchforecast(fitm2, n.ahead = 10, n.roll = 1)
pred.fpm <- fpm(pred)
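For readers comparing their own model's forecasts, here is one common way to compute the four loss functions (MSE, MAD, QLIKE, R2Log). These criteria have several variants in the literature, so the exact conventions below are an assumption on my part and may not match what rgarch's fpm() computes internally.

```python
import math

def forecast_losses(h2, r2):
    """Volatility-forecast losses. h2 = forecast variances, r2 = realized
    proxies (e.g. squared returns). One common convention for each loss."""
    n = len(h2)
    mse = sum((b - a) ** 2 for a, b in zip(h2, r2)) / n
    mad = sum(abs(b - a) for a, b in zip(h2, r2)) / n
    # QLIKE penalises under-prediction of variance asymmetrically
    qlike = sum(math.log(a) + b / a for a, b in zip(h2, r2)) / n
    # R2Log is the mean squared log ratio of realized to forecast variance
    r2log = sum(math.log(b / a) ** 2 for a, b in zip(h2, r2)) / n
    return mse, mad, qlike, r2log
```

A perfect forecast (h2 equal to r2) drives MSE, MAD and R2Log to zero, while QLIKE reaches its minimum rather than zero, which is why QLIKE values are compared across models rather than against zero.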
Computational excursions in analysis and number theory. (English) Zbl 1020.12001
CMS Books in Mathematics/Ouvrages de Mathématiques de la SMC. 10. New York, NY: Springer. x, 220 p. EUR 69.95/net; sFr 116.00; £49.00; $ 69.95 (2002).

The author considers a series of problems about topics on the boundary between analysis and number theory. They are all related to the properties of certain polynomials, they are at least 40 years old, and none of them is completely solved. They are also suited for computational experiments. The book contains a variety of techniques from several branches of mathematics to enable the reader to carry out his own research.

In the introduction various sets of polynomials are defined as well as several measures. This is followed by a few results from complex analysis. Then the author gives a list of 17 open problems around which the later chapters are centered. (Those are again listed in Appendix D.) In Chapter 2 (shortly) and Appendix B (at length) lattice basis reduction techniques and integer relation finding methods are discussed. Then the applicability of these methods for finding polynomials with prescribed properties of their coefficients is developed. The next chapters contain Pisot and Salem numbers and special series of polynomials. They are followed by a discussion of the location of the zeros and Diophantine approximation of those. Then several open problems are considered:

– The Integer Chebyshev Problem of finding non-zero polynomials of $\mathbb{Z}[x]$ of smallest supremum norm in the interval $[0,1]$ and analyzing the asymptotic behaviour when the degree tends to infinity.
– The Prouhet-Tarry-Escott Problem of finding polynomials of $\mathbb{Z}[x]$ divisible by $(x-1)^n$ and minimal sum of the absolute values of the coefficients.
– The Erdős-Szekeres Problem of minimizing $\|\prod_{i=1}^{n}(1-x^{a_i})\|_{\infty}$ for $a_i \in \mathbb{N}$ and proving that these minima grow faster than $n^{\beta}$ for any positive constant $\beta$.
– The Littlewood Problem of finding polynomials with coefficients in $\{\pm 1\}$ with smallest $L_p$-norm on the unit disk.

Additional chapters deal with a variant of Waring’s problem, Barker polynomials and Golay pairs, and spectra of values of classes of polynomials evaluated at a fixed number $q$. Besides the aforementioned Appendices B and D, Appendix A contains a compendium of inequalities and Appendix C is on explicit merit factor formulae. Each chapter ends with lists of computational and research problems and selected additional references. The treatment of the various topics is very concise. Clearly, the book is written by one of the leading experts in this area of mathematics.

MSC classification:
12-02 Research monographs (field theory)
11-02 Research monographs (number theory)
11R09 Polynomials over global fields
11C08 Polynomials (number theory)
11Y99 Computational number theory
12D05 Factorization of real or complex polynomials
12D10 Algebraic theorems of location of zeros of polynomials over R or C
12E05 Polynomials over general fields
12Y05 Computational aspects of field theory and polynomials
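To get a concrete feel for the Erdős-Szekeres quantity mentioned above, one can approximate $\|\prod_{i=1}^{n}(1-x^{a_i})\|_{\infty}$ numerically by sampling the unit circle. This sketch is my own illustration, not from the book; a finite grid only gives a lower bound on the true sup norm.

```python
import cmath

def es_sup_norm(exponents, grid=4096):
    """Grid approximation to the sup norm on |z| = 1 of prod_i (1 - z^{a_i})."""
    best = 0.0
    for k in range(grid):
        z = cmath.exp(2j * cmath.pi * k / grid)  # k-th root of unity on the grid
        p = 1 + 0j
        for a in exponents:
            p *= 1 - z ** a
        best = max(best, abs(p))
    return best
```

For a single factor 1 - z the maximum is 2, attained at z = -1, and repeating an exponent multiplies the bound accordingly; the interesting (and open) question the review describes is how slowly this norm can be made to grow by choosing the exponents well.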
Characterization of Hermitian symmetric spaces by fundamental forms

Cited by 8 (3 self). Abstract: I prove that any complex manifold that has a projective second fundamental form isomorphic to one of a rank two compact Hermitian symmetric space (other than a quadric hypersurface) at a general point must be an open subset of such a space. This contrasts the non-rigidity of all other compact Hermitian symmetric spaces observed in [12, 13]. A key step is the use of higher order Bertini type theorems that may be of interest in their own right.

- 2006 IMA WORKSHOP “SYMMETRIES AND OVERDETERMINED SYSTEMS OF PARTIAL DIFFERENTIAL EQUATIONS", 2006. "... These are lecture notes on the rigidity of submanifolds of projective space “resembling” compact Hermitian symmetric spaces in their homogeneous embeddings. The ..."

- 2006. Cited by 2 (1 self). Abstract: We classify codimension two analytic submanifolds of projective space X^n ⊂ CP^{n+2} having the property that any line through a general point x ∈ X having contact to order two with X at x automatically has contact to order three. We give applications to the study of the Debarre-de Jong conjecture and of varieties whose Fano variety of lines has dimension 2n − 4.

- 2006. Cited by 1 (1 self). Abstract: I prove that the adjoint variety of SL_{m+1}C in P(sl_{m+1}C) is rigid to order three. The principal result of this paper is Theorem 4.6 (page 8), which asserts that the adjoint variety of the simple Lie group SL_{m+1}C is rigid to order three. The result is extrinsic; roughly speaking, if a variety Y ⊂ P(sl_{m+1}C) = P^{m^2+2m−1}, of dimension n = 2m − 1, resembles the adjoint variety to third order at a 3-general point y ∈ Y, then there is a transformation in GL_{m^2+2m}C mapping Y onto the adjoint variety. The conclusion is significant because it is the first rigidity result for a variety with non-vanishing Fubini cubic F3 (a third order invariant). And it is striking that this is the first example of k-th order rigidity for which the (k + 1)-th order Fubini invariant is nonzero: F4 can not be normalized to zero. The proof is based on E. Cartan’s method of moving frames. The reader may find similar applications of the technique to the study of submanifolds of CP^N in [4, 9, 10, 11, 12], and their references. The paper is organized as follows: §1 Notation is set. The first-order adapted frame bundle associated to a variety is introduced, and the relative differential invariants Fk, or Fubini forms, are discussed. (These invariants ...)

- Abstract: These are expository notes from the 2008 Srni Winter School. They have two purposes: (1) to give a quick introduction to exterior differential systems (EDS), which is a collection of techniques for determining local existence to systems of partial differential equations, and (2) to give an exposition of recent work (joint with C. Robles) on the study of the Fubini-Griffiths-Harris rigidity of rational homogeneous varieties, which also involves an advance in the EDS technology.

- 2006. Abstract: These are lecture notes on the rigidity of submanifolds of projective space “resembling” compact Hermitian symmetric spaces in their homogeneous embeddings. The results of ...

- 2005. Abstract: We propose a unified computational framework for the problem of deformation and rigidity of submanifolds in a homogeneous space under geometric constraint. A notion of 1-rigidity of a submanifold under admissible deformations is introduced. It means every admissible deformation of the submanifold osculates a one parameter family of motions up to 1st order. We implement this idea to the question of rigidity of CR submanifolds in spheres. A class of submanifolds called Bochner rigid submanifolds are shown to be 1-rigid under type preserving CR deformations. 1-rigidity is then extended to a rigid neighborhood theorem, which roughly states that if a CR submanifold M is Bochner rigid, then any pair of mutually CR equivalent CR submanifolds that are sufficiently close to M are congruent by an automorphism of the sphere. A local characterization of Whitney submanifold is obtained, which is an example of a CR submanifold that is not 1-rigid. As a by-product, we give a simple characterization of the proper holomorphic maps from the unit ball B^{n+1} to B^{2n+1}.

- 2005. "... 3.2 Bochner rigid submanifolds ..."

- 802. Abstract: These are expository notes from the 2008 Srni Winter School. They have two purposes: (1) to give a quick introduction to exterior differential systems (EDS), which is a collection of techniques for determining local existence to systems of partial differential equations, and (2) to give an exposition of recent work (joint with C. Robles) on the study of the Fubini-Griffiths-Harris rigidity of rational homogeneous varieties, which also involves an advance in the EDS technology.
Chebyshev’s inequality

Chebyshev’s inequality, also called Bienaymé-Chebyshev inequality, in probability theory, a theorem that characterizes the dispersion of data away from its mean (average). The general theorem is attributed to the 19th-century Russian mathematician Pafnuty Chebyshev, though credit for it should be shared with the French mathematician Irénée-Jules Bienaymé, whose (less general) 1853 proof predated Chebyshev’s by 14 years.

Chebyshev’s inequality puts an upper bound on the probability that an observation should be far from its mean. It requires only two minimal conditions: (1) that the underlying distribution have a mean and (2) that the average size of the deviations away from this mean (as gauged by the standard deviation) not be infinite. Chebyshev’s inequality then states that the probability that an observation will be more than k standard deviations from the mean is at most 1/k^2. Chebyshev used the inequality to prove his version of the law of large numbers.

Unfortunately, with virtually no restriction on the shape of an underlying distribution, the inequality is so weak as to be virtually useless to anyone looking for a precise statement on the probability of a large deviation. To achieve this goal, people usually try to justify a specific error distribution, such as the normal distribution as proposed by the German mathematician Carl Friedrich Gauss. Gauss also developed a tighter bound, 4/(9k^2) (for k > 2/√3), on the probability of a large deviation by imposing the natural restriction that the error distribution decline symmetrically from a maximum at 0.

The difference between these values is substantial. According to Chebyshev’s inequality, the probability that a value will be more than two standard deviations from the mean (k = 2) cannot exceed 25 percent. Gauss’s bound is 11 percent, and the value for the normal distribution is just under 5 percent.
Thus, it is apparent that Chebyshev’s inequality is useful only as a theoretical tool for proving generally applicable theorems, not for generating tight probability bounds.
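The percentages quoted in the article are easy to verify directly; a small sketch comparing Chebyshev's bound with Gauss's bound at k = 2:

```python
def chebyshev_bound(k):
    """P(|X - mu| >= k*sigma) <= 1/k^2, for any distribution with a
    finite mean and a finite standard deviation."""
    return 1.0 / k ** 2

def gauss_bound(k):
    """Gauss's tighter bound 4/(9k^2), valid for error distributions that
    decline symmetrically from a maximum at 0, when k > 2/sqrt(3)."""
    return 4.0 / (9.0 * k ** 2)

# At k = 2: Chebyshev gives 0.25 (the article's 25 percent) and Gauss
# gives 1/9, about 0.111 (the article's 11 percent).
```

Both bounds shrink like 1/k^2, but Gauss's is always a factor of 4/9 smaller where it applies, which is exactly the gap the article describes.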
Find a Douglasville SAT Math Tutor

...I'm a mechanical engineering professor at a top university (Georgia Tech) and I did my PhD at MIT. I use the subject matter on the ACT Math exam virtually everyday to solve engineering problems. It is easy for me to teach or tutor on this subject matter.
12 Subjects: including SAT math, calculus, algebra 2, geometry

...My methods are simple, the more comfortable you are, the better you learn. I therefore make it a point to create a comfort zone for learning by assessing how best a student learns and tailoring the teaching methods to the student. My philosophy is also simple, education is easy, all you have to do is learn it!
11 Subjects: including SAT math, reading, biology, grammar

As a person that went to two different high schools, I had trouble fitting in. I solved my own problem by going to study hall and help others in chemistry, which I did for at least a year. The school gave me an award for my efforts I did that year.
17 Subjects: including SAT math, chemistry, calculus, geometry

...I have developed a beginning curriculum for students interested primarily in learning to speak and understand French. Due to my love of French language and culture, I offer a reduced rate to my French students. Although many engineers are known for their lack of grammar, I have always been a fanatic about using the English language correctly.
27 Subjects: including SAT math, English, accounting, physics

...While I was at college, I used to help my friends with their homework, practice and concept development to perform better in their tests. I was consistently the best student. After completing Calculus sequence, I started tutoring the course to junior students at various levels.
36 Subjects: including SAT math, calculus, GED, GRE
Gradient of parabola and specific max of y with 4 unknowns April 5th 2008, 04:22 PM Gradient of parabola and specific max of y with 4 unknowns i am needing help with a Maths equation that i cannot figure out. i need the gradient at x=45 to equal 3.21609 (y'=-25.593*cos(pi/25x - 4pi/5)). i also need the max of the parabola to equal (x,55) where x is unknown. a , b and c are not given. it is really bugging me because i have wasted about 20 pages and still cannot find it. if i have missed something please ask me. where y'=0, x=-B/2A and y=(-AB^2)/(4A^2)+C therefore 55 + (AB^2)/(4A^2)=C y = Ax^2 +BX +C y'= 2Ax + B therefore at x = 45 3.21609 - 90A=B using these i altered the equations but got nowhere. is the problem there (equations) if anyone can help? April 5th 2008, 06:02 PM equation found if it helps anyone, i set a new axis to (0,0) and solved for y=AX^2 where y= drop in my case -40.383 (14.617-55). i let its y' = 3.21609 and got x = in terms of A. used that in y=AX^2 and got A. (3.21609^2)/4A = -40.383, a's cancel. then used y=AX^2 + BX +C and y' = 2AX + B where x was 45 and A was found value. this got B. then i used y=AX^2 + BX +C where y = 55 and x = -B/2A and got final equation = -0.064x^2 + 8.98X -259.773 April 5th 2008, 06:53 PM I am sorry this is incoherent what aer you looking for?...what is $a$ in $ax^2+bx+c$? April 6th 2008, 12:12 AM i am looking for a, b and c in parabola general equation. if you know a bit about parabola's im sure that if you read the whole thing you will understand what i am saying. i had to get the gradient of the derivitive to equal that of another equation, and have a maximum of a particular y-value. that was all i was given. no x value for the max, no a,b or c for the general equation...nothing! April 6th 2008, 01:31 AM i am needing help with a Maths equation that i cannot figure out. i need the gradient at x=45 to equal 3.21609 (y'=-25.593*cos(pi/25x - 4pi/5)). 
i also need the max of the parabola to equal (x,55) where x is unknown. a , b and c are not given. it is really bugging me because i have wasted about 20 pages and still cannot find it. if i have missed something please ask me. where y'=0, x=-B/2A and y=(-AB^2)/(4A^2)+C therefore 55 + (AB^2)/(4A^2)=C y = Ax^2 +BX +C y'= 2Ax + B therefore at x = 45 3.21609 - 90A=B using these i altered the equations but got nowhere. is the problem there (equations) if anyone can help? I'm sorry... I also have trouble understanding your actual question. From what I can tell... * You want the derivative of y'=-25.593*cos(pi/25x - 4pi/5) when x = 45 to equal 3.21609 * You also want the local maximum point of a parabola to be (x, 55) The problem here is we are confused at what you are actually trying to do, could you please post the questions and given information ONLY. Note: This may help. The general equations for a parabola are: General form: $y=a(x-h)^2+k$ Turning Point form: $y=ax^2+bx+c$
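The poster's final coefficients can be checked against the two stated constraints (gradient 3.21609 at x = 45, and a maximum y-value of 55). This sketch, written for illustration, verifies the fit up to the rounding in -0.064x^2 + 8.98x - 259.773:

```python
def parabola_checks(a, b, c, x):
    """For y = a*x^2 + b*x + c, return the gradient at x and the vertex:
    x_v = -b/(2a), y_v = c - b^2/(4a). The vertex is a maximum when a < 0."""
    gradient = 2 * a * x + b
    vertex_x = -b / (2 * a)
    vertex_y = c - b ** 2 / (4 * a)
    return gradient, vertex_x, vertex_y
```

With the rounded coefficients the gradient at x = 45 comes out near 3.22 and the maximum near 55.2, so the three decimal places reported in the thread only satisfy the constraints approximately.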
HOW TO PLAY FREECELL

by Sam Sloan

FreeCell is a Solitaire-type game which can be downloaded from the Internet from the address: http://www.goodsol.com/freedown.html

FreeCell is becoming increasingly popular because, unlike standard Solitaire, it is purely a game of skill. With perfect play, the player can win every game of FreeCell. At the start of the game, all the cards are placed face up, so it is theoretically possible for a player to calculate the solution from the starting position without moving even a single card. On the other hand, the game is hard enough that there are probably few people who can actually do that in their head.

RULES OF THE GAME

At the beginning of the game, all 52 cards are dealt face up, in 8 columns. There are 4 columns with 7 cards each. The other 4 columns have 6 cards each, for a total of 52 cards in all. Above the cards, to the left, are four empty cells. There are another four empty cells above the cards to the right.

The four cells to the left are the "free cells". Any card on the bottom of the columns can be placed in any one of those cells, provided that the cell is empty. As to the four cells on the right, from the starting position, only aces may be placed in those cells. After an ace has been placed in the cell, a deuce of the same suit may be placed on top of it, then a three and so on, up until a king. The game is won when all 52 cards are stacked up in the four cells on the upper right. Once a card is placed in one of the cells on the upper right, it is out of the game and cannot be moved again.

Regarding the cards on the bottom of the columns, the rule is that a red card on any of the bottom rows may be placed on a black card on another bottom row one rank higher. Similarly, a black card may be placed on a red card one rank higher. Thus, a red 7 of diamonds or hearts may be placed on a black 8 of spades or clubs.
Also, a black Jack of spades or clubs may be placed on a red Queen of hearts or diamonds. In short, the game is conceptually similar to standard Solitaire, except that all the cards are face up.

A card on a freecell may be brought down and placed on a card of the opposite color one rank higher. Cards on the freecells may also be placed on a card one rank lower of the same suit on the cells to the upper right. When a column is empty, then any card from the freecells or from the bottom of the other columns can be placed in the empty space.

When a move is virtually forced, in that there is no reason not to make such a move, the computer will sometimes do it for you. For example, let's say that in the starting position, you have on the bottom row, an ace of spades, an ace of hearts, a 2 of hearts and a 3 of hearts. After you have made the first move to start the game by putting up the ace of spades, the computer by itself will move up the ace of hearts and the 2 of hearts. However, it will not move the 3 of hearts by itself, because you might possibly want to leave the 3 there and place a black 2 on it.

As the game nears the end, the computer will at some point see that all the cards are in ascending order and you have a forced win. In that case, the computer will make all the remaining moves for you and declare you the winner. This speeds up the game considerably.

As the game progresses, columns will be created with alternate black and red cards of progressively higher rank. Now, suppose you have a red 5 on a black 6 on a red 7 on a black 8 on a red 9 and, on the bottom of another column, you have a black 10. You will probably want to move all the cards in the sequence on the first column over to the second column. How do you do that? The answer is through the freecells.

First you take the red 5 on the bottom and place it on a free cell. Then you take the black 6 which is now on the bottom and place it on another free cell.
Then you take the red 7 on the bottom and place it on a third free cell and finally you take the black 8 which is now on the bottom and place it on the fourth free cell. Now all four free cells are occupied. Now you take the red 9 which is now on the bottom and place it on the black 10. Then you take the black 8 on the freecell and place it on the red 9. Then you take the red 7 on a free cell and place it on the black 8, following which the black 6 goes on the red 7 and the red 5 goes on the black 6.

To shorten this time consuming process, the computer will do it all for you. First, just click on the column which has the red 5 up to red 9. Then click on the black 10. Then the computer will make all the moves for you automatically.

However, the above maneuver will be illegal if even one of the free cells is occupied. You will need for all 4 free cells to be free to make the above transfer. If even one freecell is occupied, the computer will tell you: "That move requires moving 5 cards. You have only enough free space to move 4."

Thus, it can be seen that it is of vital importance to keep the free cells free. As the game progresses, it will eventually become necessary to put a card more or less permanently in a free cell to free up a card below it. However, each time a free cell is filled, you have less room to maneuver. Once all four free cells are full and you still have no move, you lose the game.

Under "options" at the bar on the top of the screen, there are "statistics". This will keep track of how many games you have won and lost and the length of your longest winning and losing streaks, so you can mark your improvement in playing FreeCell.

STRATEGY OF PLAYING FREECELL

I had to lose nearly 100 games of FreeCell before I got to be any good at it. I now win about 90% of the games I play. I would win 100% if I concentrated deeply and took my time, as all FreeCell games are a win with correct play.
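The free-cell arithmetic in the program's message ("moving 5 cards" needs 4 free cells) and the tableau stacking rule can be captured in a few lines. Note that some FreeCell implementations also double the movable-run limit for each empty column; the text only describes the free-cell count, so that refinement is deliberately left out here.

```python
def max_movable(free_cells):
    """Longest ordered run you can relocate in one go by shuttling cards
    through the free cells, per the text: free_cells + 1."""
    return free_cells + 1

def can_stack(moving, target):
    """Tableau rule: the moving card must be one rank lower and the
    opposite colour. Cards are (rank, colour) tuples, colour 'R' or 'B'."""
    (rank_m, colour_m), (rank_t, colour_t) = moving, target
    return rank_m + 1 == rank_t and colour_m != colour_t
```

So with all four free cells open a 5-card run (the red 5 through red 9 in the example) can be moved onto the black 10, but occupying even one cell drops the limit to 4 and the transfer fails, exactly as the program reports.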
During the week ending August 1, 1997, I won 26 games of FreeCell in a row, without a single loss.

When starting a game of FreeCell, before making any moves at all, you should survey all the cards in the position. You may find clusters, a column where big cards like kings, queens or jacks are clustered, and another where aces, deuces and threes are clustered. You will probably want to attack the column with a lot of small cards. However, be careful. You can get trapped making hasty moves without planning ahead.

Obviously, if there is an ace at the bottom of a column, you move it up to where the aces are placed on the upper right hand side. Before making any other moves, move your eyes up each column of cards. Look at each card in each column and see where that card could possibly be moved now or later in the game. For example, if a red 7 is at the bottom of a column, look to see where both black 8s are. Go up every card in the column. You may find a column in which every card can be moved somewhere immediately and you can reach the top. If you can clear a column entirely, this gives a lot of additional room to maneuver, because you can start putting down a whole column of cards with alternating red and black colors, preferably with a king at the top.

After that, look for moves which do not cost anything. For example, suppose that in one column there is a red jack and above it an ace and above it a black queen. Now, you can move the red jack to a free cell. Then, put the ace in a cell in the upper right hand side, and now bring the jack down and place it on the queen. Now, you have freed up one card at no cost. All four of your free cells are still free. You should make any moves like this you can find.

Also, it generally does not hurt and usually helps to make any other moves which do not tie up a free cell. For example, if you see a red 4 on the bottom of one column and a black 5 on the bottom of another, it is probably safe to place the 4 on the 5.
However, before doing this, look to see if this move will prevent any other move you are likely to want to make later on.

Sometimes you can free up an entire column with one maneuver. For example, you might find a column with a red king on the bottom. Above it are a random assortment consisting of a black queen, a red jack, a red ace, a black ace, a red 4 and a black 3. Also, you might see a black 5 at the bottom of the other column. In such a case, you need to calculate the moves in order, but you might be able to move the king, queen and jack plus the black 3 to the free cells. Then, put the red 4 on the black 5 and the black 3 on the red 4, put the two aces on the cells to the upper right, and then place the king, followed by the queen, followed by the jack on the now empty column. Now you have freed up two aces plus improved your position by having a king, queen and jack at the top of one column plus a 5, 4 and 3 in the proper order in another column. You will need to see combinations like this and to be able to plan ahead to win consistently at FreeCell.

However, it is usually not so perfect. Suppose that there was no black 5 in another column. Then, if you completed the above maneuver, you would be left with two cards, the red 4 and the black 3, in the freecells. This is a dangerous position to be in. You have only two more free cells left. Nevertheless, it is sometimes necessary to take risks like this to win the game. As you gain experience playing FreeCell, you will develop an idea of what risks are worth taking.
Here are some principles based upon my experience:

Do not be afraid of building big towers. Suppose you have a column with an ace at the top. Below that is a red king, followed by a black queen, followed by a red jack, followed by a black 10 and so on down to a black 2 at the bottom. You will think that your game is lost. How will you ever get the ace at the top out? Actually, if you have this you will have virtually won the game, because by having 12 cards in proper order, you have enough room left on the rest of the board to organize the remaining cards. Finally, when the board opens up, you will be able to move that big stack of cards 5 at a time and free up the ace on the top. In short, big towers are good. Build them as much and as high as you can.

When given a choice, always put a card on the highest tower you can. If you have a red 6 on a black 7 on a red 8, but you can move the six to a black 7 which is on a red 8 which is on a red 9, make that move. The higher the tower, the better it is for you.

The best place to move a card is to a column which has a king on the top. Always try to pile your cards on a column on which the top card is a king. There will be plenty of opportunities to do this near the end of the game. Next priority is to put cards on a column in perfectly ascending order. For example, if a column has a red 10 at the top, and below it a black 9 and below it a red 8, that would be a good place to put a black 7. Failing that, put your cards on a column with the largest sequence of ascending order, even if unordered cards are at the top.

If given a choice between two otherwise equal places to put a card, always put the card on the column which has the largest sequence of ascending cards with alternating color.

My experience has been that once you have 12 cards safely on the cells on the upper right hand side of the board, you always win, no matter how bad your position may otherwise be.
So, when the computer says "Cards left: 40" or less, you can be confident of winning. However, it is not always easy to see the way to win. Sometimes I look for five or ten minutes or even longer, thinking that I am hopelessly stuck and there is no way out. Then, in a flash, I see the way to make progress and the game is solved. If you think you have a lost position, do not give up until you have considered all the possibilities. More often than not, there is a way to keep the game going and to win. If you lose a game, the computer gives you a choice either to replay the same game again or to play a new game. Personally, I always play the same game again to see where I went wrong. I will play the same game over and over again until I finally get it right. Remember, there is a solution to every game. However, I recommend that new players keep playing different games. Some positions are just too hard to solve, and it might be better to try something easier for starters. In standard FreeCell, the hardest game to solve I have found so far is #28118. I had to lose at least 10 games and then sleep on it before solving it. I finally solved it by the exhaustive trial-and-error method. I attacked the first column and, after losing, started again and attacked the second, and so on. Here is the solution: Start by attacking both the second and fourth columns. Move the four of spades and the king of hearts in the second column to the free cells. This frees the ace of diamonds, which moves automatically to the top. Next, move the 4 of diamonds and the 8 of spades in the 4th column to the free cells. Now, move the queen of spades in column 4 to the king of diamonds in column 5. Move the jack of diamonds in column 7 to the queen of spades in column 5. Move the 5 of clubs in column 7 to the 6 of diamonds in column 2. Move the 6 of spades in column 3 to the 7 of diamonds in column 7. Move the 5 of diamonds in column 4 to the 6 of spades in column 7.
Move the 4 of spades in the freecell to the 5 of diamonds in column 7. Move the queen of clubs in column 4 to a freecell. Move the 2 of spades in a freecell to the 3 of diamonds in column 7. Move the king of hearts in a freecell to the empty 4th col. Move the queen of clubs in a freecell to the king of hearts in column 4. Move the jack of diamonds in column 5 to the queen of clubs in column 4. Move the 4 of diamonds in the freecell to the 5 of clubs in column 2. Move the king of spades in column 8 to a freecell. Now, click on column 2 and then click on column 8. This will simultaneously move the 6 of diamonds, the 5 of clubs and the 4 of diamonds on column 2 to the 7 of spades on column 8. Next, move the queen of diamonds in column 2 to a freecell. Move the 10 of diamonds in column 2 to the jack of spades in column 3. Move the 3 of clubs in column 2 to the 4 of diamonds in column 8. Move the king of spades in the freecell to the empty column 2. Click on column 3 and then on column 2, which will move the jack of spades and the ten of diamonds in column 3 to the queen of diamonds in column 2. Move the 9 of clubs in column 6 to the 10 of diamonds in column 2. Move the 10 of clubs in column 3 to the jack of diamonds in column 4. Move the queen of spades in column 5 to a freecell. Move the king of diamonds in column 5 to a freecell. Now, the 2 of diamonds automatically goes on the ace at the top. Move the 8 of hearts on column 5 to the 9 of clubs in column 2. Now, the 2 of clubs automatically goes on the ace. Move the 9 of diamonds in column 5 to the 10 of clubs in column 4. Move the 7 of hearts in column 5 to the 8 of spades in column 4. Move the king of diamonds in the freecell to the empty column 5. Move the queen of spades in the freecell to the king of diamonds in column 5. Move the 6 of clubs in column 6 to the 7 of hearts in column 4. Move the 9 of spades in column 6 to a freecell. Move the 7 of clubs in column 6 to the queen of hearts in column 2. 
Move the jack of hearts in column 6 to the queen of spades in column 5. Move the jack of clubs in column 3 to a freecell. Move the 6 of hearts in column 3 to the 7 of clubs in column 2. Move the queen of hearts in column 3 to empty column 6. Move the jack of clubs in the freecell to the queen of hearts in column 6. Move the 10 of spades in column 3 to the jack of hearts in column 5. Click on column 8 and then on column 3. After that, the computer will ask you, "Move column" or "Move single card". Click on "Move column". This will move the six of diamonds, the five of clubs, the 4 of diamonds and the 3 of clubs all from column 8 to the empty column 3. Next, move the 7 of spades in column 8 to the freecell. Move the 9 of hearts in column 8 to the 10 of spades in column 5. Move the 10 of hearts in column 8 to the jack of clubs in column 6. The 3 of clubs in column 3 will automatically move up to the 2 of clubs on top. Move the 9 of spades in the freecell to the 10 of hearts in column 6. Now, the 3 of spades in column 1, the 4 of spades in column 7 and the 5 of diamonds in column 7 will automatically move up. Now, click on column 7 and then on column 5. This will move the 8 of clubs, the 7 of diamonds and the 6 of spades in column 7 to the 9 of hearts in column 5. Now, anything wins. Move the 5 of spades in column 1, the 8 of diamonds in column 1 and the king of clubs in column 1 to the freecells and all the remaining cards will be automatically moved by the computer to the concluding position. This is by far the most difficult game of FreeCell I have encountered. It took me more than ten tries to get it. Then, I forgot the solution and it took me ten more tries to get it again. Perhaps what makes it so difficult is the need to use up all the freecells on the first moves and later to bury two aces simultaneously under towers in columns 7 and 8. There are also other promising looking possible solutions, but these do not seem to lead anywhere. 
I believe that this is the only solution to this problem. If anyone can find another solution, please let me know. Good luck playing FreeCell, and remember: you may have to lose a lot of games before you start winning regularly.

Here are some other games you can play:

For the rules of Japanese chess, or Shogi, see: Basic Rules of Shogi.
For the rules of Chinese chess, or Xiangqi, see: Basics of Chinese Chess.
For the rules of Thai chess, or Makrook Thai, see: Rules of Thai Chess.
For how to play Minesweeper, see: Minesweeper for Beginners.

Contact address: Sloan@ishipress.com
Hi? How do we submit the assignments?

Reply: You mean... add an attachment?

Asker: Yeah, I just got the ps0 file, and I was wondering: when are the assignments due, and how and where do we submit them?

Reply: What do you mean, when the attachments are due?

Reply: Hi @slizzy. Here on OpenStudy you can post your problems regarding different subjects. For example, if you want to post your math problems, go to the Mathematics section and post your question on the left where it says "Ask a question."

Reply: slizzy, it sounds like you are talking about MIT OCW courses. You don't have to submit assignments anywhere for those; they are done on your own. We offer a place to get and give help on them, but they don't have to be submitted anywhere once complete.

Reply: You may want to check out the Mechanical MOOC at http://mechanicalmooc.org/, which uses the OCWs and ourselves to create something a little more "official."

Asker: Thanks, everyone, for responding to my question. I thought the mini tests were being submitted. Is any one of you taking the 6.00x course?

Reply: Only some people here are taking that course. You can find them in the specific group for that course: http://openstudy.com/study#/groups/MIT%206.00%20Intro%20Computer%20Science%20(OCW). There are a bunch of other groups here, though, and consequently all of our users are somewhere different along their learning journey. Some are tied to OCWs and others aren't. You can explore these other subjects by clicking "Find More Subjects" or by clicking the horizontal blue bar at the top of the page.
Automatic Circle Detection on Images Based on an Evolutionary Algorithm That Reduces the Number of Function Evaluations

Mathematical Problems in Engineering, Volume 2013 (2013), Article ID 868434, 17 pages

Research Article

^1Departamento de Electrónica, Universidad de Guadalajara, CUCEI, Avenue Revolución 1500, CP 44430, Guadalajara, JAL, Mexico
^2Departamento de Ingenierías, Universidad de Guadalajara, CUTONALA, Morelos 180, CP 45400, Tonalá, JAL, Mexico

Received 15 July 2013; Accepted 2 September 2013

Academic Editor: Raul Rojas

Copyright © 2013 Erik Cuevas et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract. This paper presents an algorithm for the automatic detection of circular shapes from complicated and noisy images with no consideration of the conventional Hough transform principles. The proposed algorithm is based on a newly developed evolutionary algorithm called the Adaptive Population with Reduced Evaluations (APRE). Our proposed algorithm reduces the number of function evaluations through the use of two mechanisms: (1) adapting dynamically the size of the population and (2) incorporating a fitness calculation strategy, which decides whether the calculation or estimation of the new generated individuals is feasible. As a result, the approach can substantially reduce the number of function evaluations, yet preserving the good search capabilities of an evolutionary approach. Experimental results over several synthetic and natural images, with a varying range of complexity, validate the efficiency of the proposed technique with regard to accuracy, speed, and robustness.

1.
Introduction The problem of detecting circular features holds paramount importance in several engineering applications such as automatic inspection of manufactured products and components, aided vectorization of drawings, and target detection [1, 2]. Circle detection in digital images has been commonly solved through the Circular Hough Transform (CHT) [3]. Unfortunately, this approach requires a large storage space that augments the computational complexity and yields a low processing speed. In order to overcome this problem, several approaches which modify the original CHT have been proposed. One well-known example is the Randomized Hough Transform (RHT) [4]. As an alternative to Hough Transform-based techniques, the problem of shape recognition has also been handled through evolutionary methods. In general, they have demonstrated to deliver better results than those based on the HT considering accuracy and robustness [5]. Evolutionary methods approach the detection task as an optimization problem whose solution involves the computational expensive evaluation of objective functions. Such fact strongly restricts their use in several image processing applications; despite this, EA methods have produced several robust circle detectors which use different evolutionary algorithms like Genetic algorithms (GA) [5], Harmony Search (HSA) [6], Electromagnetism-Like (EMO) [7], Differential Evolution (DE) [8], and Bacterial Foraging Optimization (BFOA) [9]. However, one particular difficulty in applying any EA to real-world problems, such as image processing, is its demand for a large number of fitness evaluations before reaching a satisfactory result. Fitness evaluations are not always straightforward as either an explicit fitness function does not exist or the fitness evaluation is computationally expensive. 
The problem of excessively long fitness function calculations has already been faced in the field of evolutionary algorithms (EA) and is better known as evolution control or as fitness estimation [10 ]. For such an approach, the idea is to replace the costly objective function evaluation for some individuals by alternative models which are based on an approximate model of the fitness landscape. The individuals to be evaluated and those to be estimated are determined following some fixed criteria which depend on the specific properties of the approximate model [11]. The models involved at the estimation can be dynamically built during the actual EA execution, since EA repeatedly sample the search space at different points [12]. There are several alternative models which have been used in combination with popular EAs. Some examples include polynomial functions [13], kriging schemas [14], multilayer perceptrons [15], and radial basis-function networks [16]. In the practice, the construction of successful models which can globally deal with the high dimensionality, ill distribution, and limited number of training samples is very difficult. Experimental studies [17] have demonstrated that if an alternative model is used for fitness evaluation, it is very likely that the evolutionary algorithm will converge to a false optimum. A false optimum is an optimum of the alternative model, which does not coincide with the optimum of the original fitness function. Under such conditions, the use of the alternative fitness models degrade the search effectiveness of the original EAs, producing frequently inaccurate solutions [18]. In an EA, the population size has a direct influence on the solution quality and its computational cost [19]. Traditionally, population size is set in advance to a prespecified value and remains fixed through the entire execution of the algorithm. 
If the population size is too small, then the EA may converge too quickly affecting severely the solution quality [20]. On the other hand, if it is too large, then the EA may present a prohibitive computational cost [19]. Therefore, an appropriate population size allows maintaining a trade-off between computational cost and effectiveness of the algorithm. In order to solve such a problem, several approaches have been proposed for dynamically adapting the population size. These methods are grouped into three categories [21]: (i) methods that increment or decrement the number of individuals according to a fixed function; (ii) methods in which the number of individuals is modified according to the performance of the average fitness value, and (iii) algorithms based on the population diversity. In order to use either a fitness estimation strategy or an adaptive population size approach, it is necessary but not sufficient to tackle the problem of reducing the number of function evaluations. Using a fitness estimation strategy, during the evolution process with no adaptation of the population size to improve the population diversity, makes the algorithm defenseless against the convergence to a false minimum and may result in poor exploratory characteristics of the algorithm [18]. On the other hand, the adaptation of the population size omitting the fitness estimation strategy leads to increase in the computational cost [20]. Therefore, it does seem reasonable to incorporate both approaches into a single algorithm. Since most of the EAs have been primarily designed to completely evaluate all involved individuals, techniques for reducing the evaluation number are usually incorporated into the original EAs in order to estimate fitness values or to reduce the number of individuals being evaluated [22]. However, the use of alternative fitness models degrades the search effectiveness of the original EAs, producing frequently inaccurate solutions [23]. 
This paper presents an algorithm for the automatic detection of circular shapes from complicated and noisy images without considering the conventional Hough transform principles. The proposed algorithm is based on a newly developed evolutionary algorithm called the Adaptive Population with Reduced Evaluations (APRE). The proposed algorithm reduces the number of function evaluations through the use of two mechanisms: () adapting dynamically the size of the population and () incorporating a fitness calculation strategy which decides when it is feasible to calculate or only to estimate new generated individuals. The APRE method begins with an initial population which is to be considered as a memory during the evolution process. To each memory element, a normalized fitness value, called quality factor is assigned to indicate the solution capacity that is provided by the element. Only a variable subset of memory elements is considered to be evolved. Like all EA-based methods, the proposed algorithm generates new individuals considering two operators: exploration and exploitation. Both operations are applied to improve the quality of the solutions by: () searching through the unexplored solution space to identify promising areas that contain better solutions than those found so far and () successive refinement of the best found solutions. Once the new individuals are generated, the memory is accordingly updated. At such stage, the new individuals compete against the memory elements to build the final memory configuration. In order to save computational time, the approach incorporates a fitness estimation strategy that decides which individuals can be estimated or actually evaluated. The proposed fitness calculation strategy estimates the fitness value of new individuals using memory elements located in neighboring positions which have been visited during the evolution process. 
In the strategy, new individuals, that are located near the memory element whose quality factor is high, have a great probability to be evaluated by using the true objective function. Similarly, evaluated those new particles lying in regions of the search space with no previous evaluations are also evaluated. The remaining search positions are only estimated by assigning the same fitness value that is the nearest location element on the memory. The use of such a fitness estimation method contributes to saving computational time, since the fitness value of only very few individuals is actually evaluated whereas the rest is just estimated. Different to other approaches that use an already existent EA as framework, the APRE method has been completely designed to substantially reduce the computational cost, yet preserving good search In order to detect circular shapes, the detector is implemented by encoding three pixels as candidate circles over the edge image. An objective function evaluates if such candidate circles are actually present in the edge image. Guided by the values of this objective function, the set of encoded candidate circles are evolved using the operators defined by APRE so that they can fit into the actual circles on the edge map of the image. Comparisons to several state-of-the-art evolutionary-based methods and the Randomized Hough Transform (RHT) approach on multiple images demonstrate a better performance of the proposed method in terms of accuracy, speed, and robustness. The paper is organized as follows. In Section 2, the APRE algorithm and its characteristics are both described. Section 3 formulates the implementation of the circle detector. Section 4 shows the experimental results of applying our method to the recognition of circles in different image conditions. Finally, Section 5 discusses several conclusions. 2. 
The Adaptive Population with Reduced Evaluations (APRE) Algorithm In the proposed algorithm, a population of candidate solutions to an optimization problem is evolved toward better solutions. The algorithm begins with an initial population which will be used as a memory during the evolution process. To each memory element, it is assigned a normalized fitness value called quality factor that indicates the solution capacity provided by the element. As a search strategy, the proposed algorithm implements two operations: “exploration” and “exploitation.” Both necessary in all EAs [24]. Exploration is the operation of visiting entirely new points of a search space, whilst exploitation is the process of refining those points of a search space within the neighborhood of previously visited locations in order to improve their solution quality. Pure exploration degrades the precision of the evolutionary process but increases its capacity to find new potential solutions [25]. On the other hand, pure exploitation allows refining existent solutions but adversely drives the process to fall in local optimal solutions [26]. Therefore, the ability of an EA to find a global optimal solution depends on its capacity to find a good trade-off between the exploitation of so far found elements and the exploration of the search space. The APRE algorithm is an iterative process in which several actions are executed. First, the number of memory elements to be evolved is computed. Such number is automatically modified at each iteration. Then, a set of new individuals is generated as a consequence of the execution of the exploration operation. For each new individual, its fitness value is estimated or evaluated according to a decision taken by a fitness estimation strategy. Afterwards, the memory is updated. In this stage, the new individuals produced by the exploration operation compete against the memory elements to build the final memory configuration. 
Finally, a sample of the best elements contained in the final memory configuration undergoes the exploitation operation. Thus, the complete process can be divided into six phases: initialization, selecting the population to be evolved, exploration, fitness estimation strategy, memory updating, and exploitation.

2.1. Initialization

Like any EA, the APRE algorithm is an iterative method whose first step is to randomly initialize the population, which will be used as a memory during the evolution process. The algorithm begins by initializing a set of $N_m$ elements $(\mathbf{m}_1, \mathbf{m}_2, \ldots, \mathbf{m}_{N_m})$. Each element $\mathbf{m}_i$ is an $n$-dimensional vector containing the parameter values to be optimized. Such values are randomly and uniformly distributed between the prespecified lower initial parameter bound $l_j$ and the upper initial parameter bound $u_j$, as described by the following expression:

$$m_{i,j} = l_j + \operatorname{rand}(0,1)\cdot(u_j - l_j), \quad j = 1,\ldots,n; \; i = 1,\ldots,N_m,$$

where $j$ and $i$ are the parameter and element indexes, respectively. Hence, $m_{i,j}$ is the $j$th parameter of the $i$th element. Each element has two associated characteristics: a fitness value and a quality factor. The fitness value $f_i$ assigned to each element can be calculated by using the true objective function or only estimated by using the proposed fitness strategy. In addition to the fitness value, each element $\mathbf{m}_i$ is also assigned a normalized fitness value called the quality factor ($q_i$), which is computed as follows:

$$q_i = \frac{f_i - f_{\min}}{f_{\max} - f_{\min}},$$

where $f_i$ is the fitness value obtained by evaluation or by estimation of the memory element $\mathbf{m}_i$. The values $f_{\max}$ and $f_{\min}$ are defined as follows (considering a maximization problem):

$$f_{\max} = \max_{i} f_i, \qquad f_{\min} = \min_{i} f_i.$$

Since the mechanism by which an EA accumulates information regarding the objective function is an exact evaluation of the quality of each potential solution, all the elements of the memory are initially evaluated without considering the fitness estimation strategy proposed in this paper. This is only allowed at the initial stage.

2.2.
Selecting the Population to Be Evolved At each iteration, it must be selected which and how many elements from will be considered to build the population in order to be evolved. Such selected elements will be undergone by the exploration and exploitation operators in order to generate a set of new individuals. Therefore, two things need to be defined: the number of elements to be selected and the strategy of selection. 2.2.1. The Number of Elements to Be Selected One of the mechanisms used by the APRE algorithm for reducing the number of function evaluations is to modify dynamically the size of the population to be evolved. The idea is to operate with the minimal number of individuals that guarantee the correct efficiency of the algorithm. Hence, the method aims to vary the population size in an adaptive way during the execution of each iteration. At the beginning of the process, a predetermined number of elements are considered to build the first population; then, it will be incremented or decremented depending on the algorithm’s performance. The adaptation mechanism is based on the lifetime of the individuals and on their solution quality. In order to compute the lifetime of each individual, it is assigned a counter () to each element of . When the initial population is created, all the counters are set to zero. Since the memory is updated at each generation, some elements prevail and others will be substituted by new individuals. Therefore, the counter of the surviving elements is incremented by one whereas the counter of new added elements is set to zero. Another important requirement to calculate the number of elements to be evolved is the solution quality provided by each individual. The idea is to identify two classes of elements, those that provide good solutions and those that can be considered as bad solutions. 
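The bookkeeping just described (lifetime counters plus a split of the memory into above-average and below-average elements) can be sketched in Python. The exact weighting formula and the tuning value used by APRE are not fully recoverable from the text, so the combination below — group sizes plus summed counters, scaled by a hypothetical factor `alpha` — is only a plausible stand-in, not the paper's exact model.

```python
import math

def population_delta(memory, alpha=0.5):
    """Sketch of the adaptive population-size rule: split the memory by
    average fitness, then weigh each group by its size plus the summed
    lifetime counters. 'alpha' is a hypothetical fine-tuning factor
    (the paper's own value is not reproduced here)."""
    avg = sum(m["fitness"] for m in memory) / len(memory)
    good = [m for m in memory if m["fitness"] > avg]
    bad = [m for m in memory if m["fitness"] <= avg]
    c_good = sum(m["counter"] for m in good)
    c_bad = sum(m["counter"] for m in bad)
    # Many aged, below-average elements suggest stagnation, so the
    # population grows; a dominant above-average group lets it shrink.
    return math.floor(alpha * ((len(bad) + c_bad) - (len(good) + c_good)))
```

The returned (possibly negative) value would then be added to the current population size, clipped to sensible bounds.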
In order to classify each element, the average fitness value produced by all the elements of the memory is calculated as

$$\bar{f} = \frac{1}{N_m}\sum_{i=1}^{N_m} f_i,$$

where $f_i$ represents the fitness value corresponding to element $\mathbf{m}_i$. These values are evaluated either by the true objective function or by the fitness estimation strategy. Considering the average fitness value, two groups are built: the set constituted by the elements whose fitness values are greater than $\bar{f}$ and the set which groups the elements whose fitness values are equal to or lower than $\bar{f}$. Therefore, the number of individuals of the current population that will be incremented or decremented at each generation is calculated by a model in which the floor function maps a real number to the previous integer; the model combines the number of elements of each group with the sums of the counters that correspond to the elements of each group. The factor used in the model is a term for fine-tuning. A small value implies better algorithm performance at the price of an increment in the computational cost. On the other hand, a big value involves a low computational cost at the price of a decrement in the algorithm's performance. Therefore, the value must reflect a compromise between performance and computational cost. In our experiments such a compromise has been found. The number of elements that define the population to be evolved is then computed by adding this (positive or negative) quantity to the current size, so the size of the population may be larger or smaller than before. The computational procedure that implements this method is presented in Algorithm 1, in the form of pseudocode.

2.2.2. Selection Strategy for Building the Population

Once the number of individuals has been defined, the next step is the selection of elements from the memory for building the population. A new population which contains the same elements is generated, but sorted according to their fitness values.
Thus, the sorted population presents in its first positions the elements whose fitness values are better than those located in the last positions. Then, it is divided into two parts: the first section corresponds to the first elements of the sorted population, whereas the rest of the elements constitute the second section. Figure 1 shows this process. In order to promote diversity in the selection strategy, 80% of the individuals are taken from the first elements of the first section, as shown in Figure 2. The remaining 20% of the individuals are randomly selected from the second section; this last set of elements is chosen considering that all elements of the second section have the same possibility of being selected. Figure 2 shows a description of the selection strategy. The computational procedure that implements this method is presented in Algorithm 2, in the form of pseudocode.

2.3. Exploration Operation

The first main operation applied to the population is the exploration operation. Considering the selected elements as the input population, APRE mutates them to produce a temporal population of new vectors. In the exploration operation two different mutation models are used: the mutation employed by the Differential Evolution algorithm (DE) [27] and the trigonometric mutation operator [28].

2.3.1. DE Mutation Operator

In this mutation, three distinct individuals $\mathbf{x}_{r_1}$, $\mathbf{x}_{r_2}$, and $\mathbf{x}_{r_3}$ are randomly selected from the current population. Then, a new value is created considering the following model:

$$v_{i,j} = x_{r_1,j} + F\cdot(x_{r_2,j} - x_{r_3,j}),$$

where $r_1$, $r_2$, and $r_3$ are randomly selected indexes such that $r_1 \neq r_2 \neq r_3 \neq i$, with $i = 1,\ldots,N$ (population size) and $j = 1,\ldots,n$ (number of decision variables). Hence, $v_{i,j}$ is the $j$th parameter of the $i$th donor vector. The scale factor $F \in (0, 1+)$ is a positive real number that controls the rate at which the population evolves.

2.3.2.
Trigonometric Mutation Operator

The trigonometric mutation operation is performed according to the following formulation (reconstructed from the trigonometric mutation operator of [28]):

$$\mathbf{v}_i = \frac{\mathbf{x}_{r_1} + \mathbf{x}_{r_2} + \mathbf{x}_{r_3}}{3} + (p_2 - p_1)(\mathbf{x}_{r_1} - \mathbf{x}_{r_2}) + (p_3 - p_2)(\mathbf{x}_{r_2} - \mathbf{x}_{r_3}) + (p_1 - p_3)(\mathbf{x}_{r_3} - \mathbf{x}_{r_1}),$$

$$p_k = \frac{|f(\mathbf{x}_{r_k})|}{|f(\mathbf{x}_{r_1})| + |f(\mathbf{x}_{r_2})| + |f(\mathbf{x}_{r_3})|}, \quad k = 1, 2, 3,$$

where $\mathbf{x}_{r_1}$, $\mathbf{x}_{r_2}$, and $\mathbf{x}_{r_3}$ represent individuals randomly selected from the current population, whereas $f(\mathbf{x}_{r_k})$ represents the fitness value (calculated or estimated) corresponding to $\mathbf{x}_{r_k}$. Under this formulation, the individual to be perturbed is the average value of the three randomly selected vectors. The perturbation imposed over such an individual is implemented by the sum of three weighted vector differentials; $(p_2 - p_1)$, $(p_3 - p_2)$, and $(p_1 - p_3)$ are the weights applied to these vector differentials. Notice that the trigonometric mutation is a greedy operator, since it biases the donor strongly in the direction where the best one of the three individuals is lying.

Computational Procedure. Considering the selected elements as the input population, all its individuals are sequentially processed, beginning with the first individual. When the $i$th individual is processed, three distinct individuals are randomly selected from the current population such that their indexes are all different from each other and from $i$. Then, each dimension of the individual is processed, beginning with the first parameter until the last dimension has been reached. At each processing cycle, the parameter considered as a parent creates an offspring in two steps. In the first step, from the three selected individuals, a donor value is created by means of the two mutation models. In order to select which mutation model is applied, a uniform random number is generated within the range [0,1]. If such a number is less than a threshold MR, the donor value is generated by the DE mutation operator; otherwise, it is produced by the trigonometric mutation operator. In the second step, the final value of the offspring is determined. Such decision is stochastic; hence, a second uniform random number is generated within the range [0,1].
If this random number is less than , ; otherwise, . This operation can be formulated as follows: The complete computational procedure is presented in Algorithm 3, in the form of pseudocode.

2.4. Fitness Estimation Strategy

Once the population has been generated by the exploration operation, it is necessary to calculate the fitness value provided by each individual. In order to reduce the number of function evaluations, a fitness estimation strategy that decides which individuals can be estimated and which must actually be evaluated is introduced. The idea of such a strategy is to find the global optimum of a given function using only a very small number of function evaluations. In this paper, we explore a local approximation scheme that estimates the fitness values based on previously evaluated neighboring individuals, stored in the memory during the evolution process. The strategy decides whether an individual is calculated or estimated based on two criteria. The first one considers the distance between and the nearest element contained in (where ), whereas the second one examines the quality factor provided by the nearest element (). In the model, individuals of that are near the elements of holding the best quality values have a high probability of being evaluated. Such individuals are important, since they will have a stronger influence on the evolution process than other individuals. In contrast, individuals of that are also near the elements of but with a bad quality value maintain a very low probability of being evaluated. Thus, most of such individuals will only be estimated, assigning them the same fitness value as the nearest element of . On the other hand, those individuals in regions of the search space with few previous evaluations (individuals of located farther than a distance ) are also evaluated. The fitness values of these individuals are uncertain, since there is no close reference (close points contained in ).
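Putting the pieces of the exploration operation together, a minimal Python sketch follows. The function name, the default values of F, MR, and CR, and the use of absolute fitness values for the trigonometric weights are illustrative assumptions; the two mutation models follow [27, 28].

```python
import random

def explore(population, fitness, F=0.5, MR=0.5, CR=0.9):
    """Sketch of APRE's exploration step (Section 2.3): for each parent,
    build a donor vector with either DE/rand/1 mutation or the
    trigonometric mutation, then inherit each parameter from the donor
    with probability CR.  F, MR, and CR are illustrative values, not the
    paper's tuning."""
    n = len(population)
    dim = len(population[0])
    offspring = []
    for i in range(n):
        # pick three distinct individuals, all different from the parent
        r1, r2, r3 = random.sample([j for j in range(n) if j != i], 3)
        a, b, c = population[r1], population[r2], population[r3]
        if random.random() < MR:
            # DE mutation: a + F * (b - c)
            donor = [a[d] + F * (b[d] - c[d]) for d in range(dim)]
        else:
            # trigonometric mutation [28]: centroid of a, b, c pulled
            # toward the fitter members (normalized |fitness| weights)
            fa, fb, fc = abs(fitness[r1]), abs(fitness[r2]), abs(fitness[r3])
            s = fa + fb + fc or 1.0
            p1, p2, p3 = fa / s, fb / s, fc / s
            donor = [(a[d] + b[d] + c[d]) / 3.0
                     + (p2 - p1) * (a[d] - b[d])
                     + (p3 - p2) * (b[d] - c[d])
                     + (p1 - p3) * (c[d] - a[d])
                     for d in range(dim)]
        # parameter-wise inheritance: donor with probability CR, else parent
        child = [donor[d] if random.random() < CR else population[i][d]
                 for d in range(dim)]
        offspring.append(child)
    return offspring
```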
Therefore, the fitness estimation strategy follows two rules in order to evaluate or estimate the fitness values.
(1) If the new individual is located closer than a distance to the nearest element stored in , then a uniform random number is generated within the range . If such a number is less than , is evaluated by the true objective function (). Otherwise, its fitness value is estimated by assigning it the same fitness value as (). Figures 3(a) and 3(b) illustrate this rule.
(2) If the new individual is located farther than a distance from the nearest element stored in , then the fitness value of is evaluated using the true objective function (). Figure 3(c) outlines this rule.
From the rules, the distance controls the trade-off between the evaluation and estimation of new individuals. Unsuitable values of result in a lower convergence rate, longer computation time, a larger number of function evaluations, convergence to a local maximum, or unreliability of solutions. Therefore, the value is computed considering the following equation: where and represent the prespecified lower bound and the upper bound of the -parameter, respectively, within an -dimensional space. Both rules show that the fitness estimation strategy is simple and straightforward. Figure 3 illustrates the procedure of fitness computation for a new candidate solution considering the two different rules. In the example problem, the objective function is maximized with respect to two parameters (). In all figures (Figures 3(a), 3(b), and 3(c)) the memory contains five different elements (, and ) with their corresponding fitness values (, and ) and quality factors (, and ). Figures 3(a) and 3(b) show the fitness evaluation () or estimation () of the new individual following rule 1. Figure 3(a) represents the case in which holds a good quality factor, whereas Figure 3(b) the case in which it holds a bad one.
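The two estimation rules above can be sketched as follows. The memory layout (position, fitness, quality) and the function name are illustrative assumptions; the decision logic mirrors rules 1 and 2.

```python
import math
import random

def estimate_or_evaluate(x, memory, f, d):
    """Sketch of the two fitness-estimation rules (Section 2.4).
    `memory` is a list of (position, fitness, quality) tuples with the
    quality factor normalized to [0, 1]; `f` is the true objective and
    `d` the distance threshold of Eq. (11).  Returns (fitness, evaluated)."""
    def dist(p, q):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

    # nearest memory element to the new individual x
    pos, fit, quality = min(memory, key=lambda m: dist(m[0], x))
    if dist(pos, x) <= d:
        # Rule 1: near a stored point -> evaluate with probability equal
        # to that point's quality factor, otherwise copy its fitness
        if random.random() < quality:
            return f(x), True
        return fit, False
    # Rule 2: far from every stored point -> always truly evaluate
    return f(x), True
```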
Finally, Figure 3(c) presents the fitness evaluation of considering the conditions of rule 2. The procedure that implements the fitness estimation strategy is presented in Algorithm 4, in the form of pseudocode.

2.5. Memory Updating

Once the operations of exploration and fitness estimation have been applied, it is necessary to update the memory . In the APRE algorithm, the memory is updated considering the following procedure.
(1) The elements of and are merged into ().
(2) From the resulting elements of , the best elements according to their fitness values are selected to build the new memory .
(3) The counters must be updated. Thus, the counter of the surviving elements is incremented by 1, whereas the counter of modified elements is set to zero.

2.6. Exploitation Operation

The second main operation applied by the APRE algorithm is the exploitation operation. Exploitation, in the context of EAs, is the process of refining the quality of existent promising solutions within a small neighborhood. In order to implement such a process, a new memory ME is generated, which contains the same elements as but sorted according to their fitness values. Thus, ME presents in its first positions the elements whose fitness values are better than those located in the last positions. Then, 10% of the () individuals are taken from the first elements of ME to build the set (, where ). Each element of is assigned a probability , which expresses the likelihood of its being exploited. Such a probability is computed as follows: Therefore, the first elements of E have a higher probability of being exploited than the last ones. In order to decide whether the element must be exploited, a uniform random number is generated within the range . If such a number is less than , then the element will be modified by the exploitation operation. Otherwise, it remains without changes. If the exploitation operation over is verified, the position of is perturbed within a small neighborhood.
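A sketch of the exploitation step follows. Since the exact forms of Eqs. (12) and (13) are not reproduced in this text, the rank-based probability and the linearly shrinking radius used here are plausible stand-ins, not the paper's formulas; the structure (best fraction of the memory, probabilistic selection, keep the perturbed point only if it improves) follows Section 2.6.

```python
import random

def exploit(memory, f, d, k, max_iter, frac=0.1):
    """Sketch of the exploitation step (Section 2.6), maximization.
    The best `frac` of the memory is perturbed inside a neighborhood
    that shrinks with the iteration number k.  `memory` holds
    (position, fitness) pairs; perturbed points are always truly
    evaluated with f, and replace the original only if better."""
    memory.sort(key=lambda m: m[1], reverse=True)
    n_e = max(1, int(frac * len(memory)))
    radius = d * (1.0 - k / float(max_iter))   # assumed decay schedule
    for i in range(n_e):
        prob = (n_e - i) / float(n_e)          # assumed ranking: best -> 1
        if random.random() < prob:
            pos, fit = memory[i]
            new_pos = [p + random.uniform(-radius, radius) for p in pos]
            new_fit = f(new_pos)
            if new_fit > fit:
                memory[i] = (new_pos, new_fit)
    return memory
```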
The idea is to test whether it is possible to refine the solution provided by by slightly modifying its position. In order to improve the exploitation process, the proposed algorithm starts perturbing the original position within the interval [] (where is the distance defined in (11)), and this interval is then gradually reduced as the process evolves. Thus, the perturbation over a generic element is modeled as follows: where is the current iteration and is the total number of iterations of which the evolution process consists. Once has been calculated, its fitness value is computed by using the true objective function (). If is better than according to their fitness values, the value of in the original memory is updated with ; otherwise, the memory remains unchanged. The procedure that implements the exploitation operation is presented in Algorithm 5, in the form of pseudocode. In order to demonstrate the exploitation operation, Figure 4(a) illustrates a simple example. A memory of ten different 2-dimensional elements is assumed (). Figure 4(b) shows the configuration of the proposed example before the exploitation operation takes place. Since only the 10% best elements of build the set E, is the single element that constitutes (). Therefore, according to (12), the probability assigned to is 1. Under such circumstances, the element is perturbed considering (13), generating the new position . As is better than according to their fitness values, the value of in the original memory is updated with . Figure 4(c) shows the final configuration of after the exploitation operation has been achieved.

2.7. Computational Procedure

The computational procedure for the proposed algorithm can be summarized in Algorithm 6. The APRE algorithm is an iterative process in which several actions are executed. After initialization (lines 2-3), the number of memory elements to be evolved is computed. Such a number is automatically modified at each iteration (lines 5-6).
Then, a set of new individuals is generated as a consequence of the execution of the exploration operation (line 7). For each new individual, its fitness value is estimated or evaluated according to a decision taken by the fitness estimation strategy (line 8). Afterwards, the memory is updated. In this stage, the new individuals produced by the exploration operation compete against the memory elements to build the final memory configuration (lines 9–11). Finally, a sample of the best elements contained in the final memory configuration undergoes the exploitation operation (line 12). This cycle is repeated until the maximum number of iterations Max has been reached.

3. Implementation of APRE-Based Circle Detector

3.1. Individual Representation

In order to detect circle shapes, candidate images must first be preprocessed by the well-known Canny algorithm, which yields a single-pixel edge-only image. Then, the coordinates of each edge pixel are stored inside the edge vector , with being the total number of edge pixels. Each individual of the optimization algorithm encodes a circle by means of three edge points. In order to construct such individuals, three indexes , , and are selected from vector , considering the circle's contour that connects them. Therefore, the circle that passes through such points may be considered a potential solution for the detection problem. Considering the configuration of the edge points shown by Figure 5, the circle center and the radius of can be computed as follows: where is the determinant and . Figure 5 illustrates the parameters defined by (14) to (17).

3.2. Objective Function

In order to calculate the error produced by a candidate solution , a set of test points is calculated as a virtual shape which, in turn, must be validated, that is, it must be verified whether it really exists in the edge image. The test set is represented by , where is the number of points over which the existence of an edge point, corresponding to , should be validated.
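The three-point circle construction of Section 3.1 can be written compactly with the standard circumcircle determinant, which computes the same center and radius as Eqs. (14) to (17):

```python
def circle_from_points(p1, p2, p3):
    """Circumcircle through three edge points (Section 3.1): returns
    (cx, cy, r), or None when the points are collinear and define no
    circle.  This is the standard determinant construction."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    det = 2.0 * ((x2 - x1) * (y3 - y1) - (x3 - x1) * (y2 - y1))
    if abs(det) < 1e-12:
        return None                      # collinear points: no circle
    s1 = x1 * x1 + y1 * y1
    s2 = x2 * x2 + y2 * y2
    s3 = x3 * x3 + y3 * y3
    cx = ((s2 - s1) * (y3 - y1) - (s3 - s1) * (y2 - y1)) / det
    cy = ((s3 - s1) * (x2 - x1) - (s2 - s1) * (x3 - x1)) / det
    r = ((x1 - cx) ** 2 + (y1 - cy) ** 2) ** 0.5
    return cx, cy, r
```

For example, the points (0, 0), (2, 0), and (0, 2) yield the circle centered at (1, 1) with radius √2, the circumcircle of that right triangle.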
In our approach, the set A is generated by the Midpoint Circle Algorithm (MCA) [29]. The MCA is a searching method which determines the points required for drawing a circle digitally; it calculates the number of test points necessary to draw the complete circle. Such a method is considered the fastest because it avoids square-root calculations, comparing pixel separation distances instead. The objective function represents the matching error produced between the pixels A of the circle candidate (individual) and the pixels that actually exist in the edge image, yielding where is a function that verifies the pixel existence in , with and being the number of pixels lying on the perimeter corresponding to , currently under testing. Hence, the function is defined as A value of near one implies a better response from the "circularity" operator. Figure 6 shows the procedure to evaluate a candidate solution by using the objective function (). Figure 6(a) shows the original edge map E, while Figure 6(b) presents the virtual shape A representing the individual . In Figure 6(c), the virtual shape A is compared to the edge image, point by point, in order to find coincidences between virtual and edge points. The individual has been built from points , , and , which are shown in Figure 6(a). The virtual shape A, obtained by MCA, gathers 56 points (), with only 17 of them existing in both images (shown as white points in Figure 6(c)), yielding and therefore .

3.3. The Multiple Circle Detection Procedure

In order to detect multiple circles, the APRE-detector is iteratively applied. At each iteration, two actions are performed. In the first one, a new circle is detected as a consequence of the execution of the APRE algorithm; the detected circle corresponds to the candidate solution with the best found value. In the second one, the detected circle is removed from the original edge map.
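The virtual-shape test can be sketched as follows: generate the perimeter with the Midpoint Circle Algorithm and count how many of its pixels exist in the edge map. The score shown is the matched fraction (near 1 for a well-supported circle); the exact form of Eq. (18) is not reproduced in this text, so treat the score as an illustrative reading of it.

```python
def midpoint_circle(cx, cy, r):
    """Integer perimeter pixels of a circle via the Midpoint Circle
    Algorithm [29]: eight-way symmetry, no square roots in the loop."""
    pts = set()
    x, y, err = r, 0, 1 - r
    while x >= y:
        for sx, sy in ((x, y), (y, x), (-x, y), (-y, x),
                       (x, -y), (y, -x), (-x, -y), (-y, -x)):
            pts.add((cx + sx, cy + sy))
        y += 1
        if err < 0:
            err += 2 * y + 1
        else:
            x -= 1
            err += 2 * (y - x) + 1
    return pts

def circularity(cx, cy, r, edge_pixels):
    """Fraction of the virtual circle's pixels found in the edge map;
    a value near 1 means a well-supported circle (cf. Eq. (18))."""
    test = midpoint_circle(cx, cy, r)
    hits = sum(1 for p in test if p in edge_pixels)
    return hits / float(len(test))
```

A circle generated from the edge map itself scores 1.0; a circle placed far from any edge pixel scores 0.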
The processed edge map without the removed circle represents the input image for the next iteration. Such a process is executed over the sequence of images until the value falls below a threshold that is considered permissible.

4. Results on Multicircle Detection

In order to carry out the performance analysis, the proposed approach is compared to the GA-based algorithm [5], the BFOA detector [9], and the RHT method [4] over an image set. The GA-based algorithm follows the proposal of Ayala-Ramirez et al. [5], which considers the population size as 70, the crossover probability as 0.55, the mutation probability as 0.10, the number of elite individuals as 2, and 200 generations. The roulette wheel selection and the 1-point crossover operator are both applied. The parameter setup and the fitness function follow the configuration suggested in [5]. The BFOA algorithm follows the implementation from [9], considering the experimental parameters , , , , , , , , , , and . Such values are found to be the best configuration set according to [9]. Both the GA-based algorithm and the BFOA method use the same objective function, defined by (16). Likewise, the RHT method has been implemented as described in [4]. Finally, Table 1 presents the parameters for the APRE algorithm used in this work. They have been kept for all test images after being experimentally defined. Images rarely contain perfectly shaped circles. Therefore, with the purpose of testing accuracy for a single circle, the detection is challenged by a ground-truth circle which is determined from the original edge map. The parameters , , and representing the testing circle are computed using (6)–(9) for three circumference points over the manually drawn circle.
Considering that the centre and the radius of the detected circle are defined as and , the Error Score () can accordingly be calculated as The central point difference represents the centre shift of the detected circle as it is compared to a benchmark circle. The radius mismatch accounts for the difference between their radii. and represent two weighting parameters which are applied separately to the central point difference and to the radius mismatch for the final error . In this work, they are chosen as and . Such a choice ensures that the radius difference is strongly weighted in comparison to the difference of central positions between the manually detected and the machine-detected circles. Here, we assume that if is found to be less than 1, then the algorithm achieves a success; otherwise, we say that it has failed to detect the edge-circle. Note that for and , the maximum difference of radius tolerated is 10, while the maximum mismatch in the location of the center can be 20 (in number of pixels). In order to appropriately compare the detection results, the Detection Rate (DR) is introduced as a performance index. DR is defined as the percentage of detection successes reached after a certain number of trials. "Success" means that the compared algorithm is able to detect all circles contained in the image, under the restriction that each circle must hold the condition . Therefore, if at least one circle does not fulfil the condition , the complete detection procedure is considered a failure. In order to use an error metric for multiple-circle detection, the averaged Es produced by each circle in the image is considered. Such a criterion, defined as the Multiple Error (ME), is calculated as follows: where represents the number of circles within the image according to a human expert.
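The error score and ME described above can be sketched as follows. The weights alpha = 0.05 and beta = 0.1 are inferred from the stated tolerances (a center shift of 20 px or a radius mismatch of 10 px alone gives Es = 1); they are assumptions for illustration, since the paper's values are not reproduced in this text.

```python
def error_score(detected, truth, alpha=0.05, beta=0.1):
    """Hedged reading of the Error Score Es (Eq. (20)):
    Es = alpha * |center shift| + beta * |radius mismatch|.
    `detected` and `truth` are (x, y, r) triples; success when Es < 1."""
    (xd, yd, rd), (xt, yt, rt) = detected, truth
    center_shift = ((xd - xt) ** 2 + (yd - yt) ** 2) ** 0.5
    return alpha * center_shift + beta * abs(rd - rt)

def multiple_error(detected_circles, truth_circles):
    """Averaged Es over the circles in the image (cf. Eq. (21))."""
    scores = [error_score(d, t)
              for d, t in zip(detected_circles, truth_circles)]
    return sum(scores) / len(scores)
```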
Figure 7 shows three synthetic images and the resulting images after applying the GA-based algorithm [5], the BFOA method [9], and the proposed approach. Figure 8 presents experimental results considering three natural images. The performance is analyzed by considering 35 different executions for each algorithm. Table 2 shows the averaged execution time, the averaged number of function evaluations, the detection rate in percentage, and the averaged multiple error (ME), considering six test images (shown by Figures 7 and 8). Close inspection reveals that the proposed method achieves the highest success rate while keeping the smallest error and demanding less computational time and a lower number of function evaluations in all cases. In order to statistically analyze the results in Table 2, a nonparametric significance test known as Wilcoxon's rank test [30–32] has been conducted for 35 independent samples. Such a test allows assessing result differences between two related methods. The analysis is performed considering a 5% significance level over the number-of-function-evaluations and multiple error (ME) data. Tables 3 and 4 report the values produced by Wilcoxon's test for a pairwise comparison of the number of function evaluations and the multiple error (ME), considering two groups gathered as APRE versus GA and APRE versus BFOA. As the null hypothesis, it is assumed that there is no difference between the values of the two algorithms. The alternative hypothesis considers an existent difference between the values of both approaches. All values reported in Tables 3 and 4 are less than 0.05 (5% significance level), which is strong evidence against the null hypothesis, indicating that the best APRE mean performance values are statistically significant and have not occurred by chance. Figure 9 demonstrates the relative performance of APRE in comparison with the RHT algorithm as described in [4].
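The Wilcoxon analysis above can be reproduced with a small signed-rank routine; p-values are then obtained from standard tables or a normal approximation, which this sketch does not compute.

```python
def wilcoxon_signed_rank(x, y):
    """Paired Wilcoxon signed-rank statistic, as used for the 5%-level
    comparisons of Tables 3-4: rank the non-zero |differences| (average
    ranks on ties) and return min(W+, W-).  A small statistic relative
    to the critical value is evidence against the no-difference null."""
    diffs = [a - b for a, b in zip(x, y) if a != b]
    order = sorted(range(len(diffs)), key=lambda i: abs(diffs[i]))
    ranks = [0.0] * len(diffs)
    i = 0
    while i < len(order):
        j = i
        while (j + 1 < len(order)
               and abs(diffs[order[j + 1]]) == abs(diffs[order[i]])):
            j += 1
        avg = (i + j) / 2.0 + 1.0        # average rank over the tie group
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    w_plus = sum(r for d, r in zip(diffs, ranks) if d > 0)
    w_minus = sum(r for d, r in zip(diffs, ranks) if d < 0)
    return min(w_plus, w_minus)
```

For instance, paired samples [10, 8, 12, 14] and [9, 10, 9, 10] give differences [1, -2, 3, 4], ranks 1..4, and a statistic of min(8, 2) = 2.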
All images belonging to the test are complicated and contain different noise conditions. The performance analysis is carried out by considering 35 different executions for each algorithm over the three images. The results, exhibited in Figure 9, present the median-run solution (when the runs were ranked according to their final ME value) obtained throughout the 35 runs. On the other hand, Table 5 reports the corresponding averaged execution time, detection rate (in %), and average multiple error (using (10)) for the APRE and RHT algorithms over the set of images. Table 5 shows a decrease in performance of the RHT algorithm as noise conditions change, yet the APRE algorithm holds its performance under the same circumstances.

5. Conclusions

In this paper, a novel evolutionary algorithm called the Adaptive Population with Reduced Evaluations (APRE) is introduced to solve the problem of circle detection. The proposed algorithm reduces the number of function evaluations through the use of two mechanisms: (1) dynamically adapting the size of the population and (2) incorporating a fitness calculation strategy which decides when it is feasible to calculate or only estimate newly generated individuals. The algorithm begins with an initial population which is used as a memory during the evolution process. Each memory element is assigned a normalized fitness value, called the quality factor, that indicates the solution capacity provided by the element. From the memory, only a variable subset of elements is considered to be evolved. Like other population-based methods, the proposed algorithm generates new individuals considering two operators: exploration and exploitation. Such operations are applied to improve the quality of the solutions by (1) searching the unexplored solution space to identify promising areas containing better solutions than those found so far and (2) successively refining the best found solutions. Once the new individuals are generated, the memory is updated.
In this stage, the new individuals compete against the memory elements to build the final memory configuration. In order to save computational time, the approach incorporates a fitness estimation strategy that decides which individuals can be estimated and which must actually be evaluated. As a result, the approach can substantially reduce the number of function evaluations, yet preserve its good search capabilities. The proposed fitness calculation strategy estimates the fitness value of new individuals using memory elements located in neighboring positions that have been visited during the evolution process. In the strategy, new individuals close to a memory element whose quality factor is high have a high probability of being evaluated by using the true objective function. Similarly, new individuals lying in regions of the search space with no previous evaluations are also evaluated. The remaining search positions are estimated by assigning them the same fitness value as the nearest memory element. By the use of such a fitness estimation method, the fitness values of only very few individuals are actually evaluated, whereas the rest are just estimated. Unlike other approaches that use an already existing EA as a framework, the APRE method has been designed from the ground up to substantially reduce the computational cost while preserving good search capabilities. To detect the circular shapes, the detector is implemented by encoding three pixels as candidate circles over the edge image. An objective function evaluates whether such candidate circles are actually present in the edge image. Guided by the values of this objective function, the set of encoded candidate circles is evolved using the operators defined by APRE so that they can fit the actual circles on the edge map of the image. In order to test the circle detection accuracy, a score function (19) is used.
It can objectively evaluate the mismatch between a manually detected circle and a machine-detected shape. We demonstrated that the APRE method outperforms both the evolutionary methods (GA and BFOA) and Hough Transform-based techniques (RHT) in terms of speed and accuracy, within a statistically significant framework (Wilcoxon test). Results show that the APRE algorithm is able to significantly reduce the computational overhead as a consequence of decrementing the number of function evaluations.

Acknowledgment

The proposed algorithm is part of the vision system used by a biped robot supported under the Grant CONACYT CB 181053.

References

1. X. Bai, X. Yang, and L. J. Latecki, "Detection and recognition of contour parts based on shape similarity," Pattern Recognition, vol. 41, no. 7, pp. 2189–2199, 2008.
2. K. Schindler and D. Suter, "Object detection by global contour shape," Pattern Recognition, vol. 41, no. 12, pp. 3736–3748, 2008.
3. T. J. Atherton and D. J. Kerbyson, "Using phase to represent radius in the coherent circle Hough transform," in Proceedings of the IEE Colloquium on the Hough Transform, IEE, London, UK, May.
4. L. Xu, E. Oja, and P. Kultanen, "A new curve detection method: randomized Hough transform (RHT)," Pattern Recognition Letters, vol. 11, no. 5, pp. 331–338, 1990.
5. V. Ayala-Ramirez, C. H. Garcia-Capulin, A. Perez-Garcia, and R. E. Sanchez-Yanez, "Circle detection on images using genetic algorithms," Pattern Recognition Letters, vol. 27, no. 6, pp. 652–657, 2006.
6. E. Cuevas, N. Ortega-Sánchez, D. Zaldivar, and M. Pérez-Cisneros, "Circle detection by harmony search optimization," Journal of Intelligent & Robotic Systems, vol. 66, no. 3, pp. 359–376, 2011.
7. E. Cuevas, D. Oliva, D. Zaldivar, M. Pérez-Cisneros, and H. Sossa, "Circle detection using electro-magnetism optimization," Information Sciences, vol. 182, no. 1, pp. 40–55, 2012.
8. E. Cuevas, D. Zaldivar, M. Pérez-Cisneros, and M. Ramírez-Ortegón, "Circle detection using discrete differential evolution optimization," Pattern Analysis & Applications, vol. 14, no. 1, pp. 93–107, 2011.
9. S. Dasgupta, S. Das, A. Biswas, and A. Abraham, "Automatic circle detection on digital images with an adaptive bacterial foraging algorithm," Soft Computing, vol. 14, no. 11, pp. 1151–1164, 2010.
10. Y. Jin, "Comprehensive survey of fitness approximation in evolutionary computation," Soft Computing, vol. 9, no. 1, pp. 3–12, 2005.
11. Y. Jin, "Surrogate-assisted evolutionary computation: recent advances and future challenges," Swarm and Evolutionary Computation, vol. 1, no. 2, pp. 61–70, 2011.
12. J. Branke and C. Schmidt, "Faster convergence by means of fitness estimation," Soft Computing, vol. 9, no. 1, pp. 13–20, 2005.
13. Z. Zhou, Y. Ong, M. Nguyen, and D. Lim, "A study on polynomial regression and Gaussian process global surrogate model in hierarchical surrogate-assisted evolutionary algorithm," in Proceedings of the IEEE Congress on Evolutionary Computation (CEC '05), vol. 3, Edinburgh, UK, September 2005.
14. A. Ratle, "Kriging as a surrogate fitness landscape in evolutionary optimization," Artificial Intelligence for Engineering Design, Analysis and Manufacturing, vol. 15, no. 1, pp. 37–49, 2001.
15. D. Lim, Y. Jin, Y.-S. Ong, and B. Sendhoff, "Generalizing surrogate-assisted evolutionary computation," IEEE Transactions on Evolutionary Computation, vol. 14, no. 3, pp. 329–355, 2010.
16. Y. S. Ong, K. Y. Lum, and P. B. Nair, "Hybrid evolutionary algorithm with Hermite radial basis function interpolants for computationally expensive adjoint solvers," Computational Optimization and Applications, vol. 39, no. 1, pp. 97–119, 2008.
17. Y. Jin, M. Olhofer, and B. Sendhoff, "A framework for evolutionary optimization with approximate fitness functions," IEEE Transactions on Evolutionary Computation, vol. 6, no. 5, pp. 481–494, 2002.
18. Y. S. Ong, P. B. Nair, and A. J. Keane, "Evolutionary optimization of computationally expensive problems via surrogate modeling," AIAA Journal, vol. 41, no. 4, pp. 687–696, 2003.
19. D. Chen and C. Zhao, "Particle swarm optimization with adaptive population size and its application," Applied Soft Computing, vol. 9, no. 1, pp. 39–48, 2009.
20. W. Zhu, Y. Tang, J. Fang, and W. Zhang, "Adaptive population tuning scheme for differential evolution," Information Sciences, vol. 223, pp. 164–191, 2013.
21. J. Brest and M. Sepesy Maučec, "Population size reduction for the differential evolution algorithm," Applied Intelligence, vol. 29, no. 3, pp. 228–247, 2008.
22. S. Oh, Y. Jin, and M. Jeon, "Approximate models for constraint functions in evolutionary constrained optimization," International Journal of Innovative Computing, Information and Control, vol. 7, no. 11, pp. 6585–6603, 2011.
23. C. Luo, S.-L. Zhang, C. Wang, and Z. Jiang, "A metamodel-assisted evolutionary algorithm for expensive optimization," Journal of Computational and Applied Mathematics, vol. 236, no. 5, pp. 759–764, 2011.
24. K. C. Tan, S. C. Chiam, A. A. Mamun, and C. K. Goh, "Balancing exploration and exploitation with adaptive variation for evolutionary multi-objective optimization," European Journal of Operational Research, vol. 197, no. 2, pp. 701–713, 2009.
25. E. Alba and B. Dorronsoro, "The exploration/exploitation tradeoff in dynamic cellular genetic algorithms," IEEE Transactions on Evolutionary Computation, vol. 9, no. 2, pp. 126–142, 2005.
26. B. O. Arani, P. Mirzabeygi, and M. S. Panahi, "An improved PSO algorithm with a territorial diversity-preserving scheme and enhanced exploration-exploitation balance," Swarm and Evolutionary Computation, vol. 11, pp. 1–15, 2013.
27. R. Storn and K. Price, "Differential evolution-a simple and efficient adaptive scheme for global optimisation over continuous spaces," Technical Report TR-95-012, International Computer Science Institute (ICSI), Berkeley, Calif, USA, 1995.
28. R. Angira and A. Santosh, "Optimization of dynamic systems: a trigonometric differential evolution approach," Computers and Chemical Engineering, vol. 31, no. 9, pp. 1055–1063, 2007.
29. J. Bresenham, "Linear algorithm for incremental digital display of circular arcs," Communications of the ACM, vol. 20, no. 2, pp. 100–106, 1977.
30. F. Wilcoxon, "Individual comparisons by ranking methods," Biometrics Bulletin, vol. 1, no. 6, pp. 80–83, 1945.
31. S. Garcia, D. Molina, M. Lozano, and F. Herrera, "A study on the use of non-parametric tests for analyzing the evolutionary algorithms' behaviour: a case study on the CEC'2005 special session on real parameter optimization," Journal of Heuristics, vol. 15, no. 6, pp. 617–644, 2009.
32. J. Santamaría, O. Cordón, S. Damas, J. M. García-Torres, and A. Quirin, "Performance evaluation of memetic approaches in 3D reconstruction of forensic objects," Soft Computing, no. 8-9, pp. 883–904, 2009.
{"url":"http://www.hindawi.com/journals/mpe/2013/868434/","timestamp":"2014-04-16T22:35:10Z","content_type":null,"content_length":"448894","record_id":"<urn:uuid:f18186ce-6f97-49fc-a446-1dc09deaf4d6>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00485-ip-10-147-4-33.ec2.internal.warc.gz"}
Annuity of Business Mathematics 1 Answer Ryan wishes to borrow 10000.Meade and Halliday offer to lend the money with following term: Meade would be repaid with 10 equal annual payment made at the end of each year at 8% interest effective annually. Halliday charges an annual effective interest rate of I with Ryan accumulating the amount necessary to repay the loan by means of 10 annual deposits at the end of each year into sinking fund earning 7% interest effective annually. The total payment(principal and interest) is same for Maede and Halliday. Calculate I.
{"url":"http://www.askmehelpdesk.com/finance/annuity-business-mathematics-558214.html","timestamp":"2014-04-21T07:05:53Z","content_type":null,"content_length":"48399","record_id":"<urn:uuid:727c3c32-ccbf-4d31-9f04-b264eef271d0>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00297-ip-10-147-4-33.ec2.internal.warc.gz"}
Summary: COS 522: Computational Complexity Princeton University, Spring 2001 Lecturer: Sanjeev Arora These are scribe notes from the above course. Eight grads and two undergrads took the course. They were supposed to have an undergrad level preparation in Theory of Computation, preferably using Mike Sipser's excellent text. I am grateful to these students, who eagerly attended these lectures, solved the home­ work problems (gamely trying their hands at the occasional open problem), and wrote up scribe notes. I have edited the scribe notes but many errors undoubtedly remain. These lecture notes omit many topics from complexity theory, most notably the PCP Theorem and computational pseudorandomness as developed in the 1990s (extractors, hard­ ness vs randomness, etc). Luca Trevisan and I plan to write a graduate textbook on Com­ putational Complexity that will cover these and other topics as well. Course homepage: http://www.cs.princeton.edu/courses/archive/spring01/cs522/ Homepage for Luca's course: http://www.cs.berkeley.edu/ luca/cs278/ 1 Introduction 5 2 Space Complexity 9
Find a Calculus Tutor

...B. Saul Agricultural and Murrell Dobbins High Schools in Philadelphia. My previous experiences with education have been through The Johns Hopkins University CTY Summer Program as a Resident Assistant and Teaching Assistant.

26 Subjects: including calculus, writing, statistics, geometry

...If you have the idea of what to do on a problem, you do not need to complete 10 similar problems. As such, I like to spend more time on the why than the what. If you choose me as your tutor, your math skills will improve.

26 Subjects: including calculus, physics, geometry, statistics

...I have been a private tutor for over 9 years and have also worked with other tutoring companies. I am willing to tutor any science, math, or reading topic, such as biology, chemistry, organic chemistry, physics, algebra, geometry, SAT, ACT, precalculus, etc. I am very passionate about teaching ...

40 Subjects: including calculus, reading, English, chemistry

...I have had experience with topics and exams from grade school mathematics through calculus. It is my responsibility to make sure that the student improves not only in their grades, but also in their understanding of mathematics. For this reason, I do not mind traveling to a location where the student feels most comfortable, so they can have a clear mind to learn.

10 Subjects: including calculus, geometry, algebra 1, algebra 2

...Francis College and Berkeley College; overall I have been teaching for 15 years. I have also been tutoring for the past 5 years: Elementary Math, Algebra, Precalculus and Calculus students, amongst others, at Hunter College's Dolciani Math Learning tutoring center. I have taught Elementary Math,...

21 Subjects: including calculus, physics, statistics, geometry
Lockport, IL ACT Tutor

Find a Lockport, IL ACT Tutor

...I have taught 2 semester-long courses in speech/public speaking: one at the high school level and one at the college level. My last place of employment used the TEAS as the admission test for the degree program. As the English and math teacher there, I led test prep workshops about six times a year for over four years.

17 Subjects: including ACT Math, reading, English, geometry

...More recently I worked as a Math Lead for one of the highest achieving Charter School Networks in the Nation, a school in which we continually outperformed our public school counterparts year after year. I currently work in the Mathematics Department for one of the most recognized scholastic public secondary districts in the State of Illinois.

20 Subjects: including ACT Math, physics, geometry, algebra 1

...Being a voice instructor, ear training is one of the most important concepts to be mastered by any vocalist. I earned a Bachelor of Music degree from the Eastman School of Music and, shortly thereafter (in 1985), became the music director of the Chicago Philharmonic Orchestra - a position that I ...

37 Subjects: including ACT Math, English, geometry, biology

...I have prepared students for the math portion of the ACT test. I have worked with students who were failing math very late in the year and helped them attain a final grade of B or C. However, my greatest thrill is seeing the change in a student's confidence in themselves as they go from failure to success.

14 Subjects: including ACT Math, geometry, algebra 1, GED

...As a physicist I work every day with math and science, and I have long experience in teaching and tutoring at all levels (university, high school, middle and elementary school). My son (a 5th grader) scores above the 99th percentile in all math tests, and you too can have high scores. My PhD in Physics...
23 Subjects: including ACT Math, calculus, statistics, physics
John Napier

John Napier was born into a wealthy family, as his father was Master of the Mint in Scotland. Napier entered St. Andrews University at the age of 13, though he left to study in Europe before completing a degree. It is likely that he studied at the University of Paris, and perhaps in Italy and the Netherlands as well.

Napier took part in the religious controversies of the time; he had been a fanatical Protestant from his days as an undergraduate at St. Andrews. He married, built himself a castle, and took over the job of running his estate. These tasks he took very seriously and, being a great genius as an inventor, he applied his skills to them. He approached agriculture in a scientific way and experimented with improving the manuring of his fields.

Napier's study of mathematics was only a hobby, and in his mathematical works he writes that he often found it hard to find the time for the necessary calculations between working on theology. He is best known for his invention of logarithms. Napier's discussion of logarithms appears in a text published in 1614; two years later an English translation of Napier's original Latin text was published. Unlike the logarithms used today, Napier's logarithms are not really to any base, although in our present terminology it is not unreasonable to say that they are to base 1/e. His notation was Nap.log x. Briggs had suggested to Napier that logs should be to base 10, and Napier suggested in return that Nap.log 1 should be zero. Briggs later made tables of these.

Napier also presented a mechanical means of simplifying calculations in 1617. He described a method of multiplication using "numbering rods" with numbers marked off on them. To multiply numbers, the rods were placed side by side and the appropriate products read off. They were made of ivory, and because they looked like bones, they are now known as Napier's bones.
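The remark about base 1/e can be made concrete. A standard modern reconstruction (not stated in the text above) writes Napier's original logarithm as Nap.log x = 10^7 ln(10^7 / x); note this is the pre-Briggs definition, under which Nap.log(10^7) = 0 rather than Nap.log 1 = 0. A small sketch:

```python
import math

def nap_log(x):
    """Napier's logarithm in modern notation: Nap.log x = 10**7 * ln(10**7 / x).

    This is the original (pre-Briggs) definition, built around a "radius"
    of 10**7, so nap_log(10**7) == 0.
    """
    return 1e7 * math.log(1e7 / x)

print(nap_log(1e7))  # 0.0

# Up to the 10**7 scaling, this is a logarithm to base 1/e:
x = 12345.0
scaled = nap_log(x) / 1e7
base_1_over_e = math.log(x / 1e7, 1 / math.e)
print(abs(scaled - base_1_over_e) < 1e-9)  # True
```

The second check shows why "base 1/e" is a reasonable description: dividing out the 10^7 factor leaves exactly log base 1/e of x/10^7.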
Napier's other mathematical contributions include a mnemonic for formulas used in solving spherical triangles, two formulas known as Napier's analogies used in solving spherical triangles, exponential expressions for trigonometric functions, and the decimal notation for fractions.

It would be surprising if a man of so great an intellect as Napier did not appear rather strange to his contemporaries and, given the superstitious age in which he lived, strange stories began to circulate. Many traditions suggest that Napier was in league with the powers of darkness. Napier, however, will be remembered for making one of the most important contributions to the advance of knowledge. It was through the use of logarithms that Kepler was able to reduce his observations and make his breakthrough, which in turn underpinned Newton's theory of gravitation. Laplace, 200 years later, said that logarithms, by shortening the labors, doubled the life of the astronomer.
You asked: 15 kilometers equals how many meters

One kilometer is 1,000 meters, so 15 kilometers equals 15,000 meters.
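The conversion is a single multiplication by 1,000; a trivial sketch (the function name is illustrative):

```python
def km_to_m(kilometers):
    """Convert kilometers to meters (1 km = 1,000 m)."""
    return kilometers * 1000

print(km_to_m(15))  # 15000
```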
Modeling Reform-Style Teaching in a College Mathematics Class from the Perspectives of Professor and Students

Amy Roth-McDuffie, University of Maryland at College Park
J. Randy McGinnis, University of Maryland at College Park
Tad Watanabe, Towson State University

A paper presented at the annual meeting of the American Educational Research Association, April 8 - 12, 1996, New York, New York. This research is funded by a grant from the National Science Foundation (NSF Cooperative Agreement No. DUE 9255745).

Fundamental changes in teaching and learning have been proposed for mathematics education in the United States. As part of the reform effort, several publications directed at college mathematics teachers stress the importance of modeling reform-style teaching to undergraduate students (e.g., Mathematics Association of America, 1988; Mathematical Sciences Education Board, 1995; Tucker & Leitzel, 1995). This study presents the perceptions of five pre-service teachers and their mathematics professor as participants in a reform-style mathematics classroom. The following a priori research question is addressed: Do the instructor and the pre-service teachers perceive the instruction in their mathematics course as exemplifying the type of teaching and learning they would like to promote as upper elementary/middle level teachers of mathematics and science? And if so, how? An analysis of the data indicated that the professor and the teacher candidates perceived vast differences between traditional instruction and the teaching and learning they experienced in this class. Moreover, both the professor and the teacher candidates expressed a clear image of what they thought teaching in grades 4 through 8 should be. Their image of ideal teaching was quite consistent with the teaching and learning that they experienced in this class.
The experiences of these teacher candidates and this professor have implications for teacher education programs interested in preparing pre-service teachers to achieve the standards for teaching and learning set forth in the reform documents.

Fundamental changes in teaching and learning have been proposed for mathematics education in the United States. The National Council of Teachers of Mathematics [NCTM] (1989, 1991, 1995), the Mathematical Sciences Education Board [MSEB] (1990, 1991, 1995), the Mathematical Association of America [MAA] (Tucker & Leitzel, 1995), and the National Research Council [NRC] (1991) have issued documents proposing a framework for change in mathematics education at all levels, elementary through college. The framework is based on the philosophy that students are active learners who construct knowledge through their interpretations of the world around them. The above reform documents present goals for mathematics education which state that all students should: learn to value mathematics, become confident in their ability to do mathematics, become mathematical problem solvers, learn to communicate mathematically, and learn to reason mathematically.
Also, since the literature on teacher education posits that teachers tend to teach as they have been taught when they were students (Brown & Borko, 1992; Kennedy, 1991), teachers (including college level teachers) should model the type of teaching that is consistent with the reform documents (MSEB, 1995). Moreover, as a consequence of this finding, there are implications specific to college teaching. While all teachers serve as role models for students who want to become teachers, college faculty are the people teaching pre-service teachers as they train for their careers; thus, college faculty should be especially concerned about modeling good teaching. "Unless college and university mathematicians model through their own teaching effective strategies that engage students in their own learning, school teachers will continue to present mathematics as a dry subject to be learned by imitation and memorization" (NRC, 1991, p. 29). However, in looking at the literature on reform-style teaching in mathematics available to college faculty, Brown and Borko (1992) state that existing research "provides limited evidence about the design and implementation of good mathematics teacher education programs. . . Careful documentation of the experiences of teachers in such programs and the resulting changes in their knowledge, beliefs, dispositions, thinking, and actions will provide further insight into the process of becoming a mathematics teacher" (pp. 235-236).
The MCTP is a National Science Foundation funded project with the mission to develop, implement, and evaluate an interdisciplinary mathematics and science, upper elementary/ middle level teacher preparation program consistent with the goals for reform in mathematics and science education as described above. MCTP involves college faculty from mathematics, science, and education departments who are collaborating to develop and implement the program. In designing the courses and field experiences, the following basic principles guide the faculty participating in the MCTP program. These principles are outlined in an MCTP abstract developed by the principle investigators of the project. 1. Preservice teachers should be actively involved in the learning of mathematics and science through instruction that models practices that they will be expected to employ in their teaching careers. 2. Courses and field experiences should reflect the integrated nature of mathematics and science so that prospective teachers can develop an understanding of the connections between mathematics and 3. The programs of all preservice teachers should include field internships that involve them in genuine research activities of business, industrial, or scientific research institutions and informal teaching activities of educational institutions such as science centers, zoos, or museums. 4. The courses and experiences of all preservice teachers should focus on developing their ability to use modern technologies as standard tools for problem solving. 5. The courses and experience of all preservice teachers should prepare them to deal effectively with the broad range of students who are in public schools today. 6. The teacher graduates should be given assistance and continued support during the critical first years in the teaching profession. 
These principles are consistent with the recommendations of the reform documents in that they emphasize active learning, mathematics and science connections, real-world experiences, the utilization of technology, teaching to diverse student populations, and on-going professional support. In addition to developing a teacher education program, the MCTP has dedicated significant efforts to teacher education research. The primary purpose of the research is gaining knowledge and understanding about the experiences of the pre-service teachers and the college faculty in the process of implementing a mathematics and science education program which is based on reform-style teaching and learning. More specifically, this study presents the perceptions of five pre-service teachers and their mathematics professor as participants in a reform-style mathematics classroom. The goal is to promote understanding which can inform future research on the teaching and learning practices of college level mathematics instructors from a constructivist perspective and thus contribute to the preparation of pre-service mathematics teachers.

The purpose of this study is to provide a description and an interpretation of an MCTP professor and five MCTP teacher candidates who are attempting to teach and learn in a class consistent with the goals set forth by the reform documents. This study addresses the following a priori research question: Do the instructor and the pre-service teachers perceive the instruction in their mathematics course as exemplifying the kind of teaching and learning they would like to promote as upper elementary/middle level teachers of mathematics and science? And if so, how?

Theoretical Perspective and Methodology

Theoretical Perspective

The research was conducted from a perspective which combines ideas of interactionism and constructivism.
This perspective is consistent with the philosophy toward teaching and learning that underlies the framework for reform in mathematics education and with the philosophy of the Maryland Collaborative for Teacher Preparation [MCTP]. First, according to the perspective of interactionism, people invent symbols to communicate meaning and interpret experiences (Alasuutari, 1995; Blumer, 1986; Romberg, 1992); moreover, people create and sustain social life through interactions and patterns of conduct including discourse (Alasuutari, 1995; Gee, 1990; Hicks, 1995; Lave & Wenger, 1991). Furthermore, this position is in accordance with the constructivist perspective of learning in that individuals develop understandings based on their experiences and knowledge as it is socially constructed (Bruffee, 1986; Ernest, 1991; Gergen, 1985; Romberg, 1992). Simon and Schifter (1991) adopted the following view of constructivism which combines aspects of radical (e.g., von Glasersfeld, 1990) and social (e.g., Ernest, 1991) constructivism:

1. Constructivism is a belief that conceptual understanding in mathematics must be constructed by the learner. Teachers' conceptualizations cannot be given directly to students.

2. Teachers strive to maximize opportunities for students to construct concepts. Teachers give fewer explanations and expect less memorization and imitation. This suggests not only a perspective on how concepts are learned, but also a valuing of conceptual understanding. (p. 325)
Incorporating the interactionist perspective with constructivism, Cobb and Bauersfeld (1995) state: "[The authors of the book] draw on von Glasersfeld's (1987) characterization of students as active creators of their ways of mathematical knowing, and on the interactionist view that learning involves the interactive constitution of mathematical meanings in a (classroom) culture. Further, the authors assume that this culture is brought forth jointly (by teachers and students), and the process of negotiating meanings mediates between cognition and culture" (p. 1). In regard to research based on a constructivist view, Noddings (1990) states, "We have to investigate our subjects' perceptions, purposes, premises, and ways of working things out if we are to understand their behavior . . . We have to look at their purposive interactions with those environments" (p. 15). Through such methods as participant observation, the ideas of interactionism and constructivism provide a strong framework within which the researcher constructs meanings to interpret and explain the observed and inferred perceptions, actions, and interactions of the study participants (Bogdan & Biklen, 1992; Cobb & Bauersfeld, 1995). Simon and Schifter's (1991) definition of constructivism along with the perspective presented by Cobb and Bauersfeld (1995) reflect what is both implied and stated in the reform documents and in the MCTP philosophy, and they reflect the researchers' perspective on teaching and learning; and thus, they represent the perspective from which the research was conducted. Since this study involves an in-depth examination of a phenomenon, the research strategy best suited to helping researchers understand the perceptions, actions and interactions of faculty and students is the case study with a qualitative methodology (Goetz & LeCompte, 1984; LeCompte, Millroy, & Preissle, 1992; Merriam, 1988; Romberg, 1992; Stake, 1995).
While a case study in and of itself is not a methodology and has been applied to both quantitative and qualitative research methods, a "qualitative case study is characterized by the main researcher spending substantial time, on site, personally in contact with activities and operations of the case, reflecting, revising meanings of what is going on" (Stake, 1994, p. 242). In this research project, the case study methodology enables the researcher to develop an in-depth story about the selected professor and teacher candidates which might serve to provide a framework from which other educators can reflect on their experiences and to inform future research (Merriam, 1988; Romberg, 1992; Stake, 1995). It is a study of the participants' and the researchers' perceptions of their experiences teaching and learning in an MCTP course throughout the semester. For this study, the case is bounded in time by the academic semester (Fall, 1994). As part of this case study, the professor and the teacher candidates engaged in on-going interviews and observations throughout the semester to obtain data regarding their perceptions and actions toward teaching and learning and the extent to which the instruction modeled the kind of teaching and learning appropriate for grades 4 through 8, the focus of the MCTP program. The data were collected and analyzed through the use of the qualitative techniques of analytic induction, constant comparison, and discourse analysis for patterns of similarities and differences between the professor's and teacher candidates' perceptions (Bogdan & Biklen, 1992; Gee, 1990; Goetz & LeCompte, 1984; LeCompte, Millroy, & Preissle, 1992).

Data Sources and Collection Methods

The research setting was an undergraduate mathematics classroom at a large state university. The mathematics course was developed and taught by a university professor (given the pseudonym Dr. Taylor) as part of the Maryland Collaborative for Teacher Preparation.
The mathematics course was open to both MCTP teacher candidates (intending teachers who have been accepted into the MCTP program and plan to enroll in MCTP courses throughout their undergraduate program) and non-MCTP undergraduates. In addition to education majors, the course served departments such as English, business, and theater.

Participants in this study were the course instructor, Dr. Taylor, and five MCTP teacher candidates in his mathematics class. Dr. Taylor was an experienced university professor with a joint appointment to the mathematics and education departments. The teacher candidates were first year undergraduates, and ranged in age from 17 to 19 years old. Because they were in their first semester, none of the teacher candidates previously had taken an MCTP course or an education course; however, they were all concurrently enrolled in an MCTP science course (either physics or chemistry), and a one-credit MCTP Seminar Course. (The purpose of the Seminar Course was to make connections between the mathematics and science courses that the MCTP teacher candidates were taking and to discuss issues related to their future teaching of these subjects.) Four teacher candidates were women, and one student was a man.

Data Tools

Research tools used included interviews with individual participants, group interviews, participant observation, and artifact collection. All participants were interviewed individually at the beginning and end of the semester, and the interviews were audiotaped and transcribed (see Appendix for interview protocols). The interviews were semi-structured in that they contained a set of standard questions; however, additional questions were posed based on the participants' responses. In addition, two group interviews were conducted with only the teacher candidates and a researcher. Also, throughout the semester, data for Dr.
Taylor's and the teacher candidates' actions in the process of teaching and learning were obtained through class observations and field notes. To further inform the researchers, informal interviews with the instructor and the teacher candidates were conducted prior to and following the class observations. Finally, in the process of analyzing data and writing the research report, selected participants were consulted as a means of member checking and establishing validity (Stake, 1995).

An analysis of the data indicated that Dr. Taylor and the teacher candidates perceived vast differences between traditional instruction and teaching and learning "this way" (Julie, interview, 12/8/94) as modeled by Dr. Taylor. Moreover, both Dr. Taylor and the teacher candidates expressed a clear image of what they thought teaching in grades 4 through 8 should be. This image of ideal teaching was quite consistent with the teaching and learning that they experienced in Dr. Taylor's class. Five categories emerged from the data in regard to the participants' perceptions of traditional teaching and learning, teaching and learning in Dr. Taylor's class, and the participants' image of what teaching and learning should be for grades 4 through 8. These categories are presented below.

I. Doing Mathematics in Typical (Traditional) Courses Means Mimicking the Teacher and Following Prescribed Steps Without Understanding

Teacher Candidates' Perceptions of Traditional. All five teacher candidates expressed the same view of how mathematics teaching and learning typically takes place. Their usual experience with mathematics is that it is dry, rule-based, and consists of a set of procedures, each of which leads to a single correct (or incorrect) answer. The teacher candidates were accustomed to doing large sets of similar mathematics problems without understanding the meaning or purpose of the problems.
Julie relates her prior experiences with mathematics as consisting entirely of procedures without understanding when she says,

Before when I would have math classes, . . . it's just that I had to be able to mimic what the teacher did; I just had to be able to follow the steps and just do it without understanding what I was actually doing. So later on, it would be . . . so much easier for me to forget the things because I hadn't really understood it, I was just following what the professor had done (Interview, 10/5/94).

Also, Kevin discusses the lack of interest he felt and observed from other classmates:

[Typically in mathematics classes] they stress memorizing formulas and things like that, or they'd give you the formula and then you'd have to go home and do 20 like that for homework. . . I've had classes where you sit down and people will fall asleep, and the teacher was goin' on talking (Interview, 12/8/94).

In addition, Heidi relates the lack of active participation found in most mathematics classes:

My math classes were always, you sat at a desk with your book, and you had examples to do, and the teacher would write on the board, and...and I mean, that was math, and that's what you expected from math. You sit and listen to the teacher (Interview, 12/8/94).

Dr. Taylor's Perception of Traditional. While Dr. Taylor did not focus his discussions on his perceptions about traditional mathematics teaching to the extent that the teacher candidates did, his view of what happens in traditional mathematics classrooms was consistent with what the teacher candidates shared. He states,

In a traditional class, they learn "how to" problems, they go home and they do their problems, and the other kind of stuff is just immaterial (Interview, 12/6/94).

II. Doing Mathematics in this Course Means Emphasizing Concepts and Understanding Not Just Memorizing or Doing Procedural Routines

Teacher Candidates' Perceptions of Class. All of the teacher candidates perceived Dr.
Taylor's class as different from what they were used to in mathematics class. They recognized that the course was focused on concepts and understanding and learning meaningful mathematics. Julie explains how the course emphasized concepts over memorization and understanding the significance of mathematics.

[In this course the emphasis was on] concepts. It was a lot of understanding just in general, like knowing how things work - more than just a memorization of facts - just understanding what we were doing and not just kind of following what he said to do, and what the book said to do. . . . You have to do a lot more thinking about the bigger picture; that's always what [he] stresses, is looking for the bigger picture and finding the great significance in it, and not . . . the knit-picky things, but understanding the overall process (Interview, 12/8/94).

Beth compares the focus on understanding in Dr. Taylor's course to an emphasis on memorizing empty facts that she experienced in previous mathematics courses.

[Dr. Taylor's course] has definitely been more of understanding of how to solve the problems as opposed to the memorization of facts and stuff (Interview, 12/8/94).

Dr. Taylor's Perception of Class. The teacher candidates' perceptions of the course emphasizing concepts and understanding are consistent with what Dr. Taylor envisioned in planning the course. When he discussed his intentions for teaching and learning early in the semester, he emphasized that the course would not focus on procedures without understanding:

I think that one thing that we [do not do] is a lot of procedural routines. . . that stuff on the board (Interview, 9/16/94).

Dr. Taylor later describes what he considers to be important learning for the students in his class: learning based on reasoning, connections, and meaningful problems. He states,

[The students should] be able to explain [methods of problem solving] . .
. [it's] not going to be just memory of a fact, it's going to be understanding of a whole way of reasoning about a problem. . . . We're trying to help students . . . make the connection between the real object and the mathematical representation or the mathematical model of it. . . . We're trying to have the course problem-based in a sense that the mathematical ideas will be encountered first in looking at the context of working on a problem of some kind rather than "here's how we're gonna do today's problems". It's trying to embed the mathematics in problem-solving activity. . . It's more an applied problem . . . ; more making sense of a real situation and patterns in data (Interview, 9/16/94).

Dr. Taylor's description of what he considered to be important in mathematics teaching and learning is consistent with what the researcher observed as the focus of activities and discussions during class and in the course materials.

III. Doing Mathematics in this Course Involves Communication and Collaboration

Teacher Candidates' Perceptions of Class. An important component of conceptual learning of mathematics based on understanding is perceived to be discussing ideas and working together to gain an understanding of mathematics. Four of the five teacher candidates made specific references to the importance of communication and collaboration in the process of learning mathematics. Kevin discusses how working with others helps in generating ideas and strategies for problem solving:

[Dr. Taylor] gives you a problem that you have to solve, and you get together with other students and you all try to solve the problem together, so you're coming up with all these different ideas of ways to conquer this problem (Interview, 10/5/94).
In addition, Julie states that the process of explaining her reasoning to others is a necessary part of understanding and being able to do mathematics: [In this class] it's like I have to do this [mathematics] here, I have to understand it right now, and I have to be able to explain it to someone else, and I have to be able to move with this (Interview, 10/5/94). Dr. Taylor's Perception of Class. Dr. Taylor stresses the importance of communicating and collaborating to learn mathematics. At the beginning of the term, Dr. Taylor expressed his interest in incorporating these things in the teaching and learning process: [I am] asking students to collaborate with each other and to work cooperatively. Quite often asking students to present...to communicate their ideas in writing, submitting write-ups about their solutions to a problem or talking, sharing what their group has come up with orally in class (Interview, 9/16/94). Dr. Taylor's commitment to communicating and collaborating throughout the semester is evidenced by classroom observations which reflected regular use of group work and oral and written reports from students. In addition, toward the end of the semester, Dr. Taylor discussed the notion that explanation of ideas and reasoning played an important role in students demonstrating what they knew on the exams: [On the exams],...there was a lot of problem solving in the sense of using techniques that they'd learned to analyze a situation. . ., and they were asked to explain. . .why they did what they did (Interview, 12/6/94).
Taylor acted as a facilitator or guide to learning as opposed to a lecturer who delivers information and facts to students. Kevin explains how Dr. Taylor would ask questions in an effort to engage students in thinking about a problem: The teacher will come around and sort of direct you in a certain direction, or ask you more questions, get you thinking more. It seems that you're sort of widening your focus on math instead of running a single process, and you will learn that process, but you also, along the way, you know, sort of pick up this other stuff. And you're not just copying things off the board (Interview, 10/5/94). Also, Julie states that Dr. Taylor's questions would help to re-direct their thinking if they were having difficulties approaching a problem: [Dr. Taylor] would step in and kind of guide us the right way, maybe asking us questions in different ways so that we can see in a different way what he's trying to get across, and that way remember it because we understand it (Interview, 12/8/94). The notion that Dr. Taylor was always "walking around" and "asking questions" to guide learning was prevalent in the teacher candidates' comments and in the researcher's observations of the class. The teacher candidates quickly became accustomed to this approach to teaching and seemed to welcome his involvement in their learning. Dr. Taylor's Perception of Teaching. Dr. Taylor explains that his intention in teaching was not to tell students information and what to do to solve a problem, but instead, it was to let the students attempt solving the problem. According to Dr. Taylor, what was important for him to do was to "get them thinking," not necessarily to arrive at a specific answer. In describing an example of how he employed this method of teaching, Dr. Taylor mentions a probability problem he presented in class. . .
The context was in a store and the average salesperson is successful on two out of five customers on average, and two different people were working in that store, and one of them has a day when they only sell to four out of 15 customers, another one has 8 out of 15 customers. Does it seem fair for the person who only sold to four out of 15 to be fired as incompetent or substandard? And so I let them discuss what their reaction was. And to some extent what it gave me [was information about which students] had any inkling that . . . there could be a chance phenomena operating. . . I was using [the problem] to get them thinking about what might be involved, and also, I guess that rather than me saying, "Here is a problem that you can study with probability, and here is how you can do it,"...I use it more [as] a way of getting them to think about what the issues are in a situation (Interview, 9/16/94). Dr. Taylor's description of the probability activity is typical of what the researcher observed in his class and in the course materials. Usually, students were presented with a problem that would stimulate discussion and some form of data collection as a basis for reasoning through a problem. Rarely were the students given problems that had a single, correct numerical answer. V. Image of What Mathematics Teaching and Learning Should Be for Grades 4 through 8. Teacher Candidates' Image. After experiencing mathematics in a reform-style classroom, the teacher candidates perceived Dr. Taylor's teaching as modeling the type of teaching and learning that they would like to promote when they begin teaching in the elementary/middle grades. All five of the teacher candidates described an image of what mathematics teaching and learning should be for grades 4 through 8 in a manner consistent with the type of teaching and learning they experienced in Dr. Taylor's class.
They stressed the importance of meaningful mathematics, an emphasis on conceptual understanding, students' active involvement in learning activities, students working collaboratively in groups to solve problems, and teachers acting as facilitators and guides in the learning process. Moreover, the teacher candidates believed that this type of teaching and learning promotes better understanding in that the mathematics they have learned is more meaningful to them in life. For example, Beth describes her image of good mathematics teaching and learning as the teacher serving as a facilitator and promoting collaboration: [Good mathematics teaching and learning involves] more interaction with the students instead of just, like, standing up there and saying, "Okay. This, this, this." Because lecturing doesn't really work and, at least for me, it doesn't really work. . . So, like, more like letting the kids work together, or working with students, asking them questions and having them say what they think (Interview, 10/5/94). Also, Paula states that a good mathematics teacher motivates students to be interested in mathematics through the use of meaningful mathematics that applies to real-world situations: [A good mathematics teacher is] someone who gets you interested in what you're doing, who doesn't just give you problems, and tell you to answer them, and show you how to do it; somebody who maybe applies it...applies math,...shows how math is used in the real world, other than just giving you random problems and just having you solve them--showing students that you can use this. This is something that can be helpful to you in life, it's not just something you're doing in school (Interview, 10/5/94). Dr. Taylor's Image. Dr. Taylor also believed that the type of teaching and learning that took place in his undergraduate mathematics course modeled what should be happening in grades 4 through 8.
He states: The [NCTM] Standards' model of the instruction and curriculum are problem oriented learning, contextualized learning, learning in true collaboration with other people, learning through active investigation of things, and so we try to do all those things. And those things seem to be appropriate, at least as far as we know, appropriate guidelines for intermediate school instruction (Interview, 12/6/94). Implications and Educational Significance The experiences of these teacher candidates and this professor have implications for teacher education programs interested in preparing pre-service teachers to achieve the standards for teaching and learning set forth in the reform documents. A First Step First, a major implication gained from this qualitative study is that the college students who experienced a reform-style mathematics classroom completed a first step in achieving the vision for reform of mathematics education: constructing an initial model of mathematics teaching and learning which embraces the ideals of the reform movement. Although not at the undergraduate level, research shows that this type of construction has occurred for other students who experienced learning in reform-style classrooms. In a study of two elementary school classrooms, Cobb, Wood, Yackel, and McNeal (1992) discuss this notion of students constructing a new idea of what it means to do mathematics. Cobb, et al. (1992) investigate and contrast instructional situations in mathematics which promote teaching and learning for understanding and instructional situations that do not promote understanding. The researchers view the classroom interactions in terms of five distinct types of classroom social norms (regulations, conventions, morals, truths, and instructions) and focus on the mathematical explanations and justifications that occurred during the lessons.
Mathematical explanations and justifications are considered to be essential components of teaching and learning for understanding as is recommended by the goals of the reform movement in mathematics education (Cobb, et al., 1992), and these components were also important in Dr. Taylor's class. Cobb, et al. (1992) characterize two distinct classroom mathematics traditions in their descriptions of the classrooms studied. In the first classroom, doing mathematics means following procedural instructions, and thus mathematical explanations and justifications are not valued or expected. In the second classroom, doing mathematics means co-constructing a mathematical reality based on the students' and teacher's experiences with created and manipulated abstract mathematical objects. Correspondingly, in the second classroom, mathematical explanations and justifications are expected and valued. Thus, when a teacher uses a more traditional style of mathematics teaching, the students continue to view and act on mathematics as strictly procedural and rule-based; however, when a teacher believes and behaves in a way that models and supports the ideals of reform-based teaching and learning, the students respond by changing their views of mathematics. Based on the findings, Dr. Taylor's students' experiences were similar to those of the second classroom in Cobb, et al.'s (1992) study. In order to justify this claim, Cobb, et al.'s (1992) study is examined more closely. Cobb, et al. (1992) describe the first teacher's actions as facilitating "her students' enculturation into what Lave (1988) called the folk beliefs about mathematics" (p. 589). (Folk beliefs about mathematics include the idea that mathematics consists of standard procedures only appropriate for "school-like" tasks (p. 589).)
In contrast, the second teacher facilitated the students' enculturation into mathematical ways of knowing which consisted of "taken-as-shared mathematical meanings and practice" (Cobb, et al., 1992, p. 595). A similar process of enculturation seemed to occur for Dr. Taylor's students. Being in a classroom where reform-style teaching was modeled and where students were engaged in active learning through meaningful problem solving and collaboration enabled the students to construct a new model of mathematics teaching and learning. Exploring this notion of enculturation further, consider Gee's (1990) ideas on enculturation. He makes a distinction between acquisition and learning. Gee (1990) defines these terms as follows: Acquisition is a process of acquiring something subconsciously by exposure to models, a process of trial and error, and practice within social groups, without formal teaching. It happens in natural settings which are meaningful and functional in the sense that acquirers know that they need to acquire the thing they are exposed to in order to function and they in fact want to so function. This is how most people come to control their first language. Learning is a process that involves conscious knowledge gained through teaching (though not necessarily from someone officially designated a teacher) or through certain life-experiences that trigger conscious reflection. This teaching or reflection involves explanation and analysis, that is breaking down the thing to be learned into its analytic parts. It inherently involves attaining, along with the matter being taught, some degree of meta-knowledge about the matter (p. 146). Based on these definitions, it seems that while Dr. Taylor's students may have been learning mathematics, they were acquiring ideas about the teaching and learning of mathematics. The students were being exposed to Dr. Taylor's model of teaching and learning, and it was in the natural setting of teaching and learning: a classroom. 
Formal teaching about mathematics occurred; however, formal teaching about the teaching and learning process was not present. (This lack of formal teaching about the teaching and learning process is discussed further in the next section.) Gee (1990) goes on to say that "Acquisition must (at least, partially) precede learning; apprenticeship must precede `teaching' (in the normal sense of the word `teaching')" (p. 147). Here, Gee (1990) links acquisition to apprenticeship. This notion of apprenticeship is also discussed by Lave and Wenger (1991); however, they prefer to use the term "situated learning" (p. 31). Lave and Wenger (1991) stress the importance of situated learning as "learning by doing" (p. 31). These ideas apply to the teacher candidates in Dr. Taylor's class in that they were enculturated into the ideas of reform-style teaching and learning by experiencing it as a student. They were "learning by doing" from the perspective of students. What has not yet taken place is the "teaching" of how to become a reform-style teacher. However, it seems that the phase of enculturation into the social practices associated with reform-style teaching is a necessary first step. The idea of needing to experience mathematics as a student in a reform-style classroom before being able to create a reform-style teaching and learning environment as a teacher is evident in the experiences related by Schifter and Fosnot (1993). They studied practicing teachers who participated in SummerMath, a summer workshop for teachers interested in implementing reform goals in their elementary mathematics teaching. One of the key premises of the SummerMath program is that, "If teachers are expected to teach mathematics for understanding [as defined in the reform documents] they must themselves become mathematics learners" (Schifter & Fosnot, 1993, p. 16).
Moreover, the Professional Teaching Standards (NCTM, 1991) calls for such experience when it states, "If teachers are to change the way they teach, they need to learn significant mathematics in situations where good teaching is modeled" (p. 191). In other words, while all teachers do not necessarily need a full college-level, reform-style course in mathematics, they do need experiences as learners (or students) in a reform-style environment before they can be expected to emulate it as teachers. However, this initial experience as a student in a reform-style mathematics classroom is not enough for preparing pre-service teachers. In accordance with the findings of Borko, Eisenhart, and colleagues (Borko, et al., 1992; Eisenhart, et al., 1993), the teacher candidates in Dr. Taylor's class believed that further educational coursework and field experiences would be necessary before they would be prepared to "do the things that [Dr. Taylor is] doing now" (Beth, Interview, 12/8/94) in their own teaching. This finding suggests that while one content course taught from a constructivist perspective is not sufficient in preparing pre-service teachers to meet the goals for reform, it is an important step in beginning the process of preparing pre-service teachers to incorporate reform-based practices into their future mathematics teaching. What Was Not Said Another implication for the preparation of pre-service teachers rests in what was not discussed or taught in Dr. Taylor's class. Earlier, the claim was made that formal teaching about the teaching and learning process (pedagogical issues) did not take place in Dr. Taylor's class. In observing the classes and talking to the participants, the researcher never heard overt talk about how the teacher candidates' experiences in Dr. Taylor's class might translate to their future practice as elementary/middle school teachers unless they were specifically asked to discuss this by the researcher.
It seems that discussions of pedagogical issues relevant to pre-service teachers were considered to be inappropriate discourse. In an effort to validate this finding and to understand why issues of pedagogy were not discussed, the researcher asked Dr. Taylor and Julie (the key informant among the teacher candidates) for their views on this matter. Dr. Taylor said that he did not "recall talking explicitly about [his] teaching as a model of how one would teach middle school kids" (electronic communication, 2/9/96). However, he did address his general rationale behind approaching teaching and learning in a way that was different from what teacher candidates were used to experiencing in a mathematics class. He says, "We did fairly often talk about why the innovative features of the course were being used - my rationale for doing things in different ways (in part this was a periodic pep-talk to encourage them that things were going reasonably well, even if different)" (electronic communication, 2/9/96). Julie's recollection about talking about pedagogical issues was similar to Dr. Taylor's in that she states that Dr. Taylor "alluded" to reasons why he was approaching topics at times, but never directly discussed how teaching and learning in his class related to their future teaching in the elementary and middle-level schools. Julie continued by saying that this type of conversation did not seem appropriate for a mathematics course since they were there to learn math. These comments from both Dr. Taylor and Julie are consistent with what the researcher observed. However, both Dr. Taylor and Julie revealed that pedagogical issues were discussed in the MCTP Seminar Course which was taught by Dr. Taylor and an MCTP science professor. (This course is beyond the bounds of this study.) 
As mentioned earlier, the purpose of the Seminar Course was to make connections between the mathematics and science courses that the MCTP teacher candidates were taking and to discuss issues related to their future teaching of these subjects. In addition to the seminar, Julie said that outside of class (in the hallway to and from class) the five MCTP teacher candidates occasionally discussed how their experiences in Dr. Taylor's class might relate to their future teaching. Thus, pedagogical issues were appropriate for discussion outside of mathematics classes. Next, the question to Dr. Taylor was, "What were his reasons (if any) behind not discussing pedagogical issues pertinent to future elementary/middle school teachers?" Dr. Taylor said, "In part, this was because of the low density of MCTP students [in the class]" (electronic communication, 2/9/96). (There were five MCTP teacher candidates in the class, and approximately 8 out of 20 students who intended to teach - including the MCTP students.) In pursuing whether more MCTP teacher candidates or other education students would have affected his decision to include discussions about pedagogy, Dr. Taylor stated that even if the class were entirely composed of education students, he does not believe he would have included pedagogical discussion. In fact, he preferred that the course not be offered exclusively to education majors. He wanted to concentrate on the mathematics and not turn it into a pedagogy course. Also, Dr. Taylor was sensitive to the perception that a mathematics course designed exclusively for pre-service teachers might be viewed by other mathematics department faculty as a course that was made easier even though that would not be true. Dr. 
Taylor's concern about the perception that college faculty might have (regarding content courses designed specifically for education majors as being less rigorous) appears to be supported given the recommendations by mathematics and science faculty from colleges and universities throughout the United States published in an NSF document (NSF, 1993). In this document there is concern expressed that "watered down" versions of content courses for pre-service teachers be avoided (NSF, 1993), with the implication that this watering down is a perceived risk of specialized content courses for future teachers. The question that remains is, "Why is it significant that pedagogy was not discussed in a mathematics course?" Shulman (1986) brought the notion of pedagogical content knowledge to the forefront of teacher education. He defines pedagogical content knowledge as going "beyond knowledge of subject matter per se to the dimension of subject matter knowledge for teaching" (Shulman, 1986, p. 9). Included in the category of pedagogical content knowledge are: "the ways for representing and formulating the subject that make it comprehensible to others, [and] . . . an understanding of what makes the learning of specific topics easy or difficult" (Shulman, 1986, p. 9). Shulman (1986) calls for teacher education programs which offer instruction focusing on content that includes "knowledge of the structures of one's subject, pedagogical knowledge of the general and specific topics of the domain, and specialized curricular knowledge" (p. 13). In other words, pre-service teachers need to learn about the pedagogical issues in the context of subject matter knowledge. This need is also stated in the reform documents (e.g., NCTM, 1991). Furthermore, much has been said about the value of metacognition in learning (e.g., Flavell, 1979, 1981; Schoenfeld, 1992).
Flavell (1981) defined metacognition as "knowledge or cognition that takes as its object or regulates any aspect of any cognitive endeavor" (p. 37). There seems to be a metacognitive component to the notion of pedagogical content knowledge as it relates to learning in Dr. Taylor's class. Referring back to Flavell's (1981) definition, the object of the learning is the mathematics content; however, for the teacher candidates, an important metacognitive aspect of learning is relating the mathematical content to ideas regarding their future teaching of mathematics. While this metacognitive aspect of connecting the teacher candidates' experiences learning mathematics with pedagogical issues related to their future teaching of mathematics did not take place in Dr. Taylor's class, it did seem to occur outside of the class in the seminar course. (It should be noted that while metacognition in relationship to pedagogical issues was not a part of the class, Dr. Taylor did incorporate metacognition in the students' reflection on their own mathematical learning and problem solving. He states, "Students really are asked, and encouraged, to think a lot more about their own thinking" (Interview, 12/6/94).) Regardless of whether the metacognitive learning that facilitates the development of pedagogical content knowledge occurs within or outside of the mathematics classroom, this learning is important for the development of future teachers. The need for pedagogical content knowledge has implications for classes like Dr. Taylor's. However, a paradox exists concerning what is needed for the preparation of pre-service teachers in regard to pedagogical content knowledge and what content professors like Dr. Taylor are willing to include (or not include) as a part of their courses. Dr. 
Taylor seems to have sound reasons in his context for focusing on content at the near exclusion of pedagogical discussions, and many other mathematics and mathematics education faculty probably agree with his reasons. However, does this mean that pedagogical discussions must be delayed until pre-professional education courses? It seems that to delay would be missing a significant opportunity for the development of pedagogical content knowledge. So, how is this paradox resolved? If professors are unwilling or unable to include pedagogical discussions in mathematics content courses, then perhaps providing opportunities such as the MCTP seminar is an important complementary environment for pre-service teachers. In other words, if conversations which promote reflecting on and making connections between the pre-service teachers' learning experiences in a mathematics course and their future teaching are not taking place in mathematics classrooms, then teacher education programs should consider initiating forums where this type of conversation can concurrently take place to foster pedagogical content knowledge. One additional note: In the case of Dr. Taylor, he was in the position of teaching both the content course and the seminar course that dealt with pedagogical issues. In situations where one person is not able to serve in both roles, further efforts may need to be made to bridge the content course and the pedagogical discussions and to emphasize the notion that neither area is valued more. Reactions of Key Informants In an effort to validate these findings and implications, member checking (Stake, 1995) was used with two key informants, Dr. Taylor and Julie. Dr. Taylor and Julie were provided with a draft of this manuscript and asked to react to the interpretations of the researchers. Dr.
Taylor indicated that the only thing that the paper did not capture was his feelings of the difficulty and the struggles involved with instructional decision making in this type of course. However, these struggles were not apparent either in his interviews or in the teacher candidates' perceptions. Perhaps this suggests that creating this kind of teaching and learning environment is far more complex than it may seem, as Simon (1995) has indicated. Julie said that she agreed with the interpretations and added, "I found it fascinating how we (students and professor) were so much on the same wave length" (Written communication, 3/27/96). Also, she wanted to be sure it was understood that she believed that "the lack of addressing [pedagogical issues] was not necessarily inappropriate because we were in a math class" (Written communication, 3/27/96). This statement confirms earlier findings that both the teacher candidates and Dr. Taylor do not see the inclusion of pedagogy as important in a content course, and again, this indicates that other venues for the discussion of the connections between pedagogy and content are necessary. Remaining Questions Some of the many research questions that remain are: How will these pre-service teachers continue to develop and learn about reform-style teaching? Will experiences such as those Dr. Taylor's students had, combined with further educational coursework and field experiences, enable these pre-service teachers to meet the goals for reform in their teaching? What components of the MCTP program (such as the Seminar course or field experiences) are most significant in ensuring the pre-service teachers' development, and what implications does this have for other programs? Furthermore, how many and what types of content and education courses are necessary?
As we continue to follow MCTP teacher candidates throughout their undergraduate preparation for teaching and in their first years of teaching, we hope to gain a better understanding of answers to these questions.

References

Alasuutari, P. (1995). Researching culture: Qualitative method and cultural studies. Thousand Oaks, CA: Sage Publications.
Blumer, H. (1986). Symbolic interactionism. Berkeley, CA: University of California Press.
Bogdan, R.C. & Biklen, S.K. (1992). Qualitative research for education: An introduction to theory and methods. Boston, MA: Allyn and Bacon.
Borko, H., Eisenhart, M., Brown, C., Underhill, R.G., Jones, D., & Agard, P. (1992). Learning to teach hard mathematics: Do novice teachers and their instructors give up too easily? Journal for Research in Mathematics Education, 23, 194 - 222.
Brown, C. & Borko, H. (1992). Becoming a mathematics teacher. In D.A. Grouws (Ed.), Handbook of research on mathematics teaching and learning (pp. 209 - 242). New York: Macmillan.
Bruffee, K. (1986). Social construction, language, and the authority of knowledge: A bibliographical essay. College English, 48 (8), 773 - 789.
Cobb, P. & Bauersfeld, H. (1995). The emergence of mathematical meaning: Interaction in classroom cultures. Hillsdale, NJ: Lawrence Erlbaum Associates, Publishers.
Cobb, P., Wood, T., Yackel, E. & McNeal, B. (1992). Characteristics of classroom mathematics traditions: An interactional analysis. American Educational Research Journal, 29 (3), 573 - 604.
Delamont, S. (1983). Interaction in the classroom. New York: Methuen.
Driver, R., Asoko, H., Leach, J., Mortimer, E., & Scott, P. (1994). Constructing scientific knowledge in the classroom. Educational Researcher, 23 (7), 5 - 12.
Eisenhart, M., Borko, H., Underhill, R., Brown, C., Jones, D., & Agard, P. (1993). Conceptual knowledge falls through the cracks: Complexities of learning to teach mathematics for understanding. Journal for Research in Mathematics Education, 24 (1), 8 - 40.
Ernest, P. (1991). The philosophy of mathematics education. New York: The Falmer Press.
Flavell, J. (1979). Metacognition and cognitive monitoring: A new area of cognitive-developmental inquiry. American Psychologist, 34 (10), 906 - 911.
Flavell, J. (1981). Cognitive monitoring. In W. P. Dickson (Ed.), Children's oral communication skills (pp. 35 - 60). New York: Academic Press.
Gee, J. (1990). Social linguistics and literacies: Ideology in discourse. New York: The Falmer Press.
Gergen, K. (1985). The social constructionist movement in modern psychology. American Psychologist, 40 (3), 266 - 275.
Goetz, J. & LeCompte, M. (1984). Ethnography and qualitative design in educational research. New York: Academic Press.
Hicks, D. (1995). Discourse, learning, and teaching. In M. Apple (Ed.), Review of research in education: Vol. 21 (pp. 49 - 95). Washington, DC: American Educational Research Association.
Kennedy, M.M. (1991). Some surprising findings on how teachers learn to teach. Educational Leadership, 14 - 17.
Lave, J. (1988). Cognition in practice: Mind, mathematics, and culture in everyday life. New York: Cambridge University Press.
Lave, J. & Wenger, E. (1991). Situated learning: Legitimate peripheral participation. New York: Cambridge University Press.
LeCompte, M., Millroy, W., & Preissle, J. (1992). The handbook of qualitative research in education. New York: Academic Press.
Mathematical Association of America: Committee on the Mathematical Education of Teachers. (1988). Guidelines for the continuing mathematical education of teachers. Washington, DC: Author.
Mathematical Sciences Education Board and National Research Council. (1990). Reshaping school mathematics: A philosophy and framework for curriculum. Washington, DC: National Academy Press.
Mathematical Sciences Education Board and National Research Council. (1991). Counting on you. Washington, DC: National Academy Press.
Mathematical Sciences Education Board and National Research Council. (1995). Mathematical preparation of elementary school teachers: Issues and recommendations. Washington, DC: National Academy Press.
Merriam, S.B. (1988). Case study research in education. San Francisco: Jossey-Bass Publishers.
National Council of Teachers of Mathematics. (1989). Curriculum and evaluation standards for school mathematics. Reston, VA: Author.
National Council of Teachers of Mathematics. (1991). Professional standards for teaching mathematics. Reston, VA: Author.
National Research Council. (1991). Moving beyond myths: Revitalizing undergraduate mathematics. Washington, DC: National Academy Press.
National Science Foundation. (1993). Proceedings of the National Science Foundation workshop on the role of faculty from the scientific disciplines in the undergraduate education of future science and mathematics teachers. Washington, DC: Author.
Noddings, N. (1990). Constructivism in mathematics education. In R.B. Davis, C.A. Maher & N. Noddings (Eds.), Journal for Research in Mathematics Education Monograph Number 4 (pp. 7 - 18). Reston, VA: National Council of Teachers of Mathematics.
Romberg, T. (1992). Perspectives on scholarship and research methods. In D.A. Grouws (Ed.), Handbook of research on mathematics teaching and learning (pp. 49 - 64). New York: Macmillan.
Schifter, D. & Fosnot, C. (1993). Reconstructing mathematics education: Stories of teachers meeting the challenges of reform. New York: Teachers College Press.
Schoenfeld, A. (1992). Learning to think mathematically: Problem solving, metacognition, and sense making in mathematics. In D.A. Grouws (Ed.), Handbook of research on mathematics teaching and learning (pp. 334 - 370). New York: Macmillan.
Shulman, L. (1986). Those who understand: Knowledge growth in teaching. Educational Researcher, 15 (1), 4 - 14.
Simon, M. (1995). Reconstructing mathematics pedagogy from a constructivist perspective. Journal for Research in Mathematics Education, 26, 114 - 145.
Simon, M. & Schifter, D. (1991). Towards a constructivist perspective: An intervention study of mathematics teacher development. Educational Studies in Mathematics, 22, 309 - 331.
Stake, R. (1994). Case studies. In N. Denzin & Y. Lincoln (Eds.), Handbook of qualitative research (pp. 236 - 247). Thousand Oaks, CA: Sage Publications.
Stake, R. (1995). The art of case study research. Thousand Oaks, CA: Sage Publications.
Tucker, A. & Leitzel, J. (1995). Assessing calculus reform efforts: A report to the community. Washington, DC: Mathematical Association of America.
von Glasersfeld, E. (1987). Learning as a constructivist activity. In C. Janvier (Ed.), Problems of representation in the teaching and learning of mathematics (pp. 3 - 17). Hillsdale, NJ: Lawrence Erlbaum Associates, Publishers.
von Glasersfeld, E. (1990). An exposition of constructivism: Why some like it radical. In R. B. Davis, C. A. Maher, & N. Noddings (Eds.), Journal for Research in Mathematics Education Monograph Number 4 (pp. 19 - 29). Reston, VA: National Council of Teachers of Mathematics.
Yin, R. (1994). Case study research: Design and methods. Thousand Oaks, CA: Sage Publications.

Student Interview Protocols

Interview #1
1. What does it take for a student to be successful in mathematics?
2. What do you expect of a good math teacher?
3. What does it take for a student to be successful in science?
4. What do you expect of a good science teacher?
5. Can a student do well in both mathematics and science?

Interview #2
1. Has the instruction in [Dr. Taylor's] class helped you make connections between mathematics and science?
2. To what extent has this class involved the application of technologies (e-mail, cd's, computers, calculators, etc.)?
3. Has the instructor made significant attempts to understand your understanding of a topic before instruction? Did the tests reflect this emphasis?
4. To what extent has this course stressed reasoning, logic, and understanding over memorization of facts and procedures?
5. Do you think the teaching you experienced in this course models the type of teaching that you believe should be done in grades 4 - 8? How? Why?
6. Did your instructor explicitly encourage you to reflect on what you learned in this class?
7. After participating in this content class, what are your expectations regarding your mathematics and science methods classes? How should they each be taught? What should be in the curriculum?

Faculty Interview Protocol
(Used for both interviews - with verb tense changed for second interview.)
1. To what extent is the instruction in this class planned to highlight connections between mathematics and the sciences?
2. To what extent will this class involve the application of technologies (e-mail, cd's, computers, calculators, etc.)?
3. To what extent will you make significant attempts to access your students' prior knowledge of a topic before instruction? What techniques will you use?
4. To what extent do the tests and exams of this course stress reasoning, logic and understanding over memorization of facts and procedures? Would you provide copies of these materials?
5. In what ways do you think your teaching in this course models the type of teaching that you believe should be done in grades 4 - 8?
6. To what extent will you explicitly encourage your students to reflect on changes in their ideas about topics in your course? Can you give an example? What techniques do you anticipate using?
Heinz Bauer

Born: 31 January 1928 in Nuremberg, Germany
Died: 15 August 2002 in Nuremberg, Germany

Heinz Bauer was educated at the Realgymnasium in Nürnberg, obtaining his leaving qualification in 1947. He spent the year 1947-48 doing compulsory service as an assistant construction worker. Then, in 1948, he entered the University of Erlangen, officially named the Friedrich-Alexander-Universität at this time, to begin his studies of mathematics and physics. His lecturers included Georg Nöbeling and Otto Haupt, who suggested that he go to study for a period with Jean Dieudonné and Laurent Schwartz at the University of Nancy in France. Dieudonné and Schwartz were at this time two of the leading members of Bourbaki, and the style of mathematics which they were promoting had a lasting influence on Bauer. In fact his first paper Eine Rieszsche Bandzerlegung im Raum der Bewertungen eines Verbandes (1953), which studies decompositions of valuations of a lattice by use of F Riesz's bands in complete vector lattices, shows strong Bourbaki influences.

In the autumn of 1952 Bauer took the examinations to qualify as a high school teacher in Bavaria, then in February of the following year he was awarded a doctorate, with distinction, from Erlangen for his thesis Reguläre und singuläre Abbildungen eines distributiven Verbandes in einen vollständigen Vektorverband, welche der Funktionalgleichung f (x ∨ y) + f (x ∧ y) = f (x) + f (y) genügen, which he had written with Otto Haupt as his advisor. The work was published in the papers Eine Rieszsche Bandzerlegung im Raum der Bewertungen eines Verbandes (1953) and a paper with the same title as his thesis which was published in Crelle's Journal in 1955. He remained at Erlangen as an assistant, and he habilitated there in 1956.
His main interests at this time were in measure and integration, and in the work submitted for his habilitation he studied an abstract Riemann integral, introduced by L H Loomis, from the point of view of the theory of Radon measure. Bauer spent 1956-57 as a Research Fellow at the Centre National de la Recherche Scientifique, Paris, working with Gustave Choquet and Marcel Brelot. It was at this time that he became interested in potential theory and convexity theory, two areas to which he was to make major contributions over the rest of his career.

In 1961 Bauer was appointed to the University of Hamburg, where he became director of the Institute of Actuarial Mathematics and Mathematical Statistics and, together with Emil Artin, Lothar Collatz, Helmut Hasse, Emanuel Sperner and Ernst Witt, became one of the directors of the Mathematics seminar at the University of Hamburg. However, he did not take up the appointment at Hamburg straight away, for he had already planned spending the period from August 1961 to April 1962 as Visiting Associate Professor at the University of Washington in Seattle. In Hamburg, Bauer replaced Leopold Schmetterer, who had moved to Vienna to succeed Johann Radon, who had died in 1956. Bauer made a research visit to Paris in the spring of 1964. He served as Dean of the Faculty of Sciences at the University of Hamburg in session 1964-65.

On 1 September 1965, Bauer became a full professor at the University of Erlangen. We note that the University of Erlangen had been renamed Friedrich-Alexander-University Erlangen-Nürnberg in 1961 after merging with the Nürnberg College of Economics and Social Sciences.
He held this chair for 31 years until his retirement, but he also held visiting positions in a number of universities, some of which we have mentioned above, but the remainder include the University of Munich, the University of Washington, the Sorbonne, the California Institute of Technology, New Mexico State University, and Aarhus University. Chatterji [2] writes:- What obviously impresses any one reading any of the papers of Bauer is the clarity and precision of the presentation; these qualities are perceptible right from his very first publications. It is therefore not surprising that he has written several highly successful textbooks and very useful expository articles. We now look briefly at some of these 'highly successful textbooks' he published. The first of these Wahrscheinlichkeitstheorie und Grundzüge der Masstheorie was published in 1964 while he was still at Hamburg. L L Helms writes in a review:- This book is in two parts. The first part is a standard development of measure theory, containing three chapters dealing with measure theory, integration theory, and product measure spaces in that order. ... The second part of the book is devoted to probability theory. Generally speaking, only probability theory as it pertains to product measure spaces is discussed. ... The book is efficiently organized, with emphasis entirely on positive results. Lectures Bauer gave in the summer semester of 1965 at Hamburg were published as Harmonische Räume und ihre Potentialtheorie (1966) and further lectures, inspired by Dieudonné's book Foundations of modern analysis appeared as two separate texts Differential- und Integralrechnung. I, II (1966). These latter two texts give an excellent account of the basic concepts of analysis. In 1968 he published a second book with the title Wahrscheinlichkeitstheorie und Grundzüge der Masstheorie. 
The first half of this volume consists of the 1964 text, and it is followed by a section on measure in topological spaces and on the Fourier transform. The final part of the text proceeds at a much faster pace and covers topics such as the central limit theorem, conditional expectation, martingales, and some topics in stochastic processes. An English version, Probability Theory and Elements of Measure Theory, was published in 1972. Further editions of the German text appeared, the third in 1978, and the fourth edition of 1991 was published with the title Wahrscheinlichkeitstheorie. Because of the great popularity the book enjoyed, an extensive reworking and expansion of the sections on probability appeared in English translation as Probability theory in 1996, and the same treatment was given to the sections on measure theory, published in English translation as Measure and integration theory in 2001.

Another book by Bauer also became a classic. This was Mass- und Integrationstheorie (1990), which provided an introduction to measure theory and the theory of integration. A second edition was published in 1992. A collaboration with Bernd Anger led to the publication of Mehrdimensionale Integration (1976) which developed (from a review by L Janos):-

... the theory of multidimensional Lebesgue integration as a tool for handling integrals involved in problems of analysis and mathematical statistics (the gamma function, the Gauss distribution function, potential theory, the volume of the n-dimensional sphere, etc.).

Bauer served as an editor of Inventiones mathematicae (1966-79), Mathematische Annalen, Expositiones Mathematicae, and Aequationes Mathematicae. He served as a member of the Board of the Mathematical Research Institute of Oberwolfach for sixteen years from 1966.
He was honoured with election to the Bavarian Academy of Sciences in 1975, and was also elected to the Finnish Academy of Sciences, the Austrian Academy of Sciences, the Royal Danish Academy of Sciences and the German Academy of Natural Scientists Leopoldina. He was the chairman of the Mathematics section of this Academy in 1991. He was a winner of the Bavarian Order of Merit and the Bavarian Maximilian's Order of Science and the Arts. The Charles University, Prague, awarded him their medal in 1987 and in 1992 awarded him an honorary doctorate. The University of Dresden also awarded him an honorary doctorate in 1994. He was for many years a member of the German Mathematical Society and served as its President during 1976-77.

Netuka writes [7]:-

In 1974 Bauer was an invited speaker at the International Congress of Mathematicians in Vancouver. His contribution to the Proceedings starts as follows: "In 1964 Pierre Jacquinot opened a colloquium on potential theory in Orsay, France, by comparing potential theory with a road intersection in mathematics. This was ten years ago. Meanwhile traffic has increased, and crossroads had to be converted into interchanges of highways - also in potential theory." The first part of the contribution addresses three aspects of classical potential theory: superharmonic functions, Newtonian kernel and potentials, Brownian motion. The role of balayage is emphasised. The second part reflects two main aspects of potential theory of the early seventies: harmonic spaces and Markov processes. The last part is devoted to Fuglede's theory of finely harmonic functions, including an application to asymptotic paths for subharmonic functions.

In April 1979 Bauer was a plenary speaker at the British Mathematical Colloquium which was held at University College, London. He spoke on Korovkin approximation and convexity. In 1980 Bauer received the Chauvenet Prize from the Mathematical Association of America.
The American Mathematical Monthly reported [1]:- The Board of Governors of the Mathematical Association of America voted to award the 1980 Chauvenet Prize to Professor Heinz Bauer for his paper "Approximation and Abstract Boundaries," which appeared in this Monthly in 1978. A certificate and monetary award in the amount of five hundred dollars were presented to Professor Bauer at the Business Meeting of the Association on January 6, 1980. The Chauvenet Prize is awarded for a noteworthy paper of an expository or survey nature published in English that comes within the range of profitable reading for members of the Association. ... Professor Bauer is involved in research in integration theory, functional analysis (convexity and approximation theory), potential theory, and Markov processes. ... The paper for which Professor Bauer received the Chauvenet Prize discusses three famous theorems of P P Korovkin that concern uniform approximation of functions. These theorems are presented in a well-chosen setting and are illustrated and illuminated superbly with a collection of examples and applications. The paper is accessible to graduate students who have learned about the Lebesgue integral. Bauer retired from his chair in Erlangen in the spring of 1996. Sadly he suffered a stroke in the summer of that year from which he never fully recovered. In addition to his outstanding professional qualities, Bauer was known for his exceptionally broad general education, his knowledge of literature and history, his love of music and his high appreciation of cultural values. 
Article by: J J O'Connor and E F Robertson

List of References (8 books/articles)

Honours awarded to Heinz Bauer:
Speaker at International Congress 1974
BMC Plenary speaker 1979
MAA Chauvenet Prize 1980

JOC/EFR © July 2008
School of Mathematics and Statistics, University of St Andrews, Scotland
Robert Murphy: Mathematician and Physicist - The Early Years

We are fortunate to have an entry on Robert Murphy in the Dictionary of National Biography. This entry, by Thompson Cooper [1894], led the authors to original sources on Murphy, such as a letter written by Augustus De Morgan (1806-1871) and an obituary. Unless otherwise noted, the information in this section (The Early Years) and the following section (Careers in Academia) is from Cooper [1894].

Figure 1. Main street of Mallow, County Cork, Ireland, in 2007 (Source: Photograph by Alison, 2007. Licensed under Creative Commons Attribution-Share Alike 3.0 Unported license.)

Life Experiences

Robert Murphy was born in 1806 in Mallow, Cork County, Ireland. We do not know Murphy's exact birth date, but he was baptized in the Church of Ireland on March 8, 1807 [Barry 1999]. He was the third son (of seven children) of John Murphy, a shoemaker, and Margaret Murphy. (Creedon [2001] claimed that Murphy was the sixth of nine children, but no other references support this claim.) When he was eleven, Murphy was run over by a cart in an accident that resulted in a fractured thighbone. This incident left him bedridden for one year. During this time, Murphy read the works of Euclid (ca. 300 BCE) and studied algebra. Murphy demonstrated his mathematical ability at a young age when he anonymously responded with ingenious solutions to mathematical problems posed in a local newspaper by a teacher in Cork named Mulcahy. Murphy would end his responses by signing "Mallow" [Barry 1999]. Mulcahy was shocked when he learned that it was a thirteen-year-old boy who was submitting solutions to his problems [Creedon 2001]. Mr. Croker, one of Murphy's neighbors, provided an account in which Mulcahy said in amazement, "Mr. Croker, you have a second Sir Isaac Newton in Mallow: pray look after him" [Long 1846, pp. 337-338]. Mulcahy then convinced Mr.
Hopley, principal of a local school, to pay for the fees and books required for Murphy to attend. After completing his studies at the school, Mulcahy and Hopley sponsored Murphy’s application to Trinity College, Dublin, in 1823. Murphy was not admitted, most likely because of his lack of formal education [Creedon 2001]. Although Murphy’s first attempt to gain entrance into college was unsuccessful, he gained recognition less than a year later by writing a paper that dealt with the three classical Greek problems. Murphy’s Refutation When he was eighteen, Murphy was recognized for his aforementioned publication, Refutation of a Pamphlet Written by the Rev. John Mackey Entitled “A Method of Making a Cube a Double of a Cube, Founded on the Principles of Elementary Geometry,” wherein His Principles Are Proved Erroneous and the Required Solution Not Yet Obtained [1824]. Murphy’s pamphlet was cited by the well-known mathematician Augustus De Morgan in his “Budget of Paradoxes” [1864], published in the The Athenæum: Journal of Literature, Science, and the Fine Arts. De Morgan wrote [1864, p. 181]: This refutation was the production of an Irish boy of eighteen years old, self-educated in mathematics, the son of a shoemaker at Mallow. Murphy’s [1824] Refutation is something of an enigma, because we know very little about John Mackey. However, it is clear that Mackey thought he had found a way to use a straightedge and compass to double the cube and trisect an angle. With fervor, Murphy set out to demonstrate that Mackey was incorrect [Murphy 1824, pp. iv-v]: But amongst all the attempts which have been made for the solution of the duplication, there has not been one more foolish or more erroneous, than that of the Rev. John Mackey; which being masked under the appearance of truth, consists of a collection of false propositions. Murphy began the Refutation by providing a brief history of these two ancient problems. 
Additionally, Murphy stated that all he needed was Euclid’s Elements [Elrington 1822] and the “Method” of Girolamo Cardano (1501-1576) to show that Mackey was wrong. Murphy used ideas from classical Euclidean constructions and the algebraic ideas behind the solution to the cubic equation in his demonstrations. After he proved Mackey wrong, Murphy ended his paper with the sly comment, “We shall conclude with hoping that Mr. Mackey’s next attempt will be more successful” [Murphy 1824, p. 19]. The authors have provided a transcription of Murphy's Refutation, with commentary, as an appendix available here.
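The article does not reproduce Murphy's argument itself. As a brief modern aside by way of context (not from the original article), the impossibility Mackey was up against was later proved rigorously by Wantzel (1837), and can be sketched in field-theoretic terms:

```latex
% Modern sketch (not Murphy's 1824 argument): why the cube cannot be
% doubled by straightedge and compass.
Doubling the unit cube requires constructing a length $x$ with
\[
  x^{3} = 2 .
\]
Each straightedge-and-compass step adjoins the root of a linear or
quadratic equation over the field of previously constructed lengths, so
every constructible number $\alpha$ satisfies
\[
  [\,\mathbb{Q}(\alpha) : \mathbb{Q}\,] = 2^{k}
  \quad \text{for some integer } k \ge 0 .
\]
But $x^{3} - 2$ is irreducible over $\mathbb{Q}$ (Eisenstein's criterion
at $p = 2$), so
\[
  [\,\mathbb{Q}(\sqrt[3]{2}) : \mathbb{Q}\,] = 3 ,
\]
which is not a power of $2$; hence $\sqrt[3]{2}$ is not constructible.
```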
The atoms oscillate about their equilibrium positions with an average amplitude that depends on the thermal energy. An increase of temperature leads to vibrational motions of larger amplitude. This, in itself, does not necessarily result in an expansion of the solid, since the size of the solid is determined by the average separation between the atoms, not by their amplitude of motion. However, the interatomic "springs" deviate somewhat from Hooke's Law -- they are somewhat easier to stretch than to compress. (This asymmetry of the interatomic force can be recognized from the plot of the interatomic potential energy given in Figure 8.5. The curve of potential energy is asymmetric; it is steeper on the near side of the equilibrium point than on the far side). Consequently, when an increase of temperature brings about an increase of the amplitude of motion, the maximum displacement attained during stretching of the spring exceeds the maximum displacement attained during compression, and therefore the average separation between the atoms increases, resulting in an expansion of the solid.

The thermal expansion of a solid can be described mathematically by the increase in the linear dimensions of the solid. The increment in the length is directly proportional to the increment of temperature and to the original length,

\[ \Delta L = \alpha L \,\Delta T \]

The constant of proportionality α is called the coefficient of linear expansion. Table 20.2 lists the values of this coefficient for a few materials. The increment in the volume of the solid is directly proportional to the increment in the temperature and to the original volume,

\[ \Delta V = \beta V \,\Delta T \qquad (3) \]

Here the constant of proportionality β is called the coefficient of cubical expansion. This coefficient is three times the coefficient of linear expansion,

\[ \beta = 3\alpha \]

To see how this relationship comes about, consider a solid in the shape of a cube of edge L and volume V = L^3.
A small increment ΔL in the length can be treated as a differential, and consequently ΔV = 3L^2 ΔL, which gives

\[ \Delta V = 3L^{2}\,\Delta L = 3\,\frac{\Delta L}{L}\,V = 3\alpha V\,\Delta T \]

Comparing this with Eq. (3), we see that, indeed, β = 3α. The increment in the volume of a liquid can be described by the same equation [Eq. (3)] as the increment in the volume of a solid. Table 20.2 also lists values of coefficients of cubical expansion for some liquids. Water has not been included in this table because its behavior is rather peculiar: from 0°C to 3.98°C, the volume decreases with temperature, but not uniformly; above 3.98°C, the volume increases with temperature. Figure 20.4 plots the volume of 1 kg of water as a function of the temperature. The strange behavior of the density of water at low temperatures can be traced to the crystal structure of ice. Water molecules have a rather angular shape that prevents a tight fit of these molecules; when they assemble in a solid, they adopt a very complicated crystal structure with large gaps. As a result, ice has a lower density than water -- the density of ice is 917 kg/m^3, and the volume of 1 kg of ice is 1091 cm^3. At a temperature slightly above the freezing point, water is liquid, but some of the water molecules already have assembled themselves into microscopic (and ephemeral) ice crystals; these microscopic crystals give the cold water an excess volume.
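As a quick numerical illustration of the linear- and cubical-expansion relations (a sketch; the coefficient value below is an assumed one, roughly that of aluminum, and is not taken from Table 20.2, which is not reproduced here):

```python
# Numerical check of the expansion formulas above. alpha is an assumed
# illustrative value (roughly that of aluminum), NOT from Table 20.2.

alpha = 23e-6     # coefficient of linear expansion, 1/degree C (assumed)
L = 1.0           # edge of the cube, m
dT = 50.0         # temperature increase, degrees C

dL = alpha * L * dT                    # Delta L = alpha * L * Delta T
V = L ** 3
dV_exact = (L + dL) ** 3 - V           # exact change in the cube's volume
dV_linear = 3 * alpha * V * dT         # Delta V = beta V Delta T, beta = 3 alpha

print(f"dL        = {dL:.4e} m")
print(f"dV exact  = {dV_exact:.4e} m^3")
print(f"dV linear = {dV_linear:.4e} m^3")
# The two volume increments agree to about 0.1%, since (1 + x)^3 is
# approximately 1 + 3x when x = dL/L is small.
```

The exact volume change is always slightly larger than the β = 3α estimate, because the expansion (1 + x)^3 = 1 + 3x + 3x^2 + x^3 contains small positive higher-order terms.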
Quick question on classical optimization techniques based on the maximal FP of a dataflow equation. (self.Compilers)
submitted by leegao

If you scroll to the half way point of http://www.math.cornell.edu/~vlad/math6140/cornell-only/LectureNotes_02.pdf to a section called the "Generic Label-Correcting Algorithm", it seems to be equivalent to the "worklist" algorithm used to compute the maximal fp of many dataflow equation based optimizations (like live variable analysis). Since most of these equations are monotone (and I suppose causal by the definition in the notes), does this imply that all such algorithms are expected to terminate in O(n^2) for n the number of basic blocks in your control flow graph?

all 2 comments

[–]larsberg

Those notes are unfortunately not available outside Cornell. However, since I spend a lot of time working on similar algorithms (in the context of our compiler, Manticore, for a parallel dialect of Standard ML), I'll give it a shot.

Yes, most of these algorithms run in O(|variables| * |values|) in theory. The intuition is that the result of many dataflow-style analyses is a map from variables to values. Assuming you start from empty mappings, at each step of the algorithm, you will add at least one value to at least one variable's entry in the map (else the worklist would be empty and you would terminate; also, the monotone property ensures the mapped values never decrease). In the worst case, you'll have something like every variable getting every possible value, which is where the product comes in above.

In practice, we tend to use some hacks to cap the worst-case. For example, we'll say that a variable can have "up to six distinct values". That brings it down to a linear-time algorithm, which you can use in practice.
The O(n^2) versions of analyses tend to be completely impractical for any non-trivial example and, even when tuned, tend to blow up when you feed them something like the output from a parser generator.

Also note that this is for worklist-style algorithms. If you've decided to "cheese it" and iterate over the whole program's IR rather than set up the dataflow equations, you can end up with algorithms that are O(|program nodes| * |variables| * |values|), which gets really ugly really fast.

Also also remember that you may have another small factor in there, depending on the underlying representation of your map. If you aren't using a dense matrix to store your results (size constraints, etc.) and instead have something like a set that you're checking membership in and adding elements to and you haven't used the hack I mentioned above, you may have another log-ish factor.

Hope that helps!
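To make the worklist scheme being discussed concrete, here is a minimal sketch of one such monotone dataflow analysis — live-variable analysis — on a made-up three-block CFG (illustrative code only; not from the Cornell notes or from Manticore):

```python
# Minimal worklist solver for live-variable analysis, a backward, monotone
# dataflow problem: live_in[b] = use[b] | (live_out[b] - def[b]), where
# live_out[b] is the union of live_in over b's successors. Because the
# sets only ever grow, the loop reaches the least fixed point and stops.

def liveness(succ, use, defs):
    live_in = {b: set() for b in succ}
    live_out = {b: set() for b in succ}
    pred = {b: [] for b in succ}
    for b, ss in succ.items():
        for s in ss:
            pred[s].append(b)
    worklist = list(succ)                      # seed with every block
    while worklist:
        b = worklist.pop()
        live_out[b] = set().union(*(live_in[s] for s in succ[b]))
        new_in = use[b] | (live_out[b] - defs[b])
        if new_in != live_in[b]:               # grew: revisit predecessors
            live_in[b] = new_in
            worklist.extend(pred[b])
    return live_in, live_out

# A made-up three-block CFG: b0 -> b1, b1 -> b1 (loop), b1 -> b2.
succ = {"b0": ["b1"], "b1": ["b1", "b2"], "b2": []}
use  = {"b0": set(),  "b1": {"x"},        "b2": {"y"}}
defs = {"b0": {"x"},  "b1": {"y"},        "b2": set()}
live_in, live_out = liveness(succ, use, defs)
print(live_in)   # {'b0': set(), 'b1': {'x'}, 'b2': {'y'}}
```

Each pop either leaves the maps unchanged or strictly grows one `live_in` set, which is exactly the counting argument behind the O(|variables| * |values|) bound described above.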
Physics and control of wall turbulence
Seminar Room 1, Newton Institute

It has been generally accepted that nonlinearity is an essential characteristic of turbulent flows. Consequently, except for special situations in which a linear mechanism is expected to play a dominant role (e.g., rapidly straining turbulent flows to which the rapid distortion theory can be applied), the role of linear mechanisms in turbulent flows has not received much attention. Even for transitional flows, a common notion is that the most a linear theory can provide is insight into the early stages of transition to turbulence. But several investigators have recently shown that linear mechanisms play an important role even in fully turbulent, and hence fully nonlinear, flows. Examples of such studies include: optimal disturbances in turbulent boundary layers (Butler & Farrell 1993); transient growth due to non-normality of the Navier-Stokes system (Reddy & Henningson 1993); applications of a linear control theory to transitional and turbulent channel flows (Joshi et al. 1997); and a numerical experiment (Kim & Lim 2000) demonstrating that near-wall turbulence could not be maintained in turbulent channel flow when a linear mechanism was artificially suppressed.

Turbulent channel flow is analyzed from a linear system point of view. After recasting the linearized Navier-Stokes equations into a state-space representation, the singular value decomposition (SVD) analysis is applied to the linear system, with and without control input, in order to gain new insight into the mechanism by which various controllers are able to accomplish the viscous drag reduction in turbulent boundary layers. We examine linear-quadratic-regulator (LQR) controllers that we have used, as well as the opposition control of Choi et al. (1994), which has been a benchmark for comparison of various control strategies.
The performance of control is examined in terms of the largest singular values, which represent the maximum disturbance energy growth ratio attainable in the linear system under control. The SVD analysis shows a similarity between the trend observed in the SVD analysis (linear) and that observed in direct numerical simulations (nonlinear), thus reaffirming the importance of linear mechanisms in the near-wall dynamics of turbulent boundary layers. It is shown that the SVD analysis of the linearized system can indeed provide useful insight into the performance of linear controllers. Other issues, such as the effect of using the evolving mean flow as control applied to a nonlinear flow system (a.k.a. gain scheduling) and high Reynolds-number limitation, can also be investigated through the SVD analysis. Finally, time permitting, a linear Floquet analysis of a channel flow with periodic control, which had been shown to sustain skin-friction drag below that of a laminar channel, will be discussed to elucidate the drag reducing mechanism.
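To illustrate the SVD viewpoint in miniature (a toy two-state system made up for this sketch, not any of the channel-flow operators discussed in the abstract): for a stable linear system dx/dt = Ax, the largest singular value of the propagator e^{At} gives the maximum disturbance energy growth over all initial conditions, and a non-normal A can produce transient growth even though every eigenmode decays.

```python
# Transient energy growth of a linearly stable but non-normal system.
# For dx/dt = A x, the maximum energy amplification at time t over all
# initial conditions is sigma_max(expm(A t))**2, the squared largest
# singular value of the propagator. A is a made-up 2x2 toy operator.
import numpy as np
from scipy.linalg import expm, svdvals

A = np.array([[-1.0, 5.0],
              [ 0.0, -2.0]])   # eigenvalues -1 and -2: every mode decays

times = np.linspace(0.0, 5.0, 501)
growth = [svdvals(expm(A * t))[0] ** 2 for t in times]

print(f"peak energy growth = {max(growth):.2f} (transient growth if > 1)")
```

The peak exceeds 1 despite both eigenvalues being damped; it is this non-normal, purely linear amplification that the SVD analysis quantifies, with and without control.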
solutions of a particular equation

Hello all... I have a problem which I have been grappling with for some time. Let b be a positive integer and consider the equation z = x + y + b where x, y, z are variables. Suppose the integers {1, 2, ..., 4b+5} are partitioned in two classes. I wish to show that at least one of the classes contains a solution to the equation.

I have tried using induction on b. The case b = 1 has been solved entirely by me. But I cannot understand how to use the induction hypothesis to prove the result. The more I think of it, the more I feel that a different approach to the problem is needed, but I can't figure out what. It is sort of a special case of a research problem, which has been solved in a more general way. I have little experience of doing research on my own, and so will be glad if anyone can offer me any advice or hints. Thanks.
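Not a proof, but a brute-force check of the claim for small b (this allows x = y, an assumption since the post does not say whether x, y, z must be distinct): enumerate all 2-colorings of {1, ..., 4b+5} and verify that each one contains a monochromatic solution of z = x + y + b.

```python
# Exhaustively verify, for small b, that every 2-class partition of
# {1, ..., 4b+5} has a class containing a solution of z = x + y + b
# (repetitions such as x = y are allowed here).
from itertools import product

def has_mono_solution(coloring, b, n):
    # coloring[i] is the class (0 or 1) of the integer i + 1
    for x in range(1, n + 1):
        for y in range(x, n + 1):
            z = x + y + b
            if z <= n and coloring[x - 1] == coloring[y - 1] == coloring[z - 1]:
                return True
    return False

for b in (1, 2):
    n = 4 * b + 5
    assert all(has_mono_solution(c, b, n) for c in product((0, 1), repeat=n))
    print(f"b = {b}: every 2-coloring of {{1, ..., {n}}} has a monochromatic solution")
```

For b = 1 the bound is also tight: the partition {1, 2, 7, 8} versus {3, 4, 5, 6} of {1, ..., 8} contains no monochromatic solution, so 4b+5 cannot be lowered there.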
NCCTM State High School Mathematics Contests

The State Mathematics Contest was initiated in 1979 to provide state level competition in comprehensive mathematics for those students who had excelled in regional contests held earlier across the state. Under the sponsorship of the North Carolina Council of Teachers of Mathematics, regional winners are invited to the NC School of Science and Mathematics to determine the top mathematics students in the state. The State Contest expanded to include state finals in Algebra 1, Algebra 2, and Geometry. Results from three regional final sites are used to determine the top performers in the three subjects. Go to the Contest web site for more information or contact John Goebel, Contest Committee Chair.

2007 USA Mathematical Olympiad

The Mathematical Association of America (MAA) has announced that 14 North Carolina students are among the 505 high school students who qualified for the highly selective and prestigious 2007 USA Mathematical Olympiad (USAMO). The two-day, six-question USAMO was distributed via the Internet in April to the participants' schools across the nation. Students took the mathematical proof-writing USAMO contest in school under the supervision of a teacher. Students' test papers were faxed to the American Mathematics Competitions (AMC) office for grading by a panel of professional mathematicians. Twelve top scoring students will be announced as winners of the USAMO Monday, May 7, 2007. An awards ceremony for the 12 USAMO Winners will be held May 20-21 in Washington DC at the MAA Headquarters and the US Department of State Building. The USAMO is the pinnacle event in the sequence of increasingly challenging mathematical contests administered annually by the MAA in the American Mathematics Competitions program. The AMC seeks to encourage student interest and achievement in mathematics through these challenging mathematical contests.
From over 225,000 students worldwide taking the first contest (AMC10 and/or AMC12), only 10,000 were invited to compete in the second contest, the American Invitational Mathematics Examination (AIME). Only 505 were selected from the second contest to participate in the USAMO.
• Tian-yi Jiang, Cary Academy, Cary
• Andrew Hertzberg, Chapel Hill HS, Chapel Hill
• John Pardon, Durham Academy, Durham
• Jeremy Hahn, East Chapel Hill HS, Chapel Hill
• Arnav Tripathy, East Chapel Hill HS, Chapel Hill
• Bryce Taylor, Hanes MS, Winston-Salem
• John Berman, Hoggard HS, Wilmington
• Kevin Lang, Myers Park HS, Charlotte
• Yakov Berchenko-Kogan, Broughton HS, Raleigh
• Joseph Lozier, NC School of Science and Mathematics, Durham
• Ray Wang, NC School of Science and Mathematics, Durham
• Seoungjun Lee, Ravenscroft School, Raleigh
• Vivek Bhattacharya, Enloe HS, Raleigh
• Mikhail Lavrov, Enloe HS, Raleigh
• Daniel Vitek, Enloe HS, Raleigh

North Carolina Mathematics Team Wins National Competition

A team of fifteen students from across North Carolina defeated over one hundred teams from the United States, Canada, Taiwan and the Philippines at the American Regions Mathematics League (ARML) Meet held on Saturday, June 3rd. The meet was held simultaneously at three locations in the US; Penn State University, the University of Iowa, and the University of Nevada, Las Vegas. Two teams from North Carolina participated in this meet and were chosen on the basis of their performance in the contests sponsored by the North Carolina Council of Teachers of Mathematics (NCCTM) and on their scores on the various tests of the American Mathematics Competitions (AMC). The NCCTM sponsors sixteen regional qualifying contests culminating in the State High School Mathematics Contest. The top twenty students at this contest are automatically invited to be on one of the two North Carolina Math Teams.
The other students are typically younger students and are selected on the basis of their scores on a variety of contests, including those administered by the AMC. The coaches for this year's teams were Archie Benton of North Buncombe High School, Ken Thwing of Freedom High School (Morganton), Kathy Hill and Deanna Lancaster of Athens Drive High (Raleigh), and David Mermin of Duke University. The coaches knew before heading to Pennsylvania that they had a good team. Many of the students on the team had competed last year at ARML and placed seventh. In addition, five of the team members had already been selected to attend the training session for the USA Mathematical Olympiad Team this summer at the University of Nebraska. Only fifty-four students nationwide were selected for this practice/selection session and six of them were from North Carolina. The ARML Competition is primarily a team event. Three of the four parts of the contest are done in teams. The Power Round lasts for one hour and the students must write rigorous mathematical derivations and proofs. The Team Round lasts twenty minutes and the students must collectively come up with answers to ten unrelated problems. The Relay Round consists of two problems which are done in relay fashion, with each student passing an answer on to another team member, who then uses that answer to complete his or her stage of the relay. The final round is the Individual Round in which each participant must individually answer eight questions. The North Carolina Team had the highest team score, the highest Power Round score, and the highest individual total of all the teams. The high scoring individual team members included John Berman, a ninth-grader from J. T.
Hoggard High School (Wilmington), Jeremy Hahn, a tenth-grader from East Chapel Hill High School, Mikhail Lavrov, an eleventh-grader from Enloe High School (Raleigh), Arnav Tripathy, an eleventh-grader from East Chapel Hill High School, and Amy Wen of the North Carolina School of Science and Mathematics. The high scoring member of the "B" team was Steven Ji of the North Carolina School of Science and Mathematics. The North Carolina Math Teams are sponsored by the North Carolina Council of Teachers of Mathematics with generous support also coming from Duke Energy Corporation. For the past three years Duke Energy has provided additional funding so that this trip costs the individual students very little.
Basic growth of functions...Asymptotic notations question

March 9th 2011, 06:01 PM
Basic growth of functions...Asymptotic notations question
Hi all
I'm probably missing something, it's not obvious that g(n) = O(f(n)) for all n (I don't see how 2^1000 boundaries of g(n) make any difference here)?

March 9th 2011, 11:10 PM
Quote:
Hi all
I'm probably missing something, it's not obvious that g(n) = O(f(n)) for all n (I don't see how 2^1000 boundaries of g(n) make any difference here)?
Yes, $g(n)=O(f(n))$, but is that what the question is asking for? The wording seems pretty obscure. Could it want something like: $f(n)=O(n^3\log(n))$ and $g(n)=O(n^3)$, and so $g(n)=O(n^3\log(n))$? Also you probably are expected to justify $f(n)=O(n^3\log(n))$.

March 10th 2011, 11:14 AM
ok thanks didn't think it through.
Game System Cportloto

...draw, but instead we will try to predict which numbers will not appear in the winning list (then fewer numbers remain, from which, with a little luck, you can guess a winning combination), thus increasing the likelihood of success. Certainly, from a mathematical point of view, the probability of winning is almost unchanged. For example, the number 3 has already come up in two consecutive draws (draws 159 and 160); for me the probability that it comes up again is very small, so I can safely exclude it, and I will then have fewer numbers from which to guess the winners. But we will not stop there: we can invent further rules for excluding certain numbers in a given draw. Of course, we need to have statistics on the winning numbers. For example, take a rule under which the number matching the day of the draw does not appear in that draw's winning list; with the available statistics we can assess in how many of the prior draws such a rule held true. So we have already ruled out two numbers. Next we must come up with different rules.
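As a sketch only: the function names below are mine, the "two draws in a row" exclusion rule is the post's, and, as the post itself concedes, the true odds barely change.

```python
def excluded_numbers(history, streak=2):
    """Numbers present in the winning list of each of the last `streak`
    draws; under the post's rule they are dropped from the pool."""
    return set.intersection(*(set(draw) for draw in history[-streak:]))

def candidate_pool(all_numbers, history, streak=2):
    """Remaining numbers to choose a combination from."""
    return sorted(set(all_numbers) - excluded_numbers(history, streak))

# e.g. the number 3 appeared in draws 159 and 160:
history = [[3, 7, 12, 20], [3, 5, 9, 18]]
pool = candidate_pool(range(1, 21), history)  # 3 is excluded from 1..20
```

Further exclusion rules would simply intersect their own "excluded" sets with this one before the pool is formed.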
Middle Village Algebra 2 Tutor
Find a Middle Village Algebra 2 Tutor

...I spend way too much time watching shows like Family Guy, Friends, Pawn Stars and the Simpsons. I have a weird obsession with Military History so if you need tutoring for anything related to that, I'm your man. I have extensive experience tutoring K-6 students in English, Math, and Social Studies...
37 Subjects: including algebra 2, reading, English, writing

...I am a young man that works patiently and relentlessly until the mission is finished. My mother and teacher taught me that in each area of work and learning do it Honestly and faithfully. I follow this principle daily, which can be reflected in all aspects of my life.
27 Subjects: including algebra 2, reading, chemistry, Spanish

...I maintain a broad range of clientele, from students to those more advanced in their careers. I teach all topics, from beginners material to more advanced uses such as data analysis and VBA/SQL coding. I have been the go-to guy for PowerPoint presentations at work and school.
15 Subjects: including algebra 2, calculus, algebra 1, finance

...During a year abroad in Istanbul, I worked individually with young children to improve their English. In my senior year I was employed by the school's athletic department to tutor student-athletes in Latin and English. I enjoy the opportunity to continue helping students with their studies of Latin and English, and with writing in general.
11 Subjects: including algebra 2, English, writing, algebra 1

...The progress is guaranteed! I am a native Russian speaker. My overall grade in Russian and Russian literature in high school was 5 (the highest possible grade), and I received a Gold Medal for outstanding academic performance. I continue to actively use Russian in my everyday life.
24 Subjects: including algebra 2, physics, GRE, Russian
Robert Manning was born in Normandy, France, in 1816, a year after the battle of Waterloo, in which his father had taken part. He died in 1897. In 1826, he moved to Waterford, Ireland, and worked as an accountant. In 1846, during the year of the great famine, Manning was recruited into the Arterial Drainage Division of the Irish Office of Public Works. After working as a draftsman for a while, he was appointed an assistant engineer to Samuel Roberts later that year. In 1848, he became district engineer, a position he held until 1855. As a district engineer, he read "Traité d'Hydraulique" by d'Aubisson des Voissons, after which he developed a great interest in hydraulics. From 1855 to 1869, Manning was employed by the Marquis of Downshire, while he supervised the construction of the Dundrum Bay Harbor in Ireland and designed a water supply system for Belfast. After the Marquis' death in 1869, Manning returned to the Irish Office of Public Works as assistant to the chief engineer. He became chief engineer in 1874, a position he held until his retirement. Manning did not receive any education or formal training in fluid mechanics or engineering. His accounting background and pragmatism influenced his work and drove him to reduce problems to their simplest form. He compared and evaluated the seven best-known formulas of the time: Du Buat (1786), Eyelwein (1814), Weisbach (1845), St. Venant (1851), Neville (1860), Darcy and Bazin (1865), and Ganguillet and Kutter (1869). He calculated the velocity obtained from each formula for a given slope and for hydraulic radius varying from 0.25 m to 30 m. Then, for each condition, he found the mean value of the seven velocities and developed a formula that best fitted the data.
The first best-fit formula was the following:

V = 32 [RS(1 + R^(1/3))]^(1/2)

He then simplified this formula to:

V = C R^x S^(1/2)

In 1885, Manning gave x the value of 2/3 and wrote his formula as follows:

V = C R^(2/3) S^(1/2)

In a letter to Flamant, Manning stated: "The reciprocal of C corresponds closely with that of n, as determined by Ganguillet and Kutter; both C and n being constant for the same channel." On December 4, 1889, at the age of 73, Manning first proposed his formula to the Institution of Civil Engineers (Ireland). This formula saw the light in 1891, in a paper written by him entitled "On the flow of water in open channels and pipes," published in the Transactions of the Institution of Civil Engineers (Ireland). Manning did not like his own equation for two reasons: First, it was difficult in those days to determine the cube root of a number and then square it to arrive at a number to the 2/3 power. In addition, the equation was dimensionally incorrect, and so to obtain dimensional correctness he developed the following equation:

V = C (gS)^(1/2) [R^(1/2) + (0.22/m^(1/2))(R - 0.15 m)]

where m = "height of a column of mercury which balances the atmosphere," and C was a dimensionless number "which varies with the nature of the surface." However, in some late 19th century textbooks, the Manning formula was written as follows:

V = (1/n) R^(2/3) S^(1/2)

Through his "Handbook of Hydraulics," King (1918) led to the widespread use of the Manning formula as we know it today, as well as to the acceptance that Manning's coefficient C should be the reciprocal of Kutter's n. In the United States, n is referred to as Manning's friction factor, or Manning's constant. In Europe, the Strickler K is the same as Manning's C, i.e., the reciprocal of n. This contribution was written in April of 2005 by Fadi Khoury, based on the available literature. Fadi graduated with an M.S. degree in Civil Engineering from San Diego State University in June 2007.
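A quick numeric sketch of the last (modern, SI) form of the formula; in US customary units the leading coefficient becomes 1.486/n rather than 1/n. The function name and the example channel are illustrative, not from the article.

```python
import math

def manning_velocity(n, R, S):
    """Mean flow velocity V = (1/n) * R^(2/3) * S^(1/2), in SI units:
    R = hydraulic radius in metres, S = channel slope (dimensionless),
    n = Manning roughness coefficient."""
    return (1.0 / n) * R ** (2.0 / 3.0) * math.sqrt(S)

# e.g. a smooth concrete channel: n ~ 0.013, R = 1 m, S = 0.001
v = manning_velocity(0.013, 1.0, 0.001)  # metres per second
```

Note how the 2/3 exponent that Manning found awkward to compute by hand is a single expression today.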
Variational Upper and Lower Bounds for Noisy-OR

Let us now return to the problem of computing the posterior probabilities in the QMR model. Recall that it is the conditional probabilities corresponding to the positive findings that need to be simplified. To this end, we write where f(x) is a concave function of x. Based on the discussion in the previous section, we know that there must exist a variational upper bound for this function that is linear in x: Using Eq. (9) to evaluate the conjugate function The desired bound is obtained by substituting into Eq. (13) (and recalling the definition Note that the ``variational evidence'' d. Just as with the negative findings, this implies that the variational evidence can be incorporated into the posterior in time linear in the number of diseases associated with the finding. There is also a graphical way to understand the effect of the transformation. We rewrite the variational evidence as follows: Note that the first term is a constant, and note moreover that the product is factorized across the diseases. Each of the latter factors can be multiplied with the pre-existing prior on the corresponding disease (possibly itself modulated by factors from the negative evidence). The constant term can be viewed as associated with a delinked finding node We now turn to the lower bounds on the conditional probabilities f is concave we need only identify the non-negative variables a, which is now 12) we have: where we have allowed a different variational distribution

Michael Jordan
Sun May 9 16:22:01 PDT 1999
ASA 124th Meeting, New Orleans, October 1992

2pPP7. Auditory space modeling and simulation via orthogonal expansion and generalized spline model.

Jiashu Chen, Barry D. Van Veen, Kurt E. Hecox
University of Wisconsin-Madison, H6/573, 600 Highland Ave., Madison, WI 53792-5132

A two-stage model that establishes a mathematical representation of auditory space is developed. The first stage of the model consists of a low-dimensional subspace representation for the free-field-to-eardrum transfer functions (FETF's). The bases of this subspace are complex-valued eigentransfer functions (EF's) obtained from the Karhunen-Loève expansion of the measured FETF's covariance matrix. Each FETF is represented as a weighted sum of the EF's. The second stage of the model is a functional representation for the weights, termed spatial transformation characteristic functions (STCF's), applied to the EF's. The STCF's are functions of azimuth and elevation. A generalized spline model is fitted to each STCF derived from measurements. The spline model filters out noise and permits interpolation of the STCF between measured directions. The FETF for an arbitrary direction is synthesized by weighting the EF's with the smoothed and interpolated STCF's. Using FETF's sampled uniformly over the upper 3/4 sphere for one KEMAR ear, it is shown that 99.9% of the energy in the measured FETF's is contained in a 16-dimensional subspace. The relative average mean-square error between 2320 measured and simulated FETF's is found to be less than 0.25%. [Raw data provided by Dept. of Neurophysiology, University of Wisconsin-Madison.]
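The first stage of the model lends itself to a compact numerical sketch. Below, a toy real-valued stand-in for the measured FETFs (the real data are complex-valued) is decomposed by the Karhunen-Loève expansion, i.e. an eigendecomposition of the sample covariance; all names and toy dimensions are illustrative, not from the abstract.

```python
import numpy as np

rng = np.random.default_rng(1)
# Toy FETF matrix: rows = measurement directions, cols = frequency bins,
# built to have hidden 16-dimensional structure plus a little noise.
latent = rng.standard_normal((16, 64))
coeffs = rng.standard_normal((300, 16))
fetf = coeffs @ latent + 1e-3 * rng.standard_normal((300, 64))

mean = fetf.mean(axis=0)
cov = np.cov(fetf - mean, rowvar=False)   # covariance across directions
evals, evecs = np.linalg.eigh(cov)
order = np.argsort(evals)[::-1]           # sort by decreasing energy
evals, evecs = evals[order], evecs[:, order]

k = 16
ef = evecs[:, :k]                         # "eigentransfer functions" (EFs)
stcf = (fetf - mean) @ ef                 # per-direction weights (STCF samples)
recon = stcf @ ef.T + mean                # FETFs resynthesized from k EFs

energy_captured = evals[:k].sum() / evals.sum()
rel_err = np.linalg.norm(fetf - recon) ** 2 / np.linalg.norm(fetf) ** 2
```

On the abstract's measured data the analogous computation finds 99.9% of the energy in a 16-dimensional subspace; the spline smoothing of the STCFs over azimuth and elevation is a separate second stage not sketched here.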
Stockertown Trigonometry Tutor I am a degreed mechanical engineer who enjoys teaching others. I have helped all my kids from elementary through college with their homework. I have been in the industry for over 20 years. 20 Subjects: including trigonometry, geometry, ASVAB, algebra 1 ...I have strong math skills and am able to successfully tutor students in understanding math concepts essential to passing the GED. I am currently tutoring several students in GED preparation. Experienced college and high school chemistry tutor with excellent understanding of the physical sciences. 36 Subjects: including trigonometry, chemistry, reading, physics If you are having trouble in a high school or college level math class, or are preparing to take the SATs, then look no further. I am a 2001 graduate of Penn State's Math Education department. I have a flexible schedule with availability during the day, night, or weekends. 11 Subjects: including trigonometry, Spanish, calculus, geometry ...Trigonometry is sometimes introduced using a dull collection of problems, like those asking you to determine the height of a lighthouse based on the length of a shadow. However, I am a physicist, so I understand how sines and cosines can be used to calculate the dynamics of flying objects and us... 13 Subjects: including trigonometry, reading, physics, writing ...While in college, I was part of the Academic Learning Center where I tutored college students in mathematics and sociology anywhere from basic courses to preparing for the Praxis exams. How I tutor depends on the student. I always want to learn the student's strengths and areas of weakness in order to provide the optimal chance to understand the information. 13 Subjects: including trigonometry, Spanish, calculus, geometry
Functions: How to arrive at the answer?

October 14th 2009, 02:49 PM #1
This is a problem in my book and I don't know how it came to the answer. A rancher has 200 feet of fencing to enclose two adjacent rectangular corrals.

y [______|______]   (basically how the figure looks)
  <--x--><--x-->

It says to write the area A of the corral as a function of x. The answer is A = 8x(50 - x)/3.
I'm confused.

October 14th 2009, 03:09 PM #2
Using the variables, the drawing, and the given total length, find an equation for the total length, in terms of "x" and "y". Solve this equation for y, in terms of x. Write down the equation for the area A in terms of the lengths x and the depth y. Replace "y" with the expression from above. If you get stuck, please reply showing your steps and reasoning so far. Thank you!

October 14th 2009, 03:27 PM #3
Here are my steps.
P = 4x + 2y
200 = 4x + 2y
y = (200 - 4x)/2
y = 100 - 2x
A = xy
A = x(100 - 2x)
I get stuck after this part. I'm not sure if I need to distribute the x or not.

October 14th 2009, 03:43 PM #4
doesn't matter, leave it as is or distribute; it's the same thing. personally, I'd leave it as is ... it allows me to "see" the value of x that will yield a maximum area.
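For reference, the book's answer follows once the middle fence shared by the two corrals is counted as a third vertical run of fence, a step the thread never states explicitly. With each corral x wide and the pen y deep:

```latex
4x + 3y = 200 \;\Rightarrow\; y = \frac{200 - 4x}{3}, \qquad
A = 2x\,y = \frac{2x(200 - 4x)}{3} = \frac{8x(50 - x)}{3}.
```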
An adaptive prediction and detection algorithm for multistream syndromic surveillance

Surveillance of Over-the-Counter pharmaceutical (OTC) sales as a potential early indicator of developing public health conditions, in particular in cases of interest to biosurveillance, has been suggested in the literature. This paper is a continuation of a previous study in which we formulated the problem of estimating clinical data from OTC sales in terms of optimal LMS linear and Finite Impulse Response (FIR) filters. In this paper we extend our results to predict clinical data multiple steps ahead using OTC sales as well as the clinical data itself.

The OTC data are grouped into a few categories and we predict the clinical data using a multichannel filter that encompasses all the past OTC categories as well as the past clinical data itself. The prediction is performed using FIR filters and the recursive least squares method in order to adapt rapidly to nonstationary behaviour. In addition, we inject simulated events in both clinical and OTC data streams to evaluate the predictions by computing the Receiver Operating Characteristic curves of a threshold detector based on predicted outputs.

We present all prediction results showing the effectiveness of the combined filtering operation. In addition, we compute and present the performance of a detector using the prediction output.

Multichannel adaptive FIR least squares filtering provides a viable method of predicting public health conditions, as represented by clinical data, from OTC sales and/or the clinical data. The potential value to a biosurveillance system cannot, however, be determined without studying this approach in the presence of transient events (nonstationary events of relatively short duration and fast rise times). Our simulated events superimposed on actual OTC and clinical data allow us to provide an upper bound on that potential value under some restricted conditions.
Based on our ROC curves we argue that a biosurveillance system can provide early warning of an impending clinical event using ancillary data streams (such as OTC) with established correlations with the clinical data, and a prediction method that can react to nonstationary events sufficiently fast. Whether OTC (or other data streams yet to be identified) provide the best source of predicting clinical data is still an open question. We present a framework and an example to show how to measure the effectiveness of predictions, and compute an upper bound on this performance for the Recursive Least Squares method when the following two conditions are met: (1) an event of sufficient strength exists in both data streams, without distortion, and (2) it occurs in the OTC (or other ancillary streams) earlier than in the clinical data.

Surveillance of Over-the-Counter pharmaceutical (OTC) sales as a potential early indicator of developing public health conditions, in particular in cases of interest to biosurveillance, has been suggested in the literature [1]. Sales of over-the-counter pharmaceuticals (OTCs) offer several advantages as possible early indicators of public health. They are very widely used [2], and reliable and detailed electronic records of their sales exist. Another possible advantage is the timeliness of OTC sales relative to other observable events that might occur when the public health is threatened. This is a particularly difficult aspect since it requires the identification of specific events in all the data streams before a judgment can be reached as to the correlations and the timeliness of those events.
We have, in a previous article [3], provided evidence that when judiciously grouped, the OTC data show time-dependent correlations with clinical data, and that the present day's values of the latter can be estimated well from the present and past values of the former using a set of linear filters h[j][m], where the subscript j refers to the particular OTC product group (multiple groups are used) and the index m refers to the time step. If we denote the clinical data time series on day number n by y[n], and the OTC time series on the same day number n by x[j][n] (the index j denotes the OTC product group), then the estimation problem discussed in our previous paper refers to using today's and past days' OTC data to estimate today's clinical data, in the sense that the estimated quantity is

ŷ[n] = Σ_j Σ_{m=0..M-1} h[j][m] x[j][n - m],

where the filters h[j][m] are assumed to have a span of M points (days). This estimate is to be compared with the actual value of the clinical data today. The "prediction" problem, the subject of the present paper, refers to an attempt to estimate future values of the clinical data using today's and past days' values of the OTC channels, i.e. the predicted quantity is now

ŷ[n + k] = Σ_j Σ_{m=0..M-1} h[j][m] x[j][n - m], with k > 0.

In the parlance of linear filter theory, the data set whose prediction is desired (the dependent variable) is termed the primary data channel. All other data sets (distinct from the primary channel) that are used to make the predictions are known as reference channels, otherwise known as independent variables. When the primary data set (the dependent variable) is used to predict its own values, then the primary channel is also the reference channel (the independent variable). We present a prediction method based on an adaptive recursive least squares filter. In addition, we compare these predictions, which we term auto predictions, with similar predictions that use the same method applied to the clinical data alone without referencing any OTC channels. It is our contention that when the auto prediction results (i.e.
when using the clinical data in the past to predict its future values) are equally (in the sense of minimum squared error) effective as or better than those predictions based solely on OTC streams, in all time intervals, then it is highly probable that no event of interest to biosurveillance actually exists in the clinical data. This is based on the fundamental premise of linear optimal prediction that a nonstationary and relatively short duration event superimposed on an otherwise stationary and predictable background cannot be predicted from the stationary background data alone. We argue that the best performance comparison, in the context of a biosurveillance system whose objective is to detect an outbreak early, among all method/data stream combinations is tied closely to the existence of such events. Lacking any real specific events of sufficient signal strength, we perform a study based on simulated events in order to compute an upper bound on the indicated performances. We emphasize that the system whose performance we are investigating here is a predict and detect system, in the sense that it uses historical clinical and other ancillary data streams in order to predict clinical data many days into the future. The detection performance is then based on a study of probabilities of true detections versus the probabilities of false alarms. The meaning of an upper bound on the detection performance in this context is in the following sense. Given a data stream y[t] that includes an event of short duration, the detection performance of a specific prediction method is related to the quantity y[t] - ŷ[t], where ŷ[t] is a prediction of ȳ[t], the value of the data stream in the absence of the event. This predicted value could be based on the data stream itself, or it could be based on a combination of the data stream and several other correlated data sets. In a real-time situation one might perform detections based on the quantity y[t] - ŷ[t], since ȳ[t] is not available when the predictions are made at t - Δt.
We contend that an upper bound on detection performance is obtained when we use the "actual background" ȳ[t] instead of the "predicted background" ŷ[t].

Data grouping and recursive least squares prediction

JHU/APL is currently collecting large quantities of daily OTC sales data. We receive sales records of 622 different products under the general category of cold remedies from a single vendor, with similar numbers from other vendors. Many of these products are used to treat very similar conditions. Product sales from some of these product groups are known to be good indicators of the corresponding clinical data. For instance, chest rub sales are highly correlated with the count of physician diagnoses of acute bronchitis or acute bronchiolitis [4]. The OTC products of interest were grouped based on a combination of the syndromes the product is intended to alleviate, the physical description of the product (e.g. a pill, a powder, a lip balm, etc.) and the age/sex group the product is targeted for. There were 15 syndrome groups, 15 physical types, and 4 target age/sex groups (mostly age, but 4 of the products were designated as intended especially for women). Some combinations contained no products, but there were a total of 92 combinations that did, so there was still some need for aggregation of these groups. The aggregation procedure has been reported elsewhere [5]. The groups we eventually used are shown in Table 1. The clinical data are counts of outpatient encounters, based on physicians' diagnoses (according to the International Classification of Diseases, Ninth Revision (ICD9) standards [6]) reported in insurance claims that fall within a particular set of acute respiratory conditions (see Table 1). Encounters were included only for patients 12 years old and older.
The encounters are further restricted to include only patients living in the National Capital Region (NCR), which includes the District of Columbia, Baltimore, suburban portions of Maryland and the Washington suburbs in Virginia. The encounters are time-tagged according to the day of occurrence.

Table 1. OTC Adult Medication Product Groups

Here, we consider the clinical data, the dependent variable, as the primary data channel (in the parlance of adaptive filter theory) whose values are to be predicted. The OTC product groups (the independent variables) are then used to predict the daily clinical data in the following manner. Today's and several past days' OTC data are combined to make a future clinical data prediction, which is then compared to the actual value of that day's clinical data when it becomes available, and the error is used to update the filter coefficients in such a way as to minimize the square of the error. For simplicity, and to illustrate the method, we consider the estimation problem in which there is a single reference channel whose value at each time n is denoted by x[n] (note that the subscript j denoting the particular product group is now missing since we are using an example with only one product group). The latter is used to estimate the present value y[n] of the primary channel (the dependent variable, the office visit data). The estimation equations, once put into recursive form, are then easily generalized to the prediction problem. A linear estimate of the primary channel in terms of a single reference channel is given by

ŷ[n] = Σ_{m=0..M-1} h[m] x[n - m].

The last equation can be written as a vector dot product ŷ[n] = hᵀx[n], where h = [h[0], h[1], ..., h[M - 1]]ᵀ and x[n] = [x[n], x[n - 1], ..., x[n - (M - 1)]]ᵀ, and the superscript T denotes the transposition operation (the transpose of a row vector is a column vector).
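As a quick numeric check, the weighted-sum estimate above is just a truncated causal convolution per channel, summed over channels; the helper below and its toy data are illustrative, not the paper's code.

```python
import numpy as np

def fir_estimate(x, h):
    """y_hat[n] = sum_j sum_{m=0}^{M-1} h[j][m] * x[j][n - m].
    x: (J, N) array of reference channels; h: (J, M) array of filters.
    Each channel contributes a causal FIR convolution, truncated to N."""
    N = x.shape[1]
    return sum(np.convolve(x[j], h[j])[:N] for j in range(x.shape[0]))
```

With a single channel x = [1, 2, 3, 4] and h = [1, 0.5], the estimate is [1.0, 2.5, 4.0, 5.5]: each output mixes today's sample with half of yesterday's.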
A linear predictor of the primary channel (the dependent variable) at P steps ahead is then given by ŷ[n+P] = Σ_{m=0}^{M−1} h[m] x[n−m]. Clearly we could perform a similar prediction process when we use the clinical data by itself instead of the OTC data streams, as well as simply including the clinical data as an "additional" reference data stream. Let P denote the number of days ahead to predict, M denote the number of linear filter coefficients, and N denote the number of OTC data channels. Then the filter vector will have M × N elements when we predict the clinical data using the OTC channels, M elements when we use the clinical data to predict itself, and M × (N + 1) elements when we combine the OTC channels and the clinical data to predict the clinical data. The covariance/correlation matrix of the reference channels will then be a square matrix of the appropriate dimension in each case. The filter application and updates are recursive. Denoting the clinical data (the dependent variable) on day n by y[n] and the reference data (the independent variables) by x_j[n], the Recursive Least Squares P-days-ahead prediction equation is ŷ[n+P] = Σ_{j=1}^{L} Σ_{m=0}^{M−1} h_j[m] x_j[n−m] [7], where L denotes the number of reference data streams and is equal to N when only OTC data are used, 1 if the clinical data are used to predict themselves, or N + 1 if both clinical and OTC channels are used to predict the clinical data. This step is repeated as many times as required in order to obtain the predicted values. We should point out that when we use the clinical data alone (i.e. for self prediction), the prediction equation takes the same form with the reference data replaced by the clinical data themselves; the channel index j could be left out since only one filter is used.

Additional File 1. Recursive Least Squares Prediction with minimum distance multiple look error feedback.
Format: PDF, Size: 34KB

Our simulated signal was constructed by combining the following assumptions about an event of interest that can reasonably be expected to arise if a biological attack were to occur. The event is the result of a deterministic multiplicative signal s(t), in the sense that if y(t) denotes the clinical data in the absence of the signal, the presence of the signal will lead to the clinical data y(t){1 + s(t)}. The corresponding event has a sharp rise of about 5–10 days from a minimum of 0 to a maximum value that represents the percent increase of clinical data, at the peak of the outbreak, above the normal "background" number. We consider the rise time of about a week to be a reasonable assumption based on observations of infectious disease characteristics [8]. In addition, we assume that the event has a fall-off period of about twice the rise time. This signal can be easily modeled by a log-normal function of time [8]. Figure 1 shows a specific example matching the requirements of an event described above. The simulation consists of applying this deterministic signal with a given maximum value to the OTC data, on any given day, and applying the same deterministic signal to the clinical data with a time delay. Then we compute predictions of the clinical data once using OTC data only, and a second time using the clinical data itself. In both cases we use the predicted clinical data for detection, and the detector output is ŷ[t]. The predictions are performed once with no signal present (to compute false alarms) and once with the signal present (to compute true detections), for all day numbers 100 through 550. A range of thresholds are used to calculate the total number of detections and the total number of false alarms.
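The paper's exact functional form for s(t) is not reproduced in the text; the sketch below assumes a log-normal-shaped pulse (the shape the authors say they adopted, following Sartwell), with an illustrative peak day and width chosen to match the stated roughly one-week rise and slower fall:

```python
import math

def lognormal_signal(t, peak_day=7.0, sigma=0.6, amplitude=1.0):
    """A log-normal-shaped epidemic pulse s(t): rises to `amplitude` at
    `peak_day` and decays more slowly afterwards (slow in log-time).
    peak_day and sigma are illustrative assumptions, not paper values."""
    if t <= 0:
        return 0.0
    x = (math.log(t) - math.log(peak_day)) / sigma
    return amplitude * math.exp(-0.5 * x * x)

def apply_signal(background, s):
    """Multiplicative event as in the text: observed = background * (1 + s(t))."""
    return [b * (1.0 + s_t) for b, s_t in zip(background, s)]

s = [lognormal_signal(t) for t in range(31)]
assert s[0] == 0.0 and max(s) == 1.0               # peaks at t = 7 with unit amplitude
assert lognormal_signal(10) > lognormal_signal(4)  # fall-off slower than the rise
assert apply_signal([100.0], [0.5]) == [150.0]     # a 50% increase at that day
```

In the simulation described above, the same pulse would be applied to the OTC channels and, with a time delay, to the clinical channel.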
These numbers are then averaged over the total number of days to give the probability of detection p[d] and the probability of false alarm p[fa], both of which are functions of the threshold. The Receiver Operating Characteristic (ROC) curve is then obtained by eliminating the threshold variable and plotting p[d] as a function of p[fa]. This curve provides the most concise form of evaluating the performance of any detection system [9]. The best performance is, by definition, a horizontal line p[d] = 1, while the worst is the line p[d] = 0, for all values of p[fa]. The 45° line represents the performance of a hypothetical detector that decided on the presence of a true signal by tossing a fair coin, i.e. equal probabilities of detection and false alarm. Simulation parameters are as follows. The signal maximum strength takes on the values 10%, 100%, and 200% (percentage increases refer to the background counts). We have chosen 2 signal lag times of 5 and 10 days (lag times refer to the lag between the application of the maximum signal strength to the OTC channels and the office visit count channel). The predictor uses a filter length of 5 days, and we try 2 sets of predictions: 5 days ahead and 10 days ahead.

Results and discussion

Figure 2 shows the actual clinical data, and Figure 3 shows all the OTC channels, for the period 11/1/2001 through 5/14/2003, consisting of 560 days. Figure 4 shows the 5-days ahead prediction results using only the OTC channels. Figure 5 shows a similar output when the office visit data are added in as another reference channel. Figure 6 shows the output when only the office visit data are used to predict their own future values, i.e. OTC channels are not used here. All results use a 5-point filter, i.e. for each OTC product group j the corresponding filter is h[j][m], 0 ≤ m ≤ 4. The effective memory of each filter is set at approximately 1 month.

Figure 2. Office visit data [clinical data] between 11/1/2001 and 5/14/2003.

Figure 3.
OTC sales data for the same period.

Figure 4. 5-days ahead predictions of office visit data using OTC sales data, versus actual data.

Figure 5. 5-days ahead predictions of office visit data using OTC sales data and office visit data, versus actual data.

Figure 6. 5-days ahead predictions of office visit data using office visit data alone, versus actual data.

A simple measure of the effectiveness of the predictor performance (not the detector performance) is a plot of the mean versus standard deviation of the difference between the actual and the predicted values (the prediction error vector). Figure 7 shows the means versus standard deviations for all 3 cases and for 5-days ahead as well as 10-days ahead predictions. The prediction error vector was computed between days 100 and 550, to allow for filter initialization in the beginning; we could have started making predictions as early as 50 days from the beginning, since the effective memory of the filter is set at 30 days, but chose to begin making predictions at day 100 to be absolutely safe. These prediction results are quite encouraging and show significant correlations between OTC and office visit data, in the sense that the predictions are quite close to the actual values. What is perhaps more surprising is the fact that using office visit data for self prediction apparently has the lowest error. Although these errors are computed over the entire time section, there are no time intervals over which the self prediction results are worse than those when the OTC data are used to predict the clinical data.

Figure 7. Performance characteristics of 5-days and 10-days ahead predictions, using OTC sales data alone, OTC data plus office visit data, and office visit data alone.

Our interpretation is that this particular office visit data set has sufficiently strong autocorrelations at long lags to allow for a better prediction when compared to the predictions made using the cross correlations.
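The threshold sweep used to trace each ROC curve can be sketched as follows. The detector statistics here are toy numbers, and the statistic itself stands in generically for the paper's predicted-series detector output:

```python
def roc_points(stat_no_signal, stat_with_signal, thresholds):
    """Sweep a threshold over the detector statistic, computed once with no
    event present (false alarms) and once with the simulated event injected
    (detections); return a list of (p_fa, p_d) pairs, one per threshold."""
    pts = []
    n0, n1 = len(stat_no_signal), len(stat_with_signal)
    for thr in thresholds:
        p_fa = sum(s > thr for s in stat_no_signal) / n0
        p_d = sum(s > thr for s in stat_with_signal) / n1
        pts.append((p_fa, p_d))
    return pts

# Toy example: the injected event shifts the detector statistic upward.
quiet = [0.1, 0.2, 0.3, 0.4]
event = [0.6, 0.7, 0.8, 0.9]
pts = roc_points(quiet, event, thresholds=[0.0, 0.5, 1.0])
assert pts[0] == (1.0, 1.0)   # threshold below everything
assert pts[1] == (0.0, 1.0)   # separating threshold: a perfect detector
assert pts[2] == (0.0, 0.0)   # threshold above everything
```

Eliminating the threshold and plotting p_d against p_fa over such points gives the ROC curve used in Figures 8–11.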
We emphasize that one cannot draw any conclusions as to the best combination of method/data for prediction from these results when in fact no identifiable and significant events of interest exist in the present data set. For instance, one cannot state that OTC data can be safely ignored in the prediction problem in favor of using the clinical data itself. In order to illustrate this point, and to place an upper bound on the detection performance of a biosurveillance system that relies on predicting the clinical data from OTC and/or clinical data, we have performed an analysis based on simulated events superimposed on the present data sets. Figure 8 shows the ROC curves (p[d] vs p[fa]) for a 5-day lag and 5-days ahead prediction for all three signal maximum amplitudes. The dotted lines represent the auto-predictions made using only the clinical data. The solid lines show the predictions using the OTC data. The thickness of the lines in each case represents the signal maximum amplitude. Based on this figure alone, we can summarize these results as follows. Given the assumptions in this simulation, the auto-predictions do not appear to perform well in a predict-ahead and detect surveillance system: even at a signal maximum amplitude of 200%, they yield low detection probabilities at useful false alarm rates. The OTC-based predictions perform well provided the number of prediction days does not exceed the lag number, as illustrated in Figures 9 and 10. Figure 11 shows a dramatic fall in performance when the latter condition is not satisfied, i.e. when the number of prediction days exceeds the lag number. Clearly, if we try to predict "too many" days ahead (irrespective of the lag number), the detection results worsen considerably. The present data set predictions appeared to hold to about 12 days.

Figure 8. ROC curves of auto-predictions (office visit data alone to predict itself) and OTC predictions (OTC data to predict office visit data), for 3 signal strengths, using a simulated actual lag of 5 days between OTC and office visit data and a 5-days ahead predictor.

Figure 9.
ROC curves of auto-predictions (office visit data alone to predict itself), and OTC predictions (OTC data to predict office visit data), for 3 signal strengths, using a simulated actual lag of 10 days between OTC and office visit data and a 5-days ahead predictor. Figure 10. ROC curves of auto-predictions (office visit data alone to predict itself), and OTC predictions (OTC data to predict office visit data), for 3 signal strengths, using a simulated actual lag of 10 days between OTC and office visit data and a 10-days ahead predictor. Figure 11. ROC curves of auto-predictions (office visit data alone to predict itself), and OTC predictions (OTC data to predict office visit data), for 3 signal strengths, using a simulated actual lag of 5 days between OTC and office visit data and a 10-days ahead predictor. Our choice of the synthetic signal requires further discussion. Since we are interested in placing upper bounds on the performance of a multi-stream syndromic surveillance system that uses a prediction method to detect an outbreak we decided to concentrate on a type of epidemic curve that has a reasonably fast rise time and a slower fall off. We chose the 1-week rise time and 2-weeks fall off because they are reasonable numbers in the context of early detection of most biological attack scenarios. It so happens that these numbers appear to fit the observations by Sartwell and so we used a log-normal shape. It turns out the results are quite insensitive to the analytic form of the signal, for instance, we could have used a "triangle" signal with the same rise-time characteristics and reached similar conclusions. Finally we should discuss our results in view of the fact that we applied the same multiplicative signal in all data streams without distortion. 
A complete simulation study of the performance of a multi-stream syndromic surveillance system that uses a prediction method to detect an outbreak would include all possible signal distortions (including amplitude reduction, but also changes in the shape of the signal) and all reasonable time delays; this is a huge task and well beyond the scope of this publication. What we have attempted here is to obtain an upper bound on this performance by varying the time delay and the maximum amplitude of the signal while keeping the signal undistorted. Any distortion of the signal would clearly degrade the ROC curves. In the absence of a general theory of infectious disease evolution, and given the uncertainties associated with the impact of an infectious disease outbreak upon all the data streams in a multi-stream syndromic surveillance system, we have found performance upper bounds on the limited number of cases we have studied, in conjunction with the prediction algorithm presented here. Based on our simulation results we can state the following broad conclusions regarding a multi-stream syndromic surveillance system that operates by predicting the clinical data several days in advance and issuing early warnings if the predicted values exceed a given threshold. This predict-and-detect system must include ancillary data streams (such as OTC) with established correlations with the clinical data, and a prediction method that can react to nonstationary events sufficiently fast. Any predictions of the clinical data using only the clinical data, i.e. relying on self-correlations of the clinical data rather than cross-correlations with other data streams such as OTC data, can be an effective estimate of the background conditions. Whether OTC (or other data streams yet to be identified) can provide the best source of predicting clinical data is still an open question. The system must also include a prediction algorithm that can react sufficiently fast to nonstationary changes.
The Recursive Least Squares Minimum Distance Error algorithm presented here seems to satisfy this condition. Finally, we have no way of knowing the likelihood that events of interest will always be present in both the clinical data and the ancillary streams, without significant distortion, and with reasonable time lags. But if any event satisfies these conditions, we have provided the framework for a system that has an excellent chance of detecting it in advance.

Authors' contributions

The idea of predicting clinical data using OTC, and the algorithm using recursive least squares with feedback of minimum error among multiple looks, and the associated computer programmes were developed by A. H. Najmi. The OTC data were grouped from among a much larger set into a small set used in this study, via a maximum likelihood method developed by S. F. Magruder.

This research is sponsored by the Defense Advanced Research Projects Agency and managed under Naval Sea Systems Command (NAVSEA) contract N00024-98-D-8124. The views and conclusions contained in this document are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of the Defense Advanced Research Projects Agency, NAVSEA, or the United States Government.

1. Goldenberg A, et al.: Early statistical detection of Anthrax outbreaks by tracking over-the-counter medication sales. PNAS 2002, 99(8):5237-5240.
2. Self-Care in the New Millennium. Report by Roper Starch Worldwide, Inc., prepared for the Consumer Healthcare Products Association, 2001.
3. Najmi AH, Magruder SF: Estimation of Hospital Emergency Room Data Using OTC Pharmaceutical Sales and Least Mean Square Filters. BMC Medical Informatics and Decision Making 2004, 4:5.
4. Magruder SF: Evaluation of OTC pharmaceutical sales as a possible early warning indicator of public health. Johns Hopkins University Applied Physics Laboratory Technical Digest 2003, 24.
5.
Magruder SF, Happel-Lewis S, Najmi AH, Florio AE: Progress in Understanding and Using Over-the-Counter Pharmaceuticals for Syndromic Surveillance of Public Health. Morbidity and Mortality Weekly Report 2004, 53(Suppl):S117-122 (Special Issue on the Proceedings of the National Syndromic Surveillance Conference).
6. Sartwell P: The Incubation Period of Poliomyelitis. American Journal of Public Health 1952, 42:1403-1408.

Pre-publication history

The pre-publication history for this paper can be accessed here:
The Fragility of Sensitivity Analysis: An Encompassing Perspective

NOTE: International Finance Discussion Papers are preliminary materials circulated to stimulate discussion and critical comment. References in publications to International Finance Discussion Papers (other than an acknowledgment that the writer has had access to unpublished material) should be cleared with the author or authors. Recent IFDPs are available on the Web at http://www.federalreserve.gov/pubs/ifdp/. This paper can be downloaded without charge from the Social Science Research Network electronic library at http://www.ssrn.com/.

Robustness and fragility in Leamer's sense are defined with respect to a particular coefficient over a class of models. This paper shows that inclusion of the data generation process in that class of models is neither necessary nor sufficient for robustness. This result holds even if the properly specified model has well-determined, statistically significant coefficients. The encompassing principle explains how this result can occur. Encompassing also provides a link to a more common-sense notion of robustness, which is still a desirable property empirically; and encompassing clarifies recent discussion on model averaging and the pooling of forecasts.

Keywords: Encompassing, exogeneity, extreme bounds analysis, model averaging, parameter nonconstancy, pooling of forecasts, robustness, regime shifts, sensitivity analysis

JEL classification: C52, E41

1. Introduction

Economists want their empirical results to be robust, and with good reason. Especially after Goldfeld's (1976) missing money and Lucas's (1976) critique, economists have been all too aware of the fragility of many empirical models to the choice of (e.g.) explanatory variables, sample period, and dynamics.
In an attempt to measure a coefficient's sensitivity to model selection, Leamer's (1983) extreme bounds analysis (also known as sensitivity analysis) calculates the range of potential coefficient estimates over a class of models. Extreme bounds analysis thus offers an appealing and intuitive methodology for determining whether an empirical result is robust or fragile. Extreme bounds analysis also formalizes a common practice in empirical studies, and its explicit implementation is widespread; cf. Allen and Connolly (1989), Cooley and LeRoy (1981), Levine and Renelt (1992), Leamer (1997), Sala-i-Martin (1997), Serra (2006), Freille et al. (2007), and Bjørnskov et al. (2008) inter alia. Robustness and fragility in Leamer's sense are defined with respect to a particular coefficient or set of coefficients over a class of models. This paper shows that the inclusion of the data generation process in that class of models is neither necessary nor sufficient for robustness. This result holds even if the properly specified model has well-determined, statistically significant coefficients. The encompassing principle--pioneered by Mizon (1984) and Mizon and Richard (1986)--explains how this result can occur. Encompassing also provides a link to a more common-sense notion of robustness that is a desirable property empirically; and encompassing clarifies recent discussion on model averaging and the pooling of forecasts. Put somewhat differently, extreme bounds analysis focuses on the variation of estimated coefficients across model specifications. While that coefficient variation is of interest, it is so only in light of its causes. Extreme bounds analysis considers all coefficient variation, regardless of its causes. The encompassing principle discerns between coefficient variation that is not explainable by the model at hand, and coefficient variation that is.
Encompassing thus clarifies when to worry about coefficient variation and when not to, whereas extreme bounds analysis worries about all coefficient variation. In essence, extreme bounds analysis is not sensitive enough to the data's nuances, whereas the encompassing principle is. Section 2 briefly describes extreme bounds analysis, including modifications proposed by Leamer and Leonard (1983) and Levine and Renelt (1992). Section 3 establishes the lack of necessity and the lack of sufficiency through illustrations with classical regression models. Section 4 demonstrates how robustness and fragility in Leamer's sense are themselves fragile in practice, employing Hendry and Ericsson's (1991) model of U.K. money demand. Section 5 re-interprets the fragility of sensitivity analysis in light of the encompassing principle. In so doing, Section 5 clarifies the meaning of robustness and illuminates recent discussion on model averaging and the pooling of forecasts. Section 6 concludes. In order to make the results readily accessible, examples--rather than proofs--are employed. Also, proofs are readily apparent, given the examples.

2. A Characterization

This section summarizes extreme bounds analysis (EBA), including modifications proposed by Leamer and Leonard (1983) and Levine and Renelt (1992). The classical regression model serves to illustrate. See Leamer (1978), Chapter 5, for an initial discussion of Bayesian sensitivity analysis; and see Leamer and Leonard (1983), Leamer (1985), and Levine and Renelt (1992) for more detailed descriptions of EBA. Consider the standard linear regression model:

y = β′f + γ′d + u, (1)

where f is a vector of free variables with coefficient vector β, and d is a vector of doubtful variables with coefficient vector γ. A variable is free if a priori its coefficient is believed to be different from zero, and a variable is doubtful if a priori its coefficient is believed to be equal to zero. Extreme bounds analysis centers on a "focus" coefficient, denoted β₁ say, the coefficient on one of the free variables.
Extreme bounds analysis centers on a "focus" coefficient, denoted Extreme bounds analysis determines the range of the least squares estimates of Bounds can be calculated from the four estimates of Leamer and Leonard (1983) propose a simple solution to this conundrum: calculate the bounds over all linear combinations of the doubtful variables. While these bounds might appear complicated to compute, they are not; and Breusch (1990, equation (17)) provides analytical formulae for them in terms of standard regression output from (1), estimated with and without the restriction where the circumflex ˆ and tilde ˜ denote unrestricted and restricted estimates respectively, Estimation of ^1 Robustness and fragility in this sense are called L-robustness and L-fragility ("L" for Leamer) so as to distinguish them from other senses of robustness and fragility used below. Some values of McAleer and Veall (1989) criticize EBA for not accounting for the uncertainty in the extreme bounds estimates themselves. McAleer and Veall use bootstrap techniques to estimate the standard errors of the bounds and find that those standard errors can be large empirically. [Magee 1990] derives the asymptotic variance of the extreme bounds: where the two terms resulting from the " With the estimated bounds' uncertainty in mind, Levine and Renelt (1992) propose a modified EBA, which solves for "... the widest range of coefficient estimates on the [focus variable] that standard hypothesis tests do not reject" (p. 944). In practice, Levine and Renelt use 95% critical values for calculating the extreme bounds. If the variance of the estimate of Thus, while pure EBA ignores the plausibility of the bounds (in terms of the data) and the uncertainty of the bounds themselves, computationally feasible solutions exist for addressing both 3. 
Implications of L-robustness and L-fragility

This section establishes that L-robustness is neither necessary nor sufficient for the data generation process (DGP) to be included as one of the models in extreme bounds analysis (Section 3.1). To simplify discussion, only the "pure" form of EBA is examined initially; the modifications to EBA are considered at the end of Section 3.1. A simple DGP and several classical regression models illustrate the four propositions associated with lack of necessity and lack of sufficiency (Section 3.2). In the examples, bounds are calculated at population values for the various estimates in equation (3). This permits a clearer exposition and in no way invalidates the results. In fact, the four propositions hold both in finite samples and asymptotically, and they are not restricted to the examples in Section 3.2. Hendry and Mizon (1990) provide the essential framework for this section's approach. Other authors have also pointed out difficulties with EBA. McAleer et al. (1985) show that the bounds may lie in implausible parts of the parameter space, that other information may be relevant to the model's usefulness (e.g., white-noise errors), and that L-robustness is sensitive to the choice of the parameter's prior mean and to its classification as free or doubtful. McAleer and Veall (1989) also show that accounting for the uncertainty in the estimated bounds can affect the results of EBA.

3.1 Four Propositions

L-robustness is neither necessary nor sufficient for the DGP to be included as one of the models in extreme bounds analysis. It is helpful to examine this statement as four separate propositions. No formal proofs need be given; the examples in Section 3.2 are sufficient. Propositions 1 and 2 pertain to L-robustness.

Proposition 1. If a set of models for EBA includes the DGP, a result may be L-robust.

Proposition 2. If a set of models for EBA excludes the DGP, a result may be L-robust.

Propositions 1 and 2 are unsurprising.
Still, together they imply that L-robustness says nothing about whether any of the models in the EBA are the DGP, and so says nothing about how close or far away any of the models in the EBA are to the DGP. Propositions 3 and 4 pertain to L-fragility, or the lack of L-robustness.

Proposition 3. If a set of models for EBA includes the DGP, a result may be L-fragile.

From the perspective of an empirical modeler, Proposition 3 is problematic: correct specification of the unrestricted model does not ensure L-robustness.

Proposition 4. If a set of models for EBA excludes the DGP, a result may be L-fragile.

Propositions 3 and 4 together imply that L-fragility says nothing about whether any of the models in the EBA are the DGP, paralleling the implication about L-robustness from Propositions 1 and 2. Propositions 1-4 extend immediately to EBA that is modified to account for uncertainty in the estimated bounds. For Propositions 1 and 2, a large enough sample size always exists such that the uncertainty in the estimated bounds is negligible. For Propositions 3 and 4, the results remain L-fragile when accounting for uncertainty in the estimated bounds because that uncertainty must increase the range spanned by the bounds. Propositions 1 and 2 also extend to EBA that is restricted to lie within a likelihood ellipsoid, as when the doubtful variable has no explanatory power; see Examples 1 and 2 below. Section 5 discusses implications for Propositions 3 and 4.

3.2 Four Examples

Examples 1, 2, 3, and 4 below illustrate Propositions 1, 2, 3, and 4 respectively. A simple framework is used to characterize the implications as clearly and directly as possible. Suppose that the data ( and that the marginal variables To evaluate the properties of EBA, the right-hand side variables in the conditional process (6) must be mapped into focus, free, and doubtful variables. Also, it is possible that some
Also, it is possible that some The EBA will satisfy any of Propositions 1-4, depending upon the values of the parameters in the DGP (6)-(7). Thus, the examples are stated in terms of the parameters of the DGP . Example 1 illustrates Proposition 1. Example 1 Suppose that: Then Proposition 1 holds for large enough In Example 1, expectations of relevant estimators are Example 2 is the same as Example 1 except that the omitted variable is important in the DGP . Example 2 Suppose that In Example 2, For the next two examples--Examples 3 and 4--the marginal process (7) is modified slightly to include a nonzero correlation ( Example 3 Suppose that Example 3 is the same as Example 1 except that the doubtful variable is necessary for explaining For the parameter values in Example 3, equation (8) implies extreme bounds of approximately The DGP in Example 4 is the same as in Example 3, except that Example 4 Suppose that Because carry through, but with 4. An Empirical Example To highlight the empirical consequences of the four propositions in Section 3, the current section re-examines an empirical model of narrow money demand in the United Kingdom from Hendry and Ericsson (1991). This model is described, and its history and properties summarized. Several alternatives to this model are then estimated, and extreme bounds are calculated for various model pairs. Treating the original model as the DGP, examples of all four propositions can be found empirically. Of course, the original model is not the DGP . However, that model does appear well-specified when examined with a wide range of diagnostic statistics, so it behaves like a local DGP, thus making these extreme bounds analyses of substantive interest. The data are quarterly seasonally adjusted nominal M1 ( where variables in lowercase are in logarithms, The history of (9) provides a perspective on its empirical validity, which motivates treating (9) as the DGP in the examples below. 
Hacche (1974) and Coghlan (1978) developed some of the first models of U.K. narrow money demand. Hendry (1979) noted problems in the dynamic specification of those models and obtained a better-specified model much like (9) as a simplification from an autoregressive distributed lag model on data through 1977. Equation (9) differs from Hendry's (1979) model by having the interest rate in levels rather than logs (a formulation proposed by Trundle (1982)), by having slightly simpler dynamics, and by having the net interest rate rather than the local authority rate. The net interest rate helps properly measure the economic concept of the opportunity cost when narrow money started earning interest in the 1980s. The empirical specification in (9) has been extensively analyzed. Aspects examined include parameter constancy [Hendry (1985), Hendry and Ericsson (1991)], cointegration [Johansen (1992), Hendry and Mizon (1993), Paruolo (1996)], weak exogeneity [Johansen (1992), Boswijk (1992)], super exogeneity [Cuthbertson (1988), Hendry (1988), Hendry and Ericsson (1991), Engle and Hendry (1993)], dynamic specification [Ericsson et al. (1990)], finite-sample biases in estimation [Kiviet and Phillips (1994)], and seasonality [Ericsson et al. (1994)]. Only the last (seasonality) provides any evidence of mis-specification, and the magnitude of that mis-specification appears relatively small. To illustrate the four propositions above, consider the following four variants on equation (9). equation (9) itself; equation (9), excluding equation (9), excluding equation (9), excluding Table 1 summarizes the estimation results for these four models. For EBA, choices of excluded, free, doubtful, and focus variables must be made; and the following choices aim to illustrate Propositions 1-4. Treating Table 2 presents the extreme bounds analysis for each of the four model pairs. 
The interest rate

To summarize, inclusion or exclusion of the DGP in the models examined has no bearing on the determination of L-robustness or L-fragility in extreme bounds analysis. Additionally, a coefficient can be highly statistically significant, yet be either L-robust or L-fragile. As this section and the previous section show, these negative results are easily demonstrated in principle and in empirical examples.

5. Encompassing, EBA, and Robustness

This section interprets Propositions 1-4 in light of the encompassing literature (Section 5.1), relates Leamer and Leonard's modified EBA to encompassing (Section 5.2), and re-interprets several encompassing tests and diagnostic tests as tests of robustness (Section 5.3). See Mizon (1984), Mizon and Richard (1986), Hendry and Richard (1989), and Bontemps and Mizon (2003) for key references on encompassing.

5.1 An Encompassing Interpretation of EBA

Whether or not a variable is L-robust depends inter alia upon the correlations between the free and doubtful variables. That dependence leads to an encompassing interpretation of EBA. A simple regression model illustrates. Returning to equations (1) and (2), consider the case with one focus variable and one doubtful variable. With only one doubtful variable in (10), the extreme bounds are given by the unrestricted and restricted estimates of the focus coefficient. Thus, for restricted models with statistically invalid restrictions, loose bounds (i.e., implying L-fragility) are unworrying. For example, for the model pair

For restricted models with statistically valid restrictions, the restricted model parsimoniously encompasses the unrestricted model because those restrictions are statistically valid. If EBA obtains loose bounds in such a situation, those loose bounds must arise from the uncertainty in the estimated coefficients: the corresponding statistical reduction appears valid, implying no omitted-variable bias. The examples in the previous two paragraphs compare a given model with a more general, nesting model. A given model can also be compared with a non-nested model or with a more restricted model. Non-nested comparisons generate variance-encompassing and parameter-encompassing test statistics. General-to-specific comparisons generate the standard F-statistics.
A given model can also be compared with a non-nested model or with a more restricted model. Non-nested comparisons generate variance-encompassing and parameter-encompassing test statistics. General-to-specific comparisons generate the standard ….

5.2 Encompassing and Modified EBA

Leamer and Leonard's (1983) modified EBA restricts the bounds to lie within some specified likelihood ellipsoid relative to the unrestricted model. This statistical modification is very classical in nature: modified EBA considers only those models that are statistically valid simplifications of the unrestricted model. This modification is very much in the spirit of the encompassing literature, given the discussion in Section 5.1; and it motivates orthogonalization of variables in model design, as discussed below. Equation (11) highlights an advantage to having nearly orthogonal regressors: they help minimize the potential for omitted variable bias. Because linear models are invariant to nonsingular linear transformations of the regressors, orthogonalization of the variables in the unrestricted model could be obtained by construction. For typical (i.e., highly autocorrelated) economic time series, near orthogonalization can often be obtained by using two economically interpretable transformations: differencing, and differentials. For example, in equation (9), inflation …. While this sense of robustness is often achievable by design, no procedure appears capable of ensuring orthogonalization with respect to variables that are not included in the unrestricted model. This implication emphasizes the importance of starting with a general enough model. Leamer and Leonard (1983, p. 306) are sympathetic to this view, given their concern for obtaining robust inferences over a broad family of models. For detailed discussions of general-to-specific modeling and model design, see Hendry (1983), Hendry et al. (1984), Gilbert (1986), Spanos (1986), Ericsson et al.
(1990), Mizon (1995), Hoover and Perez (1999), Hendry and Krolzig (1999, 2005), Campos et al. (2005), and Doornik (2008).

5.3 Robustness and Encompassing

L-robustness focuses on how coefficient estimates alter as the information set for the model changes. From the discussion above, L-robustness is only statistically or economically interesting if the information that is excluded--relative to the unrestricted model--is validly excluded. Tests of encompassing are tests of that exclusion; hence tests of encompassing are interpretable as tests of robustness. Put slightly differently, an encompassing test of a given model evaluates whether or not the information in the other model is redundant, conditional on the information in the given model. If encompassing holds, then the given model is robust to that additional information. At a more general level, robustness (and so encompassing) can be defined in terms of generic changes to the model's information set, and not just in terms of changes associated with the additional variables in another model; see Mizon (1995, pp. 121-122; 2008) and Lu and Mizon (1996). In a partition of information sources similar to the one in Hendry (1983) for test statistics, consider the following four sources of information: the model itself, other models, other sample periods, and other regimes.

Information from the model itself. Robustness to data in the model itself corresponds to satisfying a range of standard diagnostic tests, such as those for white-noise residuals and homoscedasticity. In this spirit, Edwards et al. (2006) propose a further refinement to Leamer and Leonard's modified EBA by requiring the bounds to satisfy not just the standard likelihood ratio test but also a battery of diagnostic tests. By focusing on congruence, this refinement parallels the generalized concept of encompassing.

Information from other models.
Robustness to data in another model corresponds to standard encompassing: in particular, variance encompassing and parameter encompassing for non-nested models, and parsimonious encompassing for nested models. Differences in coefficient estimates across models are unimportant per se. Rather, the interest is in the ability of a given model to explain why other models obtain the results that they do. The formula for omitted variable bias provides one way for such an accounting. The relationship of the given model to the alternative model formally defines the type of encompassing statistic. Non-nested models generate variance-encompassing and parameter-encompassing statistics; nested models generate parsimonious-encompassing statistics. If the two models differ in their dynamic specification, special attention must be given to the construction of the encompassing statistic, even though the comparison of models may appear conceptually equivalent to the one generating the usual encompassing statistics. See Hendry and Richard (1989) and Govaerts et al. (1994) for details.

Encompassing accounts for information in other models. Model averaging is an alternative approach to accounting for such information. Early versions of model averaging include pre-test, Stein-rule, and shrinkage estimators; see Judge and Bock (1978). Raftery et al. (1997) and Hansen (2007) exemplify recent directions in model averaging. While model averaging is an appealing way of combining information, it has several statistical disadvantages relative to encompassing through general-to-specific model selection; see Hoover and Perez (2004), Hendry and Krolzig (2005), and Hendry and Reade (2005) inter alia. For example, consider model averaging across a set of models that includes a well-specified model (e.g., the DGP) and some mis-specified models.
As Hendry and Reade (2005) demonstrate, typical rules for model averaging place too much weight on the mis-specified models, in effect mixing too much bad wine with too little good wine. Encompassing through general-to-specific modeling aims to find the well-specified model among the set of models being considered, thus (to continue the analogy) singling out that one bottle of a rare vintage. If the union model is the DGP but none of the individual models are, the distinction between model averaging and encompassing is even sharper. Model averaging places zero weight on the DGP, whereas encompassing through general-to-specific modeling has power to detect the union model as the DGP.

Information from other sample periods. Robustness to data from another sample period corresponds to parameter constancy. Fisher's (1922) covariance statistic and Chow's (1960) prediction interval statistic are two early important statistics for testing this form of robustness. More recent developments have focused on testing robustness to a range of sample periods: see Brown et al. (1975), Harvey (1981), and Doornik and Hendry (2007) inter alia on recursive statistics, and Andrews (1993) and Hansen (1992) on statistics for testing parameter instability when the breakpoint is unknown.

Information from other regimes. Robustness to regime changes corresponds to valid super exogeneity. Two common tests for super exogeneity are constructed as follows. (i) Establish the constancy of the parameters in the conditional model and the nonconstancy of those in the marginal model; cf. Hendry (1988). (ii) Having established (i), further develop the marginal model by including additional explanatory variables until the marginal model is empirically constant. Then, test for the significance of those additional variables when added to the conditional model. Insignificance in the conditional model demonstrates invariance of the conditional model's parameters to the changes in the marginal process; cf.
Engle and Hendry (1993) for this test's initial implementation, Hendry and Santos (2006) for a version based on impulse saturation, and Hendry et al. (2008) and Johansen and Nielsen (2008) for statistical underpinnings of the latter. These tests use statistics for testing parameter constancy and statistics for omitted variables. Thus, these tests are interpretable as tests of robustness to information from other sample periods and from other models. However, tests of super exogeneity merit separate mention because super exogeneity is central to policy analysis. Hendry and Ericsson (1991) calculate both types of super exogeneity tests. Hendry and Ericsson show that the EqCM (9) is empirically constant, but that autoregressive models for inflation and the net interest rate are not. The EqCM is constant across regime changes, which were responsible for the nonconstancy of the inflation and interest rate processes. From (i), inflation and the net interest rate are super exogenous in (9). Additionally, functions of the residuals from the marginal processes are insignificant when added to the EqCM, so inflation and the net interest rate are super exogenous from (ii). See Engle et al. (1983) for a general discussion of exogeneity.

Overlapping sources of information. Robustness to the intersection of multiple sources of information is also of interest. For instance, robustness to another model's data over an out-of-sample period corresponds to forecast encompassing and forecast-model encompassing; see Chong and Hendry (1986), Lu and Mizon (1991), Ericsson (1992), and Ericsson and Marquez (1993). See also Bates and Granger (1969), Granger (1989), Wright (2003a, 2003b), Hendry and Clements (2004), Hendry and Reade (2006), and Castle et al. (2008) inter alia for discussion on the related concept of forecast combination.

Other implications of encompassing. Encompassing does not imply that the DGP is included in the set of models being examined.
However, an encompassing model is congruent with respect to the available information set and thus parsimoniously encompasses the local DGP. In that specific sense, the encompassing model establishes a closeness to the DGP. General-to-specific modeling with diagnostic testing enforces encompassing and generates a progressive research strategy that converges to the DGP in large samples; cf. White (1990) and Mizon (1995). Extreme bounds analysis--at least in its unmodified form--does neither.

6. Summary and Remarks

Extreme bounds analysis re-emphasizes the importance of robustness in empirical modeling. The measure of robustness in EBA has several unfortunate properties that render that particular measure useless in practice. Nonetheless, the structure of EBA helps elucidate an important role of encompassing and model design in empirical modeling: encompassing tests and several other diagnostic tests are interpretable as tests of a more appropriately defined notion of robustness.

References

Allen, S. D., and R. A. Connolly (1989) "Financial Market Effects on Aggregate Money Demand: A Bayesian Analysis", Journal of Money, Credit, and Banking, 21, 2, 158-175.
Andrews, D. W. K. (1993) "Tests for Parameter Instability and Structural Change with Unknown Change Point", Econometrica, 61, 4, 821-856.
Bates, J. M., and C. W. J. Granger (1969) "The Combination of Forecasts", Operational Research Quarterly, 20, 451-468.
Bjørnskov, C., A. Dreher, and J. A. V. Fischer (2008) "Cross-country Determinants of Life Satisfaction: Exploring Different Determinants Across Groups in Society", Social Choice and Welfare, 30, 1.
Bontemps, C., and G. E. Mizon (2003) "Congruence and Encompassing", Chapter 15 in B. P. Stigum (ed.) Econometrics and the Philosophy of Economics: Theory-Data Confrontations in Economics, Princeton University Press, Princeton, 354-378.
Boswijk, H. P.
(1992) Cointegration, Identification and Exogeneity: Inference in Structural Error Correction Models, Thesis Publishers, Amsterdam (Tinbergen Institute Research Series, No. 37).
Breusch, T. S. (1990) "Simplified Extreme Bounds", Chapter 3 in C. W. J. Granger (ed.) Modelling Economic Series: Readings in Econometric Methodology, Oxford University Press, Oxford, 72-81.
Brown, R. L., J. Durbin, and J. M. Evans (1975) "Techniques for Testing the Constancy of Regression Relationships over Time", Journal of the Royal Statistical Society, Series B, 37, 2, 149-163 (with discussion).
Campos, J., N. R. Ericsson, and D. F. Hendry (2005) "Introduction: General-to-Specific Modelling", in J. Campos, N. R. Ericsson, and D. F. Hendry (eds.) General-to-Specific Modelling, Volume I, Edward Elgar, Cheltenham, xi-xci.
Castle, J. L., N. W. P. Fawcett, and D. F. Hendry (2008) "Forecasting, Structural Breaks and Non-linearities", mimeo, Department of Economics, University of Oxford, Oxford, May.
Chong, Y. Y., and D. F. Hendry (1986) "Econometric Evaluation of Linear Macro-economic Models", Review of Economic Studies, 53, 4, 671-690.
Chow, G. C. (1960) "Tests of Equality Between Sets of Coefficients in Two Linear Regressions", Econometrica, 28, 3, 591-605.
Coghlan, R. T. (1978) "A Transactions Demand for Money", Bank of England Quarterly Bulletin, 18, 1, 48-60.
Cooley, T. F., and S. F. LeRoy (1981) "Identification and Estimation of Money Demand", American Economic Review, 71, 5, 825-844.
Cuthbertson, K. (1988) "The Demand for M1: A Forward Looking Buffer Stock Model", Oxford Economic Papers, 40, 1, 110-131.
Doornik, J. A. (2008) "Autometrics", in J. L. Castle and N. Shephard (eds.) The Methodology and Practice of Econometrics: A Festschrift in Honour of David F. Hendry, Oxford University Press, Oxford, forthcoming.
Doornik, J. A., and D. F. Hendry (2007) PcGive 12, Timberlake Consultants Ltd, London (4 volumes).
Edwards, J. A., A. Sams, and B.
Yang (2006) "A Refinement in the Specification of Empirical Macroeconomic Models as an Extension to the EBA Procedure", Berkeley Electronic Journal of Macroeconomics: Topics in Macroeconomics, 6, 2, 1-24.
Engle, R. F., and D. F. Hendry (1993) "Testing Super Exogeneity and Invariance in Regression Models", Journal of Econometrics, 56, 1/2, 119-139.
Engle, R. F., D. F. Hendry, and J.-F. Richard (1983) "Exogeneity", Econometrica, 51, 2, 277-304.
Ericsson, N. R. (1992) "Parameter Constancy, Mean Square Forecast Errors, and Measuring Forecast Performance: An Exposition, Extensions, and Illustration", Journal of Policy Modeling, 14, 4, 465-495.
Ericsson, N. R., J. Campos, and H.-A. Tran (1990) "PC-GIVE and David Hendry's Econometric Methodology", Revista de Econometria, 10, 1, 7-117.
Ericsson, N. R., D. F. Hendry, and H.-A. Tran (1994) "Cointegration, Seasonality, Encompassing, and the Demand for Money in the United Kingdom", Chapter 7 in C. P. Hargreaves (ed.) Nonstationary Time Series Analysis and Cointegration, Oxford University Press, Oxford, 179-224.
Ericsson, N. R., and J. Marquez (1993) "Encompassing the Forecasts of U.S. Trade Balance Models", Review of Economics and Statistics, 75, 1, 19-31.
Fisher, R. A. (1922) "The Goodness of Fit of Regression Formulae, and the Distribution of Regression Coefficients", Journal of the Royal Statistical Society, 85, 4, 597-612.
Freille, S., M. E. Haque, and R. Kneller (2007) "A Contribution to the Empirics of Press Freedom and Corruption", European Journal of Political Economy, 23, 4, 838-862.
Gilbert, C. L. (1986) "Professor Hendry's Econometric Methodology", Oxford Bulletin of Economics and Statistics, 48, 3, 283-307.
Goldfeld, S. M. (1976) "The Case of the Missing Money", Brookings Papers on Economic Activity, 1976, 3, 683-730 (with discussion).
Gouriéroux, C., and A. Monfort (1995) "Testing, Encompassing, and Simulating Dynamic Econometric Models", Econometric Theory, 11, 2, 195-228.
Govaerts, B., D. F. Hendry, and J.-F.
Richard (1994) "Encompassing in Stationary Linear Dynamic Models", Journal of Econometrics, 63, 1, 245-270.
Granger, C. W. J. (1989) "Invited Review: Combining Forecasts--Twenty Years Later", Journal of Forecasting, 8, 167-173.
Granger, C. W. J., and H. F. Uhlig (1990) "Reasonable Extreme-bounds Analysis", Journal of Econometrics, 44, 1-2, 159-170.
Granger, C. W. J., and H. F. Uhlig (1992) "Erratum: Reasonable Extreme-bounds Analysis", Journal of Econometrics, 51, 1-2, 285-286.
Hacche, G. (1974) "The Demand for Money in the United Kingdom: Experience Since 1971", Bank of England Quarterly Bulletin, 14, 3, 284-305.
Hansen, B. E. (1992) "Tests for Parameter Instability in Regressions with I(1) Processes", Journal of Business and Economic Statistics, 10, 3, 321-335.
Hansen, B. E. (2007) "Least Squares Model Averaging", Econometrica, 75, 4, 1175-1189.
Harvey, A. C. (1981) The Econometric Analysis of Time Series, Philip Allan, Oxford.
Hendry, D. F. (1979) "Predictive Failure and Econometric Modelling in Macroeconomics: The Transactions Demand for Money", Chapter 9 in P. Ormerod (ed.) Economic Modelling: Current Issues and Problems in Macroeconomic Modelling in the UK and the US, Heinemann Education Books, London, 217-242.
Hendry, D. F. (1983) "Econometric Modelling: The `Consumption Function' in Retrospect", Scottish Journal of Political Economy, 30, 3, 193-220.
Hendry, D. F. (1985) "Monetary Economic Myth and Econometric Reality", Oxford Review of Economic Policy, 1, 1, 72-84.
Hendry, D. F. (1988) "The Encompassing Implications of Feedback Versus Feedforward Mechanisms in Econometrics", Oxford Economic Papers, 40, 1, 132-149.
Hendry, D. F., and M. P. Clements (2004) "Pooling of Forecasts", Econometrics Journal, 7, 1, 1-31.
Hendry, D. F., and N. R. Ericsson (1991) "Modeling the Demand for Narrow Money in the United Kingdom and the United States", European Economic Review, 35, 4, 833-881 (with discussion).
Hendry, D. F., S. Johansen, and C.
Santos (2008) "Automatic Selection of Indicators in a Fully Saturated Regression", Computational Statistics, 23, 2, 317-335, 337-339.
Hendry, D. F., and H.-M. Krolzig (1999) "Improving on 'Data Mining Reconsidered' by K. D. Hoover and S. J. Perez", Econometrics Journal, 2, 2, 202-219.
Hendry, D. F., and H.-M. Krolzig (2005) "The Properties of Automatic Gets Modelling", Economic Journal, 115, 502, C32-C61.
Hendry, D. F., and G. E. Mizon (1990) "Procrustean Econometrics: Or Stretching and Squeezing Data", Chapter 7 in C. W. J. Granger (ed.) Modelling Economic Series: Readings in Econometric Methodology, Oxford University Press, Oxford, 121-136.
Hendry, D. F., and G. E. Mizon (1993) "Evaluating Dynamic Econometric Models by Encompassing the VAR", Chapter 18 in P. C. B. Phillips (ed.) Models, Methods, and Applications of Econometrics: Essays in Honor of A. R. Bergstrom, Basil Blackwell, Cambridge, 272-300.
Hendry, D. F., A. Pagan, and J. D. Sargan (1984) "Dynamic Specification", Chapter 18 in Z. Griliches and M. D. Intriligator (eds.) Handbook of Econometrics, Volume 2, North-Holland, Amsterdam.
Hendry, D. F., and J. J. Reade (2005) "Problems in Model Averaging with Dummy Variables", mimeo, Department of Economics, University of Oxford, Oxford, May.
Hendry, D. F., and J. J. Reade (2006) "Forecasting Using Model Averaging in the Presence of Structural Breaks", mimeo, Department of Economics, University of Oxford, Oxford, June.
Hendry, D. F., and J.-F. Richard (1989) "Recent Developments in the Theory of Encompassing", Chapter 12 in B. Cornet and H. Tulkens (eds.) Contributions to Operations Research and Economics: The Twentieth Anniversary of CORE, MIT Press, Cambridge, 393-440.
Hendry, D. F., and C. Santos (2006) "Automatic Tests of Super Exogeneity", mimeo, Department of Economics, University of Oxford, Oxford, February.
Hoover, K. D., and S. J.
Perez (1999) "Data Mining Reconsidered: Encompassing and the General-to-specific Approach to Specification Search", Econometrics Journal, 2, 2, 167-191 (with discussion).
Hoover, K. D., and S. J. Perez (2004) "Truth and Robustness in Cross-country Growth Regressions", Oxford Bulletin of Economics and Statistics, 66, 5, 765-798.
Johansen, S. (1992) "Testing Weak Exogeneity and the Order of Cointegration in UK Money Demand Data", Journal of Policy Modeling, 14, 3, 313-334.
Johansen, S., and B. Nielsen (2008) "An Analysis of the Indicator Saturation Estimator as a Robust Regression Estimator", in J. L. Castle and N. Shephard (eds.) The Methodology and Practice of Econometrics: A Festschrift in Honour of David F. Hendry, Oxford University Press, Oxford, forthcoming.
Judge, G. G., and M. E. Bock (1978) The Statistical Implications of Pre-test and Stein-rule Estimators in Econometrics, North-Holland, Amsterdam.
Kiviet, J. F., and G. D. A. Phillips (1994) "Bias Assessment and Reduction in Linear Error-correction Models", Journal of Econometrics, 63, 1, 215-243.
Leamer, E. E. (1978) Specification Searches: Ad Hoc Inference with Nonexperimental Data, John Wiley, New York.
Leamer, E. E. (1983) "Let's Take the Con Out of Econometrics", American Economic Review, 73, 1, 31-43.
Leamer, E. E. (1985) "Sensitivity Analyses Would Help", American Economic Review, 75, 3, 308-313.
Leamer, E. E. (1997) "Revisiting Tobin's 1950 Study of Food Expenditure", Journal of Applied Econometrics, 12, 5, 533-553 (with discussion).
Leamer, E. E., and H. Leonard (1983) "Reporting the Fragility of Regression Estimates", Review of Economics and Statistics, 65, 2, 306-317.
Levine, R., and D. Renelt (1992) "A Sensitivity Analysis of Cross-country Growth Regressions", American Economic Review, 82, 4, 942-963.
Lu, M., and G. E. Mizon (1991) "Forecast Encompassing and Model Evaluation", Chapter 9 in P. Hackl and A. H. Westlund (eds.)
Economic Structural Change: Analysis and Forecasting, Springer-Verlag, Berlin, 123-138.
Lu, M., and G. E. Mizon (1996) "The Encompassing Principle and Hypothesis Testing", Econometric Theory, 12, 5, 845-858.
Lucas, Jr., R. E. (1976) "Econometric Policy Evaluation: A Critique", in K. Brunner and A. H. Meltzer (eds.) The Phillips Curve and Labor Markets, North-Holland, Amsterdam, Carnegie-Rochester Conference Series on Public Policy, Volume 1, Journal of Monetary Economics, Supplement, 19-46 (with discussion).
Magee, L. (1990) "The Asymptotic Variance of Extreme Bounds", Review of Economics and Statistics, 72, 1, 182-184.
McAleer, M., A. Pagan, and P. A. Volker (1985) "What Will Take the Con Out of Econometrics?", American Economic Review, 75, 3, 293-307.
McAleer, M., and M. R. Veall (1989) "How Fragile Are Fragile Inferences? A Re-evaluation of the Deterrent Effect of Capital Punishment", Review of Economics and Statistics, 71, 1, 99-106.
Mizon, G. E. (1984) "The Encompassing Approach in Econometrics", Chapter 6 in D. F. Hendry and K. F. Wallis (eds.) Econometrics and Quantitative Economics, Basil Blackwell, Oxford, 135-172.
Mizon, G. E. (1995) "Progressive Modeling of Macroeconomic Time Series: The LSE Methodology", Chapter 4 in K. D. Hoover (ed.) Macroeconometrics: Developments, Tensions, and Prospects, Kluwer Academic Publishers, Boston, 107-170 (with discussion).
Mizon, G. E. (2008) "Encompassing", in S. N. Durlauf and L. E. Blume (eds.) The New Palgrave Dictionary of Economics, Palgrave Macmillan, New York, Second Edition.
Mizon, G. E., and J.-F. Richard (1986) "The Encompassing Principle and its Application to Testing Non-nested Hypotheses", Econometrica, 54, 3, 657-678.
Paruolo, P. (1996) "On the Determination of Integration Indices in I(2) Systems", Journal of Econometrics, 72, 1/2, 313-356.
Raftery, A. E., D. Madigan, and J. A.
Hoeting (1997) "Bayesian Model Averaging for Linear Regression Models", Journal of the American Statistical Association, 92, 437, 179-191.
Sala-i-Martin, X. X. (1997) "I Just Ran Two Million Regressions", American Economic Review, 87, 2, 178-183.
Serra, D. (2006) "Empirical Determinants of Corruption: A Sensitivity Analysis", Public Choice, 126, 1-2, 225-256.
Spanos, A. (1986) Statistical Foundations of Econometric Modelling, Cambridge University Press, Cambridge.
Stewart, M. B. (1984) "Significance Tests in the Presence of Model Uncertainty and Specification Search", Economics Letters, 16, 3-4, 309-313.
Trundle, J. M. (1982) "The Demand for M1 in the UK", mimeo, Bank of England, London.
White, H. (1990) "A Consistent Model Selection Procedure Based on …", in C. W. J. Granger (ed.) Modelling Economic Series: Readings in Econometric Methodology, Oxford University Press, Oxford, 369-383.
Wright, J. H. (2003a) "Bayesian Model Averaging and Exchange Rate Forecasts", International Finance Discussion Paper No. 779, Board of Governors of the Federal Reserve System, Washington, D.C.
Wright, J. H. (2003b) "Forecasting U.S. Inflation by Bayesian Model Averaging", International Finance Discussion Paper No. 780, Board of Governors of the Federal Reserve System, Washington, D.C.

Table 1.
Estimates for Restricted and Unrestricted Models

│ Variable or statistic │ Model: …        │ Model: …        │ Model: …        │ Model: …        │
│ …                     │ -0.687 (0.125)  │                 │                 │                 │
│ …                     │ -0.175 (0.058)  │ -0.133 (0.066)  │                 │  0.343 (0.090)  │
│ …                     │ -0.630 (0.060)  │ -0.786 (0.060)  │ -0.718 (0.051)  │                 │
│ …                     │ -0.093 (0.009)  │ -0.092 (0.010)  │ -0.084 (0.009)  │ -0.006 (0.012)  │
│ intercept             │  0.023 (0.004)  │  0.024 (0.005)  │  0.022 (0.005)  │  0.003 (0.007)  │
│ …                     │  0.762          │  0.686          │  0.673          │  0.133          │
│ …                     │  1.313%         │  1.498%         │  1.522%         │  2.478%         │
│ …                     │                 │ F(1,95) 30.01** │ F(2,95) 17.67** │ F(2,95) 125.3** │
│                       │                 │ [0.000]         │ [0.000]         │ [0.000]         │
│ …                     │                 │                 │ F(1,96) 4.09*   │ F(1,96) 169.3** │
│                       │                 │                 │ [0.046]         │ [0.000]         │

Table 2. Extreme Bounds Analysis of the Model Pairs

│ Proposition illustrated │ 1              │ 2              │ 3              │ 4              │
│ Unrestricted model      │ …              │ …              │ …              │ …              │
│ Restricted model        │ …              │ …              │ …              │ …              │
│ …                       │ included       │ excluded       │ included       │ excluded       │
│ Focus variable          │ …              │ …              │ …              │ …              │
│ Bounds                  │ [-0.79, -0.63] │ [-0.79, -0.72] │ [-0.18, +0.34] │ [-0.13, +0.34] │
│                         │ (0.05) (0.06)  │ (0.06) (0.05)  │ (0.06) (0.05)  │ (0.07) (0.06)  │
│ Modified bounds         │ [-0.89, -0.51] │ [-0.91, -0.62] │ [-0.29, +0.44] │ [-0.26, +0.45] │
│ L-robust or L-fragile   │ L-robust       │ L-robust       │ L-fragile      │ L-fragile      │
Math Forum Discussions

Topic: [math-learn] middle school math problems
Replies: 11   Last Post: Jan 18, 2013 9:58 PM

Re: [math-learn] middle school math problems
Posted: Jan 18, 2013 6:04 PM

Maybe a bit more comment is necessary. I agree that most tests show a disappointing picture, but that just scratches the surface. On the Basic Skills Diagnostic Test and on the Calculus Concept Inventory the results are similar, though the math is at quite different levels. One finds the results follow a relatively standard Gaussian distribution, but there are two Gaussians with no overlap. There is a small percentage of students distributed normally around a score of 40%-50%, and a larger group distributed roughly normally around a score of 15%. There is essentially no overlap. The scores in the upper group are OK though not fantastic: what one would have thought an average population would look like. Then comes the much larger group, distributed roughly normally (though a bit squeezed at the bottom) around a center of 20%. There is no overlap between the two populations. On the CCI the results show one population that matches most faculty members' fantasy of their students; they are a small fraction of the whole, whereas the bulk of the population is nearly 100% totally incompetent. What causes this result, so far from most people's expectation, is that the tests, while very basic, do not test low-level, typically memorized procedures and formulas. In the lingo, the entire test is designed to be at the level of "conceptual understanding", whereas most tests are mostly what is called procedural knowledge. Procedural items can be answered with no understanding at all of what the question is about; this is the norm in most schools most of the time.
We all make the decision, perhaps unconscious, not to ask too many of the questions we know they should understand, but which will produce disastrous results. The new finding (though well known from physics) is that even items we think are near trivial will produce the same result as soon as the item goes beyond memorization of a formula or a rule. The emperor really has no clothes, and we are all complicit in keeping that a secret.

Jerry Epstein

On 1/18/2013 4:48 PM, Robert Hansen wrote:
> I am confused.
> Regarding your first statement, that students are mostly not competent
> in mathematics, why did you have to do a study? All of the
> standardized tests already show this. TIMSS shows this. PIRLS shows
> this. NCLB shows this. AP shows this.
> Regarding your second statement, that these non-competent students
> routinely pass tests, that is in direct contradiction to the statement
> above.
> We can go to any public source of the tests I listed above and see
> that the students are not passing.
> Bob Hansen
> On Jan 18, 2013, at 4:09 PM, Jerome Epstein <jerepst@att.net> wrote:
> > It would be helpful to know some of the background that this test
> > assumes, that you believe most students don't have. Diagnostic testing I
> > have been doing for years proves beyond question that large majorities
> > of high school graduates are not competent in mathematics at the 8th
> > grade level if one asks for skills that are in any way beyond rote
> > memorization and very low level.
> > I have proved this 100 times over.
> > Nearly all assessments test only memorized procedures. They are
> > routinely passed by students who understand not a word of it. I have
> > shown this in testing of thousands of students in some 30 states, 3
> > provinces of Canada, and about 15 other countries.
> > If any are interested
> > in testing students, you can get either the Basic Skills Diagnostic Test
> > (BSDT) or the Calculus Concept Inventory (CCI) by writing to me. Be
> > prepared for a shock.
> > A paper on the CCI has just been accepted by the Notices and will appear
> > in about 6 months.
> > Jerry Epstein

Date       Subject                                        Author
1/17/13    [math-learn] middle school math problems       CCSSIMath
1/17/13    Re: [math-learn] middle school math problems   Dennis
1/18/13    Re: [math-learn] middle school math problems   Ed Wall
1/18/13    Re: [math-learn] middle school math problems   Dennis
1/18/13    Re: [math-learn] middle school math problems   Jerry Epstein
1/18/13    Re: [math-learn] middle school math problems   Robert Hansen
1/18/13    Re: [math-learn] middle school math problems   Jerry Epstein
1/18/13    Re: [math-learn] middle school math problems   Dennis
1/18/13    Re: [math-learn] middle school math problems   Ed Wall
1/18/13    Re: [math-learn] middle school math problems   Zeev Wurman
1/18/13    Re: [math-learn] middle school math problems   Robert Hansen
1/18/13    Re: [math-learn] middle school math problems   Ed Wall
Solution to One of Landau's Problems and Infinitely Many Prime Numbers of the Form Ap±b

Authors: Germán Paz

This paper is a LaTeX document which combines the previously posted papers 'Infinitely Many Prime Numbers of the Form ap±b' (viXra:1202.0063, submitted on 2012-02-18 19:13:46; url: http://vixra.org/abs/1202.0063) and 'Solution to One of Landau's Problems' (viXra:1202.0061, submitted on 2012-02-18 21:49:14; url: http://vixra.org/abs/1202.0061) into one paper. The information contained in this paper is the same as the information contained in those two original papers. No new information or results are being added.

ABSTRACT. In this paper it is proved that for every positive integer 'k' there are infinitely many prime numbers of the form n^2+k, which means that there are infinitely many prime numbers of the form n^2+1. In addition to this, in this document it is proved that if 'a' and 'b' are two positive integers which are coprime and also have different parity, then there are infinitely many prime numbers of the form ap+b, where 'p' is a prime number. Moreover, it is also proved that there are infinitely many prime numbers of the form ap-b. In other words, it is proved that the progressions ap+b and ap-b generate infinitely many prime numbers. In particular, all this implies that there are infinitely many prime numbers of the form 2p+1 (since the numbers 2 and 1 are coprime and have different parity), which means that there are infinitely many Sophie Germain Prime Numbers. This paper also proposes an important new conjecture about prime numbers called 'Conjecture C'. If this conjecture is true, then Legendre's Conjecture, Brocard's Conjecture and Andrica's Conjecture are all true, and also some other important results will be true.

Comments: 44 Pages. Although the results that are considered as main results have not been published, some of the theorems in this paper appear in Gen. Math. Notes.
See: On the Interval [n,2n]: Primes, Composites and Perfect Powers, Gen. Math. Notes, 15(1) (2013), 1-15.
Submission history: [v1] 2012-05-19 16:01:41
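The prime-generating forms discussed above are easy to enumerate for small values. The following stdlib-only Python sketch is purely illustrative: it is not part of the paper, and enumerating small cases proves nothing about infinitude.

```python
def is_prime(n):
    """Trial division; fine for the small ranges used here."""
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

# The first primes of the form n^2 + 1 (one of Landau's problems):
landau = [n * n + 1 for n in range(1, 30) if is_prime(n * n + 1)]
print(landau)   # [2, 5, 17, 37, 101, 197, 257, 401, 577, 677]

# The first Sophie Germain primes p, i.e. both p and 2p + 1 are prime:
sophie = [p for p in range(2, 100) if is_prime(p) and is_prime(2 * p + 1)]
print(sophie)   # [2, 3, 5, 11, 23, 29, 41, 53, 83, 89]
```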
{"url":"http://vixra.org/abs/1205.0077","timestamp":"2014-04-18T11:29:42Z","content_type":null,"content_length":"8795","record_id":"<urn:uuid:f94e1dc7-e7a4-4556-a5b4-102031e044ef>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00453-ip-10-147-4-33.ec2.internal.warc.gz"}
In conversation: George Daniel Mostow, geometer of the Nth dimension

Here Mostow, an emeritus professor since 1998, talks about the English teacher who led him to a life in math, the pleasure of family, high-definition opera, and the Nth dimension — as well as the “eureka!” moment at a New Haven stoplight that secured him a place in the history of geometry.

When did it dawn on you that you wanted to be a professional mathematician?

In high school, mathematics was my favorite subject. I especially enjoyed challenging problems. But I did not know that mathematics was a profession. I am indebted to my high school English teacher who, in my senior year, called me up to his desk to ask about my career plans, and told me that his brother was a mathematician. I decided then and there that mathematics was for me.

Mathematics is a powerful language not easily translated into words. Nonetheless, could you briefly describe your discovery of the phenomenon in geometry known as the Strong Rigidity Theorem?

This topic has roots going all the way back to my first publications. My work is related to the concept of congruence. Two objects are congruent if you can superimpose one on the other so there is an exact fit. Most people are familiar with this concept from two-dimensional high school plane geometry. My rigidity theorem says that in all dimensions except 2, two objects of negative curvature, of maximum "local" symmetry, and with the same Poincare group, are in fact really congruent.

Was there a prevailing opinion beforehand that the theorem should be true?

Quite the contrary. The theorem attracted attention because it was unexpected.

You have said that "the final idea jumped out at me as I was waiting in my car at a red light. I get a high every time I pass that intersection.” Can you tell us a little bit more about that moment in New Haven, at the corner of Whalley Avenue and Fitch Street?

Earlier, in my office, I had been thinking intensively about the problem.
I did not think about it while I was driving, but at the light, at that moment, I suddenly thought: “Use ergodicity!” [Ergodicity is a concept in dynamical systems theory.] When I got home, I immediately sketched out a path to the theorem. In fact, I only had to add eight pages to what I already had done to complete the proof.

Where were you when you heard you would win the Wolf Prize and how did you react?

I was sleeping late that morning when my wife woke me and told me that there was a phone call for me. My first reaction to the announcement was "this may be too good to be true."

Do you plan to travel to Jerusalem in May to receive the prize?

Absolutely. And my family is very excited about the event. My children, stepchildren, grandchildren, and their spouses don't want to miss it. They will be coming from points spread out from Los Angeles to Haifa.

You share the prize with Michael Artin of MIT. Do you know him or his work well?

I have known Mike for a long time. For many years, mathematicians from Yale and Cambridge, Mass. would meet periodically at The Public House restaurant in Sturbridge and exchange new ideas and mathematical gossip. Mike has been one of the major figures in the reformulation of mathematics known as the Grothendieck revolution, which took place in the latter half of the 20th century.

You retired from the active Yale faculty in 1998, taking emeritus status. But you remain affiliated with the Yale math department and intellectually active. What are you studying these days, either in mathematics or other fields?

I try to keep up with developments in mathematics by reading, attending seminars, and having conversations with colleagues. One of the topics that I am interested in is non-commutative geometry. I also am intrigued by how physicists are applying mathematics in their research.

Aside from math, where are some of your main interests?

Reading history, especially the history of religion.
Trying to understand what is the good life and living a good life. It helps to have a loving wife, four children, and their spouses, as well as 10 grandchildren and 14 great-grandchildren, all fascinating. I enjoy singing in a chorus, and attending the high-definition Metropolitan opera performances at Yale. The University’s Koerner Center for Emeritus Faculty — which has a rich program of lectures, films, and parties — keeps me in contact with colleagues from other departments.

What do you remember most fondly from your long career?

I derived much pleasure from collaborating with some of the principal mathematicians of our time. In many cases, our families became close and permanent friends. I also was delighted to receive letters from former students. I remember especially reading a letter from a student who is now an economist, saying: "I am grateful to you for teaching me to think in n-dimensions."

Could you elaborate on the Strong Rigidity Theorem and what led you to it?

In modern mathematics, we label points in a plane by pairs of numbers, namely their x[1] and x[2] coordinates. Points in space are labeled by 3 coordinates, x[1], x[2], and x[3]. But some data, such as all stock prices at a given moment, require n coordinates where n may be large. We can define an n-dimensional object by specifying a particular collection of n-tuples. For example, the set of all n-tuples that satisfy x[1]^2 + x[2]^2 + ... + x[n]^2 <= 1 is called an n-ball and the set x[1]^2 + x[2]^2 + ... + x[n]^2 + x[n+1]^2 = 1 is called an n-sphere.

One tool for showing that two objects cannot be congruent is to take certain measurements of one object and take the corresponding measurements on the second, and show that these are different. Sometimes the measurements are intricate: for example, counting the number of closed curves that can be drawn on an object and cannot be shrunk to a point.
In counting the number of curves, we count two curves to be the same if one can be pulled onto another. And, we will not count any curve that is a combination of other curves. That count is called the Poincare number of the object. Using the Poincare number is a way to distinguish between the surfaces of a doughnut and a sphere. The Poincare Group is a mathematical structure that provides more information about the totality of closed curves on the n-dimensional object. For example, the Poincare number does not distinguish between the surface of a doughnut and the surface of a pretzel, but the Poincare group does. There are measurements that describe other features of an object — such as how the object is curved. For example, a sphere in Euclidean space has positive constant curvature. A plane has constant curvature 0. A saddle has negative curvature, which varies from point to point. In addition, there are measurements that describe special types of symmetry that an object may have. My rigidity theorem says that in all dimensions except 2, two objects of negative curvature, of maximum "local" symmetry, and with the same Poincare group, are in fact really congruent. Read the YaleNews story about Mostow’s receipt of the Wolf Prize.
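Mostow's coordinate description of the n-ball and n-sphere translates directly into a few lines of Python. This is an editor's illustration of the definitions, not something from the interview:

```python
import math

def in_n_ball(point):
    """True if an n-tuple satisfies x[1]^2 + ... + x[n]^2 <= 1 (the unit n-ball)."""
    return sum(x * x for x in point) <= 1

def on_n_sphere(point, tol=1e-9):
    """True if an (n+1)-tuple satisfies x[1]^2 + ... + x[n+1]^2 = 1
    (the unit n-sphere), up to floating-point tolerance."""
    return abs(sum(x * x for x in point) - 1) <= tol

print(in_n_ball((0.5, 0.5, 0.5)))                         # 0.75 <= 1, so True
print(on_n_sphere((1 / math.sqrt(2), 1 / math.sqrt(2))))  # a point on the unit circle
```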
{"url":"http://www.ecnmag.com/news/2013/01/conversation-george-daniel-mostow-geometer-nth-dimension","timestamp":"2014-04-17T13:38:01Z","content_type":null,"content_length":"74047","record_id":"<urn:uuid:04038ec0-5485-49e5-96ae-51881a553135>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00200-ip-10-147-4-33.ec2.internal.warc.gz"}
Commerce City Math Tutor

Find a Commerce City Math Tutor

...I strongly believe that ANYONE can be "good" at math. Through tutoring, I seek to remove the intimidation of math and physics and help students build up confidence and understanding. Throughout my high school and college careers I was constantly assisting friends, peers, and my younger sister with math and physics homework.
13 Subjects: including algebra 1, algebra 2, calculus, geometry

...I feel that because I struggled in high school I have an insight into where students get stuck and I use that to help others. I use examples, the internet, pictures and models to help explain word problems and questions. When I was in high school I struggled with Algebra and Trigonometry, so I had to start from the beginning in college.
21 Subjects: including precalculus, GED, English, reading

...I adapt my teaching style according to your needs and convenience. I can build lessons around the learning materials you already have, or if you do not have learning materials, I am happy to recommend those that fit your needs. I customize my lessons based on the way you learn.
34 Subjects: including geometry, trigonometry, calculus, ACT Math

...I usually focus on the way to make Algebra 1 study easier than it looks. There are always some tips on how to learn it. I will use different tutoring methods in order to fit different students’ needs.
27 Subjects: including SAT math, linear algebra, SPSS, differential equations

Greetings, My name is Ahmed. I earned my bachelor's degree with a minor in education in December 2012 with a 3.95 GPA. I have tutored in many high and middle schools including Bruce Randolph, Skinner Middle School, CEC College and Hill Campus.
26 Subjects: including algebra 1, general computer, Microsoft Word, networking (computer)
{"url":"http://www.purplemath.com/Commerce_City_Math_tutors.php","timestamp":"2014-04-18T16:30:06Z","content_type":null,"content_length":"23807","record_id":"<urn:uuid:b4ab15b8-fd89-413b-aea6-49a68494fa86>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00206-ip-10-147-4-33.ec2.internal.warc.gz"}
Reseda Prealgebra Tutor

...I have taught a great range of subjects from English, Math, Science, SAT and ACT reviews. I have also done general mentoring for parents who have a difficult time understanding their kids and why they are not doing well in schools. Furthermore, I previously worked as a children's activity coordinator through an internship program at several local elementary schools.
48 Subjects: including prealgebra, English, reading, precalculus

...I took my geometry math course early on in Middle School and passed with an A. I have many math books of higher levels that deal with elementary geometry. Also, I can and will develop material as needed to help students.
13 Subjects: including prealgebra, physics, calculus, geometry

I offer professional Math & Science private tutoring for pre- high-school, high-school, and college aged students. One-on-one sessions, or group sessions (up to 3-4 students). Weekdays - evenings after 5 pm; Weekends - all day. My experience includes 5 summers of working with exceptional kids at J...
27 Subjects: including prealgebra, English, physics, calculus

...I love teaching because I love learning myself, and I want to help students achieve success. My Math experience includes both teaching in schools and private tutoring. Personal tutoring is important because students get the best one on one help that they need.
9 Subjects: including prealgebra, geometry, algebra 1, algebra 2

...I am very qualified to teach other students the fundamentals of mathematics in order for them to succeed in school. I have experience as a tutor because I have taught many students with great success. They have learned a lot from me and they were able to pass their classes with very high grades.
6 Subjects: including prealgebra, calculus, algebra 1, algebra 2
{"url":"http://www.purplemath.com/reseda_prealgebra_tutors.php","timestamp":"2014-04-19T09:35:38Z","content_type":null,"content_length":"23861","record_id":"<urn:uuid:217ed524-a244-4766-a070-268ef3965fb6>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00604-ip-10-147-4-33.ec2.internal.warc.gz"}
MathGroup Archive: October 2011

DSolve bug for complicated forcing functions in a 2nd order ODE

• To: mathgroup at smc.vnet.net
• Subject: [mg122394] DSolve bug for complicated forcing functions in a 2nd order ODE
• From: Dan Dubin <ddubin at ucsd.edu>
• Date: Thu, 27 Oct 2011 06:31:28 -0400 (EDT)
• Delivered-to: l-mathgroup@mail-archive0.wolfram.com

Folks -- reporting a bug in DSolve, I think. The following Module solves a 2nd order differential equation with periodic forcing, written as a Fourier series. The number of Fourier coefficients kept in the series is M. For M less than or equal to 40, the solution is correct, but for M greater than 40 it starts to go wrong. Try M=41, for instance. The resulting solution no longer satisfies the ODE. Any ideas what is happening here?

PS[M_] := Module[{},
  T = 1/2;
  ω[n_] = 2 Pi n/T;
  f[n_] = 1/T Integrate[1 Exp[I ω[n] t], {t, 0, 1/4}];
  f[0] = Limit[f[n], n -> 0];
  fapprox[t_] = Sum[f[n] Exp[-I ω[n] t], {n, -M, M}];
  s = DSolve[{x''[t] + 16 Pi^2 x[t] == fapprox[t], x[0] == 0, x'[0] == 0}, x[t], t];
  xs[t_] = x[t] /. s[[1]];
  xs''[t] + 16 Pi^2 xs[t] - fapprox[t] // Simplify]

Prof. Dan Dubin
Dept of Physics, UCSD La Jolla CA 92093-0319
858-534-4174 fax: 858-534-0173
ddubin at ucsd.edu
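The forcing function built inside the Module can be cross-checked independently of Mathematica. The stdlib-only Python sketch below reimplements the same truncated Fourier series, with the integral done in closed form; it is an illustration only and does not reproduce the DSolve step or the reported bug:

```python
import cmath

# Same setup as in the Module: T = 1/2, omega_n = 2 Pi n / T,
# f[n] = (1/T) Integrate[Exp[I omega_n t], {t, 0, 1/4}], here in closed form.
T = 0.5

def omega(n):
    return 2 * cmath.pi * n / T

def f(n):
    if n == 0:
        return 0.25 / T  # the n -> 0 limit, i.e. 1/2
    w = omega(n)
    return (cmath.exp(1j * w * 0.25) - 1) / (1j * w * T)

def fapprox(t, M):
    # truncated Fourier series Sum[f[n] Exp[-I omega_n t], {n, -M, M}]
    return sum(f(n) * cmath.exp(-1j * omega(n) * t)
               for n in range(-M, M + 1)).real

# sanity check: the series should approximate the T-periodic pulse
# that equals 1 on [0, 1/4) and 0 on [1/4, 1/2)
print(fapprox(0.125, 60))  # inside the pulse, close to 1
print(fapprox(0.375, 60))  # outside the pulse, close to 0
```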
{"url":"http://forums.wolfram.com/mathgroup/archive/2011/Oct/msg00617.html","timestamp":"2014-04-20T03:39:51Z","content_type":null,"content_length":"26049","record_id":"<urn:uuid:2078c830-0664-45a7-9c42-85197a4270f6>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00366-ip-10-147-4-33.ec2.internal.warc.gz"}
Final Semantics

- CAAP'96 Conference Proceedings, H. Kirchner ed., Springer LNCS, 1995. Cited by 15 (11 self).
We show that adequate semantics can be provided for imperative higher order concurrent languages simply using syntactical final coalgebras. In particular we investigate and compare various behavioural equivalences on higher order processes defined by finality using hypersets and c.m.s.'s. Correspondingly, we derive various coinduction and mixed induction-coinduction proof principles for establishing these equivalences.

- Theoretical Computer Science, 1998. Cited by 7 (0 self).
This paper presents an elementary and self-contained proof of an existence theorem of final coalgebras for endofunctors on the category of sets and functions.

- Theoretical Computer Science, 1997. Cited by 4 (2 self).
We produce a fully abstract model for a notion of process equivalence taking into account issues of fairness, called by Milner fair bisimilarity. The model uses Aczel's anti-foundation axiom and it is constructed along the lines of the anti-founded model for SCCS given by Aczel.
We revisit Aczel's semantics for SCCS where we prove a unique fixpoint theorem under the assumption of guarded recursion. Then we consider Milner's extension of SCCS to include a finite delay operator. Working with fair bisimilarity we construct a fully abstract model, which is also fully abstract for fortification. We discuss the solution of recursive equations in the model. The paper is concluded with an investigation of the algebraic theory of fair bisimilarity. Keywords: fairness, anti-foundation, finite delay, parallelism, fair bisimilarity, fortification. This paper was composed while I was unemployed and an unofficial visitor at the Department of Mathematics, University of Ioannina, Greece.

- University of Southern , 1996
Introduction Natural frameworks for dicussing Selfreference and other circular phenomena are extremely useful in areas such ... - MATH. STRUCTURES IN COMP. SCI. 9(4):403–435 , 1998 "... We discuss new ways of characterizing, as maximal fixed points of monotone operators, observational congruences on -terms and, more in general, equivalences on applicative structures. These characterizations naturally induce new forms of coinduction principles, for reasoning on program equivalences, ..." Cited by 3 (0 self) Add to MetaCart We discuss new ways of characterizing, as maximal fixed points of monotone operators, observational congruences on -terms and, more in general, equivalences on applicative structures. These characterizations naturally induce new forms of coinduction principles, for reasoning on program equivalences, which are not based on Abramsky's applicative bisimulation. We discuss in particular, what we call, the cartesian coinduction principle, which arises when we exploit the elementary observation that functional behaviours can be expressed as cartesian graphs. Using the paradigm of final semantics, the soundness of this principle over an applicative structure can be expressed easily by saying that the applicative structure can be construed as a strongly extensional coalgebra for the functor (P( \Theta )) \Phi (P( \Theta )). In this paper, we present two general methods for showing the soundenss of this principle. The first applies to approximable applicative structures. Many c.p.o. -models in... , 1998 "... In this paper we discuss final semantics for the -calculus, a process algebra which models systems that can dynamically change the topology of the channels. We show that the final semantics paradigm, originated by Aczel and Rutten for CCS-like languages, can be successfully applied also here. This i ..." 
Cited by 2 (2 self) Add to MetaCart In this paper we discuss final semantics for the -calculus, a process algebra which models systems that can dynamically change the topology of the channels. We show that the final semantics paradigm, originated by Aczel and Rutten for CCS-like languages, can be successfully applied also here. This is achieved by suitably generalizing the standard techniques so as to accommodate the mechanism of name creation and the behaviour of the binding operators peculiar to the -calculus. As a preliminary step, we give a higher order presentation of the -calculus using as metalanguage LF , a logical framework based on typed -calculus. Such a presentation highlights the nature of the binding operators and elucidates the role of free and bound channels. The final semantics is defined making use of this higher order presentation, within a category of hypersets. - IN LNCS, VOLUME 902 , 1995 "... Proof principles for reasoning about various semantics of untyped λ-calculus are discussed. The semantics are determined operationally by fixing a particular reduction strategy on -terms and a suitable set of values, and by taking the corresponding observational equivalence on terms. These principl ..." Cited by 2 (0 self) Add to MetaCart Proof principles for reasoning about various semantics of untyped λ-calculus are discussed. The semantics are determined operationally by fixing a particular reduction strategy on -terms and a suitable set of values, and by taking the corresponding observational equivalence on terms. These principles arise naturally as co-induction principles, when the observational equivalences are shown to be induced by the unique mapping into a final F-coalgebra, for a suitable functor F . This is achieved either by induction on computation steps or exploiting the properties of some, computationally adequate, inverse limit denotational model. 
The final F -coalgebras cannot be given, in general, the structure of a "denotational" λ-model. Nevertheless the "final semantics" can count as compositional in that it induces a congruence. We utilize the intuitive categorical setting of hypersets and functions. The importance of the principles introduced in this paper lies in the fact that they often allow... - in FoSSaCS'99 (ETAPS) Conf. Proc., W.Thomas ed., Springer LNCS 1578 , 1983 "... We introduce a coinductive logical system à la Gentzen for establishing bisimulation equivalences on circular non-wellfounded regular objects, inspired by work of Coquand, and of Brandt and Henglein. In order to describe circular objects, we utilize a typed language, whose coinductive types involve ..." Cited by 1 (1 self) Add to MetaCart We introduce a coinductive logical system à la Gentzen for establishing bisimulation equivalences on circular non-wellfounded regular objects, inspired by work of Coquand, and of Brandt and Henglein. In order to describe circular objects, we utilize a typed language, whose coinductive types involve disjoint sum, cartesian product, and finite powerset constructors. Our system is shown to be complete with respect to a maximal fixed point semantics. It is shown to be complete also with respect to an equivalent final semantics. In this latter semantics, terms are viewed as points of a coalgebra for a suitable endofunctor on the category Set of non-wellfounded sets. Our system subsumes an axiomatization of regular processes, alternative to the classical one given by Milner. , 2000 "... This paper is a contribution to the foundations of coinductive types and coiterative functions, in (Hyper)set-theoretical Categories, in terms of coalgebras. We consider atoms as first class citizens. First of all, we give a sharpening, in the way of cardinality, of Aczel's Special Final Coalgebra ..." 
Add to MetaCart This paper is a contribution to the foundations of coinductive types and coiterative functions, in (Hyper)set-theoretical Categories, in terms of coalgebras. We consider atoms as first class citizens. First of all, we give a sharpening, in the way of cardinality, of Aczel's Special Final Coalgebra Theorem, which allows for good estimates of the cardinality of the final coalgebra. To these end, we introduce the notion of -Y -uniform functor, which subsumes Aczel's original notion. We give also an n-ary version of it, and we show that the resulting class of functors is closed under many interesting operations used in Final Semantics. We define also canonical wellfounded versions of the final coalgebras of functors uniform on maps. This leads to a reduction of coiteration to ordinal induction, giving a possible answer to a question raised by Moss and Danner. Finally, we introduce a generalization of the notion of F -bisimulation inspired by Aczel's notion of precongruence, and we show t...
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=1650573","timestamp":"2014-04-18T06:22:29Z","content_type":null,"content_length":"37082","record_id":"<urn:uuid:40ebaf7f-ac0a-463e-8160-8f15134ca24d>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00386-ip-10-147-4-33.ec2.internal.warc.gz"}
Find the angular speed

August 29th 2008, 02:10 AM

A car is traveling at 40 mph and has tires that are 94 meters in diameter. Find the angular speed of the wheel in revolution per minute.

August 29th 2008, 07:38 AM

Hello, jcrodua! I'm sure there's a typo . . . please check the wording.

A car is traveling at 40 mph and has tires that are 94 meters in diameter. Find the angular speed of the wheel in revolution per minute.

94 meters is over 300 feet ... the length of a football field! . . Now that is a monster truck!

It's probably: . $94\text{ centimeters} \:\approx\:37\text{ inches.}$

August 29th 2008, 08:41 AM

Yah thats centimeters. its my mistake. so what would be the answer?

August 29th 2008, 12:41 PM

Hello, jcrodua! You're expected to know how to change units.

A car is traveling at 40 mph and has tires that are 94 cm in diameter. Find the angular speed of the wheel in revolution per minute.

We know that: . $\begin{array}{ccc}\text{1 hour} &=& \text{60 minutes} \\ \text{1 mile} &=& \text{1609.344 m} \end{array}$

$\text{40 mph} \:=\:\frac{40\:{\color{blue}\rlap{/////}}\text{miles}}{1\:{\color{red}\rlap{////}}\text{hour}} \times \frac{1\:{\color{red}\rlap{////}}\text{hour}}{\text{60 minutes}} \times \frac {\text{1609.344 m}}{1\:{\color{blue}\rlap{////}}\text{mile}} \;=\;\frac{\text{1072.896 m}}{\text{1 minute}}$ .[1]

The tire is moving down the road at 1072.896 meters per minute.

The circumference of the tire is: . $C \:=\:\pi d \:=\:94\pi\text{ cm} \:=\:0.94\pi\text{ m}$

. . That is: . $\text{1 rev} \:=\:\text{0.94}\pi\text{ m}$

Convert [1]: . $\frac{1072.896\:{\color{red}\rlap{/}}\text{m}}{\text{1 minute}} \times \frac{\text{1 rev}}{\text{0.94}\pi\:{\color{red}\rlap{/}}\text{m}} \;\approx\;363.3\text{ rev/min}$
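The unit conversion worked out in the thread can be double-checked with a few lines of Python, using the corrected 94 cm diameter; this is just an arithmetic check, not part of the thread:

```python
import math

mph = 40
diameter_m = 0.94  # the thread's corrected value: 94 cm

meters_per_minute = mph * 1609.344 / 60   # 1 mile = 1609.344 m, 1 hour = 60 min
circumference_m = math.pi * diameter_m    # distance covered in one revolution
rpm = meters_per_minute / circumference_m

print(meters_per_minute)  # 1072.896 (up to floating point)
print(round(rpm, 1))      # approximately 363.3
```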
{"url":"http://mathhelpforum.com/trigonometry/47073-find-angular-speed-print.html","timestamp":"2014-04-23T17:11:56Z","content_type":null,"content_length":"9016","record_id":"<urn:uuid:978f8a40-d488-4148-becc-697eb0e65c7c>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00274-ip-10-147-4-33.ec2.internal.warc.gz"}
Derry, NH SAT Math Tutor

Find a Derry, NH SAT Math Tutor

...My tutoring has included counseling students in their selection of a college as well as strengthening study skills, test preparation for SAT, ACT and GRE, and editing of admissions essays. The Independent School Entrance Exam (ISEE) is a standardized admissions test for independent schools in th...
28 Subjects: including SAT math, English, algebra 1, algebra 2

...I was a tutor for math and other sciences for a few years in Japan, for students from elementary school to high school. I moved to NH in 2003, and I have been doing my physics research at UNH for 10 years. I am now tutoring math and physics in NH too.
16 Subjects: including SAT math, calculus, physics, geometry

...I was an associate professor and I have an advanced degree in Engineering but I love teaching. I always put the student's interest first and adapt my explanations and the level of instructions according to the understanding of the particular person in my charge. My real satisfaction is to see t...
21 Subjects: including SAT math, French, calculus, physics

...I can help your student with more advanced algebra concepts such as solving polynomials and the quadratic equation. I can work with your child to develop approaches that can help solve polynomial equations. I have almost 20 years of experience working with MS Excel.
9 Subjects: including SAT math, geometry, algebra 1, algebra 2

Hello, I was a certified level 3 tutor through College of Reading and Learning accredited program at UNH in Manchester, where I graduated in 2004. I was a class-link tutor for remedial English classes, and tutored individual students in writing, study skills, and social sciences.
29 Subjects: including SAT math, reading, English, writing
{"url":"http://www.purplemath.com/Derry_NH_SAT_Math_tutors.php","timestamp":"2014-04-19T04:53:01Z","content_type":null,"content_length":"23738","record_id":"<urn:uuid:5065e20f-d2f2-4453-84ec-a29a89337b62>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00231-ip-10-147-4-33.ec2.internal.warc.gz"}
Closed Under Addition Proofs

October 29th 2008, 08:25 PM #1 (Senior Member, Apr 2008)

I'm trying to prove that the set of real numbers is closed under addition and that 3Z is closed under addition, where Z is the set of integers.

To prove 3Z is closed under addition, we need to show 3*k_1 + 3*k_2 belongs to 3Z for arbitrary k_1 and k_2 in Z. Since the integer ring Z distributes multiplication over addition, 3*k_1 + 3*k_2 = 3*(k_1 + k_2). Since k_1 and k_2 belong to Z, which is closed under addition, we can pick k_3 in Z satisfying k_3 = k_1 + k_2. Now, 3*k_1 + 3*k_2 = 3*(k_1 + k_2) = 3*k_3, where k_1, k_2, k_3 belong to Z. Thus, 3Z is closed under addition.

Last edited by vagabond; October 29th 2008 at 09:06 PM.

Quote: To prove 3Z is closed under addition, we need to check 3*k_1 + 3*k_2 belongs to 3Z for arbitrary k_1 and k_2 in Z. Since the integer ring Z distributes multiplication over addition, 3*k_1 + 3*k_2 = 3*(k_1 + k_2). Since k_1 and k_2 belong to Z, which is closed under addition, we can pick k_3 in Z satisfying k_3 = k_1 + k_2. Now, 3*k_1 + 3*k_2 = 3*(k_1 + k_2) = 3*k_3, where k_1, k_2, k_3 belong to Z. Thus, 3Z is closed under addition.

Sorry, I forgot to include that 3Z = {3x such that x is an element of Z}. Is there an easier way to do a formal proof now?

Quote: To prove 3Z is closed under addition, we need to show 3*k_1 + 3*k_2 belongs to 3Z for arbitrary k_1 and k_2 in Z. Since the integer ring Z distributes multiplication over addition, 3*k_1 + 3*k_2 = 3*(k_1 + k_2). Since k_1 and k_2 belong to Z, which is closed under addition, we can pick k_3 in Z satisfying k_3 = k_1 + k_2. Now, 3*k_1 + 3*k_2 = 3*(k_1 + k_2) = 3*k_3, where k_1, k_2, k_3 belong to Z. Thus, 3Z is closed under addition.

Ok, I understand that, but what about proving that the set of real numbers is closed under addition? It's obvious to me that it's true, but proving it isn't so obvious.

What DEFINITION of "real numbers" are you using, then, and what definition of addition of real numbers?
That's a much more subtle question than you might think, but you surely can't prove something about addition in the real numbers without using the definitions!

Still a little confused. I don't remember ever using a definition for real numbers in the class.

October 29th 2008, 08:34 PM #2 (Oct 2008)
October 29th 2008, 08:50 PM #3 (Senior Member, Apr 2008)
October 30th 2008, 08:01 AM #4 (Senior Member, Apr 2008)
October 31st 2008, 05:39 AM #5 (MHF Contributor, Apr 2005)
November 4th 2008, 06:31 AM #6 (Senior Member, Apr 2008)
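The closure claim for 3Z discussed in this thread is easy to spot-check numerically. The Python sketch below is only an empirical check on a finite sample; the algebraic argument above is what actually proves it:

```python
import itertools

def in_3Z(x):
    """Membership test for 3Z = {3k : k an integer}."""
    return x % 3 == 0

# take a finite sample of 3Z and check that every pairwise sum stays in 3Z
sample = [3 * k for k in range(-50, 51)]
closed = all(in_3Z(a + b) for a, b in itertools.product(sample, repeat=2))
print(closed)  # True on this sample
```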
Summing Times with a Floor I have a list of times. Some of those times are less than 15 minutes and some are more. My billing floor is 15 minutes. That means that if a task takes me 4 minutes, I still bill 15. In column C, I have this simple formula: That gives me the amount to bill; either 15 minutes or the actual time, whichever is greater. When I sum up that helper column, I get a total that’s 36 minutes more than the actual time. The challenge is to get rid of the helper column. And here’s the answer: The SUM simply sums the times and returns 7:31. The SUMPRODUCT section adds up the difference between 15 minutes and the actual time for all those times that are less than 15 minutes. If I use the Ctrl+= to calculate part of the formula, I get Yikes, that’s a long one. The first array is a TRUE if the value is less than 15 minutes and a FALSE if not. The second array is the actual difference between the time and 15 minutes. Recall that when TRUE and FALSE are forced to be a number (in this case, we force them to be a number by multiplying them), TRUE becomes 1 and FALSE becomes 0. When the two arrays are multiplied together Every value that was greater than zero gets multiplied by a 1, thereby returning itself. Every value that was less than zero gets multiplied by a 0, thereby returning zero. When you sum them all up, you get And of course everyone knows that 2.5% of a day is the same as 36 minutes right? One of the bad things about using dates and times in the formula bar is that it converts them all to decimals. But .025 x 24 hours in a day x 60 minutes in an hour does equal 36 minutes. That gets added to the SUM of the actuals and Bob’s your uncle. 8 Comments 1. If billing per quarter of an hour is usual (as is in our country) you could use the arrayformula: 2. Or exactly like you did it: 3. Why get rid of the helper column if it makes the calculation much clearer and by extension the spreadsheet easier to maintain and support? 4. 
We could debate the usefulness of helper columns all day. But for this application, the helper column was hidden for presentation purposes. I’d rather have a slightly more complicated formula than a hidden column. 5. For this sort of thing (replacing MAX in SUMPRODUCT) I just do it the straightforward way: {=SUMPRODUCT(IF(TIME(0,15,0) > B2:B15, TIME(0,15,0), B2:B15))} 6. My last suggestion can be reduced to 7. The SUMPRODUCT adds no value in {=SUMPRODUCT(IF(TIME(0,15,0) > B2:B15, TIME(0,15,0), B2:B15))}, you might as well just use {=SUM(IF(TIME(0,15,0) > B2:B15, TIME(0,15,0), B2:B15))} 8. True; I just use SUMPRODUCT instead of SUM by default because it doesn’t cost anything either. Posting code or formulas in your comment? Use <code> tags! • <code lang="vb">Block of code goes here</code> • <code lang="vb" inline="true">Inline code goes here</code> • <code>Formula goes here</code>
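For readers outside Excel, the two approaches in the post translate directly into code. A Python sketch (the task durations below are made-up sample data, not the post's actual worksheet values):

```python
from datetime import timedelta

FLOOR = timedelta(minutes=15)

def billed_total(durations):
    """Helper-column approach: bill each task at max(actual, FLOOR)."""
    return sum((max(d, FLOOR) for d in durations), timedelta())

def billed_total_no_helper(durations):
    """The post's formula logic: SUM of the actuals, plus the shortfall
    (FLOOR - d) for every task shorter than FLOOR."""
    actual = sum(durations, timedelta())
    shortfall = sum((FLOOR - d for d in durations if d < FLOOR), timedelta())
    return actual + shortfall

tasks = [timedelta(minutes=4), timedelta(minutes=22), timedelta(minutes=9)]
print(billed_total(tasks), billed_total_no_helper(tasks))  # both 0:52:00
```

Both agree, which is the point of the post: summing max(actual, floor) per task equals summing the actuals and then adding back the shortfall for every task under the floor.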
Binary Tree Traversals
The reason we traverse a binary tree is to examine each of its nodes. Many different binary tree algorithms involve traversals. For example, if we wish to count the number of nodes in a tree we must visit each node. If we wish to find the largest value in the tree, we must examine the value contained in each node. There are two fundamentally different kinds of binary tree traversals--those that are depth-first and those that are breadth-first. When you watch the animation, notice that the path followed by each of these traversals is along the branches of the tree. Each node of the tree is visited three times during each of the depth-first traversals, once on its way down the tree, a second time coming up from the left child, and a third time coming up from the right child. When you watch the animation of these traversals, notice that a checkmark is placed beneath each node each time the node is visited. There are three different types of depth-first traversals, preorder, inorder, and postorder. The preorder traversal extracts the value the first time it visits the node--when there is one checkmark beneath the node. During the animation, the value of the node is copied to the list at the bottom of the screen when the value is extracted. During the inorder traversal, the value is extracted during the second visit to the node--when there are two checkmarks beneath the node. During the postorder traversal, the value is extracted during the third visit--when there are three checkmarks beneath the node. There is only one kind of breadth-first traversal--the level order traversal. When you watch the animation, notice that unlike the depth-first traversals, this traversal does not follow the branches of the tree. To implement a level-order traversal, we need a first-in first-out queue--not a stack.
All binary tree traversals, regardless of the order in which they visit the nodes, are linear with respect to the number of nodes in the tree.
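The traversals described above can be sketched in code; the comments mark which of the three visits extracts the value, and the level-order routine uses the first-in first-out queue the page calls for:

```python
from collections import deque

class Node:
    def __init__(self, value, left=None, right=None):
        self.value, self.left, self.right = value, left, right

def preorder(node, out):            # extract on the FIRST visit
    if node:
        out.append(node.value)
        preorder(node.left, out)
        preorder(node.right, out)

def inorder(node, out):             # extract on the SECOND visit
    if node:
        inorder(node.left, out)
        out.append(node.value)
        inorder(node.right, out)

def postorder(node, out):           # extract on the THIRD visit
    if node:
        postorder(node.left, out)
        postorder(node.right, out)
        out.append(node.value)

def level_order(root):              # breadth-first: FIFO queue, not a stack
    out, queue = [], deque([root] if root else [])
    while queue:
        node = queue.popleft()
        out.append(node.value)
        if node.left:
            queue.append(node.left)
        if node.right:
            queue.append(node.right)
    return out

root = Node(4, Node(2, Node(1), Node(3)), Node(6))
for name, fn in (("preorder", preorder), ("inorder", inorder), ("postorder", postorder)):
    out = []
    fn(root, out)
    print(name, out)
print("level order", level_order(root))
```

For this sample tree (root 4 with children 2 and 6, and 2 with children 1 and 3), the four orders come out as 4 2 1 3 6, 1 2 3 4 6, 1 3 2 6 4, and 4 2 6 1 3.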
recoil velocity

I'm not much good at math, and I'm making this up on the spot, but I would assume that one could find what fraction of the gun's mass the projectile has (M[p]/M[g]), and the gun's rearward acceleration will be that same fraction of the projectile's forward acceleration. So if M[p]/M[g] = n, then A[p] x n should = A[g].
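That reasoning matches conservation of momentum: gun and projectile feel equal and opposite forces, so M[g] x A[g] = M[p] x A[p], and the same mass ratio carries over to the final speeds. A quick sketch (the masses and muzzle velocity below are made-up example numbers):

```python
def recoil_speed(m_projectile, m_gun, v_projectile):
    """Conservation of momentum (magnitudes): m_g * v_g = m_p * v_p,
    so the gun's speed is the mass-ratio fraction of the projectile's."""
    return (m_projectile / m_gun) * v_projectile

# e.g. a 10 g bullet leaving a 4 kg rifle at 400 m/s:
print(recoil_speed(0.010, 4.0, 400.0), "m/s")  # 1.0 m/s
```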
From Arithmetic to Algebra

Increasingly, algebra is the focus of mathematics discussions in schools and districts across the United States. Policymakers, professional organizations, and researchers emphasize the importance of developing algebraic reasoning at increasingly earlier ages. The National Mathematics Advisory Panel (2007) has issued initial reports stating that students need to develop understanding of concepts, problem-solving skills, and computational skills related to algebra in grades preK–8. In 2006, the National Council of Teachers of Mathematics published the Curriculum Focal Points for Prekindergarten through Grade 8 Mathematics, which emphasizes connections to algebra as early as kindergarten and promotes the development of algebraic reasoning across the elementary and middle school grades. Finally, mathematicians and mathematics educators are speaking up about the need to increase teachers' awareness and abilities for teaching algebra across the grades (Wu, 1999). Multiple factors are driving the increased emphasis on algebra proficiency. For many educators, the primary concern is the poor performance of U.S. students on national and international assessments of mathematics ability. On the 2005 National Assessment of Educational Progress (NAEP), only 6.9 percent of 17-year-olds scored at or above proficiency on multistep problem solving and algebra (National Center for Education Statistics, 2005). On the algebra subtest of the 2003 Trends in International Mathematics and Science Study (TIMSS), U.S. 8th graders scored below many economic competitors, such as Japan, the Russian Federation, Korea, Singapore, and China. These results suggest that a majority of U.S. students are not proficient in algebra by the time they exit middle school or high school. Although the academic performance of U.S.
youth as a whole is important, No Child Left Behind (NCLB) emphasizes the need to monitor the progress of subgroup populations that have traditionally performed below expectations. On the 2005 NAEP, 59 percent of black students, 50 percent of Hispanic students, and 45 percent of American Indian students did not meet proficiency at the 8th grade level. Similarly, 69 percent of students with disabilities and 71 percent of English language learners did not reach this benchmark (National Center for Education Statistics, 2005). These results highlight the crucial need to develop algebraic thinking across the grades and focus on providing the best instructional practices for all students. Why Algebra? Employers often expect their employees to translate work-related problems into general mathematical models, from calculating discounts for merchandise to operating technology-based equipment and machinery. Many careers in the fields of science and technology demand high levels of mathematics competence to solve complex problems, such as chemical equations involved in the study of drug interactions. Algebra is also helpful in daily life, from applying formulas for calculating miles per gallon of gasoline to using functions to determine the profit of a business venture. Research suggests that students who pass Algebra II in high school are 4.15 times more likely to graduate from college than other students are (Adelman, 1999). This has led many state education agencies to raise graduation requirements to include courses in Algebra II. Currently, 13 states require students to take Algebra II to graduate from high school, up from just two states in 2005 (Achieve, 2007). Many states and school districts are considering implementing higher mathematics standards to promote college readiness and future success for their graduates. What Is Algebra? 
When we think about algebra in the curriculum, we often think of a separate area of mathematics concerned with symbols and equations, such as 3x + 7y - 2 = 30. Mathematics curriculums often reinforce the notion of separateness by identifying algebra as a distinct strand with such subtopics as patterning, data analysis, simple functions, and coordinate systems. However, arithmetic and algebra are not mutually exclusive areas of mathematical study. Basic algebra, as opposed to modern or abstract algebra, extends learners' understanding of arithmetic and enables them to express arithmetical understandings as generalizations using variable notation. Much of the difficulty that students encounter in the transition from arithmetic to algebra stems from their early learning and understanding of arithmetic. Too often, students learn about the whole-number system and the operations that govern that system as a set of procedures to solve addition, subtraction, multiplication, and division problems. Teachers may introduce number properties as “truths” or axioms without developing students' deep conceptual understanding or providing multiple experiences applying these properties. When teachers introduce integers and rational numbers in later elementary grades, many of these “truths” about numbers and operations don't generalize to addition and subtraction of positive and negative numbers or multiplication and division of fractions. By the time algebra is introduced in middle school, many students view mathematical principles as subjective and arbitrary and rely on memorization in lieu of conceptual understanding. The National Council of Teachers of Mathematics has attempted to bridge the gap between arithmetic and algebra by embedding algebraic reasoning standards in elementary school mathematics. From grades 3 to 5, algebra is embedded with number and operations as one of the three main focal points; beginning in grade 6, algebra is the predominant topic. 
However, it is not always clear how to develop students' algebraic thinking as they learn about numbers, operations, properties of numbers, data display and analysis, and problem solving. Teachers need support in learning how to integrate these topics and provide rich and explicit instruction to their students in early algebraic thinking. Teaching Algebra for Transfer Teachers' understanding of mathematics influences the quality of their instruction. Many elementary school teachers have limited experience with mathematics and lack the knowledge and skills to teach mathematics effectively (Ball, Hill, & Bass, 2005). Moreover, most credentialing programs for elementary school teachers require minimal college-level mathematics courses despite calls for considerably more extensive requirements (Conference Board of the Mathematical Sciences, 2001). Aside from developing their content knowledge in mathematics, these teachers can benefit from some general instructional practices that can help them teach arithmetic for transfer to algebra. Whenever possible, teachers should model precisely what they want students to be able to do, using multiple examples that illustrate the range of problem types that students must solve on their own. Demonstration models should include careful verbal explanations that explicitly detail for students how to perform each step of the problem. As students develop expertise, teachers can make fewer verbal explanations and focus less on each individual step. Teachers often have difficulty modeling for students how to think about mathematics problems conceptually. Rather than initially using numeric symbols to solve a problem, teachers might use concrete objects or semi-concrete representations (such as pictures) to help represent the underlying concepts behind specific problems. Teachers will find that explaining the concept of 2/3 ÷ 1/3 is more complex than explaining how to use the “invert and multiply” algorithm. 
To develop deep conceptual understanding, teachers should draw on different types of examples that represent problems. For example, teachers can use concrete objects to visually represent that the problem 2/3 ÷ 1/3 = □ means the same thing as “how many 1/3s are there in 2/3?” Presenting the problem this way helps students understand what it means to divide any number by a fraction and “see” that in this example, there are “2 1/3s in 2/3.” However, using visual models to help students understand how to solve problems involving division by fractions breaks down quickly when the numerical values in the problem are not artificially constrained, such as in the problem 9/23 ÷ 11/15 = □. Without using the “invert and multiply” algorithm, this problem becomes difficult to solve. After students understand the meaning of division of fractions, instruction should focus on applying the algorithm in a step-by-step fashion. With clear verbal explanations and explicit modeling, students can understand why the algorithm works and what it means to divide by fractions. In addition to hearing teachers' verbal explanations, students should share their verbal explanations to further develop conceptual understanding. Here again, carefully chosen examples can provide a rich source of discussion as students explain why 2 × 54 = 2 × 50 + 2 × 4 (an application of the distributive property); why 72 - 6 ≠ 72 (an application of the identity property of subtraction); or why 5 + 2 = 2 + 5 (an application of the commutative property of addition). Students should be able to describe the properties of numbers in their own words—such as through telling a story or describing what is happening in a picture that has an obvious numerical focus—as well as in symbolic notation, and they should be able to apply these principles in multiple contexts. For example, young students might demonstrate the commutative property of addition by using concrete objects, such as groups of marbles. 
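The two division problems in this passage can be checked mechanically with exact rational arithmetic. A sketch using Python's fractions module, with the "invert and multiply" algorithm written out explicitly:

```python
from fractions import Fraction

def divide(a, b):
    """'Invert and multiply': a / b = a * (reciprocal of b)."""
    return a * Fraction(b.denominator, b.numerator)

print(divide(Fraction(2, 3), Fraction(1, 3)))     # 2  (two 1/3s fit in 2/3)
print(divide(Fraction(9, 23), Fraction(11, 15)))  # 135/253
```

The first result confirms the visual-model reading ("there are 2 1/3s in 2/3"); the second shows why the algorithm matters once the values are no longer artificially convenient.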
Students might explain the commutative property by showing that reordering the groups of marbles does not change the sum of the marbles when the groups are added together. Once they understand the concept, the teacher might ask the students to provide multiple representations of the commutative property using symbolic notation. Students also need to demonstrate their own understanding and skills. Teachers can gauge how well students solve problems in relatively straightforward ways. Students can work different types of problems and apply algorithms to solve them. Teachers can set proficiency goals for students and monitor student progress toward these goals. Algebra-Specific Instructional Strategies Algebraic reasoning builds on students' deep understanding of numbers and their relationships. Some mathematics researchers have identified areas of arithmetic that provide the foundations for algebra. These include • Numbers and number relationships (quantities and magnitudes). • Operations (functional relationships between numbers). • Field axioms or number properties (commutative, associative, distributive, identity, inverse, and so on). Other topics linked to algebra include geometry, data analysis, proportional reasoning, and measurement. These topics provide rich opportunities for developing early algebraic reasoning as students learn about functional relationships in these areas (Van de Walle, 2004). To develop algebraic reasoning, students must understand the following four key components (Milgram, 2005). Variables and Constants As students progress through elementary school, they learn about number systems—from counting, to whole numbers, to integers, to rationals, to real numbers. Studying number systems builds students' understanding that each new system is an extension of the previous system and that all number systems are embedded in the real-number system. As such, each system satisfies the basic rules of associativity, commutativity, and distributivity. 
As we introduce students to variables, a key insight for students to grasp is that algebraic expressions, in which variables replace real numbers, will also satisfy the properties with which they are familiar. For example, when teachers introduce the distributive property, they can extend instruction from the context of whole numbers and integers to expressions with variables. They can follow a discussion of the problem 6 × (2 + 9) = 6 × 2 + 6 × 9 with a discussion of 6 × (t + 9) = 6 × t + 6 × 9. Representing and Decomposing Word Problems Algebraically Key to abstract reasoning and using algebra to solve problems is using algebraic expressions to describe problems. For example, students who think in algebraic terms easily translate the phrase “if you add 3 to a number times itself” into n^2 + 3. Students need to apply this conversion of phrases to solve word problems. Teachers can help students master this skill by modeling and using language that identifies the “unknown” in a problem and then translates the process of finding the unknown into mathematical statements and equations. Consider the following word problem: Maria needs to find the weight of a box of cereal using a balancing scale. Maria puts 6 identical boxes of cereal on one side of the scale. To balance the scale, Maria puts 2 more identical boxes of the same cereal and 3 4-pound blocks on the other side of the scale. How much does each box of cereal weigh? Teachers can model how to solve this problem by first identifying the unknown component (the weight of each box of cereal, labeled y) and the known components (the number of boxes of cereal and the weight and number of the blocks). Next, teachers can help students understand how to translate these elements into a mathematical statement to solve for the unknown (6y = 2y + 12). Students can check answers by inserting various numerical values into equations to verify solutions. 
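For reference, the modeled solution can be written out step by step; each move subtracts the same quantity from both sides, which keeps the balance scale level:

```latex
\begin{aligned}
6y &= 2y + 12\\
6y - 2y &= 2y + 12 - 2y\\
4y &= 12\\
y &= 3
\end{aligned}
```

Checking the answer in the original problem: six 3-pound boxes weigh 18 pounds, and two 3-pound boxes plus three 4-pound blocks weigh 6 + 12 = 18 pounds, so the scale balances.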
This last step is about more than just getting the correct answer; it is an important step in problem solving because it encourages students to reflect on the original problem and determine whether the answer is reasonable. For many students, improving skills at translating or converting problems to algebraic expressions will pose challenges. Students need to learn to break the problem into separate parts and then convert each part to an expression or equation that acknowledges the restrictions that the problem places on it (for example, the phrase “times itself”). Students will also need to recognize when a problem contains irrelevant information. Symbol Manipulation Many adults associate symbol manipulation with algebra because their memories of basic algebra are with the struggles of moving abstract symbols about the page “to solve for x.” Although isolating the variable is still the goal for symbol manipulation, students need to understand that manipulating symbols in an equation merely simplifies the equation in a manner that enables us to get the answer we are seeking. Lawful manipulation of the symbols results in an equation that has the same solutions as the original equation. Related to this topic is a common misconception about the equality rule and the equal sign. Many students in the early grades view a number sentence or mathematical formula as something “to do,” most often with input on the left and output on the right. Consider the number sentence 5 + 3 = □. Students interpret this as adding the quantities 5 and 3 to find the specific answer of 8. 
Students may not view the following as possible solutions to the same problem:
5 + 3 = 3 + 5
5 + x = 8
8 = 5 + 3
5 + 3 = 2 + 6
Teaching equality and the meaning of the equal sign as a symbol that indicates both sides are balanced (as symbolized, for example, by a balance scale) provides opportunities for students to see equations as more than something to act on or a problem for which they must seek a single solution. Encouraging students to generate multiple solutions to 5 + 3 prepares them for working with variables, understanding and applying the commutative property and the inverse property of addition and subtraction.
Functions
Students should begin to learn elements of functions early in their school careers. Teachers need to strategically teach students to build patterns in which each input has only one output. Milgram (2005) provides an example of how kindergarten teachers can help their students understand simple functions. By sorting and classifying objects on the basis of unique properties, students can understand the association between objects in one set and unique objects (or features of the object) in another set. For example, students can sort objects by color. If each object has a specific color, the object is the input and the color is the output. Sorting the objects by color is an example of a function. As students progress in their understanding, teachers can explicitly model symbolic representations of functions. Later students will learn to graph the Cartesian coordinates of the members of the input and output sets (domain and range). Next, they'll develop an understanding of how the domain and range represent a "rule of correspondence" that can be described using function notation, a convention in mathematics. Ultimately, these early insights into functions assist learners in understanding linear algebra and, later, curvilinear and quadratic functions and the role they play in mathematical relationships.
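The sorting activity can be mirrored in a few lines of code (a sketch; the particular objects and colors are invented for illustration). The mapping below is a function precisely because each input object has exactly one output color:

```python
# Each input (object) has exactly one output (its color):
# this single-valuedness is what makes the pairing a function.
color_of = {"apple": "red", "banana": "yellow", "leaf": "green"}

def sort_by_color(objects):
    """Group objects by the output of the color function."""
    groups = {}
    for obj in objects:
        groups.setdefault(color_of[obj], []).append(obj)
    return groups

print(sort_by_color(["apple", "leaf", "banana"]))
```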
Finally, to help students develop algebraic reasoning in problem solving, students must develop a degree of certainty about the properties of number systems that allow us to manipulate and operate on numbers. Teachers can build this certainty in students by teaching the process of mathematical induction so students understand that their actions must be verifiable mathematically to be lawful and useful (Milgram, 2005). Teachers often teach mathematical induction as a procedure without sufficiently understanding why induction is so crucial for students' cognitive development in mathematics.
Starting Early
Because the goal of teaching algebra is to help students develop abstract reasoning in problem solving, schools should begin to develop these skills in students at the elementary level. By systematically and explicitly incorporating concepts of algebra in elementary school mathematics, schools can help students avoid developing many misconceptions about number and number relationships, operations, and application of number properties. Teaching mathematics in the elementary grades to transfer to algebraic concepts may promote success for all students engaged in mathematics.
My "Aha!" Moment
Keith Devlin, Professor of Mathematics at Stanford University, Palo Alto, California. "The Math Guy" on National Public Radio. Mathematics suddenly interested me when I encountered calculus at age 16. Before then, I never saw much point in the subject beyond basic arithmetic, and looking back I now realize why. Other than basic number skills and a bit of trigonometry, no subject generally taught before calculus shows how mathematics makes a difference in the world. Logical thinking is important in earlier math classes, but not mathematical thinking. The enormous power of mathematics—and its beauty—lies in the vast range of the subject beyond high school mathematics.
The mathematics taught in school is what I call abstracted math—and it really amounts to little more than formalized common sense. You can call it math, but it really isn't. What our modern world depends on—big time—is what I call constructed math. This is the rule-based, abstract reasoning system that forms the basis of all science and engineering, and a lot else besides. It isn't really abstracted from the world; rather, we humans create it to apply to the world. By and large, this kind of mathematics cannot be learned before the upper levels of high school; it requires too much mental sophistication. But there is no reason why we can't teach such mathematics descriptively, where the goal is awareness and understanding, not the ability to do it. I am sure that if I had been taught that way, I would have been interested in math long before I was. Achieve. (2007). Closing the expectations gap 2007. Washington, DC: Achieve. Adelman, C. (1999). Answers in the tool box: Academic intensity, attendance patterns, and bachelor's degree attainment. Washington, DC: U.S. Department of Education. Ball, D. L., Hill, H. C., & Bass, H. (2005, Fall). Knowing mathematics for teaching: Who knows mathematics well enough to teach third grade, and how can we decide? American Educator, 14–17, 20–22, Conference Board of the Mathematical Sciences. (2001). The mathematical education of teachers (Vol. 2). Washington, DC: American Mathematical Society. Milgram, R. J. (2005). The mathematics preservice teachers need to know. Stanford, CA: Stanford University. National Center for Education Statistics. (2005). The nation's report card: Mathematics 2005. U.S. Department of Education. Available: http://nces.ed.gov/pubsearch/pubsinfo.asp?pubid=2006453 National Council of Teachers of Mathematics. (2006). Curriculum focal points for prekindergarten through grade 8 mathematics. Reston, VA: Author. Available: www.nctm.org/standards/content.aspx?id=270 National Mathematics Advisory Panel. (2007). 
Conceptual Knowledge and Skills Task Group progress report. New Orleans, LA: U.S. Department of Education. Van de Walle, J. A. (2004). Elementary and middle school mathematics: Teaching developmentally (5th ed). Boston: Allyn and Bacon. Wu, H. (1999). Basic skills versus conceptual understanding. American Educator, 23(3), 14–19, 50–52. Leanne R. Ketterlin-Geller (lketterl@uoregon.edu) is Assistant Professor and Kathleen Jungjohann (kjj@uoregon.edu) is Senior Instructor and Research Assistant at the University of Oregon, Eugene. David J. Chard (dchard@smu.edu) is Dean of the School of Education and Human Development at Southern Methodist University, Dallas, Texas. Scott Baker (sbaker@uoregon.edu) is Director of Pacific Institutes for Research, Eugene, Oregon.
Does a pointed homotopy equivalence between pointed $G$-spaces which is $G$-equivariant induce a (weak) homotopy equivalence on pointed Borel constructions?

Let $G$ be a topological group and let $X$ and $Y$ be connected, well-pointed $G$-spaces. Suppose $f:X\to Y$ is a pointed homotopy equivalence and a $G$-equivariant map (but not an equivariant homotopy equivalence). I know that $f$ induces a (weak) homotopy equivalence on the Borel constructions, $EG\times_G X\to EG\times_G Y$, but what about the induced map on the pointed Borel constructions, $EG_+\wedge_G X\to EG_+\wedge_G Y$? Is it a homotopy equivalence too? As far as I can see it is a homology equivalence and a stable homotopy equivalence, but I would like a stronger statement.

Answer (accepted): In the pointed Borel construction, you clearly mean $\wedge$ and not $\times$. Thus $$EG_+\wedge_G X = EG\times_G X/EG\times_G\ast.$$ Out of laziness, I'll assume that your $X$ and $Y$ are of the $G$-homotopy types of $G$-CW complexes. The based $G$-map $id\times f\colon EG\times X\longrightarrow EG\times Y$ is a homotopy equivalence on passage to $H$-fixed points for all $H \subset G$: the condition is empty unless $H=e$, when it is your hypothesis. Therefore $id\times f$ is a $G$-homotopy equivalence. Via the inclusions of $EG$ in source and target given by the basepoints of $X$ and $Y$, $id\times f$ is a map over $EG$ and therefore a $G$-homotopy equivalence over $EG$, since the inclusions of $EG$ in source and target are $G$-cofibrations by your well-pointed hypothesis. On passage to orbits over $G$ and quotient spaces, it follows that $$id\wedge_G f\colon EG_+\wedge_G X \longrightarrow EG_+\wedge_G Y$$ is a based homotopy equivalence.

Thanks, I did mean $\wedge$, I have updated the question. What if the spaces are not of the $G$-homotopy types of $G$-CW complexes?
In particular I am thinking of configuration spaces and the scanning map - I have seen this claimed in a paper by Bödigheimer and Madsen when $G$ is a compact Lie group. – Richard Manthorpe Aug 21 '12 at 16:04

$G$-CW isn't too restrictive (includes all smooth $G$-manifolds when $G$ is compact Lie). But here is an argument for weak equivalence without that assumption. A comparison of the Borel bundles for $X$ and $Y$ ($X\to EG\times_G X \to BG$) shows that the map of Borel constructions is a weak equivalence. The reduced Borel construction is the pushout of the maps $BG \to \ast$ and $BG \to EG\times_G X$. The latter map is a cofibration, so the gluing lemma for weak equivalences gives that the map of reduced Borel constructions is a weak equivalence. – Peter May Aug 21 '12 at 17:24

Thank you, that is exactly what I was looking for. And I guess the result for the scanning map is indeed for $G$-CW complexes. – Richard Manthorpe Aug 21 '12 at 17:41
distribution function
June 14th 2012, 10:04 PM

If x and y are independent random variables such that f(x) = e^-x for x >= 0 and g(y) = 3e^-3y for y >= 0, find the probability distribution function of z = x/y. How can this be taken forward?
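One way to take this forward (a sketch, not from the original thread): condition on y. For t >= 0, P(z <= t) = E[1 - e^(-tY)] = 1 - 3/(3 + t), so the density of z is 3/(3 + t)^2. A quick Monte Carlo check of that CDF:

```python
import random

# Sanity-check (by simulation) the claimed CDF for z = x/y with
# x ~ density e^-x and y ~ density 3e^-3y, both on [0, inf):
#   P(z <= t) = 1 - 3/(3 + t)   (derived by conditioning on y)
random.seed(0)
n = 200_000
t = 1.0
hits = 0
for _ in range(n):
    x = random.expovariate(1.0)   # rate 1 -> density e^-x
    y = random.expovariate(3.0)   # rate 3 -> density 3e^-3y
    if x / y <= t:
        hits += 1
empirical = hits / n
theoretical = 1.0 - 3.0 / (3.0 + t)   # = 0.25 at t = 1
```

With 200,000 samples the empirical frequency should sit within about a percentage point of the closed-form value.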
{"url":"http://mathhelpforum.com/advanced-statistics/200043-distribution-function.html","timestamp":"2014-04-18T09:29:46Z","content_type":null,"content_length":"29234","record_id":"<urn:uuid:e1f5cfc4-82e9-4c8f-8982-8e1ee3bb6b93>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00034-ip-10-147-4-33.ec2.internal.warc.gz"}
SINGLE-PHASE AND POLYPHASE INDUCTION MOTOR PERFORMANCE CALCULATIONS (Electric Motors)

The following procedures provide methods of calculating the performance of single-phase induction motors. Three types of single-phase motors are discussed. They are the split-phase, capacitor-start, and permanent-split-capacitor (PSC) motors. The general procedure for calculating these types of motors is identical. Differences lie in the starting conditions on the split-phase and capacitor-start motors. The PSC motor has additional performance calculations that have to be taken into account.

A single-phase motor by itself has a field that pulsates north-south, south-north through the rotor but does not revolve. The rotor bars are not cut by any flux lines. No voltage or current is induced, and the motor will not start. Auxiliary means are necessary to start the motor. This can be accomplished by adding an auxiliary winding in space quadrature and by varying reactance by varying turns and resistance or by adding capacitance or inductance.

The split-phase motor contains an auxiliary winding that is displaced in space by 90° electrical and connected in parallel with the main winding. Typically, the auxiliary winding has 60 to 70 percent of the number of turns of the main winding and is approximately three gauges smaller in wire size. Once the motor reaches 75 to 80 percent of synchronous speed, the auxiliary winding is switched out and the motor runs on the main winding only.

The capacitor-start motor is wired in the same manner as the split-phase motor and also contains a switch that cuts out the auxiliary winding after starting is achieved; however, a capacitor is placed in series with the auxiliary winding. The capacitor creates a displacement of the currents in the two windings by 90° in time. This displacement produces the effect of a rotating field and thus a starting torque. The capacitor-start motor produces a greater locked rotor torque than its split-phase counterpart.
The locked rotor torque is proportional to the product of three main factors: (1) the sine of the phase displacement angle between the currents in the two windings, (2) the product of the main-winding current multiplied by the auxiliary-winding current, and (3) the number of turns in the auxiliary winding. All three of these factors are more favorable in the capacitor-start motor.

In a PSC motor, the auxiliary winding and capacitor are used continuously. No starting switch or relay is needed. These motors are generally used only in special-duty applications. A PSC motor is well suited for fan applications because of its ability to vary its speed. The continuously running capacitor improves performance by creating magnetic field conditions similar to those of a polyphase motor. These balanced field conditions can be obtained at only one load point. Generally, the designer will balance the motor at full load.

The following calculation procedure for the single-phase motor is based on the cross-field theory by P. H. Trickey. Figure 6.26 shows a single-phase motor with a cross-field flux. At standstill, the pulsating stator field induces a rotor current that induces an in-phase component of rotor flux. When the rotor is moving, according to Fleming's law, a current is induced which produces a quadrature rotor flux φ. The in-phase rotor current interacting with φ produces useful motor torque. At standstill there is no cross-field flux, and therefore no torque. Both the revolving-field (as discussed in Sec. 6.2) and the cross-field theories demonstrate that single-phase motors will not start by themselves. Auxiliary methods must be employed. This section covers several of them.

Calculations can begin by solving for the rotor and stator geometrical constants, plus the winding factors for the motor. Next, the slot, zigzag, end, and belt leakage reactance can be calculated and summed.
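As a rough numerical illustration of the three-factor locked-rotor-torque proportionality described above (the currents, turns, and displacement angles below are hypothetical values chosen for illustration, not data from this text):

```python
import math

def relative_locked_rotor_torque(i_main, i_aux, n_aux, displacement_deg):
    # Locked rotor torque is proportional to
    #   I_main * I_aux * N_aux * sin(phase displacement angle)
    return i_main * i_aux * n_aux * math.sin(math.radians(displacement_deg))

# Hypothetical comparison: a capacitor-start design pushes the current
# displacement toward 90 degrees, where the sine peaks; a split-phase design
# achieves a much smaller displacement.
split_phase = relative_locked_rotor_torque(8.0, 4.0, 80, 30.0)
cap_start = relative_locked_rotor_torque(8.0, 5.0, 80, 85.0)
```

With these made-up numbers the capacitor-start figure comes out roughly 2.5 times the split-phase one, consistent with the statement that all three factors favor the capacitor-start design.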
The open-circuit reactance can then be calculated, along with the flux, mmf drops, and saturation factors. Primary and secondary resistance can be calculated using properties of the wire for the primary resistance and using dimensions and properties of the rotor bar material for the secondary resistance. Calculation of the iron loss and an educated estimation of the friction and windage loss will round out the preliminary calculations. Once all of the preceding values of the motor have been calculated, one can begin the performance calculation procedure. Magnetizing, secondary, and secondary cross-field current can be calculated. The overall performance of the motor, such as its torque, efficiency, and power factor, can be evaluated. As previously discussed, the single-phase motor inherently has zero starting torque. Calculation procedures are given for evaluating the starting conditions for the split-phase and the capacitor-start motor at the end of the performance calculations. FIGURE 6.26 Single-phase motor with cross-field flux (main pole). Polyphase induction motor calculations follow in a manner similar to that of the single-phase routine. As with the single-phase motor, once all of the motor constants and reactance values are calculated, the performance of the motor can be evaluated. The calculation procedure for PSC motors will give performance results at any load. The procedure outlined for calculating the performance of a PSC motor is similar in nature to that for a two-phase motor. First, calculate the reactances and resistances of the motor, just as for the polyphase motor. Next, calculate a series of motor constants based on the reactances and resistances. The currents in the motor can be solved from these constants. Finally, the performance of the motor can be evaluated. The section later discusses how to proportion the windings and the value of capacitance to obtain near-two-phase motor performance at one desired load point.
{"url":"http://what-when-how.com/electric-motors/single-phase-and-polyphase-induction-motor-performance-calculations-electric-motors/","timestamp":"2014-04-20T16:01:34Z","content_type":null,"content_length":"16255","record_id":"<urn:uuid:7740c3d0-f57b-4e14-98c1-9fbbb6717355>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00424-ip-10-147-4-33.ec2.internal.warc.gz"}
A morbid Python script Comics #493 and #893 involve actuarial tables, which are tables for calculating the probability that someone of a given age will die within a given amount of time. One evening, when I was feeling morbid, I wrote a Python script to calculate death probabilities for any collection of people: actuary.py (.txt). It takes a list of ages and genders and produces various statistics. Here’s the report for the nine living people who have walked on the moon: ~$ python actuary.py 81m 82m 80m 81m 80m 81m 76m 78m 77m There is a 5% chance of someone dying within 0.08 years (by 2012). There is a 50% chance of someone dying within 1.1 years (by 2013). There is a 95% chance of someone dying within 4.08 years (by 2016). There is a 5% chance of everyone dying within 10.78 years (by 2023). There is a 50% chance of everyone dying within 16.12 years (by 2028). There is a 95% chance of everyone dying within 22.57 years (by 2035). Probability of all dying in 1.0 year: <0.001% Probability of a death within 1.0 year: 46.32% And here’s the table for four of the main stars of the original Star Wars (Harrison Ford, Carrie Fisher, Mark Hammill, James Earl Jones): ~$ python actuary.py 69m 55f 60m 81m 10 There is a 5% chance of someone dying within 0.42 years (by 2012). There is a 50% chance of someone dying within 4.74 years (by 2017). There is a 95% chance of someone dying within 12.83 years (by 2025). There is a 5% chance of everyone dying within 18.17 years (by 2030). There is a 50% chance of everyone dying within 31.28 years (by 2043). There is a 95% chance of everyone dying within 42.62 years (by 2055). Probability of all dying in 10.0 years: 0.272% Probability of a death within 10.0 years: 85.94% Of course, these are based on average death rates based only on age and gender. Adding more specific information about the people in question will refine the calculation. 
For example, I’d guess former astronauts are more likely to be in good health—and have longer life expectancies—than the rest of us. 231 thoughts on “A morbid Python script” 1. #1 both link refere to xkcd#493 #2 now where did i put my death note? 2. We have 10-15 years to put someone on Mars, is how I read that 3. Don’t forget about radiation exposure. And Space Madness. 4. Python question: why are you always doing stuff like: float_value = 1*float_num 5. Somebody needs to build an actuarial table application for facebook-friends. 6. Here are the numbers for replacements to the Supreme Court of the United States during the next Presidential term of office: bash$ python actuary.py $(($Y-1955))m $(($Y-1936))m $(($Y-1936))m $(($Y-1948))m $(($Y-1933))f $(($Y-1938))m $(($Y-1950))m $(($Y-1954))f 4 There is a 5% chance of someone dying within 0.2 years (by 2012). There is a 50% chance of someone dying within 2.56 years (by 2015). There is a 95% chance of someone dying within 8.2 years (by 2020). There is a 5% chance of everyone dying within 20.88 years (by 2033). There is a 50% chance of everyone dying within 30.66 years (by 2043). There is a 95% chance of everyone dying within 39.86 years (by 2052). Probability of all dying in 4.0 years: <0.001% Probability of a death within 4.0 years: 68.51% 7. 
Here are comparable numbers for the current members of the United States House of Representatives: bash$ python actuary.py 78.1848049281314m 65.5496235455168m 55.1403148528405f 50.2861054072553m 75.8685831622177m 70.2149212867899m 61.5879534565366m 49.5030800821355m 79.0554414784394f 71.6659822039699f 61.9849418206708m 47.6194387405886m 70.3791923340178m 66.4093086926762m 65.4674880219028m 48.8405201916496m 69.7850787132101m 41.1252566735113m 68.6707734428474m 67.5373032169747m 87.8439425051335m 87.8329911019849m 69.1937029431896m 61.1471594798084m 67.6386036960986m 52.8240930869268m 80.2710472279261m 69.1581108829569m 78.8172484599589m 72.6461327857632m 76.227241615332m 58.1218343600274m 70.3901437371663m 49.5112936344969m 56.6351813826146f 51.192334017796m 65.388090349076f 59.5947980835045f 75.9780971937029f 68.7693360711841m 68.5859000684463m 52.8323066392882m 78.0396988364134m 62.2039698836413f 61.1444216290212m 52.1314168377823f 74.5954825462012m 61.0212183436003m 62.5023956194387m 58.9678302532512f 70.5845311430527m 55.8904859685147m 71.1540041067762m 62.0670773442847m 72.6105407255305m 52.1724845995893m 65.4537987679671f 44.041067761807f 88.4681724845996m 58.5270362765229m 68.7748117727584m 64.1505817932923m 61.6344969199179m 45.5906913073238f 59.129363449692f 56.6160164271047m 64.3340177960301m 55.3319644079398m 59.6714579055441m 56.5639972621492m 77.6509240246407m 64.3285420944559m 63.192334017796m 55.7152635181383m 52.2464065708419m 50.6502395619439m 62.6639288158795m 56.7282683093771m 60.8596851471595m 57.0102669404517m 65.5386721423682m 51.5099247091034m 72.0246406570842m 59.8822724161533m 68.974674880219f 60.4407939767283m 78.3080082135524m 41.1060917180014m 72.2819986310746m 70.8418891170431m 66.4202600958248m 57.574264202601m 61.7522245037645f 53.7467488021903f 75.066392881588m 64.8843258042437m 77.4264202600958m 57.2621492128679m 68.4435318275154m 59.9753593429158m 2 There is a 5% chance of someone dying within 0.02 years (by 2012). 
There is a 50% chance of someone dying within 0.28 years (by 2012). There is a 95% chance of someone dying within 1.22 years (by 2013). There is a 5% chance of everyone dying within 42.82 years (by 2055). There is a 50% chance of everyone dying within 49.16 years (by 2061). There is a 95% chance of everyone dying within 56.86 years (by 2069). Probability of all dying in 2.0 years: <0.001% Probability of a death within 2.0 years: 99.36% 8. Correction: I meant to say Senate. I’ll try the House next…. 9. I’m an actuary and curious what table you are using. You can get some pretty widely disparate results using different tables that are designed for different population subsets… 10. There is a .059% chance that neither my wife nor I will see our daughter turn 18, and a 5.363% chance that neither of us will. Thanks, Randall. python actuary.py 30.5m 25.75f 16.75 There is a 5% chance of someone dying within 16.1 years (by 2028). There is a 50% chance of someone dying within 44.74 years (by 2057). There is a 95% chance of someone dying within 60.33 years (by 2072). There is a 5% chance of everyone dying within 42.37 years (by 2054). There is a 50% chance of everyone dying within 59.95 years (by 2072). There is a 95% chance of everyone dying within 71.71 years (by 2084). Probability of all dying in 16.75 years: 0.059% Probability of a death within 16.75 years: 5.363% 11. I meant a 5.363% chance that one of us won’t. 12. 
As promised here is the data for the actual US House of Representatives bash$ python actuary.py 53m 36f 54m 47m 58m 65m 47f 79m 54m 55m 36m 69m 50m 50m 64m 46m 44m 55m 51m 61m 67m 66m 56m 68f 75f 67m 72f 66f 67m 61m 62f 81m 70f 71m 65f 71m 53m 45m 60m 39m 47m 74f 68m 74m 60m 58m 71m 52m 73m 54m 59f 59f 71f 74f 60f 50f 76f 43f 61m 78m 64m 65m 59m 51f 65m 52f 57m 59m 61m 70m 36m 68f 55f 37m 56m 38m 58m 57m 59m 64m 59m 69f 46m 39m 56m 53m 47m 66f 68m 61m 71m 69m 63m 49m 82m 46f 53m 61m 45f 65m 42m 70f 60f 46m 46f 51m 51m 76m 56f 47m 57m 65m 62f 58m 72m 58m 42m 43m 42m 66m 70m 57m 66m 61f 65f 45m 62m 66m 47m 46m 59m 54m 51m 71m 51m 68f 43m 34m 63m 75f 46m 66m 68m 48m 31m 54m 63m 57m 36m 42m 74m 53m 38m 50m 40m 55m 60m 78m 64m 63m 44m 49f 36m 49m 69m 48m 65m 54m 75m 53m 47m 39m 42m 61m 66m 55m 56m 57m 57m 55m 66m 50m 54f 73m 86m 61m 53m 76m 63m 53m 72m 66m 61m 66m 60m 57m 60m 60m 43m 32m 59m 83m 59m 61m 49m 54m 58f 47m 81m 55m 83m 86m 48m 65m 47m 58f 49m 56f 68m 53m 54m 64m 56m 42m 56m 65m 54m 52m 68m 49m 57m 62f 52m 57m 52m 50m 42m 61f 54m 51m 42m 60m 55m 66m 39m 59m 53m 61m 60m 75m 60m 66m 64m 61m 41m 65m 40m 62m 54m 68m 68f 70m 59m 50m 65m 71m 78m 48f 59f 42m 64f 82m 69m 65m 75f 53f 48m 63m 74f 63m 61m 61f 54f 53m 83f 41m 65m 48f 69m 72m 69f 81m 56m 61m 71f 37m 41m 67m 59m 53m 59m 61f 52m 48m 56m 58m 54m 63m 66f 66m 60f 50m 49f 58m 47m 54m 39m 58m 47m 39m 52m 63m 44m 58f 55m 64m 65m 61m 67m 56m 64m 44m 53m 57m 57m 49m 51m 60m 56m 50m 64f 59m 52m 73m 55m 60m 50m 51m 48m 47m 65m 46m 48m 45m 72m 41f 67m 65m 50m 48m 58m 61f 60f 39m 63m 59m 64m 82m 89m 55m 63m 56m 57m 65m 50m 64m 69f 54m 77m 72m 68m 58m 62f 63m 67m 65m 50m 63m 61m 66m 62m 51m 57m 65m 77f 71m 57m 61m 52m 45m 65m 53m 52m 65m 60m 43m 60m 49m 67m 54m 73m 62m 47m 34f 71m 43f 72m 76m 62m 47m 65m 59f 63m 42m 50f 49m 61f 69m 72m 41m 56m 58f 1 There is a 5% chance of someone dying within 0.0 years (by 2012). There is a 50% chance of someone dying within 0.1 years (by 2012). 
There is a 95% chance of someone dying within 0.46 years (by 2012). There is a 5% chance of everyone dying within 55.6 years (by 2068). There is a 50% chance of everyone dying within 60.55 years (by 2073). There is a 95% chance of everyone dying within 67.12 years (by 2079). Probability of all dying in 1.0 year: <0.001% Probability of a death within 1.0 year: 99.84% 13. You should also mention that both former astronauts and wealthy film stars probably have better health care than the average schmuck. 14. Here are statistics on the 4 living former Presidents (George H W Bush, George W Bush, Bill Clinton, Jimmy Carter): bash$ python actuary.py 87m 88m 65m 66m There is a 5% chance of someone dying within 0.14 years (by 2012). There is a 50% chance of someone dying within 1.89 years (by 2014). There is a 95% chance of someone dying within 6.35 years (by 2018). There is a 5% chance of everyone dying within 9.8 years (by 2022). There is a 50% chance of everyone dying within 21.53 years (by 2034). There is a 95% chance of everyone dying within 31.48 years (by 2044). Probability of all dying in 1.0 year: <0.001% Probability of a death within 1.0 year: 29.39% 15. You haven’t taken into account the time dilation effect of relativity that they experienced while “astronauting”. (I think that should be a verb. I’m sure @neiltyson would agree) 16. Wouldn’t get away with those tables as a UK pensions actuary! They have to use 2D tables, so the probability of an 80 year old dying in 2032 is less than the probability of an 80 year old dying in 2012. The 1D tables are expected to overestimate the probability of death more as you get further into the future. But I’ve moved into health insurance now, so I’m not bothered. (Well, trying not to be. Us actuaries are a picky bunch!) 17. Love it. (I’m an actuary.) I’m also idly curious about which mortality table was used. 
For the curious, the Society of Actuaries publishes many tables for free here: http://soa.org/ For those running this script on your family, note that it doesn’t account for systemic risk (the shared risk of death that you are exposed to by driving in the same car, catching the same diseases, etc.) As a result it understates “Probability of all dying” and overstates “Probability of a death.” Also, as Jon Ahearn mentioned, I don’t think it takes into account the mortality improvement that we expect to see in the future (it would need to be a 2D table with attained age on one axis and years-from-table-publication on the other). This isn’t intended as a criticism though… I think it’s a very useful first approximation and I like the idea of people becoming more familiar with the dynamics of mortality. 18. I looked up the actuary tables for the Netherlands ( http://www.kps.nl/media/rapport-prognosetafel-2010-2060-508.pdf (starting at page 25, and yes, they are 2D)). A 50 year male in 2012 has a death rate of 0.0026446 in the Netherlands, compared to 0.00573 in the script (I guess the US). For a 70-year old male it is 0.0182732 in the Netherlands and 0.02729 in the script. It gives the Dutch courage. 19. Anyone else familiar with the actuarial unit called a “micromort?” It represents a 1-in-a-million chance of dying. I think that’s excellent. 20. I’d never heard of a micromort, but I think the most disturbing thing about it is the implication that there might exist, for example, a kilomort. 21. Seems like a kilomort should be a 1 in a 1000 chance *not* to die. And that makes me wonder what unit you need to talk about the man who survived both Hiroshima’s and Nagasaki’s nuclear attacks. 22. What about expected number of deaths in n years? 23. > You can get some pretty widely disparate results using different tables that are designed for different population subsets… Indeed. Wikipedia “Simpson’s paradox”. There is no such thing as neutral stratification. 
Quite post-modern if you askk me. 24. Harrison Ford just aged to 70. So you may update that? Also when you are on it you could send this to his Email. I guess that it’s something nice to read for his birthday. 25. Yes, wealthy Carrie Fisher probably does have very good health care, but between all the cocaine she did, and the electroshock therapy, which seems to have caused massive amounts of memory loss– to the extent that she doesn’t really remember Star Wars (if her memoir Wishful Drinking* is to be believed), she’ll probably be in the lower percentiles. She’s abused herself quite a bit, and there’s only so much of that you can buy back with good doctors and sobering up. *worth reading 26. Interesting stats on SC, Senate, & House, but my hunch is they overstate the risk of death of those particular populations. 99.3% chance of a senator dying every two years? That doesn’t seem to fit with my memory. Could these folks be healthier than the average bloke? 27. I remember reading an interesting piece a while back on a lot of the negative health effects (largely having to do with muscle atrophy, if I recall) of going into space at all, so that kind of thing might be worth taking into consideration when considering the specific health concerns of astronauts. Also, re: Roger’s comment–I thought about this, and not being an actuary myself, I came to two conclusions anyway. 1. You don’t often hear about a representative dying in office; that’s true–but the script seems to be calculating the likelihood of any of the 435 (or whatever) members dying from here to x years in the future. So I’d think the reps who know they’re getting old and death is knocking are far more likely to retire or stop running, and their 95% likely death (out of office) isn’t widely talked about outside of their district. Or something along these lines. 2. You’re probably right. 99.84% chance of a death within a year? That does sound wrong. 
If the tables are based on the general population, isn’t that blatantly ignoring that the House is not made up of a deeply diverse sampling of people? But this is all just me thinking off the top of my head. 28. Data for Sweden. bothtables=[[0.00224, 0.0003, 0.00005, 0.00014, 0.00014, 0.00004, 0.00007, 0.00013, 0.00004, 0.00014, 0.00006, 0.00008, 0.00006, 0.00012, 0.00012, 0.00018, 0.00019, 0.0004, 0.00039, 0.00059, 0.00045, 0.0007, 0.00061, 0.00071, 0.00066, 0.00064, 0.00065, 0.00065, 0.00069, 0.00064, 0.00063, 0.00089, 0.00054, 0.00058, 0.00077, 0.0007, 0.00077, 0.00066, 0.00079, 0.00081, 0.00099, 0.0009, 0.00093, 0.00123, 0.00163, 0.0014, 0.00167, 0.00192, 0.00221, 0.00256, 0.00264, 0.00294, 0.00273, 0.00341, 0.00369, 0.00448, 0.00505, 0.00456, 0.00615, 0.00696, 0.00672, 0.00762, 0.00788, 0.00911, 0.01018, 0.0119, 0.01301, 0.01374, 0.01586, 0.01753, 0.01838, 0.0205, 0.02215, 0.02538, 0.02763, 0.03109, 0.03748, 0.04118, 0.04628, 0.05178, 0.05659, 0.06073, 0.07217, 0.08591, 0.09525, 0.1062, 0.11805, 0.13371, 0.14202, 0.16468, 0.1806, 0.20522, 0.22203, 0.23959, 0.25786, 0.27681, 0.29642, 0.31667, 0.33753, 0.35902, 0.38113, 0.40387, 0.42728, 0.45139, 0.47621, 0.5018, 0.52816, 0.55531, 0.58325, 0.61192, 0.64125, 0.67112, 0.70136],[0.00196, 0.00021, 0.00004, 0.00009, 0.00007, 0.00008, 0.00004, 0.00008, 0.00018, 0.00012, 0.00015, 0.00011, 0.00006, 0.00004, 0.00008, 0.00012, 0.00014, 0.00022, 0.0003, 0.0002, 0.00027, 0.00019, 0.00024, 0.00021, 0.00032, 0.00032, 0.00029, 0.00028, 0.00025, 0.00041, 0.00024, 0.00031, 0.00034, 0.00046, 0.00045, 0.00039, 0.00049, 0.00041, 0.00052, 0.00061, 0.00037, 0.00057, 0.0008, 0.00064, 0.00089, 0.00102, 0.00115, 0.00129, 0.00106, 0.0014, 0.002, 0.00172, 0.00168, 0.00262, 0.00257, 0.00323, 0.00305, 0.00332, 0.00349, 0.0036, 0.00489, 0.00466, 0.00511, 0.00584, 0.00706, 0.00711, 0.00768, 0.00852, 0.00894, 0.01086, 0.01154, 0.01381, 0.01554, 0.01569, 0.01785, 0.02038, 0.02313, 0.02432, 0.02716, 0.03397, 0.03798, 0.04403, 0.05085, 0.05546, 
0.06459, 0.07389, 0.08765, 0.09624, 0.11183, 0.13017, 0.14155, 0.16528, 0.18016, 0.19587, 0.21241, 0.22973, 0.24782, 0.26664, 0.28617, 0.30639, 0.32726, 0.34879, 0.37097, 0.39381, 0.41733, 0.44154, 0.46648, 0.49216, 0.51862, 0.54586, 0.57387, 0.60262, 0.63203]] pulled from here: http://www.google.se/url?sa=t&rct=j&q=&esrc=s&source=web&cd=1&ved=0CE4QFjAA&url= 29. Pingback: The Final Frontier | SlowerBuffer 30. I imagine you’ve got some pretty solid demographic knowledge about your reader base. By when are *we* going to be halved by death? 31. i started reading this and began to think that John Cleese and co used to be lot funnier than this, then i reread the title. 32. I was born to an elderly couple (by the day’s standards) almost 30 years ago. Both of my parents are alive and relatively well. Looking back, I feel lucky. Probability of all dying in 29.5 years: 5.207% Probability of a death within 29.5 years: 43.31% Morbidity is in the eye of the beholder, I’d say. 33. Pingback: sigh 34. Can you write a Python script to calculate death probabilities for skydivers, stuntmen, motorcyclists, carracers and rockstars? I think it would be very interesting… 35. Not to sound off topic or anything, but where did the forums disappear to? 36. @Jay 37. Is it a bad sign you posted this on my birthday? 38. Fuck Cancer. 39. Fuck TheAz. 40. Not so sure about former astronauts – if they’re pilots, they’ll generally have higher radiation exposure over the course of the career than the average Joe (or Jill), and that doesn’t even take into account the exposure they may get in space (which is a good question- I don’t know what that is). Couple that with prolonged exposure to jet and/or rocket fuel (Benzene, it’s what’s for dinner!) and I would expect you’ve got a fair counter-balance to being in good physical shape. 41. What about the death of Mark Hammill’s non-voice-acting-career. Zing! 42. 
I’m sure the trip to and from the moon took some years off…as well as the cocaine when they returned. But I also suspect everyone on the millennium falcon took 10 years off their lives while they were filming it. love your stuff 43. python actuary.py -50m 1 There is a 5% chance of someone dying within 10.56 years (by 2023). There is a 50% chance of someone dying within 34.06 years (by 2046). There is a 95% chance of someone dying within 47.45 years (by 2060). so, a 95% chance that someone born in 2062 will never be born??? 44. Of course, if you include the other star of Star Wars, the probability of a death in the next ten minutes rises to 100%. 45. James Earl Jones wasn’t actually in Star Wars (the movie later to be dubbed Star Wars: Episode IV). David Prowse played Darth Vader and he also did the voice. Empire strikes back had James Earl Jones to do the voice of Darth and Lucas later overdubbed David Prowse with James Earl Jones on the newer editions of Star Wars as well, this was done even before the CG crap was added to it and Han Solo got nerfed by having greedo shoot first etc etc. 46. Fortunately, we can still get the original via Amazon (the new Empire? “. . . meet the new boss, same as the old boss . . .”), on DVD no less: a two disc set that includes the original theatrical releases of the first three movies. So yes, in one parallel universe at least, Han is still a cold-blooded killer. Hey, Randall? How about a physics question that explores string theory and unwanted sequels and/or “enhancements”? Just sayin’ . . . 47. @AndersBackman I was surprised to hear about James Earl Jones having re-dubbed the voice of Darth Vader for Star Wars and not being in the original cast. After doing a little research on imdb.com, it would appear that he did do the voicework in March of 1977, though he refused to have his name in the credits. I’m sure Prowse did all his lines on camera though. Harmless facts. 
he did all his lines in one day and was paid $9000 for his work. 48. Your code is really hard to read– and that’s saying something, given that the script is written in Python, hehe. If not for your programmer fans, you should work on it so you’ll at least be able to look back and remember what you were doing. Just use a lot more whitespace (“a<b" is for chumps), always use underscores or camel-case in your variable/method names, and generally make the names more descriptive of what the variables represent (even if they're hideously long), then you'll be golden. Believe me, you'll thank yourself for doing it. 49. Of course JEJ was the voice the entire time but whatever. Like star-wise it would have been more appropriate to include Sir Alec Guinness and Peter Cushing, but they’ve been gone since 2000 and 1994 respectively. What would also be interesting is to do tables for everyone involved in writing, directing, doing special effects, acting, and so on in Star Wars (credited in the 1977 original material or not) as of 1977, as if it were still 1977. Or just the actors not there besides Guinness and Cushing, such as Kenny Baker, Anthony Daniels, Peter Mayhew, Phil Brown, Shelagh Fraser, and Dennis Lawson. Regardless of who’s included, make it at least 10 or 20 or more, and then see how the predictions worked out now, as if they were made before the fact, instead of in hindsight. Or not.
{"url":"http://blog.xkcd.com/2012/07/12/a-morbid-python-script/comment-page-1/","timestamp":"2014-04-20T20:58:10Z","content_type":null,"content_length":"77705","record_id":"<urn:uuid:23584683-0823-4ceb-898c-7e86caff713d>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00456-ip-10-147-4-33.ec2.internal.warc.gz"}
Measurement unit conversion: liters ›› Measurement unit: liters Full name: liter Plural form: liters Symbol: L Alternate spelling: litres Category type: volume Scale factor: 0.001 ›› SI unit: cubic meter The SI derived unit for volume is the cubic meter. 1 cubic meter is equal to 1000 liters. Valid units must be of the volume type. You can use this form to select from known units: I'm feeling lucky, show me some random units ›› Definition: Litre The litre (spelled liter in American English and German) is a metric unit of volume. The litre is not an SI unit, but (along with units such as hours and days) is listed as one of the "units outside the SI that are accepted for use with the SI." The SI unit of volume is the cubic metre (m³). ›› Sample conversions: liters liters to gigaliter liters to cubic millimetre liters to dram liters to bucket [UK] liters to quart [UK] liters to ounce [US, liquid] liters to cubic nanometre liters to barrel [US, dry] liters to cubic inch liters to gill [UK]
{"url":"http://www.convertunits.com/info/liters","timestamp":"2014-04-16T10:33:42Z","content_type":null,"content_length":"23936","record_id":"<urn:uuid:c04c7788-7bfb-483d-81a4-6f495dc9a899>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00136-ip-10-147-4-33.ec2.internal.warc.gz"}
Cerritos Algebra Tutor ...I am an avid photographer, and have been actively photographing subjects for at least 3 years. I use a Nikon D5100, though I am familiar with Canon 60Ds and Rebel series. I often volunteer my services to friends and families, along with student groups on campus. 47 Subjects: including algebra 2, algebra 1, English, chemistry ALL SPRING SEASON SPOTS FILLED ~ CURRENTLY ACCEPTING STUDENTS FOR SUMMER SEASON (RATE INCREASED TO $45 FOR STUDENTS WITHOUT MATERIAL TO LEARN FROM) As a third year college student with a Biology major and Human-Animal Studies minor, I believe I am proficient enough to tutor in the subjects listed ... 19 Subjects: including algebra 1, algebra 2, Spanish, elementary math I am an experienced tutor in math and science subjects. I have an undergraduate and a graduate degree in electrical engineering and have tutored many students before. I am patient and will always work with students to overcome obstacles that they might have. 37 Subjects: including algebra 2, algebra 1, chemistry, English ...I have an extensive background with Math and Science: I've seen a ton of physics problems, and I have an intuitive sense of how to approach the new ones I haven't seen. I've been at the forefront of research and enjoy doing calculations about how the world exists. Learning math has been part and parcel to learning these physics skills. 44 Subjects: including algebra 1, algebra 2, chemistry, reading ...Algebra doesn't have to be difficult. Whether it is solving and graphing equations, solving systems of equations or factoring, I can clear up confusion and help you excel in the subject. I have tutored many students to success in algebra. 5 Subjects: including algebra 1, algebra 2, chemistry, geometry
{"url":"http://www.purplemath.com/Cerritos_Algebra_tutors.php","timestamp":"2014-04-21T13:06:15Z","content_type":null,"content_length":"23796","record_id":"<urn:uuid:b3a6405d-616b-497d-916d-0cdc613561a0>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00444-ip-10-147-4-33.ec2.internal.warc.gz"}
The purpose of this experiment is to determine the shear force, bending moment and the load from the strain measurements of a cantilever beam loaded in bending. The cantilever beam is a widely used structural element, for example in airplane wings, supports for overhanging roofs, the front spindles of automobiles, etc. A cantilever is commonly defined as a beam which is built-in and supported at only one point, and loaded by one or more point loads or distributed loads acting perpendicular to the beam axis. This experiment will study a cantilever beam in its simplest form - that is, a parallel-sided beam of a constant cross section, rigidly clamped at its fixed end and loaded by a single load on the beam centerline near the free end. The cantilever beam is shown in the sketch, along with the associated shear force and bending moment diagrams.

Another characteristic of the cantilever beam used in this experiment is that the stress is uniaxial everywhere on the beam surface except in the immediate vicinity of the loading point and the clamped end. The surface stress at any section, X, along the beam axis can be calculated from

sigma = Mc/I = 6PX/(bt^2) = PX/Z     (1)

where:
sigma = surface stress at section X, psi (N/m^2)
c = distance from neutral axis to the extreme fiber of the beam surface, in (m)
I = moment of inertia of beam cross section, in^4 (m^4)
P = load, lbs (N)
b = beam width, in (m)
t = beam thickness, in (m)
Z = bt^2/6 = section modulus of beam, in^3 (m^3)
X = distance from the loading point to the section, in (m)

For uniaxial stress, Hooke's law can be expressed as:

e = sigma/E     (2)

where:
e = normal strain, in/in (m/m)
E = modulus of elasticity, psi (N/m^2)

Therefore the longitudinal strain at any section, X, is

e = 6PX/(Ebt^2)     (3)

The above equation demonstrates that the axial strain varies linearly along the beam from zero at the loading point to a theoretical maximum of 6PL/(Ebt^2) at the fixed end. This experiment can be performed by using a beam with three strain gages, installed uniformly spaced along the axis of the beam as shown in the gage installation diagram below.
Since the strain distribution along the beam is presumably linear, the shear force can be written as

V = ΔM/ΔX     (4)

where ΔM is the change in bending moment over an increment of length defined by the corresponding change in distance, ΔX. Solving equation (3) for M (noting that M = PX for this loading) and substituting into equation (4),

V = (Ebt^2/6)(Δe/ΔX)     (5)

From eqn (5) the shear force can be obtained from the difference in strain indications of any pair of gages, divided by the distance between the gages:

P = V = (Ebt^2/6)(e_i - e_j)/(X_i - X_j)     (6), (7)

The above equations give the shear force and thus the load applied to the beam. Since the answers will generally differ slightly due to experimental error, their average is the best estimate of the load.

With the load known, the stress can be calculated from the following:

sigma = PX/Z = 6PX/(bt^2)     (8)

However, the stress can also be calculated directly from the measured strain at that point with Hooke's law for uniaxial stress as follows:

sigma = Ee     (9)

The stresses calculated from Eqs. (8) and (9) can be compared as verification of the fundamental beam relationships used in this experiment.

Dr. Kingsbury, Lab Manager, Integrated Mechanical Testing Laboratory. Copyright © 2003 IMTL, Fulton School of Engineering, Arizona State University. All rights reserved. Revised: May 25, 2005.
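A quick numerical sketch of these relationships in Python (the material constants, dimensions and gauge positions below are made-up illustrative values, not data from this experiment): strains are generated from an assumed load via Eq. (3), the load is recovered from a gauge pair via Eqs. (5)-(7), and the stresses from Eqs. (8) and (9) are compared.

```python
E = 10.0e6          # modulus of elasticity, psi (assumed value)
b, t = 1.0, 0.25    # beam width and thickness, in (assumed)
P_true = 10.0       # applied load, lb (assumed)

Z = b * t**2 / 6.0  # section modulus, in^3

# Strain at distance X from the loading point, Eq. (3): e = 6PX/(E b t^2)
def strain(P, X):
    return 6.0 * P * X / (E * b * t**2)

# Simulated "gauge readings" at two stations along the beam axis
X1, X2 = 2.0, 4.0
e1, e2 = strain(P_true, X1), strain(P_true, X2)

# Recover the load from the strain difference, Eqs. (5)-(7)
P_est = (E * b * t**2 / 6.0) * (e2 - e1) / (X2 - X1)

# Stress at X2 two ways: Eq. (8) from the load, Eq. (9) from the strain
sigma_8 = P_est * X2 / Z
sigma_9 = E * e2

print(P_est, sigma_8, sigma_9)
```

With exact (noise-free) strains the recovered load matches the assumed one, and the two stress values agree, which is exactly the cross-check the experiment asks for.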
{"url":"http://enpub.fulton.asu.edu/imtl/HTML/Manuals/MC105_Cantilever_Flexure.htm","timestamp":"2014-04-19T17:02:43Z","content_type":null,"content_length":"8832","record_id":"<urn:uuid:b345765e-37ca-487f-983b-a750b3197659>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00595-ip-10-147-4-33.ec2.internal.warc.gz"}
Running Mathematica without the Notebook

Creating and Post-Processing Mathematica Graphics

Graphics in the Mathematica Front End is an evolving topic, so I'm collecting relevant information on a separate page. On this page the focus will be on Kernel-only operation.

Running Mathematica without the Notebook interface

Accessing the Mathematica Kernel on UNIX and Mac OS X

Mathematica is a great computational tool, and the Notebook interface makes it an almost self-contained system for doing calculations and documenting your work. However, there are some situations where one does not need or even want the interactive Notebook interface. For example, you may want to do a quick calculation while doing some work in the Terminal, or run Mathematica remotely without access to the graphical desktop. The solution for this is to invoke an instance of the Mathematica Kernel from the command line. How you do this depends on where you are. I've been using this access method on various UNIX and Mac OS X systems at least since 1998, and it has worked with all versions of Mathematica so far.

• On a UNIX server (no longer available on the general-purpose account called "shell" at the University of Oregon, but on the research cluster hpc), you could be logged in through a text terminal session. Then if you type math, the Kernel will start up and you'll see the following:

login> math
Mathematica 7.0 for Linux x86 (64-bit)
Copyright 1988-2009 Wolfram Research, Inc.

If you don't see this, type echo $PATH and make sure that the directory path to the math command is in the list. A common path to try is /usr/local/bin/math.

• On a Mac running OS X, Mathematica is installed in the Applications folder, e.g., under the name Mathematica. Assuming this is the case on your system, you can invoke the Kernel from the command line by typing

/Applications/Mathematica.app/Contents/MacOS/MathKernel

If you're on an Intel 64-bit machine, you may have to execute MathKernel64 here (depending on your Mathematica version).
In fact, you may want to follow the instructions at Wolfram's Mac support FAQ if it applies to your processor. Needless to say, this is one of the great advantages of working with UNIX, Linux or Mac OS X: you can in principle start as many copies of your Mathematica Kernel as your license allows, even while the "proper" Notebook Mathematica is running. If you do this kind of thing often, you'll want to define an alias such as

alias math="/Applications/Mathematica.app/Contents/MacOS/MathKernel"

(in bash, the default shell under OS X) or

alias math "/Applications/Mathematica.app/Contents/MacOS/MathKernel"

(in csh syntax). That way, the command math will work the same way on UNIX and Mac OS X machines. Another solution is to define a wrapper script in your PATH, e.g., /usr/local/bin/math. In this executable script, I have only two lines:

#!/bin/sh
/Applications/Mathematica.app/Contents/MacOS/MathKernel "$@"

The last approach is especially useful if you plan to call Mathematica from other programs that don't read your alias definitions. For example, LyX can invoke math as an external computer algebra system. It does so by parsing LaTeX and/or Mathematica code entered within LyX, converting to a Mathematica expression and eventually doing the reverse to add the output from math back into the LyX document as LaTeX. Here is an example:

With the input above, we invoke Edit > Math > Use Computer Algebra System > Mathematica and get the following:

The interaction between LyX and Mathematica is not perfect (in general I still prefer to have a Notebook and a LyX window open side by side, and copy/paste between them), but it illustrates the utility of the Kernel. To copy and paste equations between Mathematica and LyX (or any LaTeX editor), follow the examples given on a separate page. There is also an interactive Mathematica mode for xemacs: mathematica-mode by Jim McCann Pivarski.
This mode allows you to open an interactive Kernel session from within xemacs, with the added benefit of some useful keyboard shortcuts and (customizable) syntax highlighting. This can be quite useful when combined with JavaGraphics as described below. As another example of a useful application, the Mathematica Kernel can even be invoked from the interactive notebook-like interface of Sage, an open-source competitor to Mathematica that is based on Python.

• For user-specific customizations that apply to all your Mathematica sessions, you should use the initialization file ~/Library/Mathematica/Kernel/init.m. In this file, you can place Mathematica commands such as

AppendTo[$Path, "~/Documents/Mathematica"]

(this example assumes that I want to automatically look for external files in ~/Documents/Mathematica).

• To create command-line scripts that accept input parameters, follow the instructions in the Mathematica (version 8) tutorial located at tutorial/MathematicaScripts in the Help browser. Note: there is a bug in MathematicaScript that doesn't allow you to pass more than three command line arguments to a script. A better way to write a script is given in the following template:

#!/Applications/Mathematica.app/Contents/MacOS/MathKernel -script

Graphics output from the Kernel

Mathematica can produce graphics without invoking the (notebook) FrontEnd. One way is to export to files, as will be discussed below. But there's also a way to make graphics appear on-screen from within a Kernel-only session (note that this requires a windowing system, e.g., an X terminal), by loading the JavaGraphics package that comes with JLink. Although this is a bit slow because it involves JLink (Java), it provides graphical feedback for your Kernel calculation without the need to open a notebook. Compared to running Mathematica through a remote X-window connection over the internet, the JavaGraphics output is really quite fast.
Running the Kernel as a batch job

The Kernel is of course much more than just a calculator replacement. Using the command-line invocation just described, you could use the Kernel to run serious computations on a remote server. However, there are two problems:

1. Making a Mathematica program file that can produce figures and other output non-interactively.
2. How to launch a Kernel job that keeps running after you log out.

The steps one needs to take are discussed next.

1. The Mathematica Kernel can be fed commands in different ways:

□ One approach is to start the Kernel as explained above, then load a text file with extension .m (e.g., foo.m) by typing <<foo.m. Here, foo.m contains Mathematica commands such as Print["Hello"].

□ Another way to tell the Kernel what to do is to specify a program file on the command line. Under UNIX, you would type

math -run "<<input"

The -run option causes the command in quotation marks to be executed by the Kernel. Here, the command is to load the file named input. It does not matter whether that file has the extension .m or not, but its contents have to be Mathematica statements. We will use this below to start a batch job.

Turning to the contents of the command file, there is the problem that we need some way of receiving output from the process. Textual output will be displayed directly in your terminal, but graphics output will create error messages if the Kernel does not know where to send the graphics. Here is an example for the file named "input" that shows how this problem is solved (the following is for Mathematica < version 6; for higher versions you shouldn't expect graphics to work without a windowing system; the Plot arguments here are a reconstruction for illustration):

SetOptions[Plot, DisplayFunction -> Identity]
a = Plot[Sin[x], {x, 0, 2 Pi}]
Export["a.eps", a]

You can of course also enter these same commands interactively to see what they do. The first line has to be given only once, and it makes sure that all subsequent Plot commands produce no direct output. Instead, the output is stored in a variable a.
The last line generates an Encapsulated Postscript file in your current working directory, named a.eps.

Although the above still works with Mathematica version 6 on Mac OS X, you may have to modify the procedure slightly when logged in through a terminal session. In that case, you may get the error

Export::nofe: A front end is not available; export of EPS requires a front end.

The quick fix for this is to leave out the SetOptions line and export the expression in a as a text file: Export["a.m", a, "TEXT"]. The resulting file a.m can then be displayed in Mathematica's Front End using <<a.m, from where one can finally proceed to export other graphics formats such as EPS.

2. Having discussed the creation of command files, the last step is to make it possible to run command files over night, or longer, without having to stay logged in on the server. The first step is to modify the above to

math -noprompt -run "<<input"

or alternatively

math -noprompt -script input

What this does is to tell Mathematica not to show the interactive command prompt discussed earlier, but to execute silently all the commands contained in the file named "input". In the absence of a prompt, it is essential that the input file terminates the Kernel when finished. This is ensured by making the last line of the input file read Exit[] (or equivalently: Exit). Here is an example that mindlessly generates a list of twenty identical plots, just so we have time to log out before the execution finishes; save it as, e.g., test.m (the file name is arbitrary):

Print["Hello, starting plots!"]
a = Table[DensityPlot[Sin[Sqrt[x^2 + y^2]]^2/(.001 + x^2 + y^2), {x, -13, 13}, {y, -13, 13}, Mesh -> False, PlotPoints -> 300], {i, 1, 20}]
Exit[]

Remember, in Mathematica ≥ 6 you'll probably be best off exporting your raw data instead of trying to create graphics in a batch job (because the whole interactive paradigm of versions ≥6 causes graphics to be overwhelmingly handled by the Notebook instead of the Kernel).

3.
Submitting your batch job: this depends on the system you're logged in to. If you're running the job remotely on your own Mac, then to launch this program in a way that does not get killed when you log out, type the following:

nohup nice math -noprompt -run "<<test.m" > output &

The & at the end is important, and the file named output catches any text output that your job might generate. On the other hand, if you're working on a cluster such as hpc, follow the procedure outlined in the cluster documentation: usually it involves simply launching your script with the qsub command.

Remote Mathematica Notebook Sessions

If you need graphical features specific to version 6 or above, there is no way around a notebook interface. Over a remote connection, this will of course require a lot of bandwidth. My recommendation here would be to avoid a remote X-window connection and use vnc (screen sharing) instead, if the server supports it. The network traffic for a vnc connection is smoother and much more responsive than for a remote X connection with Mathematica's highly interactive notebook features.

To access Mathematica in all its graphical interface glory on a remote Unix/Linux server such as the UO hpc, the main issue that can trip you up is a lack of installed fonts. Ideally, all you have to do (on a Mac) is: ssh -Y hpc.uoregon.edu, followed by mathematica at the command prompt. In all likelihood, this won't work when you try it for the first time. So here are the steps to fix things:

• If things freeze up, go to the terminal, press Ctrl-z and type kill -9 %1 (if necessary, replace %1 by the appropriate number output by jobs).

• Log out (exit), go to your local home directory in the Terminal (cd) and type

scp -r hpc.uoregon.edu:/usr/local/mathematica7/SystemFiles/Fonts .

followed by mv Fonts .fonts (replace mathematica7 by whatever version you wish to run). This copies the fonts from the remote server to your own computer.

• Type cd .fonts and then mkfontdir BDF Type1.
• Now go back by typing cd, create a new folder with mkdir .xinitrc.d (if it doesn't already exist) and then type this as a single line:

printf "xset fp+ $HOME/.fonts/BDF\nxset fp+ $HOME/.fonts/Type1\nxset fp rehash\n" >> ~/.xinitrc.d/setFontPaths.sh

• Make this file executable by typing chmod 755 ~/.xinitrc.d/setFontPaths.sh.

• Restart your X server and go back to the first paragraph. Now if you launch mathematica on hpc, the notebook should start up. At this point you'll see what I meant when I recommended vnc over X.

Last modified: Tue May 8 13:30:15 PDT 2012
{"url":"http://pages.uoregon.edu/noeckel/Mathematica.html","timestamp":"2014-04-21T14:42:01Z","content_type":null,"content_length":"18596","record_id":"<urn:uuid:8e9a88ff-60d3-4af0-b018-0021ebac3582>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00554-ip-10-147-4-33.ec2.internal.warc.gz"}
Public Function VariancePopulationArray( _ ByRef vValues As Variant _ ) As Variant Population variance of numeric values. More efficient version of the VariancePopulation Function but with restrictions on the arguments it accepts. VariancePopulationArray(Array(12, 34, 56)) = 322.666666666667 VariancePopulationArray(SampleData) = 557.43336483932 See also: VariancePopulation Function VarianceArray Function StandardDeviationPopulationArray Function AverageDeviationArray Function DeviationsSquaredArray Function VARP Function (Microsoft Excel) VARPA Function (Microsoft Excel) vValues: Numeric-type array or Variant array containing only numeric elements. This function is NOT affected by the current setting of the StatVarType Property. It assumes that vValues is an array that contains only numeric elements. Return value: Function returns the population variance of the numeric values. Function returns Null if the array is empty, or if there is some type of error, such as if the argument is a Variant array which contains an Object reference or String. v1.5 Note: VarianceArray and VariancePopulationArray replace the VarianceVariantVector function, which has been removed from ArrayArithmetic Class. This function is slightly different in that it examines all of the elements in the array from its lower bound through its upper bound. Copyright 1996-1999 Entisoft Entisoft Tools is a trademark of Entisoft.
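The computation itself is straightforward to reproduce; here is a Python sketch of the same population-variance calculation, with the Null returns of the VB version mapped to None (reproducing the documented example value for Array(12, 34, 56)):

```python
def variance_population_array(values):
    """Population variance: mean of squared deviations from the mean.

    Mirrors the documented behavior: returns None for an empty array
    or when the input contains a non-numeric element (the VB function
    returns Null in those cases).
    """
    try:
        n = len(values)
        if n == 0:
            return None
        mean = sum(values) / n
        return sum((v - mean) ** 2 for v in values) / n
    except TypeError:
        return None

print(variance_population_array([12, 34, 56]))  # 322.666... (= 968/3)
print(variance_population_array([]))            # None
```

Note the divisor is n (population variance), not n - 1 as in the sample-variance functions listed under "See also".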
{"url":"http://www.entisoft.com/ESTools/MathStatistics_VariancePopulationArray.HTML","timestamp":"2014-04-19T17:01:08Z","content_type":null,"content_length":"3513","record_id":"<urn:uuid:009fa633-313e-4817-ac75-7396d153284b>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00332-ip-10-147-4-33.ec2.internal.warc.gz"}
Cauchy theorem

From Encyclopedia of Mathematics

Cauchy's theorem on polyhedra: Two closed convex polyhedra are congruent if their true faces, edges and vertices can be put in an incidence-preserving one-to-one correspondence in such a way that corresponding faces are congruent. This is the first theorem about the unique definition of convex surfaces, since the polyhedra of which it speaks are isometric in the sense of an intrinsic metric. The Cauchy theorem is a special case of the theorem stating that every closed convex surface is uniquely defined by its metric (see [4]). The theorem was first proved by A.L. Cauchy (see [1]).

[1] A.L. Cauchy, J. Ecole Polytechnique, 9 (1813) pp. 87–98
[2] A.D. Aleksandrov, "Konvexe Polyeder", Akademie Verlag (1958) (Translated from Russian)
[3] J. Hadamard, "Géométrie élémentaire", 2, Moscow (1957) (In Russian; translated from French)
[4] A.V. Pogorelov, "Unique definition of convex surfaces", Trudy Mat. Inst. Steklov., 29 (1949) (In Russian)

E.V. Shikin

Cauchy's intermediate-value theorem for continuous functions on closed intervals: Let f be a continuous function on a closed interval [a,b] and let C be any number lying between f(a) and f(b); then there is a point c in [a,b] at which f(c) = C. Cauchy's theorem was formulated independently by B. Bolzano (1817) and by A.L. Cauchy (1821). Cauchy's intermediate-value theorem is a generalization of Lagrange's mean-value theorem.

[1] V.A. Il'in, E.G. Poznyak, "Fundamentals of mathematical analysis", 1–2, MIR (1982) (Translated from Russian)
[2] L.D. Kudryavtsev, "Mathematical analysis", 1, Moscow (1973) (In Russian)
[3] S.M. Nikol'skii, "A course of mathematical analysis", 1–2, MIR (1977) (Translated from Russian)

L.D. Kudryavtsev

The statement above can be generalized: for continuous real functions the intermediate-value property holds on any connected set (cf. [a1]).

[a1] W. Rudin, "Principles of mathematical analysis", McGraw-Hill (1976) pp. 107–108

Cauchy's theorem in group theory: If the order of a finite group G is divisible by a prime number p, then G contains an element of order p. This theorem was first proved by A.L. Cauchy (see [1]) for permutation groups.

[1] A.L. Cauchy, "Exercise d'analyse et de physique mathématique", 3, Paris (1844) pp.
151–252
[2] A.G. Kurosh, "The theory of groups", 1–2, Chelsea (1955–1956) (Translated from Russian)
[a1] M. Suzuki, "Group theory", 1, Springer (1982)

How to Cite This Entry: Cauchy theorem. E.V. Shikin, L.D. Kudryavtsev (originator), Encyclopedia of Mathematics. URL: http://www.encyclopediaofmath.org/index.php?title=Cauchy_theorem&oldid=16651

This text originally appeared in Encyclopedia of Mathematics - ISBN 1402006098
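As a small computational aside (not part of the encyclopedia entry): the intermediate-value theorem is exactly what justifies the bisection method, since a continuous function that changes sign on [a, b] must take the value 0 somewhere in between. A Python sketch, with an arbitrary example function:

```python
def bisect(f, a, b, tol=1e-10):
    """Find a zero of f on [a, b], assuming f(a) and f(b) differ in sign.

    The intermediate-value theorem guarantees such a zero exists for
    continuous f; bisection halves the bracketing interval until it is
    shorter than tol.
    """
    fa, fb = f(a), f(b)
    assert fa * fb < 0, "f must change sign on [a, b]"
    while b - a > tol:
        m = (a + b) / 2
        fm = f(m)
        if fm == 0:
            return m
        if fa * fm < 0:
            b, fb = m, fm
        else:
            a, fa = m, fm
    return (a + b) / 2

# x^3 - 2 is continuous and changes sign on [1, 2], so by the
# intermediate-value theorem it has a zero there (the cube root of 2).
root = bisect(lambda x: x**3 - 2, 1.0, 2.0)
print(root)  # ≈ 1.2599
```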
{"url":"http://www.encyclopediaofmath.org/index.php/Cauchy_theorem","timestamp":"2014-04-17T15:44:30Z","content_type":null,"content_length":"24699","record_id":"<urn:uuid:5f683a29-d571-4733-bae9-06697db4e183>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00623-ip-10-147-4-33.ec2.internal.warc.gz"}
finite fields

November 22nd 2009, 01:44 PM #1

Find two distinct monic irreducible polynomials f(x) and g(x) of degree 3 over the finite field F_3 (for instance, let f(x) = x^3 + 2x + 1 and g(x) = x^3 + 2x + 2). Let a be a root of f(x) and b be a root of g(x).

i) Identify a root of f(x) in F_3(b) (this means we adjoin b to F_3) and similarly identify a root of g(x) in F_3(a). (I didn't understand what this part of the question means.) Then write an explicit isomorphism from F_3(a) to F_3(b).

ii) Identify all three roots of f(x) in F_3(a) and those of g(x) in F_3(b). (Okay, I can do this by trial and error, but is there a better procedure for this type of question?)
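Not an answer to the isomorphism parts, but a quick way to check the two suggested polynomials: a cubic over F_3 is irreducible if and only if it has no root in F_3, since any factorization of a cubic must contain a linear factor. A brute-force check in Python:

```python
def has_root_mod_p(coeffs, p):
    """coeffs = (c0, c1, ..., cn) for c0 + c1*x + ... + cn*x^n over F_p."""
    return any(
        sum(c * pow(x, i, p) for i, c in enumerate(coeffs)) % p == 0
        for x in range(p)
    )

# f(x) = x^3 + 2x + 1 and g(x) = x^3 + 2x + 2 over F_3
f = (1, 2, 0, 1)
g = (2, 2, 0, 1)

print(has_root_mod_p(f, 3))  # False -> f is irreducible over F_3
print(has_root_mod_p(g, 3))  # False -> g is irreducible over F_3
print(has_root_mod_p((1, 0, 0, 1), 3))  # True: x^3 + 1 has the root x = 2
```

For part ii) there is indeed a better procedure than trial and error: over a field of characteristic 3, the remaining roots of the minimal polynomial of a are the Frobenius images a^3 and a^(3^2) = a^9, reduced modulo f(a) = 0.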
{"url":"http://mathhelpforum.com/advanced-algebra/116153-finite-fields.html","timestamp":"2014-04-21T00:25:38Z","content_type":null,"content_length":"28989","record_id":"<urn:uuid:7eba1038-0e7b-4db4-8433-69250d062232>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00369-ip-10-147-4-33.ec2.internal.warc.gz"}
L'Hospital's Rule

September 9th 2008, 02:23 PM #1
Junior Member, Sep 2007

use l'hospital's rule to find the limit:

1. lim (tan 2x)^x, x -> 0+
i got 1 using ln y, but im not sure if its right..

2. lim (1 + (a/x))^bx, x -> INFINITY
again i got 1...but am unsure

any help is appreciated...thanks!!

September 9th 2008, 02:29 PM #2

I think the second one is the number e but just looked at it real fast. I get 1 too.

September 10th 2008, 06:39 AM #3

2. lim (1 + (a/x))^bx, x -> INFINITY

Substitute $t=\frac xa$. Thus $bx=abt$ and $\frac ax=\frac 1t$.

$\lim_{x \to \infty} \left(1+\frac ax\right)^{bx}=\lim_{t \to \infty} \left(1+\frac 1t\right)^{t \cdot (ab)}$

Remember the rule $(a^b)^c=(a^c)^b=a^{bc}$. Therefore:

$\lim_{t \to \infty} \left(1+\frac 1t\right)^{t \cdot (ab)}=\lim_{t \to +\infty} \left(\left(1+\frac 1t\right)^t\right)^{ab}=e^{ab} \quad (*)$

$(*) \lim_{t \to + \infty} \left(1+\frac 1t\right)^t=e$ (by taking the logarithm and applying l'Hospital's rule)

(a little substitution can be done if t tends to $-\infty$)

Otherwise, applying the logarithm:

$\ln(y)=\lim_{x \to \infty} bx \ln\left(1+\frac ax\right)$

Then apply l'Hospital's rule...
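Both limits are easy to sanity-check numerically; the first is indeed 1, while the second comes out to e^(ab) rather than 1 (a = 2, b = 3 below are arbitrary test values):

```python
import math

# Problem 1: (tan 2x)^x as x -> 0+.  ln y = x * ln(tan 2x) -> 0, so y -> 1.
x = 1e-6
y1 = math.tan(2 * x) ** x
print(y1)  # very close to 1

# Problem 2: (1 + a/x)^(b x) as x -> infinity equals e^(a b), not 1.
a, b = 2.0, 3.0
x = 1e7
y2 = (1 + a / x) ** (b * x)
print(y2, math.exp(a * b))  # both ≈ 403.43 for a = 2, b = 3
```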
{"url":"http://mathhelpforum.com/calculus/48354-l-hospital-s-rule.html","timestamp":"2014-04-17T19:50:19Z","content_type":null,"content_length":"38964","record_id":"<urn:uuid:fa8b1e2c-f31a-4a09-9ac8-761f30eecb51>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00184-ip-10-147-4-33.ec2.internal.warc.gz"}
This is what I have to go through to get an Engineering Science Assc. degree - Page 2 - Boxing ForumBoxing Forum Originally Posted by I'm bored so I wanted to show you guys the classes I gotta take. I'm actually excited about all this. General Requirements ENG 101 ENG 201 SPE 100 Fundamentals of Speech 1 6 creds XXX xxx Social Science Electives 2 Choose from anthropology, economics, geography, history philosophy, political science, psychology sociology or any Ethnic Studies social science course. Curriculum Requirements CHE 201 Chemistry I This is the first semester of a two-semester course sequence that involves the study of chemical principles including atomic and molecular theories, molecular structure, and reactivity. The Laboratory will include experiments illustrating the chemical principles. Two terms required. Required in A.S. )Science) and A.S. (Engineering Science). Fulfills science requirements for A.A. (Liberal Arts). CHE 202 College Chemistry II This is the first semester of a two-semester course sequence that involves the study of chemical principles including atomic and molecular theories, molecular structure, and reactivity. The Laboratory will include experiments illustrating the chemical principles. Two terms required. Required in A.S. )Science) and A.S. (Engineering Science). Fulfills science requirements for A.A. (Liberal Arts). ESC 111 Elements of Engineering Design This course provides an introduction to engineering practice through hands-on investigations, computer applications, and design projects in the fields of structures and robotics. All investigations and design projects are performed in groups and presented in oral and /or written form. Computers are used for documentation, data analysis and robot control. 
ESC 113 Computer Aided Analysis for Engineering This course introduces topics important for engineers computer aided analysis techniques are introduced and used for the design, and modeling of engineering systems such as electrical circuits, pipelines, signal and image processing, aircraft engines, orbits and trajectories, protein molecules, and sewer treatment. MAT 301 Analytic Geometry and Calculus I This is an integrated course in analytic geometry and calculus, applied to functions of a single variable. It covers a study of rectangular coordinates in the plane, equations of conic sections, functions, limits, continuity, related rates, differentiation of algebraic and transcendental functions, Rolle's Theorem, the Mean Value Theorem, maxima and minima, and integration. MAT 302 Analytic Geometry and Calculus II This course provides an introduction to the concepts of formal integration. It covers the differentiation and integration of algebraic, trigonometric, and transcendental functions. Topics include the definite integral, the antiderivative, areas, volumes, and the improper integral. MAT 303 Analytic Geometry and Calculus III This course is an extension of the concepts of differentiation and integration to functions of two or more variables. Topics include partial differentiation, multiple integration, Taylor series, polar coordinates and the calculus of vectors in one or two dimensions. MAT 501 Ordinary Differential Equations This is a first course in the theoretical and applied aspects of ordinary differential equations. Topics include: first-order equations, exact equations, linear equations, series solutions, Laplace transforms, Fourier series and boundary value problems. Pre-Requisite: MAT302 PHY 215 University Physics I This is a two-semester course for students in science and engineering. Concepts of calculus are introduced and used when necessary. 
The lecture and laboratory exercises pertain to mechanics, fluids, heat and thermodynamics, wave motion, sound, electricity, and magnetism, geometric and physical optics and an introduction to modern physics. Co-Requisite: MAT301 PHY 225 University Physics II This is a two-semester course for students in science and engineering. Concepts of calculus are introduced and used when necessary. The lecture and laboratory exercises pertain to mechanics, fluids, heat and thermodynamics, wave motion, sound, electricity, and magnetism, geometric and physical optics and an introduction to modern physics. Pre-Requisite: PHY210 or PHY215 and MAT301 SCI 120 Computer Methods in Science This course teaches a computer language and emphasizes application of programming methods for the sciences and engineering. Numerical methods will be applied to examples gleaned from physics, chemistry and biology and engineering.Pre-Requisite: MAT206 SCI 121 Computer Methods in Science Program Electives (Choose 13 credits from the following) CHE 230 Organic Chemistry I This two-semester course sequence is the study of the structure and properties of the fundamental classes of organic compounds with emphasis on reactivity, reaction mechanisms, stereochemistry, electronic theory and applications to allied fields. Two terms are required. CHE 240 Organic Chemistry II This two-semester course sequence is the study of the structure and properties of the fundamental classes of organic compounds with emphasis on reactivity, reaction mechanisms, stereochemistry, electronic theory and applications to allied fields. Two terms are required. ESC 130 Engineering Graphics This is a course in fundamental engineering drawing and industrial drafting-room practice. Lettering, orthographic projection, auxiliary views, sessions and conventions, pictorials, threads and fasteners, tolerances, detail drawing dimensioning and electrical drawing; introduction to computer-aided graphics are covered. 
ESC 201 Engineering Mechanics I (Statics and Par This course is a three-dimensional vector treatment of the static equilibrium of particles and rigid bodies. Topics include: equivalent force and coupled systems, static analysis of trusses, frames machines, friction, properties of surfaces and rigid bodies, particle kinematics, path variables, cylindrical coordinates and relative motion. Elements of design are incorporated in the course. Pre-Requisite: ESC130 and MAT302 Pre-Requisite: PHY225 and SCI120 ESC 211 Thermodynamics I This course covers introductory concepts and definitions; Absolute temperature, Work, heat, First Law and applications, Second Law, Carnot Theorem, entropy, thermodynamic state variables and functions, reversibility, irreversibility, ideal gas mixtures, mixtures of vapors and gas, humidity calculations. ESC 221 Circuits and Systems I This course includes circuit elements and their voltage-current relations; Kirchoff's Laws, elementary circuit analysis; continuous signals; differential and difference equations; first order systems and analysis of RLC circuits. ESC 223 Switching Systems and Logic Design This course includes the analysis and design of cominational and sequential circuits and their applications to digital systems. The use of integrated circuits in the design of digital circuits is illustrated in the laboratory experiments. Pre-Requisite: MAT302 and PHY225 Pre-Requisite: SCI121 or DEPT. PERMIT MAT 315 Linear Algebra This course covers matrices, determinants, systems of linear equations, vector spaces, eigenvalues and eigenvectors, Boolean algebra, switching circuits, Boolean functions, minimal forms, Karnaugh Pre-Requisite: MAT302 or DEPT. PERMIT PHY 240 Modern Physics This is an introduction to atomic and nuclear physics, relativity, solid state physics and elementary particles. Pre-Requisite: MAT056 and PHY225 Co-Requisite: MAT501 or DEPT. 
PERMIT ESC 202 Engineering Mechanics II This course is a three-dimensional vector treatment of the kinematics of rigid bodies using various coordinate systems. Topics include: relative motion, particle dynamics, Newton’s laws, energy and mechanical vibrations. Elements of design are incorporated in the course. Prerequisites: ESC 130, ESC 201, PHY 225 Co-requisite: MAT 501 or departmental approval GLY 210 Geology I This course covers fundamental principles of geology encompassing the study of minerals and rocks, geological processes, interpretation of topographic and geological maps and techniques of remote sensing. This is a program elective in Engineering Science and an elective in all other curricula. It does not meet the science requirement for Liberal Arts A.A. degree.
{"url":"http://www.boxingscene.com/forums/showthread.php?t=417557&page=2","timestamp":"2014-04-18T01:10:10Z","content_type":null,"content_length":"154162","record_id":"<urn:uuid:4e06031e-6d12-4e30-a506-45555c52e64a>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00636-ip-10-147-4-33.ec2.internal.warc.gz"}
Above, PythonJS is used to create a dynamic water physics demo that integrates p2.js and Three.js. The yellow 'blob' is created by an array of p2.js springs and particles that are rendered as metaballs using MarchingCubes in Three.js.

You can get the new PythonJS 0.8.6 release here. The new release includes many bug fixes and better interoperation with external JavaScript libraries. Bindings and wrappers are no longer required to work with external JavaScript libraries; types passed to JavaScript functions will be recursively transformed into JavaScript types. Calling Python functions from JavaScript has also been improved. New experimental backends are also included that translate Python into CoffeeScript, Dart, and Lua.

The Pypubjs IDE can now export to Android using PhoneGap. This requires installing PhoneGap and the Android SDK. PhoneGap is easily installed with one command: sudo npm install -g phonegap. The Android SDK is more work to set up: download it and set your PATH environment variable to include /tools and /platform-tools. You also need Oracle's Java; here are the steps I did to get it running:

sudo add-apt-repository ppa:webupd8team/java
sudo apt-get update
sudo apt-get install oracle-java7-installer
sudo apt-get install ant
export JAVA_HOME=/usr/lib/jvm/java-7-oracle

I have integrated a new backend into PythonJS that translates Python code into a visual graph using vis.js. You can load code into the Pypubjs editor, and by clicking "graph" it will run the translation in the background and display the graph in a new window. This can be helpful if you are working with code written by someone else, and need a way to quickly understand it at a higher source code level.

I am working on an integrated development environment for PythonJS, called Pypubjs. Using Pypubjs you will be able to easily put together HTML5 projects written in CSS, JavaScript and Python, and quickly package binaries for all platforms: Linux, Windows, OSX and mobile using PhoneGap.
The IDE itself is written in PythonJS and HTML5 using Node-Webkit and the Ace.js code editor with Darkstrap CSS by Dan Neu (a hack of Twitter's Bootstrap); see source code.

I had posted earlier this month about the garbage polluting the river that flows into Tarlac City here. 30 km downstream in the center of Tarlac City, things get much worse. Passing through the city I often see people of all ages fishing in the river, despite its extreme level of pollution. Today I saw four young boys with a battery and two long prods shocking fish to catch them. They were also collecting coconuts from the river.

I recently refactored PythonJS into a better meta-translator and added new backends for CoffeeScript and Lua. The regression tests now include benchmarks that generate the bar graphs below. I also merged the Lua.js translator to see what happens when you translate Python to Lua and then to JavaScript; the result is slower than PythonJS direct to JavaScript. The Dart backend has the highest performance. LuaJIT also performs well, with about half the Pystones of CPython.

[Chart: Recursive Fibonacci (time in seconds, less is better)]

San Jose, Tarlac Province, has a network of cool streams flowing from the western mountains; it is one of the few places in the Philippines where it is still safe to drink water directly from the river. Many small villages can be found near and along the waterways. Until recently the people in San Jose had lived in near-perfect harmony with their environment; sadly, things have recently changed. Along my short 3 km hike to the river, I estimate I encountered about twenty small dumping sites composed mainly of baby diapers. All of these sites were relatively new. Each site had about 20-100 diapers.

above: baby diapers torn apart and scattered by wild dogs.

Garbage Dynamics

Garbage accumulates at different rates depending on what type it is and where it was dumped.
Often it starts out as a few pieces of discarded plastic; once a place is no longer pristine, people are more likely to litter more. Once some threshold has been reached, someone will dump the first big bag of trash, which quickly leads others to dump more in the same area. There is a powerful feedback effect.

It is routine in the Philippines to burn trash in remote areas; this helps restore the areas to a relatively clean state and control the feedback effect. Diapers are hard to burn compared to most other trash, so they tend to build up and encourage the growth of dumping sites. The Pinoy style for trash burning is simply to throw some grass on top and set it ablaze. This technique fails with diapers.

How To Safely Burn Plastics

Grass and wood should be placed under the plastic. A ratio of at least 50/50 wood/plastic is the minimum. Fewer harmful dioxins are released when the fire is hot, so the more dry wood the better. Anything with chlorine (like paper products) creates more dioxins when burned with plastics, so burn them separately. Avoid burning black and dark-colored plastics. Obviously, try to recycle as much of the plastic as possible instead of burning it.

A new computer model by Eric Wolf estimates the earth will be destroyed by our sun in 1.5 billion years. [1] It is a popular idea in science and science fiction that we will one day escape destruction on earth by colonizing other planets. In these scenarios the polluted and spent earth is left behind. But will there be enough space ships to save us all? What about all the plants and animals we would leave behind? This is a bad way of thinking about the future that can have a negative impact on how we think about the present. We more readily accept pollution and things that harm the planet when we think that in the distant future we are going to ditch the earth anyways.
Instead we should plan to save the earth by turning it into a giant spaceship, so we can safely move the earth to a higher orbit as the sun becomes a red giant star. Is it too soon to worry about these things? It might take millions of years to safely move the earth to this higher orbit, and transforming the earth into a spaceship may itself take millions of years as well, so we had better start now to be sure we have enough time.

Much of the earth as we know it will be lost in this radical transformation into a spaceship. In order to preserve natural habitats and places for the cities of the future, we will need to greatly increase the total surface area, control the weather, and globally route the flow of fresh water. I have created this animation in Blender to try to capture some of these ideas. I imagine the disk structure as a place for the new cities, where people can enjoy flying around in the lower gravity. The original earth surface, enclosed in a protective biosphere, can be maintained for wild animals and returned to its natural state. I placed the rocket booster at the south pole; I imagine we would have to drill to the earth's core and reinforce the entire earth through its center from both poles.

No one here can escape the foul smell of burning plastic. Last month I ran into the Mayor of Tarlac City ("Ace" Manalang) and confronted him about the widespread dumping of plastic and other waste burning all around the city. He tried to dodge my questions, and kept repeating that "nothing can be done to change the people". Is it time to give up on clean air and water? I took to the streets to see what people think, and the results surprised me.

I spoke with many different people of all ages, and there was a clear pattern in every instance. Women are aware of and concerned about the situation, and actively clean and recycle plastics. Children under the age of ten enjoyed picking up plastics; it is seen as a game that can make a little money.
Most of the men I spoke with were either drinking beer or gin. They were unaware of and not concerned by the problems created by burning plastics. When asked to help clean up the community alongside the women and children, all quickly refused. This is what surprised me: they clearly had nothing better to do, yet refused to help. What was holding them back? These drunken men were ironically concerned about their public image and status within the community. If they were seen cleaning trash they could be labeled a "scavenger".

PythonJS now translates operator overloading from Python to the Dart backend; see my commit here. Dart supports all the same operators as Python, except for in-place operators. In-place operators are very useful; I am curious why the Dart developers have left this feature out. Even if this feature is missing in Dart, it can still be forced to work when PythonJS translates to Dart code, by inlining a special if/else for each in-place assignment.

Example: x += y becomes

    if x is a Number or String: then use +=
    else: use x.__iadd__(y)

The Dart2js compiler is smart enough to determine the type of x at compile time, and optimize the if/else away. This opens the door for doing more dumb tricks from PythonJS that get optimized away by Dart2js.
This opens the door for doing more dumb tricks from PythonJS that get optimized away by Python Input class Vector3: def __init__(self, x=0, y=0, z=0 ): self._x = x self._y = y self._z = z def set(self, x,y,z): self._x = x self._y = y self._z = z def x(self): return self._x def x(self, value): print 'x setter', value self._x = value def y(self): return self._y def y(self, value): print 'y setter', value self._y = value def z(self): return self._z def z(self, value): print 'z setter', value self._z = value def add(self, other): self.set( self.x+other.x, self.y+other.y, self.z+other.z ) return self def __add__(self, other): if instanceof(other, Number): return Vector3( self.x+other, self.y+other, self.z+other ) return Vector3( self.x+other.x, self.y+other.y, self.z+other.z ) def __iadd__(self, other): if instanceof(other, Number): self.addScalar( other ) self.add( other ) def addScalar(self, s): self.set( self.x+s, self.y+s, self.z+s ) return self def sub(self, other): self.set( self.x-other.x, self.y-other.y, self.z-other.z ) return self def __sub__(self, other): if instanceof(other, Number): return Vector3( self.x-other, self.y-other, self.z-other ) return Vector3( self.x-other.x, self.y-other.y, self.z-other.z ) def __isub__(self, other): if instanceof(other, Number): self.set( self.x-other, self.y-other, self.z-other ) self.sub( other ) def multiply(self, other): self.set( self.x*other.x, self.y*other.y, self.z*other.z ) return self def __mul__(self, other): if instanceof(other, Number): return Vector3( self.x*other, self.y*other, self.z*other ) return Vector3( self.x*other.x, self.y*other.y, self.z*other.z ) def __imul__(self, other): if instanceof(other, Number): self.multiplyScalar( other ) self.multiply( other ) def multiplyScalar(self, s): self.set( self.x*s, self.y*s, self.z*s ) return self def divide(self, other): self.set( self.x/other.x, self.y/other.y, self.z/other.z ) return self def divideScalar(self, s): self.set( self.x/s, self.y/s, self.z/s ) return 
self def __div__(self, other): if instanceof(other, Number): return Vector3( self.x/other, self.y/other, self.z/other ) return Vector3( self.x/other.x, self.y/other.y, self.z/other.z ) def __idiv__(self, other): if instanceof(other, Number): self.divideScalar( other ) self.divide( other ) def show_vec(v): print '-------------' print v.x print v.y print v.z def main(): n = 1 n += 2 print n v1 = Vector3(1, 2, 3) v2 = Vector3(10, 0, 10) print v1, v2 print 'testing +' a = v1 + v2 print 'testing +=' a += 2.5 a += v1 print 'testing -=' a -= v1 a -= 100 print 'testing *' b = v1 * v2 print 'testing *=' b *= 10.0 b *= v2 print 'testing setters' b.x = 1 b.y = 2 b.z = 3 Dart Output class Vector3 { var _z; var _y; var _x; static void __init__(self, [x=0,y=0,z=0]) { self._x = x; self._y = y; self._z = z; Vector3(x,y,z) {Vector3.__init__(this,x,y,z);} set(x,y,z) { return Vector3.__set(this,x,y,z); } static __set(self, x, y, z) { self._x = x; self._y = y; self._z = z; get x { return this._x; set x(value) { print(["x setter", value]); this._x = value; get y { return this._y; set y(value) { print(["y setter", value]); this._y = value; get z { return this._z; set z(value) { print(["z setter", value]); this._z = value; add(other) { return Vector3.__add(this,other); } static __add(self, other) { self.set((self.x + other.x), (self.y + other.y), (self.z + other.z)); return self; operator +(other) { return Vector3.____add__(this,other); } static ____add__(self, other) { if (other is num) { return new Vector3((self.x + other), (self.y + other), (self.z + other)); } else { return new Vector3((self.x + other.x), (self.y + other.y), (self.z + other.z)); __iadd__(other) { return Vector3.____iadd__(this,other); } static ____iadd__(self, other) { if (other is num) { } else { addScalar(s) { return Vector3.__addScalar(this,s); } static __addScalar(self, s) { self.set((self.x + s), (self.y + s), (self.z + s)); return self; sub(other) { return Vector3.__sub(this,other); } static __sub(self, other) 
{ self.set((self.x - other.x), (self.y - other.y), (self.z - other.z)); return self; operator -(other) { return Vector3.____sub__(this,other); } static ____sub__(self, other) { if (other is num) { return new Vector3((self.x - other), (self.y - other), (self.z - other)); } else { return new Vector3((self.x - other.x), (self.y - other.y), (self.z - other.z)); __isub__(other) { return Vector3.____isub__(this,other); } static ____isub__(self, other) { if (other is num) { self.set((self.x - other), (self.y - other), (self.z - other)); } else { multiply(other) { return Vector3.__multiply(this,other); } static __multiply(self, other) { self.set((self.x * other.x), (self.y * other.y), (self.z * other.z)); return self; operator *(other) { return Vector3.____mul__(this,other); } static ____mul__(self, other) { if (other is num) { return new Vector3((self.x * other), (self.y * other), (self.z * other)); } else { return new Vector3((self.x * other.x), (self.y * other.y), (self.z * other.z)); __imul__(other) { return Vector3.____imul__(this,other); } static ____imul__(self, other) { if (other is num) { } else { multiplyScalar(s) { return Vector3.__multiplyScalar(this,s); } static __multiplyScalar(self, s) { self.set((self.x * s), (self.y * s), (self.z * s)); return self; divide(other) { return Vector3.__divide(this,other); } static __divide(self, other) { self.set((self.x / other.x), (self.y / other.y), (self.z / other.z)); return self; divideScalar(s) { return Vector3.__divideScalar(this,s); } static __divideScalar(self, s) { self.set((self.x / s), (self.y / s), (self.z / s)); return self; operator /(other) { return Vector3.____div__(this,other); } static ____div__(self, other) { if (other is num) { return new Vector3((self.x / other), (self.y / other), (self.z / other)); } else { return new Vector3((self.x / other.x), (self.y / other.y), (self.z / other.z)); __idiv__(other) { return Vector3.____idiv__(this,other); } static ____idiv__(self, other) { if (other is num) { } else 
{ show_vec(v) { main() { var a, v1, v2, b, n; n = 1; if (n is num || n is String) { n += 2; } else { v1 = new Vector3(1, 2, 3); v2 = new Vector3(10, 0, 10); print([v1, v2]); print("testing +"); a = (v1 + v2); print("testing +="); if (a is num || a is String) { a += 2.5; } else { if (a is num || a is String) { a += v1; } else { print("testing -="); if (a is num || a is String) { a -= v1; } else { if (a is num || a is String) { a -= 100; } else { print("testing *"); b = (v1 * v2); print("testing *="); if (b is num || b is String) { b *= 10.0; } else { if (b is num || b is String) { b *= v2; } else { print("testing setters"); b.x = 1; b.y = 2; b.z = 3; I have started working on a second backend for PythonJS that outputs Dart code. Dart is a new language by Google that is very similar to JavaScript, it includes all the things missing from JavaScript like: classes, static types, operator overloading, and a compiler that checks all your code. Using dart2js you can translate Dart code into JavaScript code so that it works in all web browsers. One of the limitations of Dart is that it lacks multiple inheritance, [1], [2]. It features Mix-ins and Interfaces, but these can not fully capture all the power that proper multiple inheritance provides. Using PythonJS you can bypass this limitation of Dart and use multiple inheritance. Multiple Inheritance PythonJS implements multiple inheritance in Dart using: static class methods, stub-methods, and interfaces. The method body is placed inside static class methods. The stub-methods are real methods on the instance, and simply forward calls to the static class methods and pass this as the first argument. Sub-classes can extend a method of the parent, and still call the parent's method using the normal Python syntax: parent.some_method(self). 
Python Input

    class A:
        def foo(self):
            print 'foo'

    class B:
        def bar(self):
            print 'bar'

    class C( A, B ):
        def call_foo_bar(self):
            print 'call_foo_bar in subclass C'

        ## extend foo ##
        def foo(self):
            print 'foo extended'

Dart Output

    class A {
      foo() { return A.__foo(this); }
      static __foo(self) {
        print("foo");
      }
    }

    class B {
      bar() { return B.__bar(this); }
      static __bar(self) {
        print("bar");
      }
    }

    class C implements A, B {
      call_foo_bar() { return C.__call_foo_bar(this); }
      static __call_foo_bar(self) {
        print("call_foo_bar in subclass C");
      }
      foo() { return C.__foo(this); }
      static __foo(self) {
        print("foo extended");
      }
      bar() { return B.__bar(this); }
    }

Firefox has had support for the yield keyword for a long time already, but Google Chrome is only just recently supporting this feature as an experimental option. Basically, this means that if you want your website to work in most web browsers, you can forget about using yield and the beauty of generator functions for the next couple of years; it is going to take a while before the majority of users upgrade. Is it time to give up on yield? There is one option...

PythonJS now supports generator functions that will work in all browsers by translating the generator into a class with a state machine at compile time. Using PythonJS to write generator functions also produces JavaScript that runs faster than hand-written JavaScript that uses the native yield keyword. This is likely because native yield is still a rarely used feature, and JITs have not tuned their performance for it. PythonJS translates generator functions into simple classes with a next method, which is very JIT friendly. The results below show native yield is more than 10 times slower than yield in PythonJS. I was unable to test native yield in Chrome; for some reason I could not get it to work even after switching on harmony.

The Fibonacci series computed 1,000 times to 1,000 places in Firefox 28, Python 2.7, PyPy 2.2, and Google Chrome. Lower times are better. The code used in this benchmark is below:
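The original benchmark listing was lost in extraction. As a stand-in illustration (my own code, not the author's), here is a Fibonacci generator together with a hand-rolled class of the kind the PythonJS translator emits: the generator's state lives in instance attributes and each call to next() advances the state machine by one step.

```python
# Hypothetical illustration -- not the original benchmark code.
def fib_gen(n):
    """Yield the first n Fibonacci numbers using a native generator."""
    a, b = 0, 1
    for _ in range(n):
        yield a
        a, b = b, a + b

class FibStateMachine:
    """The same generator unrolled into a class with a next() method,
    roughly the shape of what PythonJS produces at compile time."""
    def __init__(self, n):
        self.a, self.b, self.n = 0, 1, n

    def next(self):
        if self.n <= 0:
            raise StopIteration
        self.n -= 1
        value = self.a
        self.a, self.b = self.b, self.a + self.b
        return value
```

Both forms produce the same series; the class form is what makes yield-style code run in browsers without native generator support.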
Kenmore, MA Math Tutor Find a Kenmore, MA Math Tutor ...I will travel throughout the area to meet in your home, library, or wherever is comfortable for you.Materials Physics Research Associate, Harvard, current Geophysics postdoctoral fellow, MIT, 2010-2012 Physics PhD, Brandeis University, 2010 -Includes experience teaching and lecturing Physics... 16 Subjects: including algebra 1, geometry, precalculus, trigonometry I have three years of experience tutoring in physics, math, biology, and chemistry. I have worked mostly with college level introductory courses and high school students. I have worked with many students of different academic levels from elementary to college students. 19 Subjects: including calculus, ACT Math, SAT math, trigonometry ...I have found with great consistency that students do not learn primarily by hearing explanations. This is especially true in the hard sciences and in math, both of which are skill-intensive. As such, I make sure that all teaching occurs in the context of an activity of some kind. 9 Subjects: including algebra 1, algebra 2, calculus, geometry ...I have worked as a volunteer goalkeeper coach over the past ten years. I was the Junior Varsity coach of a high school girls' soccer team in Dorchester, MA for the past 5 years, and am now the Varsity Coach for Boston Latin Academy. I have my USSF National "D" License and my NSCAA Goalkeeping Level I license. 14 Subjects: including algebra 1, SAT math, Spanish, chemistry ...My name's Isaac, and I am a student in the School of Arts and Sciences at Boston College. I am majoring in psychology (BA) and minoring in Hispanic studies, but as a student at a liberal arts college, I am continuing to explore different interests as much as my schedule allows me to in academic ... 
30 Subjects: including algebra 1, biology, calculus, elementary (k-6th)
Cryptology ePrint Archive: Report 2006/168

How Fast can be Algebraic Attacks on Block Ciphers?
Nicolas T. Courtois

Abstract: In this paper we give a specification of a new block cipher that can be called the Courtois Toy Cipher (CTC). It is quite simple, and yet very much like any other known block cipher. If the parameters are large enough, it should evidently be secure against all known attack methods. However, we are not proposing a new method for encrypting sensitive data, but rather a research tool that should allow us (and other researchers) to experiment with algebraic attacks on block ciphers and obtain interesting results using a PC with a reasonable quantity of RAM. For this reason the S-box of this cipher has only 3 bits, which is quite small. Ciphers with very small S-boxes are believed quite secure; for example, the Serpent S-box has only 4 bits, and in DES all the S-boxes have 4 output bits. The AES S-box is not quite as small but can be described (in many ways) by a very small system of equations with only a few monomials (and this fact can also be exploited in algebraic cryptanalysis). We believe that results on algebraic cryptanalysis of this cipher will have very deep implications for the security of ciphers in general.

Category / Keywords: secret-key cryptography / algebraic cryptanalysis, AES, Serpent, solving systems of sparse multivariate polynomial equations.
Date: received 13 May 2006, last revised 18 May 2006
Contact author: courtois at minrank org
Available format(s): Postscript (PS) | Compressed Postscript (PS.GZ) | PDF | BibTeX Citation
Note: Work in progress. To summarize the main results: it is the first time in history that a block cipher with no special algebraic structure and with a (very) large number of S-boxes is being broken in practice by an algebraic attack.
Version: 20060518:092545 (All versions of this report)
Discussion forum: Show discussion | Start new discussion
[ Cryptology ePrint archive ]
Introduction to Robotics #3: Forward and Inverse Kinematics

In order to plan a robot's movements, we have to understand the relationship between the actuators that we can control and the robot's resulting position in the environment. For static arms, this is rather straightforward: if we know the position/angle of each joint, we can calculate the position of its end-effectors using trigonometry. This process is known as forward kinematics. If we want to calculate the position each joint needs to be at, we need to invert this relationship. This is known as inverse kinematics.

The goals of this lecture are
• to introduce the forward kinematics of mobile robots
• to show how solutions for the inverse kinematics for both static and mobile robots can be derived
• to provide an intuition on the relationship between inverse kinematics and path-planning

Forward Kinematics of Mobile Robots

Whereas the pose of a robotic manipulator is uniquely defined by its joint angles – which can be made available using encoders in almost real-time – this is not the case for a mobile robot. Here, the encoder values simply refer to wheel orientation and need to be integrated over time, which will be a huge source of uncertainty as we will later see. What complicates matters is that for non-holonomic systems, it is not sufficient to simply measure the distance that each wheel traveled; we also need to know when each movement was executed.

A system is non-holonomic when closed trajectories in its configuration space (reminder: the configuration space of a two-link robotic arm is spanned by the possible values of each angle) may not have it return to its original state. A simple arm is holonomic, as each joint position corresponds to a unique position in space. Going through whatever trajectory comes back to the starting point in configuration space will put the robot at the exact same position.
A train on a track is holonomic: moving its wheels backwards by the same amount they have been moving forward brings the train to the exact same position in space. A car and a Roomba are non-holonomic vehicles: performing a straight line and then a right turn leads to the same amount of wheel rotation as doing a right turn first and then going in a straight line; getting the robot to its initial position requires not only rewinding both wheels by the same amount, but also getting their relative speeds right. (Make a drawing of this.)

It should be clear by now that for a mobile robot, not only the traveled distance per wheel matters, but also the speed of each wheel as a function of time. Let's introduce the following conventions.

We establish a coordinate system $X_R$ on the robot and express its speed $\dot{\xi}$ as a vector $\dot{\xi}=[\dot{x}, \dot{y}, \dot{\theta}]^T$. Here $\dot{x}$ and $\dot{y}$ correspond to the speed along the x and y directions, whereas $\dot{\theta}$ corresponds to the rotation around the imaginary z-axis, which you can imagine to be sticking out of the ground. We denote speeds with dots over the variable name, as speed is simply the derivative of distance. We will also establish a world coordinate system $X_I$, which is known as the inertial frame by convention.

Think about the robot's position in $X_R$ real quick. It is always zero, as the coordinate system is fixed on the robot. Therefore, $\dot{\xi}_R$ is the only interesting quantity here, and we need to understand how speeds in $X_R$ map to positions in $X_I$, which we denote by $\xi_I=[x, y, \theta]^T$. These coordinate systems are shown in the figure to the right. Notice that the positioning of the coordinate frames and their orientation are arbitrary. Here, we chose to place the coordinate system in the center of the robot's axle and align $x_R$ with its default driving direction.
In order to calculate the robot's position in the inertial frame, we need to first find out how speed in the robot coordinate frame maps to speed in the inertial frame. This can be done again by employing trigonometry. There is only one complication: a movement along the robot's x-axis might lead to movement along both the x-axis and the y-axis of the world coordinate frame.

By looking at the figure above, we can derive the following components of $\dot{x}_I$. First, $\dot{x}_{I,x}=cos(\theta) \dot{x}_R$. There is also a component of motion coming from $\dot{y}_R$. For positive $\theta$, we can see that the robot actually moves in the negative $X_I$ direction. The projection from $\dot{y}_R$ is therefore given by $\dot{x}_{I,y}=-sin(\theta)\dot{y}_R$. We can now write

$\dot{x}_I=cos(\theta) \dot{x}_R - sin(\theta) \dot{y}_R$

Similar reasoning leads to

$\dot{y}_I=sin(\theta) \dot{x}_R + cos(\theta) \dot{y}_R$

Finally, $\dot{\theta}_I=\dot{\theta}$, which is the case because both the robot's and the world coordinate system share the same z-axis in this example. We can now conveniently write $\dot{\xi}_I=T(\theta)\dot{\xi}_R$ with

$T(\theta)=\left(\begin{array}{ccc} cos(\theta) & -sin(\theta) & 0 \\ sin(\theta) & cos(\theta) & 0 \\ 0 & 0 & 1\end{array}\right)$

We are now left with the problem of how to calculate the speed $\dot{\xi}_R$ in robot coordinates. For this, we make use of the kinematic constraints of the robotic wheels. For a standard wheel, the kinematic constraints are that every rotation of the wheel leads to strictly forward or backward motion and does not allow side-way motion or sliding. We can therefore calculate the forward speed of a wheel $\dot{x}$ using its rotational speed $\dot{\phi}$ (assuming the encoder value/angle is expressed as $\phi$) and radius $r$ by $\dot{x}=\dot{\phi}r$. This becomes apparent when considering that the circumference of a wheel with radius $r$ is $2\pi r$. The distance a wheel rolls when turned by the angle $\phi$ (in radians) is therefore $x=\phi r$.
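As a quick numerical sanity check (a sketch of my own, not part of the original lecture; the function and variable names are mine), the mapping from robot-frame speed to world-frame speed via the rotation matrix $T(\theta)$ can be written out directly:

```python
from math import cos, sin

def robot_to_world(theta, xi_r):
    """Rotate a robot-frame velocity (x_dot, y_dot, theta_dot)
    into the world (inertial) frame by applying T(theta)."""
    x_r, y_r, t_r = xi_r
    return (cos(theta) * x_r - sin(theta) * y_r,   # x_dot in world frame
            sin(theta) * x_r + cos(theta) * y_r,   # y_dot in world frame
            t_r)                                   # rotation speed is unchanged
```

For example, at $\theta = \pi/2$ a pure forward motion in the robot frame maps to motion along the world y-axis, as expected from the figure.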
Taking the derivative of this expression on both sides leads to the above expression.

How each of the two wheels in our example contributes to the speed of the robot's center – where its coordinate system is anchored – requires the following trick: we calculate the contribution of each individual wheel while assuming all other wheels remain un-actuated. In this example, the distance traveled by the center point is exactly half of that traveled by each individual wheel, assuming the non-actuated wheel rotates around its ground contact point. (Make a drawing of this!) We can therefore write

$\dot{x}_R=\frac{r\dot{\phi}_l}{2}+\frac{r\dot{\phi}_r}{2}$

given the speeds $\dot{\phi}_l$ and $\dot{\phi}_r$ of the left and the right wheel, respectively.

Exercise: Think about how the robot's speed along its y-axis is affected by the wheel speeds given the coordinate system in the drawing above. Think about the kinematic constraints that the standard wheels impose. Hard to believe at first, but the speed of the robot along its y-axis is always zero. This is because the constraints of the standard wheel tell us that the robot can never slide.

We are now left with calculating the rotation of the robot around its z-axis. That there is such a thing can be immediately seen when imagining the robot's wheels spinning in opposite directions. We will again consider each wheel independently. Assuming the left wheel to be non-actuated, spinning the right wheel forwards will lead to counter-clockwise rotation. Given an axle diameter (distance between the robot's wheels) $d$, we can now write

$\omega_l d = \phi_r r$

with $\omega_l$ the angle of rotation around the left wheel.
Taking the derivative on both sides yields speeds, and we can write

$\dot{\omega}_l = \frac{\dot{\phi}_r r}{d}$

Adding the rotation speeds up (with the one around the right wheel being negative based on the right-hand grip rule) leads to

$\dot{\theta}=\frac{\dot{\phi}_r r}{d}-\frac{\dot{\phi}_l r}{d}$

Putting it all together, we can write

$\left(\begin{array}{c} \dot{x}_I\\\dot{y}_I\\\dot{\theta}\end{array}\right)=\left(\begin{array}{ccc} cos(\theta) & -sin(\theta) & 0 \\ sin(\theta) & cos(\theta) & 0 \\ 0 & 0 & 1\end{array}\right)\left(\begin{array}{c}\frac{r\dot{\phi}_l}{2}+\frac{r\dot{\phi}_r}{2}\\0\\\frac{\dot{\phi}_r r}{d}-\frac{\dot{\phi}_l r}{d}\end{array}\right)$

Inverse Kinematics

The main problem for the engineer is now to find out how to choose the control parameters to reach a desired position. This problem is known as inverse kinematics. Solving the forward kinematics in closed form is not always possible, however. It can be done for the differential-wheel platform we studied above.

Let's first establish how to calculate the necessary speed of the robot's center given a desired speed $\dot{\xi}_I$ in world coordinates. We can transform the expression $\dot{\xi}_I=T(\theta)\dot{\xi}_R$ by multiplying both sides with the inverse of $T(\theta)$, which leads to

$\dot{\xi}_R=T^{-1}(\theta)\dot{\xi}_I$

Here

$T^{-1}=\left(\begin{array}{ccc}cos \theta & sin \theta & 0 \\ -sin \theta & cos \theta & 0 \\ 0 & 0 & 1\end{array}\right)$

which can be determined by actually performing the matrix inversion or by deriving the trigonometric relationships from the drawing. Similarly, we can now solve

$\left(\begin{array}{c} \dot{x}_R\\\dot{y}_R\\\dot{\theta}\end{array}\right)=\left(\begin{array}{c}\frac{r\dot{\phi}_l}{2}+\frac{r\dot{\phi}_r}{2}\\0\\\frac{\dot{\phi}_r r}{d}-\frac{\dot{\phi}_l r}{d}\end{array}\right)$
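The combined forward-kinematics equation above can be sketched in a few lines of Python (my own illustration; the function names and the simple Euler integration step are assumptions, not from the lecture):

```python
from math import cos, sin

def diff_drive_speed(theta, phi_dot_l, phi_dot_r, r, d):
    """World-frame speed (x_dot, y_dot, theta_dot) of a differential-drive
    robot with wheel radius r and axle length d, heading theta."""
    x_r = r * (phi_dot_l + phi_dot_r) / 2.0     # forward speed in the robot frame
    theta_dot = r * (phi_dot_r - phi_dot_l) / d # rotation around the z-axis
    # y-speed in the robot frame is always zero (no sliding), so only
    # the forward component gets rotated into the world frame.
    return cos(theta) * x_r, sin(theta) * x_r, theta_dot

def integrate_pose(pose, phi_dot_l, phi_dot_r, r, d, dt):
    """One Euler step: integrate the wheel speeds into a new pose (x, y, theta)."""
    x, y, theta = pose
    dx, dy, dtheta = diff_drive_speed(theta, phi_dot_l, phi_dot_r, r, d)
    return x + dx * dt, y + dy * dt, theta + dtheta * dt
```

Equal wheel speeds drive the robot straight ahead; opposite speeds make it turn in place, matching the kinematic constraints discussed above.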
You will now see that your kinematic constraints actually render some desired velocities, namely those with non-zero $\dot{y_R}$, infeasible.

Inverse Kinematics of a Manipulator Arm

We will now look at the kinematics of a 2-link arm that was introduced in last week's lecture. Similar to the process of calculating the required wheel speeds for achieving a desired speed of the local coordinate system, we need to solve the equations determining the robot's forward kinematics for $\alpha$ and $\beta$. This is tricky, however, as we have to deal with complicated trigonometric expressions. You can give it a shot in Mathematica using the code below. For simplicity, $l_1$ and $l_2$ are assumed to be 1.

sol = Solve[Sin[α + β] + Sin[α] == x && Cos[α + β] + Cos[α] == y, {α, β}];
min = sol /. {x -> 1, y -> 1}

This will solve for $\alpha$ and $\beta$ for x=1, y=1. The solutions for this case are obviously $\left(0,\frac{\pi}{2}\right)$ and $\left(\frac{\pi}{2},-\frac{\pi}{2}\right)$. (Think about this real quick.) The general solutions to this problem are not nice, however, with 8 complicated expressions, 6 of which yield complex solutions, such as

$\alpha \rightarrow -ArcCos\left(\frac{x^2 y + y^3 - \sqrt{4 x^4 - x^6 + 4 x^2 y^2 - 2 x^4 y^2 - x^2 y^4}}{2 (x^2 + y^2)}\right)$

$\beta \rightarrow -ArcCos\left(\tfrac{1}{2}(-2+x^2+y^2)\right)$

As such solutions quickly become unwieldy with more dimensions, you can calculate a numerical solution using an approach that we will see is very similar to path planning in mobile robotics. For this, you need to plot the distance of the end-effector from the desired solution in configuration space. To plot this, you need to solve the forward kinematics for every point in configuration space and use the Euclidean distance to the desired target as height.
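The same idea can be run as a plain brute-force search: scan a grid over $(\alpha,\beta)$, evaluate the forward kinematics at each point, and keep the configuration whose end-effector lands closest to the target. This Python sketch is illustrative and not part of the lecture; the grid resolution n is an arbitrary choice of mine.

```python
import math

def fk(a, b):
    """Forward kinematics of the 2-link arm with l1 = l2 = 1 (as in the text)."""
    return (math.sin(a + b) + math.sin(a), math.cos(a + b) + math.cos(a))

def ik_grid(x, y, n=300):
    """Brute-force inverse kinematics: minimize end-effector distance over a grid."""
    best_err, best_cfg = float("inf"), None
    for i in range(n):
        for j in range(n):
            a = -math.pi + 2 * math.pi * i / n
            b = -math.pi + 2 * math.pi * j / n
            px, py = fk(a, b)
            err = math.hypot(px - x, py - y)
            if err < best_err:
                best_err, best_cfg = err, (a, b)
    return best_cfg, best_err
```

For the target (1, 1) the search lands near one of the two exact solutions, with an error bounded by the grid spacing.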
This is shown below and can be accomplished using the commands

x = 1; y = 1;
Show[Plot3D[Sqrt[(Sin[α + β] + Sin[α] - x)^2 + (Cos[α + β] + Cos[α] - y)^2], {α, -π/2, π/2}, {β, -π, π}], ListPointPlot3D[{α, β, 0.1} /. {min}]]

The key point here is that the inverse kinematics problem is equivalent to a path-planning problem in the configuration space. How to find shortest paths in space, that is, finding the shortest route for a robot to get from A to B, will be a major part of this class.

Take-home lessons

• For calculating the forward kinematics of a robot, it is easiest to establish a local coordinate frame on the robot and determine the transformation into the world coordinate frame first.
• Forward and inverse kinematics of a mobile robot are performed with respect to the speed of the robot and not its position.
• For calculating the effect of each wheel on the speed of the robot, you need to consider the contribution of each wheel independently.
• Calculating the inverse kinematics analytically quickly becomes infeasible. You can then plan in the configuration space of the robot using path-planning techniques.

Great lecture! You really made it easy to comprehend. Just wanted to let you know I think there's a typo in the Forward Kinematics section, just before you first introduce the T(theta) matrix. In the transform equation ( XIr = T(theta)*XIi ), the inertial speed vector and robot speed vector are switched, just the subscripts.

Thanks for catching this!
Arithmetic Logical Instructions

Integer Addition (add)

add{bwl} reg[8|16|32], r/m[8|16|32]
add{bwl} r/m[8|16|32], reg[8|16|32]
add{bwl} imm[8|16|32], r/m[8|16|32]

reg[8|16|32] + r/m[8|16|32] -> r/m[8|16|32]
r/m[8|16|32] + reg[8|16|32] -> reg[8|16|32]
imm[8|16|32] + r/m[8|16|32] -> r/m[8|16|32]

Integer adds operand1 to operand2 and stores the result in operand2. When an immediate byte is added to a word or long, the immediate value is sign-extended to the size of the word or long operand. If you wish to decimal adjust (daa) or ASCII adjust (aaa) the add result, use the form of add that stores the result in AL.

Integer adds the 8-bit constant, -126, to the content of the AL register:

addb $-126,%al

Integer adds the word contained in the effective address (addressed by the EDI register plus an offset of 4) to the content of the DX register:

addw 4(%edi),%dx

Integer adds the content of the EDX register to the effective address (addressed by the EDI register plus an offset of 4):

addl %edx, 4(%edi)

Integer Add With Carry (adc)

adc{bwl} reg[8|16|32], r/m[8|16|32]
adc{bwl} r/m[8|16|32], reg[8|16|32]
adc{bwl} imm[8|16|32], r/m[8|16|32]

(reg[8|16|32] + CF) + r/m[8|16|32] -> r/m[8|16|32]
(r/m[8|16|32] + CF) + reg[8|16|32] -> reg[8|16|32]
(imm[8|16|32] + CF) + r/m[8|16|32] -> r/m[8|16|32]

Integer adds operand1 and the carry flag to operand2 and stores the result in operand2. adc is typically executed as part of a multi-byte or multi-word add operation. When an immediate byte is added to a word or long, the immediate value is sign-extended to the size of the word or long operand.
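The multi-word usage mentioned above is the whole point of adc: the carry out of one limb feeds the add of the next. The following Python model of an 8-bit carry chain is illustrative only; the names and the 8-bit limb size are my choices, not from the manual.

```python
MASK = 0xFF  # model 8-bit registers

def add_with_carry(a, b, carry_in):
    """One adc step: return (8-bit result, carry-out flag)."""
    total = a + b + carry_in
    return total & MASK, 1 if total > MASK else 0

def multibyte_add(xs, ys):
    """Add two little-endian byte arrays the way an add/adc chain does."""
    out, cf = [], 0
    for i, (a, b) in enumerate(zip(xs, ys)):
        if i == 0:
            s, cf = add_with_carry(a, b, 0)   # lowest limb: plain add
        else:
            s, cf = add_with_carry(a, b, cf)  # higher limbs: adc
        out.append(s)
    return out, cf
```

For example, 0x01FF + 0x0001 carries out of the low byte into the high byte, just as an add/adc pair would.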
Integer add the 8-bit content of the effective memory address (ESI register plus an offset of 1) and the carry flag to the content of the CL register:

adcb 1(%esi), %cl

Integer add the 16-bit content of the effective memory address (EDI register plus an offset of 4) and the carry flag to the content of the DX register:

adcw 4(%edi), %dx

Integer add the 32-bit content of the EDX register and the carry flag to the effective memory address (EDI register plus an offset of 4):

adcl %edx, 4(%edi)

Integer Subtraction (sub)

sub{bwl} reg[8|16|32], r/m[8|16|32]
sub{bwl} r/m[8|16|32], reg[8|16|32]
sub{bwl} imm[8|16|32], r/m[8|16|32]

r/m[8|16|32] - reg[8|16|32] -> r/m[8|16|32]
reg[8|16|32] - r/m[8|16|32] -> reg[8|16|32]
r/m[8|16|32] - imm[8|16|32] -> r/m[8|16|32]

Subtracts operand1 from operand2 and stores the result in operand2. When an immediate byte value is subtracted from a word, the immediate value is sign-extended to the size of the word operand before the subtract operation is executed. If you wish to decimal adjust (das) or ASCII adjust (aas) the sub result, use the form of sub that stores the result in AL.
Integer subtract the 8-bit constant, -126, from the content of the effective address (addressed by the ESI register plus an offset of 1):

subb $-126, 1(%esi)

Integer subtract the 16-bit constant, 1234, from the content of the effective address (addressed by the EDI register plus an offset of 4):

subw $1234, 4(%edi)

Integer subtract the 32-bit content of the EDX register from the effective address (addressed by the EDI register plus an offset of 4):

subl %edx, 4(%edi)

Integer Subtraction With Borrow (sbb)

sbb{bwl} reg[8|16|32], r/m[8|16|32]
sbb{bwl} r/m[8|16|32], reg[8|16|32]
sbb{bwl} imm[8|16|32], r/m[8|16|32]

r/m[8|16|32] - (reg[8|16|32] + CF) -> r/m[8|16|32]
reg[8|16|32] - (r/m[8|16|32] + CF) -> reg[8|16|32]
r/m[8|16|32] - (imm[8|16|32] + CF) -> r/m[8|16|32]

Subtracts (operand1 and the carry flag) from operand2 and stores the result in operand2. When an immediate byte value is subtracted from a word, the immediate value is sign-extended to the size of the word operand before the subtract operation is executed.

Integer subtract the 8-bit content of the CL register plus the carry flag from the effective address (addressed by the ESI register plus an offset of 1):

sbbb %cl, 1(%esi)

Integer subtract the 16-bit constant, -126, plus the carry flag from the AX register:

sbbw $-126, %ax

Integer subtract the 32-bit constant, 12345678, plus the carry flag from the effective address (addressed by the EDI register plus an offset of 4):

sbbl $12345678, 4(%edi)

Compare Two Operands (cmp)

cmp{bwl} reg[8|16|32], r/m[8|16|32]
cmp{bwl} r/m[8|16|32], reg[8|16|32]
cmp{bwl} imm[8|16|32], r/m[8|16|32]

r/m[8|16|32] - reg[8|16|32]
reg[8|16|32] - r/m[8|16|32]
r/m[8|16|32] - imm[8|16|32]

Subtracts operand1 from operand2, but does not store the result; only changes the flags. cmp is typically executed in conjunction with conditional jumps and the setcc instruction. If an operand greater than one byte is compared to an immediate byte, the immediate byte value is first sign-extended.
Compare the 8-bit constant, 0xff, with the content of the AL register:

cmpb $0xff, %al

Compare the 16-bit content of the DX register with the effective address (addressed by the EDI register plus an offset of 4):

cmpw %dx, 4(%edi)

Compare the 32-bit content of the effective address (addressed by the EDI register plus an offset of 4) to the EDX register:

cmpl 4(%edi), %edx

Increment by 1 (inc)

inc{bwl} r/m[8|16|32]

r/m[8|16|32] + 1 -> r/m[8|16|32]

Adds 1 to the operand and does not change the carry flag. Use the add instruction with an immediate value of 1 to change the carry flag.

Add 1 to the contents of the byte at the effective address (addressed by the ESI register plus an offset of 1):

incb 1(%esi)

Add 1 to the 16-bit contents of the AX register:

incw %ax

Add 1 to the 32-bit contents at the effective address (addressed by the EDI register plus an offset of 4):

incl 4(%edi)

Decrease by 1 (dec)

dec{bwl} r/m[8|16|32]

r/m[8|16|32] - 1 -> r/m[8|16|32]

Subtracts 1 from the operand. Does not change the carry flag. To change the carry flag, use the sub instruction with an immediate value of 1.

Subtract 1 from the 8-bit contents of the effective address (addressed by the ESI register plus an offset of 1):

decb 1(%esi)

Subtract 1 from the 16-bit contents of the BX register:

decw %bx

Subtract 1 from the 32-bit contents of the effective address (addressed by the EDI register plus an offset of 4):

decl 4(%edi)

Logical Comparison or Test (test)

test{bwl} reg[8|16|32], r/m[8|16|32]
test{bwl} r/m[8|16|32], reg[8|16|32]
test{bwl} imm[8|16|32], r/m[8|16|32]

reg[8|16|32] and r/m[8|16|32]
r/m[8|16|32] and reg[8|16|32]
imm[8|16|32] and r/m[8|16|32]

Performs a bit-wise logical AND of the two operands. The result of a bit-wise logical AND is 1 if the value of that bit in both operands is 1; otherwise, the result is 0. test discards the result and modifies the flags. The OF and CF flags are cleared; the SF, ZF and PF flags are set according to the result.
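The flag behavior described above can be modeled directly. This is an illustrative Python sketch; the function name, the width parameter, and the dictionary layout are mine, not from the manual.

```python
def x86_test(op1, op2, width=8):
    """Model of test: AND the operands, discard the result, report the flags."""
    r = op1 & op2 & ((1 << width) - 1)
    return {
        "ZF": int(r == 0),                              # zero flag
        "SF": (r >> (width - 1)) & 1,                   # sign flag: high-order bit
        "PF": int(bin(r & 0xFF).count("1") % 2 == 0),   # parity: even 1-count in low byte
        "OF": 0,                                        # always cleared by test
        "CF": 0,                                        # always cleared by test
    }
```

For instance, testing 0xFF against 0x00 produces a zero result (ZF set), while testing 0xFF against 0x80 leaves the sign bit set (SF set).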
Perform a logical AND of the constant, 0xff, and the 8-bit contents of the effective address (addressed by the ESI register plus an offset of 1):

testb $0xff, 1(%esi)

Perform a logical AND of the 16-bit contents of the DX register and the contents of the effective address (addressed by the EDI register plus an offset of 4):

testw %dx, 4(%edi)

Perform a logical AND of the constant, 0xffeeddcc, and the 32-bit contents of the effective address (addressed by the EDI register plus an offset of 4):

testl $0xffeeddcc, 4(%edi)

Shift (sal, shl, sar, shr)

sal{bwl} imm8, r/m[8|16|32]
sal{bwl} %cl, r/m[8|16|32]
shl{bwl} imm8, r/m[8|16|32]
shl{bwl} %cl, r/m[8|16|32]
sar{bwl} imm8, r/m[8|16|32]
sar{bwl} %cl, r/m[8|16|32]
shr{bwl} imm8, r/m[8|16|32]
shr{bwl} %cl, r/m[8|16|32]

shift-left r/m[8|16|32] by imm8 -> r/m[8|16|32]
shift-left r/m[8|16|32] by %cl -> r/m[8|16|32]
shift-right r/m[8|16|32] by imm8 -> r/m[8|16|32]
shift-right r/m[8|16|32] by %cl -> r/m[8|16|32]

sal (or its synonym shl) left shifts (multiplies) a byte, word, or long value for a count specified by an immediate value and stores the product in that byte, word, or long respectively. The second variation left shifts by a count value specified in the CL register. The high-order bit is shifted into the carry flag; the low-order bit is set to 0.

sar right shifts (signed divides) a byte, word, or long value for a count specified by an immediate value and stores the quotient in that byte, word, or long respectively. The second variation right shifts by a count value specified in the CL register. sar rounds toward negative infinity; the high-order bit remains unchanged.

shr right shifts (unsigned divides) a byte, word, or long value for a count specified by an immediate value and stores the quotient in that byte, word, or long respectively. The second variation divides by a count value specified in the CL register. shr sets the high-order bit to 0.
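The practical difference between sar and shr shows up on operands with the high-order bit set. Below is an illustrative Python model of the 8-bit case; the names are mine, and both functions also model the processor's masking of the shift count to 5 bits.

```python
def shr8(value, count):
    """Logical right shift of an 8-bit operand: the high-order bit becomes 0."""
    return (value & 0xFF) >> (count & 31)

def sar8(value, count):
    """Arithmetic right shift: sign bit replicated, rounds toward -infinity."""
    signed = value - 0x100 if value & 0x80 else value  # reinterpret as signed byte
    # Python's >> on a negative int already rounds toward negative infinity
    return (signed >> (count & 31)) & 0xFF
```

Shifting 0xF9 (which is -7 as a signed byte) right by one gives 0xFC (-4) under sar but 0x7C (124) under shr.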
Right shift, count specified by the constant (253), the 8-bit contents of the effective address (addressed by the ESI register plus an offset of 1):

sarb $253, 1(%esi)

Right shift, count specified by the contents of the CL register, the 16-bit contents of the effective address (addressed by the EDI register plus an offset of 4):

shrw %cl, 4(%edi)

Left shift, count specified by the constant (253), the 32-bit contents of the effective address (addressed by the EDI register plus an offset of 4):

shll $253, 4(%edi)

Double Precision Shift Left (shld)

shld{wl} imm8, reg[16|32], r/m[16|32]
shld{wl} %cl, reg[16|32], r/m[16|32]

shift-left r/m[16|32] by imm8 bits from reg[16|32] -> r/m[16|32]
shift-left r/m[16|32] by %cl bits from reg[16|32] -> r/m[16|32]

shld double-precision left shifts a 16- or 32-bit register value into a word or long for the count specified by an immediate value, MODULO 32 (0 to 31). The result is stored in that particular word or long. The second variation of shld double-precision left shifts a 16- or 32-bit register or memory value into a word or long for the count specified by register CL MODULO 32 (0 to 31). The result is stored in that particular word or long.

shld sets the SF, ZF, and PF flags according to the value of the result; CF is set to the value of the last bit shifted out; OF and AF are undefined.
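The bit movement shld performs can be stated compactly: the destination shifts left and the vacated low-order bits are filled from the high end of the source. An illustrative 32-bit Python model (the function name is mine, not from the manual):

```python
WIDTH = 32
MASK = (1 << WIDTH) - 1

def shld32(dst, src, count):
    """Double-precision left shift: bits vacated in dst are filled from src."""
    count &= 31                       # count is taken modulo 32, as described above
    if count == 0:
        return dst & MASK
    return ((dst << count) | ((src & MASK) >> (WIDTH - count))) & MASK
```

For example, shifting 0x12345678 left by 8 while pulling bits from 0x9ABCDEF0 yields 0x3456789A.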
Use the count specified by the constant, 253, to double-precision left shift a 16-bit register value from the DX register to the effective address (addressed by the EDI register plus an offset of 4):

shldw $253, %dx, 4(%edi)

Use the count specified (%CL MOD 32) by the 32-bit EDX register to double-precision left shift a 32-bit memory value at the effective address (addressed by the EDI register plus an offset of 4):

shldl %cl, %edx, 4(%edi)

Double Precision Shift Right (shrd)

shrd{wl} imm8, reg[16|32], r/m[16|32]
shrd{wl} %cl, reg[16|32], r/m[16|32]

shift-right r/m[16|32] by imm8 bits from reg[16|32] -> r/m[16|32]
shift-right r/m[16|32] by %cl bits from reg[16|32] -> r/m[16|32]

shrd double-precision right shifts a 16- or 32-bit register value into a word or long for the count specified by an immediate value MODULO 32 (0 to 31). The result is stored in that particular word or long. The second variation of shrd double-precision right shifts a 16- or 32-bit register or memory value into a word or long for the count specified by register CL MODULO 32 (0 to 31). The result is stored in that particular word or long.

shrd sets the SF, ZF, and PF flags according to the value of the result; CF is set to the value of the last bit shifted out; OF and AF are undefined.

Use the count specified by the constant, 253, to double-precision right shift a 16-bit register value from the DX register to the effective address (addressed by the EDI register plus an offset of 4):

shrdw $253, %dx, 4(%edi)

Use the count specified (%CL MOD 32) by the 32-bit EDX register to double-precision right shift a 32-bit memory value at the effective address (addressed by the EDI register plus an offset of 4):

shrdl %cl, %edx, 4(%edi)

One's Complement Negation (not)

not{bwl} r/m[8|16|32]

not r/m[8|16|32] -> r/m[8|16|32]

Inverts each bit value of the byte, word, or long; that is, every 1 becomes a 0 and every 0 becomes a 1.
Invert each of the 8-bit values at the effective address (addressed by the ESI register plus an offset of 1):

notb 1(%esi)

Invert each of the 16-bit values at the effective address (addressed by the EDI register plus an offset of 4):

notw 4(%edi)

Invert each of the 32-bit values at the effective address (addressed by the EDI register plus an offset of 4):

notl 4(%edi)

Two's Complement Negation (neg)

neg{bwl} r/m[8|16|32]

two's-complement r/m[8|16|32] -> r/m[8|16|32]

Replace the value of the byte, word, or long with its two's complement; that is, neg subtracts the byte, word, or long value from 0, and puts the result in the byte, word, or long respectively. neg sets the carry flag to 1, unless the initial value of the byte, word, or long is 0. In this case neg clears the carry flag to 0.

Replace the 8-bit contents of the effective address (addressed by the ESI register plus an offset of 1) with its two's complement:

negb 1(%esi)

Replace the 16-bit contents of the effective address (addressed by the EDI register plus an offset of 4) with its two's complement:

negw 4(%edi)

Replace the 32-bit contents of the effective address (addressed by the EDI register plus an offset of 4) with its two's complement:

negl 4(%edi)

Check Array Index Against Bounds (bound)

bound{wl} reg[16|32], r/m[16|32]

r/m[16|32] bound reg[16|32] -> CC is unchanged

Ensures that a signed array index (16- or 32-bit register) value falls within the upper and lower bounds of a block of memory. The upper and lower bounds are specified by a 16- or 32-bit register or memory value. If the signed array index value is not within the bounds, an Interrupt 5 occurs; the return EIP points to the bound instruction.
Check the 16-bit signed array index value in the AX register against the doubleword with the upper and lower bounds specified by DX:

boundw %ax, %dx

Check the 32-bit signed array index value in the EAX register against the doubleword with the upper and lower bounds specified by EDX:

boundl %eax, %edx

Logical And (and)

and{bwl} reg[8|16|32], r/m[8|16|32]
and{bwl} r/m[8|16|32], reg[8|16|32]
and{bwl} imm[8|16|32], r/m[8|16|32]

reg[8|16|32] LAND r/m[8|16|32] -> r/m[8|16|32]
r/m[8|16|32] LAND reg[8|16|32] -> reg[8|16|32]
imm[8|16|32] LAND r/m[8|16|32] -> r/m[8|16|32]

Performs a logical AND of each bit in the values specified by the two operands and stores the result in the second operand.

Table 2-2 Logical AND

    Values      Result
    0 LAND 0    0
    0 LAND 1    0
    1 LAND 0    0
    1 LAND 1    1

Perform an 8-bit logical AND of the CL register and the contents of the effective address (addressed by the ESI register plus an offset of 1):

andb %cl, 1(%esi)

Perform a 16-bit logical AND of the constant, 0xffee, and the contents of the AX register:

andw $0xffee, %ax

Perform a 32-bit logical AND of the contents of the effective address (addressed by the EDI register plus an offset of 4) and the EDX register:

andl 4(%edi), %edx

Logical Inclusive OR (or)

or{bwl} reg[8|16|32], r/m[8|16|32]
or{bwl} r/m[8|16|32], reg[8|16|32]
or{bwl} imm[8|16|32], r/m[8|16|32]

reg[8|16|32] LOR r/m[8|16|32] -> r/m[8|16|32]
r/m[8|16|32] LOR reg[8|16|32] -> reg[8|16|32]
imm[8|16|32] LOR r/m[8|16|32] -> r/m[8|16|32]

Performs a logical OR of each bit in the values specified by the two operands and stores the result in the second operand.
Table 2-3 Inclusive OR

    Values     Result
    0 LOR 0    0
    0 LOR 1    1
    1 LOR 0    1
    1 LOR 1    1

Perform an 8-bit logical OR of the constant, 0xff, and the AL register:

orb $0xff, %al

Perform a 16-bit logical OR of the constant, 0xff83, and the contents of the effective address (addressed by the EDI register plus an offset of 4):

orw $0xff83, 4(%edi)

Perform a 32-bit logical OR of the EDX register and the contents of the effective address (addressed by the EDI register plus an offset of 4):

orl %edx, 4(%edi)

Logical Exclusive OR (xor)

xor{bwl} reg[8|16|32], r/m[8|16|32]
xor{bwl} r/m[8|16|32], reg[8|16|32]
xor{bwl} imm[8|16|32], r/m[8|16|32]

reg[8|16|32] XOR r/m[8|16|32] -> r/m[8|16|32]
r/m[8|16|32] XOR reg[8|16|32] -> reg[8|16|32]
imm[8|16|32] XOR r/m[8|16|32] -> r/m[8|16|32]

Performs an exclusive OR of each bit in the values specified by the two operands and stores the result in the second operand.

Table 2-4 Exclusive OR

    Values     Result
    0 XOR 0    0
    0 XOR 1    1
    1 XOR 0    1
    1 XOR 1    0

Perform an 8-bit exclusive OR of the constant, 0xff, and the AL register:

xorb $0xff, %al

Perform a 16-bit exclusive OR of the constant, 0xff83, and the contents of the effective address (addressed by the EDI register plus an offset of 4):

xorw $0xff83, 4(%edi)

Perform a 32-bit exclusive OR of the EDX register and the contents of the effective address (addressed by the EDI register plus an offset of 4):

xorl %edx, 4(%edi)
Proof of the Schwarz lemma

The proof of the Schwarz lemma is surprisingly easy, once you strip away all the obfuscation to which complex analysis seems prone. Using that writeup's notation, we proceed to do the only thing that can possibly work! Since we wish to prove that f(z)=wz for some types of f, we may as well define g(z)=f(z)/z.

Now, g(z) is a holomorphic function on the unit disk. The only thing that can possibly go wrong is division by zero when z=0, but we demanded that f(0)=0 for precisely that reason. So if we write out the power series for f, we see that f(z) = a[1]z + a[2]z^2 + ... and therefore g(z) = a[1] + a[2]z + ... is well defined on the unit disk.

Now, due to the extremal principle for holomorphic functions (the maximum modulus principle), |g| must attain its maximum over any closed subdomain on the boundary of that subdomain. Take a disk of radius 1-e. Then the maximal value of |g(z)| on the disk is attained at some boundary point |z[0]|=1-e. So for all |z| ≤ 1-e,

|g(z)| ≤ |g(z[0])| = |f(z[0])| / (1-e) < 1/(1-e).

Letting e tend to 0, we see that |g(z)| ≤ 1 for every z in the unit disk, or that |f(z)| ≤ |z|, as required. Furthermore, |f'(0)| = |a[1]| = |g(0)| ≤ 1, which proves the first part of the lemma.

The second part follows immediately. Any equality means that we have |g(z)|=1 for some z inside the unit disk. Using the extremal principle again, we see that g attains its maximal size (which is 1, as we've seen) inside the unit disk. It is therefore a constant function, and f(z) = g(0)z.
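A numerical spot check of the conclusion is easy. The f below — z times a Blaschke factor with parameter a = 1/2 — is my illustrative choice, not from the writeup: it is holomorphic on the unit disk, maps the disk into itself, and satisfies f(0) = 0, so the lemma's bounds |f(z)| ≤ |z| and |f'(0)| ≤ 1 should hold for it.

```python
import random

a = 0.5  # my choice of parameter; real, so its complex conjugate is itself

def f(z):
    """z times a Blaschke factor: f(0) = 0 and |f| < 1 on the unit disk."""
    return z * (z - a) / (1 - a * z)

def check_schwarz(samples=500, seed=0):
    """Return the worst observed value of |f(z)| - |z| on random disk points."""
    rng = random.Random(seed)
    worst = -1.0
    for _ in range(samples):
        z = complex(rng.uniform(-1, 1), rng.uniform(-1, 1))
        if 0 < abs(z) < 0.999:          # stay strictly inside the disk
            worst = max(worst, abs(f(z)) - abs(z))
    return worst
```

The worst-case excess stays at or below zero (up to floating-point noise), and a difference quotient at 0 recovers |f'(0)| = |a| = 1/2 ≤ 1.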
Find a Glendale, CA Algebra Tutor

...I am passionate about education and wish to see every student succeed and learn to set high goals for themselves with every possible chance to successfully meet those goals. When I was a student, I developed a strong work ethic early on, and never let an obstacle hold me back. I never hesitated...
22 Subjects: including algebra 1, English, reading, writing

...Playing with COS allowed me to play violin in such venues as the Denver Center for the Performing Arts, Lincoln Center, Rockefeller University and Queens College. In addition to my duties as First Chair of Violin II in COS, I was also concertmaster at my high school orchestra. My studies gave me many years of experience in chamber music and solo performance.
42 Subjects: including algebra 1, reading, English, elementary (k-6th)

...I was chosen as the Voice and Speech Assistant at The Boston Conservatory my senior year, an honor that is only given to two students in the senior class. I also taught Voice and Speech, Dialects, and Acting at the Young Actors Camp in LA for a summer. I won an MET Award for "Best Actress in a Play" for my portrayal of Catherine in "Proof" by David Auburn in 2011.
16 Subjects: including algebra 1, reading, English, writing

...I also ran a successful campaign for student body president (ASB) and can provide sure-fire tips to win the hearts and minds of your classmates for an unforgettable leadership experience. I got a 5 on the Calculus BC AP test and have done very well at calculus courses at MIT. I can explain it in a way that makes it way easier than you think! This is my favorite subject to teach.
42 Subjects: including algebra 2, algebra 1, reading, Spanish

...I am currently in Calculus III at USC, I tested out of Algebra II in high school and instead went straight to accelerated Pre-Calculus, and I tested out of Calculus I at University, shooting me straight into Calculus II where I scored in the top tier of the class on the final exam. I have helped...
22 Subjects: including algebra 1, algebra 2, reading, English
SAS-L archives -- September 2005, week 3 (#60) LISTSERV at the University of Georgia

Date: Thu, 15 Sep 2005 14:21:31 -0400
Reply-To: Arthur Tabachneck <art297@NETSCAPE.NET>
Sender: "SAS(r) Discussion" <SAS-L@LISTSERV.UGA.EDU>
From: Arthur Tabachneck <art297@NETSCAPE.NET>
Subject: Re: help on translating Talbot Michael Katz's "discontinue a test"
Comments: To: Dew Meng <meng4sas@GMAIL.COM>

Assuming there were some variable name changes and extra asterisks in your code, and that the following more closely approximates what your code looks like, the explanation is straightforward:

data tdm1;
  input teid age @7 (bds01-bds20)(1.);

data tdm2;
  set tdm1;
  array bds(*) bds: ;
  drop i triple;
  triple = 0;
  do i = lbound(bds) to hbound(bds);
    if triple < 3 then triple = (triple + 1) * (bds{i} = 0);
    else bds{i} = 0;
  end;

Triple serves as a counter of the number of consecutive 0s. It starts out as zero and, after it finds three consecutive 0s, it sets all remaining bds values to 0. The counter works by incrementing by 1 (i.e., triple=triple+1) if the current value is zero, since it multiplies (triple + 1) times the logical result (i.e., 0 if false, 1 if true) of whether the current value is equal to zero. So, if it finds a second consecutive zero, it makes triple equal to triple + 1 (or, in this case, 1+1=2) times 1 (since the current value is zero). If the current value was not equal to zero, then triple would be reset to zero.

On Thu, 15 Sep 2005 12:30:43 -0500, mimi <meng4sas@GMAIL.COM> wrote:
> Would you please help me on explaining the code that Talbot suggested
> to discontinue a test.
> Here is the situation.
> I want to score all the items to 0s after 3 consecutive 0 occurs.
> Here is the fake data:
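The masking rule described in the reply — zero out everything after the third consecutive 0 — translates outside SAS as well. Here is an illustrative Python version of the DATA-step loop (not part of the original thread; names are mine):

```python
def discontinue(scores, limit=3):
    """Zero out all items after `limit` consecutive zeros, like the SAS loop."""
    out, streak = [], 0
    for value in scores:
        if streak < limit:
            # mirror of: triple = (triple + 1) * (bds{i} = 0)
            streak = (streak + 1) * (1 if value == 0 else 0)
            out.append(value)
        else:
            out.append(0)   # mirror of: bds{i} = 0
    return out
```

Once three zeros in a row have been seen, every later item is forced to 0; a run of fewer than three zeros leaves the list untouched.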
From MozillaWiki

Improve font installation

Various Unicode fonts are already installed by default on many systems and we should try to support them. However, we should also ensure that supported fonts are really installed, so that users do not wonder why MathML is not working correctly on their system. People have proposed adding some kind of automated installation (bug 233950, bug 295193). One possible direction could be to use downloadable fonts. However, we need to modify nsMathMLChar to make these fonts usable in stretchy operators (bug 736010).

[fred] An add-on adding mathematical fonts in WOFF format is now available.

OpenType MATH table and Unicode Fonts

The OpenType MATH table contains information about size variants of operators and how to build operators with several glyphs. The purpose is to directly read the table in the fonts that contain this information rather than maintaining our own mathfonts*.properties (bug 407059). This will allow us to get rid of mathfonts*.properties for STIX and Asana Math and give support for more mathematical Unicode fonts.

Support for MathJax fonts

MathJax has its own OpenType fonts, which look closer to LaTeX fonts and thus are often preferred by MathJax users. It may be interesting to see if we can support them. They do not seem to contain an OpenType MATH table, though. See bug 701758, bug 732834 and bug 732832.

[fred] First fix pushed to add support for most stretchy constructions in the MathJax fonts (bug 701758)
[fred] Second fix pushed to use MathJax fonts in mathematical text (bug 732834). It improves mathvariant support a bit too (bug 114365).
[fred] MathJax fonts will be used by default in Firefox 13. However, it remains to fix bug 732832 to get complete support. [DONE]

Support for STIX fonts

We are mainly relying on STIX Fonts to build Composite Chars and Variants. We should find other variants of characters and add entries for them.
See bug 407101 and bug 569195.

[fred] A first patch has been pushed for angle brackets and large operators that should fix some bugs reported by users.
[fred] A second patch has been pushed for stretchy accents. This fixes all stretchy bugs reported by users, at least for small sizes.
[fred] A third patch has been pushed to use STIX fonts 1.0. It also adds size variants for integral symbols. [DONE]

Support for Asana Math

Asana Math is available under the Open Font License and "includes almost all mathematical Unicode symbols". Unicode mappings for Composite Chars and Variants are provided, so it would be possible to support them by adding a mathfontAsanaMath.properties. However, it would probably be better to directly read the OpenType MATH table. See bug 407439.
Principles of quantum statistical mechanics

The problem of quantum statistical mechanics is the quantum mechanical treatment of an N-particle system. Suppose the corresponding N-particle classical system has Cartesian coordinates

$q_1, \ldots, q_{3N}$

and momenta

$p_1, \ldots, p_{3N}$

and Hamiltonian

$H = \sum_{i=1}^{3N} \frac{p_i^2}{2m_i} + U(q_1,\ldots,q_{3N})$

Then, as we have seen, the quantum mechanical problem consists of determining the state vector $|\Psi(t)\rangle$ from the Schrödinger equation

$H|\Psi(t)\rangle = i\hbar\frac{\partial}{\partial t}|\Psi(t)\rangle$

Denoting the corresponding operators $\hat{q}_1,\ldots,\hat{q}_{3N}$ and $\hat{p}_1,\ldots,\hat{p}_{3N}$, and the many-particle coordinate eigenstate $|q_1 \cdots q_{3N}\rangle$, the Schrödinger equation can be cast as a partial differential equation by multiplying both sides by $\langle q_1 \cdots q_{3N}|$:

$\left[-\sum_{i=1}^{3N}\frac{\hbar^2}{2m_i}\frac{\partial^2}{\partial q_i^2} + U(q_1,\ldots,q_{3N})\right]\Psi(q_1,\ldots,q_{3N},t) = i\hbar\frac{\partial}{\partial t}\Psi(q_1,\ldots,q_{3N},t)$

where the many-particle wave function is $\Psi(q_1,\ldots,q_{3N},t) = \langle q_1 \cdots q_{3N}|\Psi(t)\rangle$.

Mark Tuckerman Tue May 9 19:40:24 EDT 2000
The epistemic value of rationality

Popp, Alexandru W. A. (2008): The epistemic value of rationality.

Abstract: Models of rational choice use different definitions of rationality; however, there is no clear description of the latter. We recognize rationality as a conceptual conglomerate in which reason, judgment, deliberation, relativity, behavior, experience, and pragmatism interact. Using our definition, the game-theoretic idealized principle of rationality becomes absolute. Our model gives a more precise account of the players and of their true behavior. We show that the Rational Method (RM) is the only process that can be used to achieve a specific goal. We also provide schematics of how information, beliefs, knowledge, actions, and purposes interact with and influence each other in order to achieve a specific goal. Furthermore, ration, the ability to think in the RM framework, is a singularity in time and space. With a unilateral definition of rationality, different models and theories now have a common ground on which we can judge their soundness.

Item Type: MPRA Paper
Language: English
Keywords: conceptual conglomerate, traditional rationality, rational method, ration
Subjects (JEL): C70 - Game Theory and Bargaining Theory: General; C79 - Game Theory and Bargaining Theory: Other; C99 - Design of Experiments: Other; D83 - Search, Learning, Information and Knowledge, Communication, Belief; B00 - History of Economic Thought, Methodology, and Heterodox Approaches: General
Item ID: 17618
Depositing User: Alexandru W. A. Popp
Date Deposited: 02 Oct 2009 10:15
Last Modified: 20 Feb 2013 13:11
URI: http://mpra.ub.uni-muenchen.de/id/eprint/17618
{"url":"http://mpra.ub.uni-muenchen.de/17618/","timestamp":"2014-04-18T03:31:49Z","content_type":null,"content_length":"23827","record_id":"<urn:uuid:0992353c-94e2-4325-b4ed-499d67667c53>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00546-ip-10-147-4-33.ec2.internal.warc.gz"}
Anthony W. Wickstead

I am currently Professor of Pure Mathematics in the Pure Mathematics Research Centre at Queen's University Belfast.

I studied for a PhD at Chelsea College, London under the supervision of Francis Jellett, working initially on the theory of compact convex sets but completing a thesis with the title Linear operators between partially ordered Banach spaces and some related topics. Shortly after that, I switched direction to work on vector lattices and decided to draw a line under my study of compact convex sets by writing up a set of notes on Affine functions on compact convex sets.

My current research is into Banach lattices and linear operators on them. In recent years I have increasingly concentrated on the order structure of spaces of linear operators between two Banach lattices and the relationship between various such spaces. Typical questions I have investigated are when the space of regular operators (i.e. those which are the difference of two positive operators) is a lattice, or when it coincides with the space of all bounded operators. A further strand concerns domination properties, which ask when a positive operator must inherit some desirable property possessed by a positive operator that dominates it.

A list of those of my publications that have reached MathSciNet may be found here, whilst this list contains some more recent publications as well as items like book reviews, articles on computing and mathematics teaching, plus some papers in theoretical physics where I have contributed my (rather minor!) computing skills.

Below are either slides or the text for various talks that I have given over the years which may still be of some interest to people, either for expository reasons or because they contain results or problems that have not been published elsewhere.

In March 2008, I was to talk at a workshop in Sevilla, but was prevented by illness.
I was to give a series of talks on The order structure of regular operators between Banach lattices. This contains some material that is not yet published elsewhere, so it may be of interest.

At the workshop preceding the Positivity V conference, which was held here in July 2007, I gave two series of expository talks. One was on Vector and Banach Lattices and the other on Operators on Banach Lattices.

In the UK there is a long-standing tradition that newly appointed or promoted professors must give a public lecture entitled an inaugural lecture. The audience for this is likely to include not only academic staff from a wide variety of disciplines but also friends and relatives. I gave mine in 1994 and it was the closest that I have ever come to preparing a popular talk. The text (but not the accompanying graphics) is at Pure Mathematics? Positively Applied! I have to admit that afterwards my father confessed that the lecture was the most boring hour of his life!

In 1994 I gave a series of lectures at the Middle East Technical University in Ankara. For a theme, I assembled some familiar material in a novel way by concentrating on Operator Theoretic Characterizations of Completeness in Riesz Spaces.

I think that the very first talk that I gave at a major conference was at Silivri in Turkey in 1976. I prepared far too much material but did at least prepare handouts. This assembles results on the Structure Space and Ideal Centre of a Banach Lattice in a way that I have not seen elsewhere and includes some results (unfortunately only the statements) that have never been published.

In April 2011 I visited Chengdu and Xi'an in China and took many photographs during the visit.
I gave several lectures in Chengdu as well as visiting the Wuhou Memorial Temple, followed by Jinli Street, a tourist shopping area; the water conservancy project at Dujiangyan, as well as lunch afterwards; the Jinsha site museum; and the Giant Panda Breeding Centre, where I saw not only Giant Pandas but also Red Pandas, which are more closely related to the fox than to the giant panda. These photographs show some of the many people who looked after us in Chengdu.

I then visited Xi'an where, after lecturing, I visited the Terracotta Army; the Huaqing Hot Springs, which was the location of Chiang Kai-shek's headquarters at the time of the Xi'an Incident in 1936; the Big Wild Goose Pagoda; the Forest of Stone Steles museum; climbed the Xi'an Bell Tower and looked at the city views from there; and climbed the Xi'an City Walls. On the final night in Xi'an we had a great meal.

Finally, here are some photographs of interesting restaurant scenes, some of primary school children practising for their May Day display, and some street scenes, including a lattice shop!
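For readers unfamiliar with the terminology recurring in the research description and talk titles above, the notion of a regular operator can be sketched in standard notation. This is the textbook definition, not material taken from the talks themselves:

```latex
% Regular operators between Banach lattices (standard definitions,
% not drawn from the talks above).
% Let E and F be Banach lattices. An operator T : E \to F is
% positive if Tx \ge 0 whenever x \ge 0, and regular if it is a
% difference of two positive operators:
\[
  L^{r}(E,F) \;=\; \{\, T = T_{1} - T_{2} \;:\; T_{1},\, T_{2}\ \text{positive} \,\}.
\]
% When F is Dedekind complete, L^{r}(E,F) is itself a vector lattice
% (the Riesz--Kantorovich theorem), and it carries the regular norm
\[
  \|T\|_{r} \;=\; \inf\{\, \|S\| \;:\; S\ \text{positive},\ |Tx| \le S|x|\ \text{for all } x \in E \,\},
\]
% which dominates the ordinary operator norm. The questions mentioned
% above ask, for instance, when L^{r}(E,F) is a lattice without a
% completeness assumption on F, and when L^{r}(E,F) = L(E,F).
```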
{"url":"http://www.qub.ac.uk/puremaths/Staff/Anthony%20Wickstead/Wickstead%20Home%20Page.html","timestamp":"2014-04-21T05:52:19Z","content_type":null,"content_length":"9796","record_id":"<urn:uuid:dd13e589-3b69-4e77-9d7f-86e8b2f65c8e>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00152-ip-10-147-4-33.ec2.internal.warc.gz"}
Marietta, GA SAT Math Tutor

Find a Marietta, GA SAT Math Tutor

...The focus of much of his teaching in recent years is to students who are missing key math fundamentals, students that need help understanding new math concepts, and students that have diverse learning styles and special needs. Settings include: private one-on-one sessions; community group classes...
5 Subjects: including SAT math, geometry, algebra 1, prealgebra

...I currently teach mathematics at the university level. While pursuing my undergraduate degrees in mathematics and computer science, I served as a mathematics tutor for a period of 5 years in the university's mathematics lab. As a result, I became immersed in many levels of mathematics and exposed to the joy of illuminating others with the beauty of mathematics.
11 Subjects: including SAT math, calculus, statistics, geometry

...Again, I am here to help you to be successful. Statistics is a science of data. In order to draw any sensible conclusions from collected data, we must examine the patterns that are revealed in...
23 Subjects: including SAT math, calculus, geometry, Chinese

...Tutoring allows innovative teaching methods, which require doing what it takes so that learning can take place. I'm willing to adapt to meet students' needs and to get them more involved with the mathematical process. My teaching and tutoring methods will create opportunities for students to learn in a safe and trusting environment. - Mrs.
14 Subjects: including SAT math, statistics, geometry, algebra 1

...I start off the SAT writing section by emphasizing the aspects of writing that test graders want to see. I then give students standard frameworks that they can use to answer any question. By the end of our sessions students feel confident writing about any topic, and these skills carry over to AP tests and history essays very well.
17 Subjects: including SAT math, chemistry, physics, writing
{"url":"http://www.purplemath.com/Marietta_GA_SAT_Math_tutors.php","timestamp":"2014-04-18T06:08:27Z","content_type":null,"content_length":"24116","record_id":"<urn:uuid:dc3c84e7-173b-4149-9592-ee6074983e77>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00056-ip-10-147-4-33.ec2.internal.warc.gz"}
Arch Haskell News: May 3 2009

Another update, since there was a bit of a package backlog.

Hackage now has 1251 (+1) Haskell packages, of which 1095 (+43) (87.5%) have been natively packaged for Arch in AUR. All these packages are available via AUR, using the "yaourt" tool.

Here are today's updates, broken down by category, so you can get a sense for what's new in your area of interest:

• haskell-data-reify-0.2: Reify a recursive data structure into an explicit graph.
• haskell-jsmw-0.1: Javascript Monadic Writer base package.
• haskell-preprocessor-tools-0.1.1: A framework for extending Haskell's syntax via quick-and-dirty preprocessors.

2 responses to "Arch Haskell News: May 3 2009"

1. I'm sorry, but do you have to list all these packages in your posts? It's being syndicated on the planet, and it is a bit annoying to have to scroll down all this. How about linking to a page with a single, regularly updated list? Thank you.

   □ I think that's a great idea!
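The headline coverage figure in the post is simple arithmetic; a quick sketch (with the package counts taken directly from the post) reproduces it:

```python
# Reproduce the AUR packaging coverage quoted in the post:
# 1251 packages on Hackage, 1095 of them natively packaged for Arch.
hackage_total = 1251
arch_packaged = 1095

coverage = 100 * arch_packaged / hackage_total
print(f"{coverage:.1f}%")  # → 87.5%
```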
{"url":"https://archhaskell.wordpress.com/2009/05/03/arch-haskell-news-may-3-2009/","timestamp":"2014-04-16T16:13:13Z","content_type":null,"content_length":"66426","record_id":"<urn:uuid:a9820301-c468-4de3-bba4-b98f4749beba>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00637-ip-10-147-4-33.ec2.internal.warc.gz"}
Brentwood, MD Precalculus Tutor

Find a Brentwood, MD Precalculus Tutor

...These courses included fundamental theory and techniques (including numerical) of linear algebra and its applications to engineering and physics. I have also taught undergraduate and graduate courses involving linear algebraic techniques. I am an instructor in Electrical Engineering.
16 Subjects: including precalculus, calculus, physics, statistics

...I am an extremely patient person, and I am usually able to explain math problems in several different ways until they are understood. I have also scored very well on standardized tests: SAT (Old): 1450; ACT: 32; GRE - Math: 167/170, Verbal: 170/170. I can easily relay the strategies needed to go through...
32 Subjects: including precalculus, reading, algebra 2, calculus

...By learning to learn, the investment the student made for the Algebra 1 class that is giving them such a hard time today can help them in their organic chemistry class 4 years from now in college. I have been using MS Excel since its inception and use it every day. I can help you with anything from...
28 Subjects: including precalculus, English, calculus, reading

I have an extensive background in both Math and Physics. I have taught several sections of college-level Physics and the Physics sections of the MCAT and OAT tests, and have helped improve the GRE Math scores of several students. Not only do I know the material that I tutor, but I can help you learn how to take a test to improve your score, as it is not always just about knowing the material.
13 Subjects: including precalculus, chemistry, calculus, physics

...As for athletics, I am skilled in the following areas: Swimming: I was a competitive swimmer for three years and enjoy teaching stroke mechanics and customizing drills to help you progress in swimming. Golf: I was on my high school varsity team for three years. I cover swing mechanics (I...
13 Subjects: including precalculus, writing, calculus, algebra 1
{"url":"http://www.purplemath.com/brentwood_md_precalculus_tutors.php","timestamp":"2014-04-19T20:22:54Z","content_type":null,"content_length":"24656","record_id":"<urn:uuid:5af795fe-871c-4b78-b383-488e985227f3>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00642-ip-10-147-4-33.ec2.internal.warc.gz"}
Atherton Algebra Tutor

Find an Atherton Algebra Tutor

...All these factors combined can result in an undesirable learning environment for some students. One-on-one tutoring eliminates many types of distractions. It is no wonder many of my students improve in their understanding of their math subjects after having been tutored by the teacher.
5 Subjects: including algebra 1, algebra 2, geometry, prealgebra

...Pre-algebra, algebra, geometry, analytic (e.g., Cartesian) geometry, calculus, simple differential equations, matrices; classical mechanics, electricity and magnetism, elementary atomic and quantum physics. I teach as far north as Burlingame and as far south as San Carlos. Before we jump into Alg...
17 Subjects: including algebra 1, algebra 2, calculus, physics

...I am currently teaching various subjects to 8th grade students in East Palo Alto and have been teaching in various classroom settings for about 4 years. My approach is very similar to the way that Common Core should work: I like to foster critical thinking by getting to the core of an issue and...
13 Subjects: including algebra 1, reading, English, writing

...I am bilingual in Spanish and have six years of tutoring and teaching experience. I have done private tutoring for students in 3rd, 5th, and 6th grade, as well as tutored geometry and physics for a 10th grade student and Calculus AB for a 12th grade student. My favorite subjects to tutor are high school math, physics, and Spanish.
27 Subjects: including algebra 2, algebra 1, reading, Spanish

...My name is Maya and I love to work with students of all ages! My tutoring experience includes all elementary school subjects, middle school math and English, and Pre-algebra, Algebra 1, and ESL/reading tutoring for high school and college. I have attended Kumon and a variety of educational options, so I am familiar with many different teaching methods and learning styles.
24 Subjects: including algebra 1, reading, writing, ESL/ESOL
{"url":"http://www.purplemath.com/atherton_algebra_tutors.php","timestamp":"2014-04-19T06:56:22Z","content_type":null,"content_length":"24026","record_id":"<urn:uuid:97a5ae50-7926-4622-b0ab-8186371f7c5c>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00076-ip-10-147-4-33.ec2.internal.warc.gz"}
"In the sequel" - outdated mathematical jargon or precise technical term? up vote 4 down vote favorite Possible Duplicate: http://math.stackexchange.com/questions/907/correct-usage-of-the-phrase-in-the-sequel-history-alternatives As a non-native speaker of English, I have been perplexed by the phrase "in the sequel" as used in textbooks, lecture notes, and even research articles. None of my linguistically inclined native speaker friends have seen it outside of mathematical literature, and the relevant Oxford English Dictionary definition of "sequel" suggests mathematical usage is non-standard: The ensuing narrative, discourse, etc.; the following or remaining part of a narrative, etc.; that which follows as a continuation; esp. a literary work that, although complete in itself, forms a continuation of a preceding one. What I've inferred from context is that it means something along the lines of "for the rest of this book/paper/text", especially as it's usually used to introduce notation and/or convention. Some examples: We hope that the relation between linear transformations and matrices is by now sufficiently clear that the reader will not object if in the sequel, when we wish to give examples of linear transformations with various properties, we content ourselves with writing down a matrix. --Paul Halmos, Finite-Dimensional Vector Spaces, p. 86 [Here, and in the sequel, Card(S) denotes the number of elements in the finite set S.] --J.P. Serre, Local Fields, p. 64 In the sequel we shall denote by ∅ the empty set and by {pt} a set with one element. -- Pierre Schapira, Algebra and Topology course notes, p. 8 In the sequel, we will denote by L(C ) the configuration space of any convergent CFG C. --E. Goles et. al., Sandpile Models and Lattices: a comprehensive survey, Theoretical Computer Science, 2004, Vol 322, Issue 2, p. 398 So my actual question is two-fold. 1. What does the phrase actually mean? 2. 
When is its use warranted over the use of phrases such as "for the rest of the book/paper/text"? soft-question mathematical-writing 9 It means exactly what the OED says it means. – Robin Chapman Jul 28 '10 at 23:32 2 .. This was also on mathunderflow, where Pete Clark gave a satisfactory answer to it. – Harry Gindi Jul 28 '10 at 23:38 3 I wonder why was this copied from Math.SE? – Mariano Suárez-Alvarez♦ Jul 28 '10 at 23:43 2 Indeed, it is because of this brilliant answer math.stackexchange.com/questions/907/… that I gained the ability to comment on posts at Math.SE. I agree that it is a little curious that this question appeared here less than a day after it got asked and answered on the other site. – Pete L. Clark Jul 28 '10 at 23:59 4 I don't understand why people are voting to close this question. – Kevin H. Lin Jul 29 '10 at 1:11 show 6 more comments closed as off topic by Robin Chapman, Harry Gindi, Mariano Suárez-Alvarez♦, Akhil Mathew, Andrew Stacey Jul 29 '10 at 8:00 Questions on MathOverflow are expected to relate to research level mathematics within the scope defined by the community. Consider editing the question or leaving comments for improvement if you believe the question can be reworded to fit within the scope. Read more about reopening questions here.If this question can be reworded to fit the rules in the help center, please edit the question. 3 Answers active oldest votes It means "from now on." (Such as, "in the sequel, $K$ will denote a perfect field...") This does, in fact, tend to appear only in older books, or in books in translation. I also was up vote 4 down confused when I first saw the term. add comment From Online etymology dictionary sequel: early 15c., "train of followers," from O.Fr. sequelle, from L.L. sequela "that which follows, result, consequence," from sequi "to follow," from PIE base *sekw- (cf. Skt. sacate up vote 3 "accompanies, follows," Avestan hacaiti, Gk. hepesthai "to follow," Lith. seku "to follow," L. 
secundus "second, the following," O.Ir. sechim "I follow"). Meaning "consequence" is attested down vote from late 15c. Meaning "story that follows and continues another" first recorded 1510s. So maybe you can also use it as "The sequel of this theorem is that this and that happen." or in "As a sequel of the intermediate value theorem you can ascertain the existence of solutions of many equations." – ABC Jul 28 '10 at 23:42 add comment I have always understood "in the sequel" to mean "in what follows". I have never checked but I assumed this was from the Latin as in "sequence" and "non sequitur" http://en.wikipedia.org/ up vote 2 In the usage you refer to it means what follows in the same article. The sequel to a film is the next film. down vote I see Franklin has checked the etymology. Related also is the Latin abbreviation et seq., which means "and what follows." Use it like this, "For a summary of this fact, see Lemma 2.11 et seq. in Fakename's article." – Willie Wong Jul 29 '10 at 0:24 1 Right. This is an abbreviation for "et sequitur"; similarly we have et al. for "et aliter" (and others) and etc. for "et cetera" (and things). – Bruce Westbury Jul 29 '10 at 1:22 Actually, it is 'et sequens'. The verb is 'sequor'. 'Sequitur' is the third person singular form from which 'sequitur' the noun is derived. 'Sequens' is the present active participle. 4 This is not just a quibble on etymology: 'sequitur' the noun has a specific meaning as logically following, whereas 'sequens' the participle just means "following" in general. So as 'et seq.' is usually used to refer to the "narrative which follows", a reader thinking of "et sequitur" runs the risk of finding a non sequitur. =) Okay, this is getting too much to be LatinOverflow... – Willie Wong Jul 29 '10 at 1:48 2 (Okay, I can't resist: I was taught that 'et al.' expands to 'et alii/aliae/alia' for people, and 'et alibi' for places. 
I don't recall the word 'aliter', so I looked it up, and it seems to be the adverb meaning 'otherwise'.) – Willie Wong Jul 29 '10 at 1:57 Willie, Thanks. – Bruce Westbury Jul 29 '10 at 7:08 show 5 more comments Not the answer you're looking for? Browse other questions tagged soft-question mathematical-writing or ask your own question.
{"url":"http://mathoverflow.net/questions/33730/in-the-sequel-outdated-mathematical-jargon-or-precise-technical-term?sort=votes","timestamp":"2014-04-20T13:53:36Z","content_type":null,"content_length":"66766","record_id":"<urn:uuid:7e29a9e8-1cf9-4bfc-9e86-d8710238149a>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00021-ip-10-147-4-33.ec2.internal.warc.gz"}