Houston Prealgebra Tutor
Find a Houston Prealgebra Tutor

...My field is essentially applied physics. Whether the course involves classical Newtonian mechanics, electromagnetism, or special relativity, I have no difficulty guiding a student through these physics courses. In high school, I took AP Physics and earned a 5 on the test.
37 Subjects: including prealgebra, chemistry, calculus, writing

...Even IF a chemistry teacher/tutor understands chemistry, they do not necessarily teach it well or EVEN LIKE IT! This saddens me, because every teacher should LOVE what they teach. I earned a bachelor's degree in chemistry as well as a minor in education from Texas A&M University and loved every minute of it!
5 Subjects: including prealgebra, chemistry, algebra 2, geometry

Most math problems can be solved in 3-5 steps! As a certified math teacher in Cy-Fair and as a tutor, I believe that anyone can learn and understand math. Problems that appear complicated are just a series of simple concepts woven together.
16 Subjects: including prealgebra, reading, GRE, algebra 1

...During my medical school curriculum, I took a course in biostatistics. We learned many methods for performing statistical analysis in biological research. I also used these methods in various research projects that I conducted.
42 Subjects: including prealgebra, Spanish, chemistry, writing

...The strategies I taught in the Mathcounts program are identical to the strategies used to solve SAT math problems. While in college, I worked for my professors and tutored college students in Calculus I, College Math, and Geometry. My experience as a teacher taught me that not every child learns math the same way.
24 Subjects: including prealgebra, calculus, geometry, statistics
{"url":"http://www.purplemath.com/Houston_Prealgebra_tutors.php","timestamp":"2014-04-19T05:27:15Z","content_type":null,"content_length":"23864","record_id":"<urn:uuid:0e4ebac5-af56-4540-85e2-918ca19f993a>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00568-ip-10-147-4-33.ec2.internal.warc.gz"}
[FOM] Infinite ordinals in Zermelo set theory
Robert Black mongre at gmx.de
Sun Feb 8 04:24:37 EST 2009

Frode Bjørdal wrote:
> Apart from an historic interest, my main, and systematic, interest in this
> is whether (and if so how, and how to do it most elegantly/optimally)
> Zermelo set theory will be strong enough to account for infinite ordinals
> in some other sense than von Neumann's, e.g. in some sense related to
> ordinal notation. Such a question seems relevant as e.g. Saunders Mac Lane
> has been on record stating that bounded Zermelo may suffice as a
> foundation for mathematics. (If I remember correctly, Adrian Mathias has
> stated that Mac Lane was not fond of von Neumann ordinals.)

If you don't have replacement and want plenty of ordinals, define them using Dana Scott's trick as equivalence classes of orderings *of lowest possible rank*. This is done in Michael Potter's book _Set Theory and its Philosophy_. It obviously gives you all the ordinals below beth_omega.
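Scott's trick, as alluded to above, can be sketched as follows (the notation is mine, not the post's): with Foundation, and with as much of the cumulative hierarchy as Zermelo set theory can build, assign to each well-ordering W the set of well-orderings isomorphic to W whose rank is as small as possible.

```latex
% A hedged sketch of Scott's trick. \rho(W') denotes the rank of W' in the
% cumulative hierarchy; Foundation guarantees a least rank among the
% isomorphs of W, so ord(W) is a nonempty set, with no appeal to Replacement.
\[
  \operatorname{ord}(W) \;=\;
  \bigl\{\, W' \;:\; W' \cong W \;\wedge\;
     \forall W''\,\bigl(W'' \cong W \rightarrow \rho(W') \le \rho(W'')\bigr) \bigr\}
\]
```

Two well-orderings then receive the same "ordinal" exactly when they are isomorphic, which is all the construction needs.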
{"url":"http://www.cs.nyu.edu/pipermail/fom/2009-February/013394.html","timestamp":"2014-04-17T16:23:53Z","content_type":null,"content_length":"3463","record_id":"<urn:uuid:91b62bb6-67f3-4e7a-a28b-e6f0230bd050>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00372-ip-10-147-4-33.ec2.internal.warc.gz"}
Vector Reflection
April 11th 2006, 11:31 AM #1

First of all, sorry if I posted this in the wrong forum. This is for my discrete preparation course for university and there was no discrete forum under the high school heading. Anyway, here is the question:

Vectors 'a' and 'b' are drawn tail-to-tail. Vector 'c' is the reflection of 'a' in the line containing vector 'b'. Express 'c' as a linear combination of 'a' and 'b'.

I tried showing it through the use of the projection of a vector formula, but I keep getting stuck. Could any of you please offer some guidance or a suggestion on where to start?

I've attached an image to demonstrate what I've done:
1.: $|\vec{a}|=|\vec{a'}|$
2.: $\vec{a}+\vec{a'}=c \cdot \vec{b}$ (c is a constant factor)
Therefore: $\vec{a'}=c \cdot \vec{b}-\vec{a}$
My "solution" looks a little bit too easy. The other possibility I can think of is that you use matrices.

Resolve $\mathbf{a}$ into components parallel and orthogonal to $\mathbf{b}$.
Parallel component: $(\mathbf{a} \cdot \hat{\mathbf{b}})\,\hat{\mathbf{b}}$
Orthogonal component: $\mathbf{a} - (\mathbf{a} \cdot \hat{\mathbf{b}})\,\hat{\mathbf{b}}$
$\mathbf{c} = (\mathbf{a} \cdot \hat{\mathbf{b}})\,\hat{\mathbf{b}} - \bigl(\mathbf{a} - (\mathbf{a} \cdot \hat{\mathbf{b}})\,\hat{\mathbf{b}}\bigr) = 2(\mathbf{a} \cdot \hat{\mathbf{b}})\,\hat{\mathbf{b}} - \mathbf{a}$
Which may be rewritten: $\mathbf{c} = 2\,\dfrac{\mathbf{a} \cdot \mathbf{b}}{|\mathbf{b}|^2}\,\mathbf{b} - \mathbf{a}$, which is the form required.

Wow, thanks guys. I totally forgot about the orthogonal vector.
Edit: Why do you subtract the parallel from the orthogonal? Wouldn't that just give you the length of 'a'?
Last edited by dimatt; April 14th 2006 at 07:34 PM.

It gives a vector with the component orthogonal to $b$ flipped from one side of $b$ to the other; that is, it reflects the orthogonal component in $b$.

Oh, I see where I went wrong. I kept looking at c as the projection of a on b for some reason.
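The closed form above is easy to check numerically; here is a small sketch in plain Python (mine, not part of the original thread):

```python
def dot(u, v):
    # Euclidean dot product of two same-length vectors given as lists.
    return sum(x * y for x, y in zip(u, v))

def reflect(a, b):
    """Reflect vector a in the line spanned by vector b (any dimension).

    Implements c = 2 * (a . b) / (b . b) * b - a.
    """
    s = 2.0 * dot(a, b) / dot(b, b)
    return [s * bi - ai for ai, bi in zip(a, b)]

# Reflecting (1, 0) in the line along (1, 1) swaps the components.
print(reflect([1.0, 0.0], [1.0, 1.0]))  # -> [0.0, 1.0]
```

Reflecting the result again in the same line recovers the original vector, which is a handy sanity check on the sign of the orthogonal component.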
{"url":"http://mathhelpforum.com/discrete-math/2549-vector-reflection.html","timestamp":"2014-04-20T18:34:11Z","content_type":null,"content_length":"52203","record_id":"<urn:uuid:5be24798-cf60-4912-b88f-f197a668d737>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00425-ip-10-147-4-33.ec2.internal.warc.gz"}
Technological Change, Distributive Bias and Labor Transfer in a Two Sector Economy
Uma Lele and John W. Mellor
Reprinted from the Oxford Economic Papers, Vol. 33, No. 3, November 1981

By UMA LELE and JOHN W. MELLOR*

Slow growth in overall employment and unequal distribution of benefits from the new foodgrain technologies continue to be two of the most pressing current problems of many low income countries. There have been efforts to increase employment rapidly without substantial increase in the rate of growth of food production, e.g. in India following the 1971 election. However, such attempts have generally been accompanied by high rates of inflation, particularly of food prices. This is because as much as 60 percent of the increase in income of low income wage earners in developing countries is spent on consumption of cereals alone (John W. Mellor and Uma Lele 1973). And yet, the growth in food production in developing countries has barely kept pace with the growth of population. The foodgrain sector has thus not only been a slow generator of additional employment and income; through inadequate supply of wage goods it has also constituted a major constraint to the growth of nonagricultural employment. The question of labor transfers has, of course, received extensive treatment in the development literature and especially in two-sector models1 (most notably by W. Arthur Lewis, Fei-Ranis, Jorgensen, Todaro and Harris).
A few formulations, such as those by Dixit and Hornsby, also deal with increasing production of wage goods, but do not allow for technological change.2 Various others treat the question of marketed surpluses of food, but do not incorporate it formally in models of growth or relate it to labor supply as a separate but interacting variable.3 The variations in the distributive bias of the different types of new technologies in foodgrain production have, however, been extensively documented in the empirical literature.4 The critical role of the wage goods constraint in creating nonagricultural employment has also been recognized by policymakers, but only implicitly. Consequently, unlike Mainland China, few developing countries have had the political will or the institutional mechanisms to mobilize the limited domestic food surpluses for consumption of wage earners without causing the prices of food to rise in relation to those in the nonagricultural sector.

*Uma Lele is Senior Economist, the World Bank, and John W. Mellor is Director, International Food Policy Research Institute, Washington, D.C., U.S.A. We are grateful to Chandrashekhar Ranade for considerable assistance on the paper, particularly in developing the necessary proof. We also acknowledge the contribution of an anonymous reviewer in correcting inaccuracies and improving clarity of presentation.
1 For a detailed review of two sector models see Mellor (1974).
2 See Mellor (1974).
3 See Mellor (1974).
4 Mellor and Lele (1973). For a detailed analysis of several innovations in two major locations in the Philippines, see Chandrashekhar G. Ranade (1977). See also, for India, C. H. H. Rao (1975).
These price increases have discouraged decisionmakers from following a policy of expanding employment.5 Similarly, few developing countries have relied on rapidly increasing imports of cereals as a way of expanding employment, partly arising out of a perception of inelastic demand for their own exportable surpluses. In agriculture, as the classic sector of diminishing returns, the production increase necessary to release the wage goods constraint is of course achieved largely through technological change. Agricultural technologies, however, vary substantially in their distributive bias. They therefore have important implications for the generation of employment directly in the agricultural sector. In addition, the different demand elasticities among the various income classes of food producers also affect the size of the marketable surplus of the wage goods that is generated by the foodgrains sector. The initial employment effect, and the consequent size of the marketed surplus, thus in turn affect the prices of food relative to nonfood output as well as the level of real wages in the nonfoodgrain sector. These factors are thus crucial in determining the rate at which the wage goods constraint is released and off-farm employment is generated. In this context we analyze the effect of alternative assumptions with respect to the distributive bias of technological change in the foodgrain sector on (a) the marketable surplus from that sector, (b) the rate of growth of nonfoodgrain sector employment, (c) the price of foodgrain in relation to nonfoodgrain output and (d) the degree of factor intensity in the nonfoodgrain sector. We examine these relationships with the use of a two-sector model similar to the large family of dualistic models, so as to focus on the critical role of food production in influencing labor transfers, and to analyze the complex interactions of the food and the labor markets.
The distinguishing features of the two-sector model developed in this paper are: (1) incorporation of biased technological change in the foodgrain sector and (2) separation of the food and labor markets into two independent but interacting markets. Rather than assuming that food moves commensurately and automatically with labor, we assume the marketable surplus of food to be influenced by the distribution of income and the different price and income elasticities of demand of landowners and laborers in the foodgrain producing sector for domestic consumption of foodgrains. Technologically induced changes in income distribution in the foodgrain sector therefore affect the demand for food in the foodgrain sector, the marketable surplus, the price of foodgrains in terms of nonfoodgrain output and the rate of labor transfers to the nonfoodgrain sector.

5 For a critical analysis of such policies in India, see Lele (1971).

The model also provides results relating to factor intensity in the nonfoodgrain sector. It illustrates how the directions of change in these two factors are influenced by the direction of distributive bias and the nature of interaction between the food and the labor markets. These results are substantially different from those in previous models. The sharp differences between low and high income consumers in their elasticities of demand for food are well documented. In India, for example, cross-sectional estimates of income elasticities of demand indicate levels of about 0.8 and 0.2 for the bottom two and top two deciles respectively.6 On the whole, income elasticities of demand for foodgrains are, however, observed to be less than one and are assumed to be so in this model.7 In order to focus on the most important relationships from the point of view of development policy, some additional assumptions have been made.
For instance, the sum of the absolute magnitudes of the income elasticity of demand (η) and the elasticity of the budget share with respect to the change in the relative price of foodgrains (e) is assumed to be less than 1, as empirically the absolute magnitude of e is usually expected to be small, i.e. closer to zero than to 1. In the labor market, the formulation assumes perfect mobility between sectors so that, at equilibrium, the ratio between the wage rate in the nonfoodgrain sector and the average labor income in the foodgrain sector is constant. The average labor income in the foodgrain sector is determined by the total labor income generated by the flow of labor in the foodgrain sector divided equally among the total stock of labor. Per capita income of the workers in the foodgrain sector then maintains a constant relationship to the level of real wages in the nonfoodgrain sector. We assume an underemployment equilibrium in the foodgrain sector at a given wage W̄ as depicted in Fig. 1. The conditions of low productivity and the labor-leisure choices in traditional agriculture which lead to such an underemployment equilibrium have been well analyzed in the literature (Nakajima, 1961, Mellor, 1963 and Sen, 1966). The assumption of underemployment equilibrium should not be confused with an assumption of zero marginal productivity of labor.8 Rather, our assumption reflects the widely noted reality of a highly elastic supply of labor from agriculture, if the wage goods constraint is relaxed.

6 Mellor and Lele (1973). For the Philippines, Goldman and Ranade (1976) find that the income elasticity of demand for cereals, mainly rice, in the lowest income decile is 1.05 while it is 0.41 for the top decile.
7 The results of the model remain unchanged irrespective of whether the wage rate in the nonfoodgrain sector is a multiple of or equal to the average labor income in the foodgrain sector.
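The assumed signs of e and η − 1 can be made concrete with a constant-elasticity budget-share function. The functional form and all numbers below are my own illustration; the paper restricts only the signs and requires |η| + |e| < 1:

```python
# Illustrative budget-share function b(P, y) with constant elasticities:
#   (db/dP)(P/b) = e < 0   and   (db/dy)(y/b) = eta - 1 < 0.
# Parameter values are assumptions of mine: |eta| + |e| = 0.85 < 1.
b0, e, eta = 0.9, -0.15, 0.7

def b(P, y):
    # Budget share of foodgrains at relative price P and income y.
    return b0 * P ** e * y ** (eta - 1.0)

def elasticity(f, x, h=1e-6):
    # Numerical point elasticity x * f'(x) / f(x) via central differences.
    return (f(x * (1 + h)) - f(x * (1 - h))) / (2 * h * f(x))

P0, y0 = 1.0, 2.0
# The numerical elasticities recover the assumed e and eta - 1.
assert abs(elasticity(lambda P: b(P, y0), P0) - e) < 1e-5
assert abs(elasticity(lambda y: b(P0, y), y0) - (eta - 1.0)) < 1e-5
```

With η < 1, the share spent on food falls as income rises, which is exactly the mechanism that makes the distribution of the gains from new technology matter for the marketed surplus.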
It should be noted, much conventional wisdom to the contrary, that when the physical environment dictates a short, peak work period, the wage rate in agriculture at that season may be higher than that in nonagriculture at that or any other season, while concurrently the average product or total yearly income is lower in agriculture than in nonagriculture. For empirical evidence, see Ranade (1977), p. 108.
8 For a full analysis of this important distinction, see Mellor (1963) and Sen (1966).

FIG. 1. Equilibrium in foodgrain sector labor market (total product plotted against labor employed, l_A, with equilibrium employment l_A*).

1. Analytical framework

The production function for foodgrains, assumed to have constant returns to scale and diminishing marginal rates of substitution, is as follows:

A = F(N, E)   (1)

such that

F_N = ∂A/∂N > 0, F_E = ∂A/∂E > 0, ∂²A/∂N² < 0 and ∂²A/∂E² < 0

where A is the foodgrains output, and N and E are the levels of land and labor inputs, respectively. Both land and labor are measured in efficiency units such that N = x·Z and E = y·l_A, where x and Z are respectively the efficiency and the fixed amount of land, and y and l_A are respectively the efficiency and the amount of labor employed. Both x and y are exogenously given and depend upon technology t.9 It is assumed that technological change increases the efficiency of land faster than that of labor, that is,

(dx/dt)(1/x) = λ_Z > λ_L = (dy/dt)(1/y)   (2)

where λ_Z and λ_L are the rates of growth of the efficiency of land and labor. In the foodgrain labor market an equilibrium is reached at a constant real wage (W̄) equalizing the marginal physical productivity of labor, and hence

W̄ = y·F_E   (3)

such that l_A ≤ L_A, where L_A is the total foodgrain labor force. Equilibrium in the foodgrain sector labor market is shown in Fig. 1.
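The sign conditions in (1) can be checked on a concrete example. The Cobb-Douglas form below, and its exponents, are my own illustrative choice; the paper assumes only constant returns and the signs of the first and second derivatives:

```python
# A concrete constant-returns example of (1): F(N, E) = N**0.6 * E**0.4.
# The exponents are assumptions of mine, not taken from the paper.

def F(N, E):
    return N ** 0.6 * E ** 0.4

def d(f, x, h=1e-5):
    # Central-difference first derivative of a one-argument function.
    return (f(x + h) - f(x - h)) / (2 * h)

N0, E0 = 4.0, 9.0
FN = d(lambda n: F(n, E0), N0)                    # marginal product of land
FE = d(lambda e_: F(N0, e_), E0)                  # marginal product of labor
FNN = d(lambda n: d(lambda m: F(m, E0), n), N0)   # second derivative in N
FEE = d(lambda e_: d(lambda f_: F(N0, f_), e_), E0)

assert FN > 0 and FE > 0          # positive marginal products
assert FNN < 0 and FEE < 0        # diminishing returns to each factor
assert abs(F(2 * N0, 2 * E0) - 2 * F(N0, E0)) < 1e-9  # constant returns
print("sign conditions in (1) hold")
```

Constant returns also imply the Euler relation F_N·N + F_E·E = F, which is what makes the factor shares S_L and S_Z below sum to one.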
Then the relative share of foodgrain labor is

S_L = W̄·l_A/A = F_E·E/A   (4)

Further, the average income of laborers in the foodgrain sector is

l_A·W̄/L_A = S_L·A/(rL)   (5)

where r = the proportion of the foodgrain labor force in the total labor force L, that is, r = L_A/L. The marketed supply of foodgrains, M_s, to the nonfoodgrain sector is the difference between output and consumption in the foodgrain sector, so that

M_s = A − C̄ − b·S_L·A   (6)

where C̄ = constant consumption of foodgrains by landlords, and b = the budget share of foodgrains for laborers, such that

b = b(P, y)   (7)

where P is the relative price of foodgrain output with the price of nonfoodgrain output as the "numeraire". Further,

(∂b/∂P)(P/b) = e < 0 and (∂b/∂y)(y/b) = η − 1 < 0

where e is the elasticity of the budget share with respect to a change in price and η is the income elasticity of demand for foodgrains. Note that the model thus allows for different income elasticities of demand for landlords (assumed to be equal to zero) and laborers (assumed to be less than one).

9 For convenience, time and technology are denoted by the same variable t.

The production function for the nonfoodgrain sector is Cobb-Douglas, linear homogeneous of the first degree, as follows:

Q = K^α · L_n^(1−α)   (8)

where Q = nonfoodgrain output, K = the exogenously given capital stock, L_n = labor input in the nonfoodgrain sector, and α = the relative share of capital. In the nonfoodgrain sector laborers are employed at a wage rate W equalling the marginal productivity of labor, i.e.,

W = (1−α)·Q/L_n = (1−α)·K^α·[(1−r)L]^(−α)   (9)

Labor migrates from the foodgrain sector to the nonfoodgrain sector until the wage rate in the nonfoodgrain sector is equal to a constant proportion β of the per capita income of foodgrain laborers:

W/P = β·(l_A·W̄/L_A), where β ≥ 1   (10)

depending upon the marginal productivities of labor in the two sectors. Market demand for food in the nonfoodgrain sector, M_D, is equal to the budget share allocated to food consumption out of wage income by the nonfoodgrain laborers, i.e.
b·(W/P)·L_n. Thus in the foodgrain market, equilibrium is attained when

M_s = A − C̄ − b·S_L·A = b·(W/P)·L_n = M_D   (11)

That is,

A − C̄ − b·S_L·A − β·b·S_L·A·(1−r)/r = 0   (12)

This describes the general equilibrium system. The formulation consists of six predetermined variables, namely, capital (K), total labor (L), the quantity of land (Z), the foodgrain wage (W̄), and the efficiencies of land (x) and labor (y). It can be shown that, given these variables, all the endogenous variables (l_A, A, S_L, r, P, M_s and W/P) can be uniquely determined. Note that, given W̄, Z, x and y, one can uniquely determine the labor input (l_A), output (A) and the share of labor (S_L) from equations (3), (1) and (4) respectively (Fig. 1). Further, differentiating (10) and (12) partially with respect to r, we get, respectively, the following:

∂P/∂r = (P/r)·[1 + rα/(1−r)] ... for the labor market, and   (13)

∂P/∂r = (P/r)·(η/e) ... for the foodgrain market.   (14)

All the terms on the right hand side of (13) are positive, i.e. ∂P/∂r > 0, and hence the price of foodgrain relative to nonfoodgrain output declines when the proportion of population in the foodgrain sector declines, with respect to the labor market. This is explained by the fact that, ceteris paribus, as the proportion of population in the foodgrain sector declines, per capita income in that sector increases, and for equilibrium in the labor market to be maintained the adjustment has to come from a decline in the price of foodgrain relative to nonfoodgrain output. Additionally, since η > 0 > e, the right hand side of equation (14) is negative. Therefore the price of foodgrain relative to nonfoodgrain output increases as r declines, with respect to the foodgrain market.

FIG. 2. General equilibrium (the upward-sloping labor-market schedule and the downward-sloping foodgrain-market schedule in the (r, P) plane intersect at r*).

Again, this is explained by the fact that, ceteris paribus, as
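The crossing depicted in Fig. 2 can be illustrated numerically. Integrating the slopes in (13) and (14) gives the explicit schedules P ∝ r(1−r)^(−α) for the labor market and P ∝ r^(η/e) for the foodgrain market; the parameter values and scale constants below are my own choices, not the paper's:

```python
# Illustrative general-equilibrium sketch: locate r* where the labor-market
# and foodgrain-market schedules implied by (13) and (14) cross.
alpha, eta, e = 0.3, 0.6, -0.2   # capital share; income and price elasticities
k1, k2 = 1.0, 1.2                # arbitrary scale constants (my assumptions)

def P_labor(r):
    # dP/dr = (P/r)(1 + r*alpha/(1-r))  integrates to  P = k1 * r * (1-r)**(-alpha)
    return k1 * r * (1.0 - r) ** (-alpha)

def P_food(r):
    # dP/dr = P*eta/(r*e)  integrates to  P = k2 * r**(eta/e), downward sloping
    return k2 * r ** (eta / e)

def solve_r(lo=1e-6, hi=1.0 - 1e-6):
    # Bisection: P_labor rises and P_food falls in r, so one crossing exists.
    f = lambda r: P_labor(r) - P_food(r)
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

r_star = solve_r()
print(r_star, P_labor(r_star))
```

Raising k2, i.e. strengthening food demand, shifts the foodgrain-market schedule up and moves the crossing to a higher r* and P, mirroring the comparative statics discussed in the text.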
the proportion of population in the foodgrain sector declines and per capita income in that sector increases, the wage rate in the industrial sector also increases, raising effective demand for foodgrain and its price relative to nonfoodgrain output. These opposite phenomena lead to the unique values of P and r, given the predetermined variables and the values of l_A, A and S_L, as shown in Fig. 2. Then, finally, W/P and M_s can be determined from equations (10) and (6). Stability of this equilibrium is shown in Appendix A.

II. Sensitivity analysis

Technological change affects first the efficiencies of land and labor. A change in the efficiencies would in turn affect the relative share of labor, depending upon the nature of substitution between land and labor, as

(dS_L/dt)(1/S_L) = (σ − 1)·(λ_Z − λ_L)   (15)

where σ is the elasticity of substitution between land and labor. This equation implies that the relative share of labor will decrease, remain constant, or increase depending on whether σ is less than, equal to, or greater than one.10 The sensitivity of each of the endogenous variables, such as foodgrain labor, the price of foodgrains in relation to nonfoodgrain output, the marketed surplus and real wages, with respect to the effect of technological change on labor's share is shown in the following sensitivity matrix. It also shows the sensitivity of these variables to population growth and to the growth of nonfoodgrain capital separately. The most interesting results are obtained in the case of an increase in foodgrain output that is accompanied by a change in relative factor shares. The results obtained for a constant labor share are reinforced when labor's share declines as a result of an increase in foodgrain output.
In the case of W/P, the real wage rate in the nonfoodgrain sector, the effect of increased foodgrain output accompanied by a decline in labor's share directly depresses per capita income of the labor force in the foodgrain sector, while the decline in labor's share causes a decrease in the proportion of population in the foodgrain sector. This latter phenomenon acts to increase per capita income of the existing population in the foodgrain sector. Thus, the direction of change of the equilibrium level depends upon the relative magnitudes of these opposite influences. When an increase in foodgrain output is accompanied by an increase in labor's relative share, the effect on the proportion of the labor force in agriculture (r), on the price of foodgrains relative to nonfoodgrains (P) and on the marketable surplus (M_s) may take either sign. If labor's relative share increases only slightly, relative to the increase in foodgrain output, the effect of increased foodgrain output on r, P and M_s will be greater relative to that of the increased labor's share. However, if labor's share increases substantially as a result of the increase in foodgrain output, the effect on r, P and M_s may be opposite to that when increased foodgrain output is not accompanied by a changing labor share.

10 This relation can be derived by using equations (3) and (4).

TABLE 1
Sensitivity matrix^a

                                      Increase in foodgrain output (A)
                                      when relative share of labor (S_L):       Growth of:
Endogenous variable                   Increases   Constant   Decreases   Capital stock (K)   Population (L)
Proportion of foodgrain labor
  in total labor (r)                      ±           −           −              −                 +
Price of foodgrain relative to
  nonfoodgrain output (P)                 ±           −           −              +                 +
Real wage in nonfoodgrain
  sector (W/P)                            +           +           ±              +                 −
Marketable surplus (M_s)                  ±           +           +              +                 −

^a See Appendix B for the mathematical steps in deriving the sensitivity matrix on the basis that 0 < η < 1, e < 0 and 0 < α < 1. A negative (positive) sign means a decline (increase) in that variable. "±" means the direction of change in that endogenous variable is indeterminate.
These interactions are discussed in the dynamic analysis in the next section. The preceding discussion does suggest that, in the context of growth, the most interesting results in the sensitivity matrix are those relating to labor's share in foodgrain output. They show that with an increased labor share, as exemplified by production increases in a traditional foodgrain sector, the marketed surplus of foodgrain may decline and the real wage in the nonfoodgrain sector may increase. Converse changes may be expected when technological change decreases labor's share in foodgrain output. The factor shares in the foodgrain sector are thus of crucial importance in the growth of the nonfoodgrain sector in a dualistic economy. This analysis not only suggests that change in factor shares may be a particularly important feature of current "green revolution" agricultural technology, but also helps remove a growing anomaly in the perception of Japanese economic history. Recent downward revisions of estimates of the growth rate of agricultural output in the early Meiji period are consistent with retention of the earlier estimates of growth in nonagricultural employment if one takes into account the acceleration in agricultural marketing associated with change in agricultural technology (see Thomas Smith 1959 and James Nakamura 1966). The yield-increasing agricultural technology associated with the Meiji period shifted factor shares away from labor as compared to the highly labor-intensive methods of production increase in the preceding Tokugawa period (Sen 1966). Thus we see agriculture's contribution to overall Japanese growth as arising from the effect of technological change on both the level of output and the change in factor shares arising from that increased output.

III.
Dynamic analysis

The dynamic analysis involves the simultaneous effect of changes in factor shares (through changes in factor efficiencies), population and capital stock on nonfoodgrain employment, real wages, the terms of trade and the marketable surplus. These results are presented in the following equations:

(dr/dt)(1/r) = c₁·(dS_L/dt)(1/S_L) − c₂·[(dA/dt)(1/A) − (dL/dt)(1/L)] − c₃·[(dQ/dt)(1/Q) − (dL/dt)(1/L)]   (16)

(dW/dt)(1/W) = α·[(dK/dt)(1/K) − (dL_n/dt)(1/L_n)]   (17)

(dP/dt)(1/P) = d₁·(dS_L/dt)(1/S_L) − d₂·[(dA/dt)(1/A) − (dL/dt)(1/L)] − d₃·[(dA/dt)(1/A) − (dQ/dt)(1/Q)]   (18)

(dM_s/dt)(1/M_s) = −e₁·(dr/dt)(1/r)   (19)

where the c's, d's and e's are all positive, given that 0 < η < 1 and 0 < |e| < 1.11 From equation (16), the influence of various factors on the rate of growth of nonfoodgrain employment can be derived. For example, the greater the rate of growth of foodgrain output, the faster the rate of growth of nonfoodgrain employment. The rate of growth of employment in the nonfoodgrain sector is inversely related to the rate of change of labor's share in foodgrain output. Technological change in the foodgrain sector which increases labor's share in output dampens the rate of growth of nonfoodgrain employment. This occurs through: (1) decreasing the marketed supply of foodgrain, and (2) increasing the level of wages in the nonfoodgrain sector required to withdraw labor from foodgrain production. Technological change that reduces labor's share of foodgrain output may increase the growth of nonfoodgrain employment. Equation (19) shows the identity between the rate of growth of nonfoodgrain employment and the marketable surplus. Thus it can be seen that the same factors shown on the right hand side of equation (16) determine, in the same manner, the rate of growth of the marketable surplus. Equation (17) shows that there is a monotonically increasing relation between the capital-labor ratio in the nonfoodgrain sector and per capita income in the foodgrain sector.

11 See Appendix C for the derivation.
Also, since α < 1, the capital-labor ratio increases more rapidly than the rate of growth of per capita income. It is interesting to note here that, since per capita income in the foodgrain sector equals S_L·A/(rL) (equation (5)), it may increase not only because of an increase in foodgrain output, but also because of an increase in labor's share or a decline in the labor force in the foodgrain sector. It therefore seems highly probable that the capital-labor ratio in the nonfoodgrain sector will rise over time, for even if foodgrain output increases only as rapidly as population growth, and even if labor's share does not increase, just the withdrawal of population from the foodgrain sector would cause an increase in the per capita income of foodgrain sector laborers. However, the faster foodgrain production grows, and the more labor-augmenting the technological change in the foodgrain sector, the more the capital-labor ratio in the nonfoodgrain sector is kept from rising as rapidly as it otherwise would, and the more likely is comparative advantage to continue in the production and export of labor-intensive commodities in a dualistic economy such as that depicted here. Equation (18) shows that the movement of the relative prices of food and nonfoodgrain output is dependent upon the relative share of labor and the growth of foodgrain production relative to that of population and nonfoodgrain output, and may move in either direction depending upon the magnitudes of these several parameters and variables. It should be noted that the relative prices between sectors are determined by the price and income elasticities on the one hand, and by the factor shares in the foodgrain sector and the average propensities to consume of the two income classes on the other. However, it can be seen that a foodgrain output increase accompanied by a reduced factor share to labor will certainly turn the relative price against the foodgrain sector.

IV.
Conclusions

By assuming the existence of labor and food markets as two separate but interacting markets in a dualistic economy, the model highlights the adverse effect of the wage goods constraint on the growth of employment in the nonagricultural sector in the situation of traditional low productivity agriculture faced in many developing countries. Further, it demonstrates the relationship of increased agricultural production, and especially of factor shares, with the growth of employment in the nonagricultural sector. This it does by showing that technological change which increases labor's share in agriculture may well lead to a decline in the marketed surplus of foodgrains and an increase in real wages in the nonfood sector. On the other hand, in situations of biased technological change, even if the direct employment effect of new technology in agriculture is limited, by generating a marketed surplus of foodgrains such technological change may relax the wage goods constraint, thus facilitating an increase in employment in the nonagricultural sector.

World Bank, Washington, D.C.
International Food Policy Research Institute, Washington, D.C.

APPENDIX A: STABILITY OF THE EQUILIBRIUM

Let us hypothesize that the terms of trade increase over time if demand for the marketable surplus exceeds its supply,

Ṗ = H[M_D − M_s]   (A.1)

such that H′ > 0, and that labor migrates to the nonfoodgrain sector when the demand price for nonfoodgrain sector labor exceeds its supply price,

ṙ = G[W − β·P·(l_A·W̄/L_A)]   (A.2)

such that G′ < 0 (here Ṗ = dP/dt and ṙ = dr/dt). Necessary and sufficient conditions for local stability of the system (A.1) and (A.2) are that1

∂Ṗ/∂P + ∂ṙ/∂r < 0 and (∂Ṗ/∂P)(∂ṙ/∂r) − (∂Ṗ/∂r)(∂ṙ/∂P) > 0   (A.3)

Differentiating equations (12) and (10) with respect to P and r and substituting, one can verify that

∂Ṗ/∂P + ∂ṙ/∂r < 0   (A.4)

and

(∂Ṗ/∂P)(∂ṙ/∂r) − (∂Ṗ/∂r)(∂ṙ/∂P) > 0   (A.5)

when η > 0, e < 0, H′ > 0 and G′ < 0.
Note that these are sufficient conditions for the system to satisfy (A.3), and hence they are sufficient conditions for local stability of the system.

(1) These conditions are derived by using the theoretical discussion in P. A. Samuelson, Foundations of Economic Analysis, New York, 1947, pp. 266-67.

APPENDIX B

The effect of changes in the exogenous variables x, y, K or L on the endogenous variables l_A, r, P, M_s and (W/P) can be determined as follows. Let θ = t, K or L, and note that a change in t, technological change, implies a change in x and y.

Differentiate (3) logarithmically with respect to θ and note that A_Z > A_L; the resulting proportional change in l_A is positive when θ = t and vanishes when θ = K or L (B.1). Further, substitute the value of P from equation (10) in (12) and then differentiate (12) partially with respect to θ; after rearranging terms, (θ/r)(∂r/∂θ) can be expressed through ψ₁(θ), ψ₂(θ) and S_L (B.2), where e and η are, respectively, the elasticity of the budget share with respect to price and the income elasticity of demand for foodgrains by laborers, ψ₁(θ) and ψ₂(θ) are functions of θ, and the determinant |Δ| > 0 (B.3), since 0 < η < 1 and e < 0.

Differentiating (10) logarithmically with respect to θ and then rearranging terms gives the corresponding expressions for (θ/P)(∂P/∂θ) (B.4) and for (θ/(W/P))(∂(W/P)/∂θ) (B.5), where ψ₃(θ) is a function of θ. Differentiating both sides of the marketable-surplus equation (6) with respect to θ, rearranging the terms, and then substituting (B.2) and (B.4) gives the corresponding expression for ∂M_s/∂θ (B.6), where ψ₄(θ) is a function of θ.

TABLE B.1. Different values of the ψᵢ's: for θ = t the nonzero ψ's involve rS_Z A_Z and rS_L A_L; for θ = K, ψ₁(θ) = a and ψ₂(θ) = ψ₃(θ) = ψ₄(θ) = 0; for θ = L, ψ₁(θ) = 1 − a, ψ₂(θ) = −S_L b(η − 1), ψ₃(θ) = −1 and ψ₄(θ) = A S_L b(η − 1).

Substitute the ψᵢ's for the different θ, together with (B.1), in (B.2), (B.4), (B.5) and (B.6).
Then, when θ = t, substituting these values into (B.2) gives (B.7). Since C > 0, from (12) we get r − S_L b > 0; further, since 0 < η < 1 and e < 0, also r − ηS_L b > 0 and r − (η − e)S_L b > 0. Using these inequalities and (B.1) in (B.7) we get

  (1/r)(∂r/∂t) < 0, = 0 or > 0 according as (1/S_L)(dS_L/dt) < 0, = 0 or > 0.   (B.8)

This gives the first three elements in the first row of the sensitivity matrix. Using (B.8), the first three elements of the remaining rows of the sensitivity matrix can be derived from (B.4), (B.5) and (B.6).

When θ = K:

  (K/r)(∂r/∂K) < 0,  (K/P)(∂P/∂K) > 0,  (K/(W/P))(∂(W/P)/∂K) > 0  and  ∂M_s/∂K > 0.

These inequalities give the fourth column of the sensitivity matrix.

When θ = L:

  (L/r)(∂r/∂L) > 0,  (L/P)(∂P/∂L) > 0,  (L/(W/P))(∂(W/P)/∂L) < 0  and  ∂M_s/∂L < 0.

From these inequalities the last column of the sensitivity matrix is derived.

APPENDIX C: TO DERIVE GROWTH RATES OF r, P, W AND M_s

Equations (10) and (12) can be written respectively as (C.1) and (C.2). Substitute the value of P from (C.1) in (C.2) and then differentiate (C.2) totally with respect to t. After rearranging the terms, (1/r)(dr/dt) is obtained as a combination of the growth-rate terms a₁, a₂ and a₃ together with a term in (1/A)(dA/dt), with coefficients Cᵢ > 0 (i = 1, …, 4) because 0 < η < 1 and e < 0 (C.3); the determinant |D| involved is positive. Differentiating (C.1) logarithmically with respect to t and then substituting (C.3) gives (1/P)(dP/dt) as a combination of a₁, a₂ and a₃ with coefficients d₁, …, d₄, where all the dᵢ's > 0 (C.4).
Differentiating the marketable-surplus equation (6) with respect to t gives (1/A)(dM_s/dt) in terms of (1/A)(dA/dt), (1/P)(dP/dt) and (1/r)(dr/dt) (C.5). Substituting (C.4) in (C.5) and rearranging the terms, it can be shown that the growth of the marketed surplus decomposes into two terms, e₁ and e₂, where e₁ and e₂ > 0 (C.6). Finally, differentiating (C.1) logarithmically with respect to t expresses the growth rate of the wage, (1/W)(dW/dt), in terms of (1/K)(dK/dt) and (1/L)(dL/dt).
Re: (Round 2) Proposed Extensions to OWL From: Thomas B. Passin <tpassin@comcast.net> Date: Mon, 7 Jul 2003 18:04:10 -0400 Message-ID: <001c01c344d3$b1890100$6401a8c0@tbp1> To: <www-rdf-interest@w3.org> Cc: "Costello,Roger L." < [Roger L. Costello] > I would like to take a stab at defining the conceptual model for > unitSpecification: > unitSpecification > Units > M15) The value of unitSpecification is Units. > M16) Kilometers, Miles are types of Units. > Units > | > ---------------- > | | > Kilometers Miles > M17) Kilometers = Miles * 1.609344 > M18) Miles = Kilometers * (100000 / 160934.4) I dunno, Roger. Just sticking with the values for now, why just that precision? And the numerator of the fraction has a different precision than the denominator. That just isn't right. Any thoughts about how to specify the multiplication in OWL? Any unit specification should be capable of being reduced to a combination of fundamental units (e.g., MKSQ, cgs, etc). Given a length measurement and its units, it seems to me that the measurement value should be capable of being compared with some international standard. I am not saying that these capabilities always have be be included in the statement of a measurement value or a unit, but we ought to make sure that it can be done when needed. Furthermore, some units are fundamental - meter, second, kilogram, coulomb for example. Others are considered to be combinations of the fundamental ones. This needs to be accounted for somehow (again, not in every case). Let's see. There is a relationship between the "miles" value and the "kilometers" value of a length. In this case, we know that their ratio is constant. More generally, there is going to be some formula that relates them. Degrees F vs degrees C is a simple example but common enough in (US) practice. The formula may or may not be linear. Suppose that we have a standard kilometer and a standard mile, sort of like a standard meter rod and a standard foot rod. 
Then the length of the two rods is related by

 L-std-km-rod = f(L-std-mile-rod)    // f(x) = 0.625*x
 L-std-mile-rod = g(L-std-km-rod)    // g(x) = 1.6*x

(neglecting numerical errors!). Here, g is the inverse function of f:

 g == f^-1    // Can't format this decently in plain text!

Also, the relation between the numerical values of a length measurement is

 Value-in-km = g(Value-in-miles) = f^-1(Value-in-miles)

Now f and g are established by national or international standards bodies. So we ought to be able to create resources to represent them, something like this -

 Transform-from-miles-to-km sameTransformAs nist:g
 Transform-from-km-to-miles sameTransformAs nist:f

So here is the first thing we can establish -

X1) The relation between a "miles" measurement value and a "kilometers" measurement value of the same measurement is the same as the relation "g" between a standard mile and a standard kilometer.

This is something that can be stated in OWL (although we may not be able to spell out how f and g work with OWL). In fact, it looks like a samePropertyAs predicate between the transforms. A math-aware processor could figure things out. Note that f and g are directional - you have to know which side gets miles and which side gets kilometers. This works by appeal to standard rods, and not to the dimensions of "units" per se. But they are closely related. A more dimensionally complex unit could be built up in a similar way, I would think.

Next, what does a conversion between units of measure actually operate on? This is interesting because the transform to use depends on the input unit spec and the target unit spec, but the transform operates on both the numerical value and the unit spec (since miles get changed to kilometers). This could be represented symbolically something like this -

 Value-in-units-A = g[units-A, units-B] (Value-in-units-B)

Here, g[...] indicates that we have selected a specific "g" transform based on the two unit specs.
What is a "Value-in-miles" measurement value? Simple - it is the value of a LengthMeasurement for which the unitSpec equals Miles. That can be stated in OWL. Thus, a processor could infer a Value-in-miles type by examining the unitSpec object.

I think I can now see how to represent the relation between a length value in miles and the kilometer version of the same length. They are related by the transform "g", and also by the allowed tolerance of the comparison.

 Value-in-Miles  // the value of a specific measurement
 Value-in-Km     // another value

With this formulation, a processor that understood how to apply the "g" transformation could check these statements for consistency. A query processor could return the right value in kilometers by applying the transform to the miles version. In fact, I suspect that an xslt stylesheet could do these tasks, given a few templates for the transforms of interest (recall that the right transformation is specified by the two unit specs, so we are looking at a lookup table kind of thing).

It ought to be possible to take the specific transform stated above and make it into a general constraint on any transformation between any length-in-miles and the corresponding length-in-kilometers.

Notice that the Transform is a statement (a compound one, of course). It is not a procedure nor a formula. The statement may be true or false, consistent with the other two statements or not. Up until now I had talked glibly about a "transform" but had not really thought about how to represent the units conversions we have been talking about in an RDF-ish declarative language. Very interesting!

With this approach, all the challenges are localized down to just two areas -

1) How to specify the "f" and "g" base transformations, and
2) How to deal with complex units specifications.

For named units, 2) is not needed, so many practical cases boil down to solving 1).
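For concreteness, the lookup-table selection g[units-A, units-B] described above might look like this in code; every name here (TRANSFORMS, convert, the unit labels) is invented for illustration and is not part of OWL or any standard vocabulary:

```python
# Illustrative sketch of selecting a specific "g" transform from the two
# unit specs. The dictionary layout and all identifiers are hypothetical.

TRANSFORMS = {
    # (source unit spec, target unit spec) -> numeric transform "g"
    ("Miles", "Kilometers"): lambda x: x * 1.609344,
    ("Kilometers", "Miles"): lambda x: x / 1.609344,
    # An affine case (not a pure ratio), as with degrees F vs degrees C:
    ("Fahrenheit", "Celsius"): lambda x: (x - 32.0) * 5.0 / 9.0,
}

def convert(value, unit_spec, target_spec):
    """Apply the transform selected by the two unit specs.

    Returns the converted numeric value together with the new unit spec,
    since the transform changes both (as noted in the text above).
    """
    if unit_spec == target_spec:
        return value, unit_spec
    g = TRANSFORMS[(unit_spec, target_spec)]
    return g(value), target_spec

value_km, spec = convert(10.0, "Miles", "Kilometers")
```

A math-aware processor would carry such a table (or derive it from declared base transforms); a non-aware processor could still check that the two unit specs name a known transform pair.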
Item X1 above is an attempt to get at 1), but it is not really adequate.

The "f" and "g" functions could be put into a namespace, and a processor that claims to be aware of that namespace should know what to do with them. That is probably better than trying to figure out how to work out all the math stuff in an OWL-only way. Non-aware processors could infer something about equivalent values - at least, to know if a claimed transform could be consistent with the other data - but not be able to work out the numbers. That seems reasonable to me.

That is all I have time for right now. Sorry to be so rambling but I am working this out as I go. I do not think I am quite ready to say I have a conceptual model I feel comfortable with, but I think the bits I have sketched out here are probably OK. Once we understand things better, we will probably see how they can be simplified for common use.

Tom P

Received on Monday, 7 July 2003 18:00:09 GMT

This archive was generated by hypermail 2.2.0+W3C-0.50 : Monday, 7 December 2009 10:52:00 GMT
Please answer these?

September 15th 2010, 11:41 AM #1

1. a book costs £7.34. work out the cost of 25 books
2. 20 of the maths books have plastic covers. write 20 out of 25 as a percentage
3. simplify fully: 6n + 7n - 8n; 4a x 5b
4. multiply out: 3(2x + 5)
5. work out 60% of 5300 kg. find the simple interest on £2500 for 2 years at 6% per year.
6. a shop sells different sound systems on credit. the weekly payment is w pounds. the total amount paid, c pounds, is given by the formula c = 10w + 35. the total amount paid for a mini sound system is £260. calculate the weekly payment for a mini sound system

September 15th 2010, 11:47 AM #2

You have blatantly disregarded several forum rules.

September 15th 2010, 12:08 PM #3

Just answer the questions or don't write anything

September 15th 2010, 12:21 PM #4

read forum rules before posting ...

September 15th 2010, 12:25 PM #5

You have blatantly disregarded several forum rules. Specifically Rules #4, #8 and #11. And in hindsight, rule #3 might also be worth reading.

September 15th 2010, 01:44 PM #6

The members who have suggested in this thread that the forum Rules be read have acted appropriately and in good faith in order to maintain the smooth running of MHF. However, to avoid unpleasantness I would suggest to all members that if they come across a post that they think breaks forum rules, it is best to report that post to a Moderator using the Report Post tool (click on the triangle at bottom left of all posts).
permutation of projective limits with inductive limits

Hi everybody, I have a lack of references concerning projective limits and injective limits. Up to my faults, in Bourbaki there are only proj and inj limits indexed by a partially ordered set (not a category), and this set is almost systematically directed (i.e. for all $i,j$ there exists $k$ such that $k\geq i$ and $k\geq j$). The problem of combining projective limits with inductive limits is not treated at all as far as I can see. I need a reference for this. Can someone help me? Thank you!

My specific problem is the following: Assume given a system of modules $(M_{i,j})_{i,j}$, and arrows among them, over the same ring $A$. We consider both

1) the projective limit with respect to the index $j$ of the injective limits with respect to the index $i$. Call it $\varprojlim_j \;(\varinjlim_i \; M_{i,j})$.

2) the injective limit with respect to the index $i$ of the projective limits with respect to the index $j$. Call it $\varinjlim_i \;(\varprojlim_j\; M_{i,j})$.

(Up to errors) there exists a canonical map

$\displaystyle CAN : \varinjlim_i \;(\varprojlim_j\; M_{i,j}) \;\to\; \varprojlim_j\;\ (\varinjlim_i\; M_{i,j}).$

Under which assumptions is this map INJECTIVE?

As an example, if there are no arrows at all between the $M_{i,j}$, then one has a direct sum instead of the injective limit, and a product instead of the projective limit. The arrow CAN becomes

$\displaystyle CAN : \oplus_i \;\prod_j \;M_{i,j} \;\to\; \prod_j \;\oplus_i \;M_{i,j}, \qquad ((a_{i,j})_j)_i \mapsto ((a_{i,j})_i)_j.$

In this case the arrow CAN is always injective, independently of the nature of the $M_{i,j}$ (and up to errors it is an isomorphism if and only if one of the index sets "$i$" or "$j$" is finite). I suspect that in the general case the injectivity only depends on the nature of the arrows, but not on the nature of the objects. Does anyone have a useful comment or a reference? Many thanks!
ct.category-theory limits colimits ac.commutative-algebra

Sorry for all the edits. Feel free to roll-back if you want to reduce the total number of edits. Just remember the trick of using backticks ` around the dollar signs to deal with math that doesn't render correctly. Also, you have a typo but I don't want to pile on more edits: "insteand" is written where "instead" is meant. – David White Feb 13 '12 at 19:43

1 Answer (accepted)

The canonical morphism $\alpha : \mathrm{colim}_{i \in I} ~ \mathrm{lim}_{j \in J} M_{ij} \to \mathrm{lim}_{j \in J} ~ \mathrm{colim}_{i \in I} M_{ij}$ is injective under the following assumptions:

1) $I$ is a directed set.

2) For every $j \in J$ and every $i \to i'$ in $I$ the morphism $M_{i,j} \to M_{i',j}$ is injective.

More detailed: Let us still assume 1) and let $x \in \mathrm{ker}(\alpha)$. Choose $i$ such that $x$ comes from an element in $\lim_j M_{i,j}$, say $x = (x_{ij})_j$. Now $\alpha(x)=0$ says that for all $j$ there is some $i(j) \geq i$ such that the image of $x_{ij}$ in $M_{i(j),j}$ vanishes. If we have 2), this already gives us $x_{ij}=0$ for all $j$, thus $x=0$.

More generally, if $j \mapsto i(j)$ is a bounded function, say by $i_{\infty}$, we see that the image of $x$ in $\lim_j M_{i_{\infty},j}$ vanishes, which implies $x=0$ in the colim-lim.

Now it's easy to find an example where this function is not bounded and, in fact, $\alpha$ is not injective: Let's take $I=\mathbb{N}$ as a partial order and $J = \mathbb{N}$ as a discrete category, so that we consider the canonical map $\mathrm{colim}_n \prod_m M_{n,m} \to \prod_m \mathrm{colim}_n M_{n,m}$. For all $n,m$ let $M_{n,m} = A[X]$ and let $M_{n,m} \to M_{n+1,m}$ be the formal derivative of polynomials. Then $(1,X,X^2,X^3,\dotsc)$ represents a nontrivial element in the kernel of $\alpha$, since every $X^m$ satisfies $\partial^{m+1} X^m = 0$, but there is no $n$ with $\partial^{n} X^m = 0$ for all $m$.

Thank you for the answer !
Do you have a counterexample to the general case? – PULITA ANDREA Feb 13 '12 at 20:48

Have you tried to find one by yourself? – Martin Brandenburg Feb 13 '12 at 21:19

Yes, I have tried, but I think I am basically stupid. My "psychological problem" is that the two cases that we know (the direct sum case, and the case that you found of a directed set with injective maps) seem to be somehow "orthogonal". What is the right assumption that would include both these cases? For this reason I am thinking that maybe the map CAN is always injective? What is your feeling? Thanks again – PULITA ANDREA Feb 13 '12 at 21:38

I've added an example and more details. – Martin Brandenburg Feb 13 '12 at 22:35
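The polynomial counterexample in the answer can be verified mechanically. A small pure-Python sketch over the integers (the coefficient-list encoding is mine, for illustration only; in positive characteristic the "no single n works" claim would need separate care):

```python
# Polynomials are coefficient lists [c0, c1, c2, ...]; the connecting maps
# M_{n,m} -> M_{n+1,m} of the counterexample are formal differentiation.

def derivative(p):
    """Formal derivative of a polynomial given as a coefficient list."""
    return [i * c for i, c in enumerate(p)][1:]

def iterate_derivative(p, n):
    for _ in range(n):
        p = derivative(p)
    return p

def is_zero(p):
    return all(c == 0 for c in p)

def x_power(m):
    """The m-th component of the element (1, X, X^2, ...), namely X^m."""
    return [0] * m + [1]

# Each component dies in the colimit: (d/dX)^(m+1) X^m = 0 ...
assert all(is_zero(iterate_derivative(x_power(m), m + 1)) for m in range(10))

# ... but no single n works for every m, since (d/dX)^n X^n = n! != 0,
# so the element does not vanish at any finite stage.
assert all(not is_zero(iterate_derivative(x_power(n), n)) for n in range(1, 10))
```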
Orbital Motion Newtonian mechanics & universal gravitation. 2-body problem. Reduced problem. Derivation of Kepler's laws for non-circular orbits. Conservation laws. Orbit calculations. Orbits in the Solar System. Please read the following in Carroll & Ostlie. Chapter 2.2 may be quite familiar; if so, feel free to skim. Chapter 2.3 will probably be new to you. 2.2 Newtonian Mechanics The Observations of Galileo — Newton's Three Laws of Motion — Newton's Law of Universal Gravitation — The Orbit of the Moon — Work and Energy 2.3 Kepler's Laws Derived The Center-of-Mass Reference Frame — The Derivation of Kepler's First Law — The Derivation of Kepler's Second Law — The Derivation of Kepler's Third Law Problem Set #2: Due 4 September 2012 Restricted 3-Body Problems: A description of the Sitnikov problem, with connections to Solar System stability. Joshua E. Barnes (barnes at ifa.hawaii.edu) Updated: 29 August 2012
Google pays tribute to 'Fermat's Last Theorem'
August 17th, 2011 in Technology / Internet

Google paid tribute on Wednesday to 17th century French mathematician Pierre de Fermat, transforming its celebrated homepage logo into a blackboard featuring "Fermat's Last Theorem."

Google marked what would have been Fermat's 410th birthday by replacing its logo, known as the Google "doodle," with the problem that vexed mathematicians for centuries.

In the margin of a book, Fermat wrote that he had found a "truly marvelous proof" to the math puzzle but the margin was too narrow for him to write it out.

When scrolled over with a mouse, the Google doodle echoes Fermat's famous words saying: "I have discovered a truly marvelous proof of this theorem, which this doodle is too small to contain."

Fermat's Last Theorem was finally solved by a British mathematician in the 1990s.

(c) 2011 AFP

"Google pays tribute to 'Fermat's Last Theorem'." August 17th, 2011. http://phys.org/news/2011-08-google-tribute-fermat-theorem.html
Hölder inequality

From Encyclopedia of Mathematics

The Hölder inequality for sums. Let $\{a_s\}$ and $\{b_s\}$ be certain sets of complex numbers, $s\in S$, where $S$ is a finite or an infinite set of indices. The following inequality of Hölder is valid:

$$\tag{1} \Bigl|\sum\limits_{s\in S}a_sb_s\Bigr| \leq \Bigl(\sum\limits_{s\in S}|a_s|^p\Bigr)^{\frac1p}\Bigl(\sum\limits_{s\in S}|b_s|^q\Bigr)^{\frac1q},$$

where $p>1$ and $\frac1p + \frac1q = 1$; this inequality becomes an equality if and only if $|a_s|^p = C|b_s|^q$, and $\arg(a_sb_s)$ and $C$ are independent of $s\in S$. In the limit case, when $p=1$, $q=+\infty$, Hölder's inequality has the form

\begin{equation*} \Bigl|\sum\limits_{s\in S}a_sb_s\Bigr| \leq \Bigl(\sum\limits_{s\in S}|a_s|\Bigr)\sup\limits_{s\in S}|b_s|. \end{equation*}

If $0<p<1$, Hölder's inequality is reversed. The converse proposition of Hölder's inequality for sums is also true (M. Riesz): If

\begin{equation*} \Bigl|\sum\limits_{s\in S}a_sb_s\Bigr| \leq AB \end{equation*}

for all $\{a_s\}$ such that

\begin{equation*} \sum\limits_{s\in S}|a_s|^p \leq A^p, \end{equation*}

then

\begin{equation*} \sum\limits_{s\in S}|b_s|^q \leq B^q. \end{equation*}

For sums of a more general form, Hölder's inequality takes the form

$$\tag{2} \Bigl|\sum\limits_{s\in S}\rho_sa_{1s}\cdot\dots \cdot a_{ms}\Bigr|\leq \prod\limits_{k=1}^m\Bigl(\sum\limits_{s\in S}\rho_s|a_{ks}|^{p_k}\Bigr)^{\frac{1}{p_k}},\quad \rho_s \geq 0,$$

if $\frac{1}{p_1} +\dots + \frac{1}{p_m}=1$, $p_k>1$, $1\leq k\leq m$.

The Hölder inequality for integrals. Let $S$ be a Lebesgue-measurable set in an $n$-dimensional Euclidean space $\mathbb R^n$ and let the functions

\begin{equation*} a_k(s) = a_k(s^1,\dots,s^n),\quad 1\leq k\leq m, \end{equation*}

belong to $L_{p_k}(S)$. The following inequality of Hölder is then valid:

\begin{equation*} \Bigl|\int\limits_{S}a_1(s)\cdot\dots \cdot a_m(s)\, ds\Bigr|\leq \prod\limits_{k=1}^m\Bigl(\int\limits_{S}|a_k(s)|^{p_k}\,ds\Bigr)^{\frac{1}{p_k}}.
\end{equation*}

If $m=p=q=2$, one obtains the Bunyakovskii inequality. Analogous remarks (concerning the sign and the limit case) as were made for the Hölder inequality for sums are also valid for the Hölder inequality for integrals. In the Hölder inequality the set $S$ may be any set with an additive function $\mu$ (e.g. a measure) specified on some algebra of its subsets, while the functions $a_k(s)$, $1\leq k\leq m$, are $\mu$-measurable and $\mu$-integrable to degree $p_k$.

The generalized Hölder inequality. Let $S$ be an arbitrary set, let a (finite or infinite) functional $\phi:a\to\phi(a)$ be defined on the totality of all positive functions $a:S\to\mathbb R$ and let this functional satisfy the following conditions:

a) $\phi(0)=0$;

b) $\phi(\lambda a)=\lambda\phi(a)$ for all numbers $\lambda>0$;

c) if $0<a(s)\leq b(s)$, then the inequality $\phi(a)\leq \phi(b)$ is valid;

d) $\phi(a+b) \leq \phi(a) + \phi(b)$.

If conditions a)–d) are met and $\frac{1}{p_1}+\dots+\frac{1}{p_m}=1$ with $p_k>1$, the generalized Hölder inequality is valid for the functional:

\begin{equation*} \phi(|a_1\cdot\dots \cdot a_m|)\leq \prod\limits_{k=1}^m[\phi(|a_k|^{p_k})]^{\frac{1}{p_k}}. \end{equation*}

References

[1] O. Hölder, "Ueber einen Mittelwerthsatz", Nachr. Ges. Wiss. Göttingen (1889) pp. 38–47
[2] G.H. Hardy, J.E. Littlewood, G. Pólya, "Inequalities", Cambridge Univ. Press (1934)
[3] E.F. Beckenbach, R. Bellman, "Inequalities", Springer (1961)

Comments

The Bunyakovskii inequality is better known as the Cauchy–Schwarz inequality in the English-language literature.

[a1] N. Dunford, J.T. Schwartz, "Linear operators. General theory", 1, Interscience (1958)

How to Cite This Entry:
Hölder inequality. Encyclopedia of Mathematics. URL: http://www.encyclopediaofmath.org/index.php?title=H%C3%B6lder_inequality&oldid=28956

This article was adapted from an original article by L.P. Kuptsov (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098. See original article
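As an aside, inequality (1) for sums is easy to spot-check numerically; a small illustrative Python snippet with arbitrarily chosen values and exponents:

```python
# Numerical sanity check of Hölder's inequality (1) for sums,
# with p = 3 and the conjugate exponent q satisfying 1/p + 1/q = 1.
p = 3.0
q = p / (p - 1.0)

a = [1.0, -2.0, 0.5, 3.0]
b = [0.25, 1.0, -4.0, 2.0]

lhs = abs(sum(x * y for x, y in zip(a, b)))
rhs = (sum(abs(x) ** p for x in a) ** (1 / p)
       * sum(abs(y) ** q for y in b) ** (1 / q))

# |sum a_s b_s|  <=  (sum |a_s|^p)^(1/p) * (sum |b_s|^q)^(1/q)
assert lhs <= rhs
```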
6 4/5 + 6 3/5

just add all the numbers but not the denominators.

13 2/5

you added wrong

no she didn't, but she's breaking the rules: "Don't post only answers - guide the asker to a solution."

but i can't really show her. this is what you learn in grammar school

Here's how to convert 13.4 to a fraction... There is not much that can be done to figure out how to write 13.4 as a fraction, except to literally use what the decimal portion of your number, the .4, means. Since there is 1 digit in 4, the very last digit is the "10ths" decimal place. So we can just say that .4 is the same as 4/10. The fraction 13 4/10 is not reduced to lowest terms. We can reduce this fraction to lowest terms by dividing both the numerator and denominator by 2. Why divide by 2? 2 is the Greatest Common Divisor (GCD), or Greatest Common Factor (GCF), of the numbers 4 and 10. So, this fraction reduced to lowest terms is 13 2/5. So your final answer is: 13.4 can be written as the fraction 13 2/5.

64+63= 127 not 132.

6 4/5 + 6 3/5 = (6 + 6) + (4/5 + 3/5) = 12 + 7/5 = 13 2/5

wait is that a sixty four or six is the whole number?
It's not difficult to figure out bro

it's \[6\frac{ 4 }{ 5 } + 4\frac{ 3 }{ 5}\]

Yeah, just use LaTeX if you have to.

the numbers look close to each other. i couldn't tell. trust me i know how to do this

I just want to make a correction. It was: \[6\frac{ 4 }{ 5 } + 6\frac{ 3 }{ 5}\]

why did u unfan me lol?

I unfanned everybody bro. It was starting to slow me down.

Believe me, I didn't want to do that. Doing it took away my smart score points.
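For what it's worth, the arithmetic worked through in this thread can be double-checked with Python's standard-library fractions module (illustrative only):

```python
# Reproduce 6 4/5 + 6 3/5 = 13 2/5 and the decimal route 13.4 -> 134/10 -> 67/5.
from fractions import Fraction

total = Fraction(34, 5) + Fraction(33, 5)   # 6 4/5 and 6 3/5 as improper fractions
assert total == Fraction(67, 5)

whole, remainder = divmod(total.numerator, total.denominator)
# whole == 13 and remainder == 2, i.e. the mixed number 13 2/5
assert (whole, remainder) == (13, 2)

# Same answer starting from the decimal 13.4:
assert Fraction("13.4") == Fraction(67, 5)
```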
Incremental HMM with an improved Baum-Welch Algorithm

When quoting this document, please refer to the following URL: http://drops.dagstuhl.de/opus/volltexte/2012/3761/

Chis, Tiberiu S.; Harrison, Peter G.

Incremental HMM with an improved Baum-Welch Algorithm

There is an increasing demand for systems which handle higher-density, additional loads, as seen in storage workload modelling, where workloads can be characterized on-line. This paper aims to find a workload model which processes incoming data and updates its parameters "on-the-fly." Essentially, this will be an incremental hidden Markov model (IncHMM) with an improved Baum-Welch algorithm. The benefit is a parsimonious model which updates its encoded information whenever more real-time workload data becomes available. To achieve this model, two new approximations of the Baum-Welch algorithm are defined, followed by training our model using a discrete time series. This time series is transformed from a large network trace made up of I/O commands into a partitioned, binned trace, and then filtered through a K-means clustering algorithm to obtain an observation trace. The IncHMM, together with the observation trace, produces the required parameters to form a discrete Markov arrival process (MAP). Finally, we generate our own data trace (using the IncHMM parameters and a random distribution) and statistically compare it to the raw I/O trace, thus validating our model.

BibTeX - Entry

author = {Tiberiu S. Chis and Peter G. Harrison},
title = {{Incremental HMM with an improved Baum-Welch Algorithm}},
booktitle = {2012 Imperial College Computing Student Workshop},
pages = {29--34},
series = {OpenAccess Series in Informatics (OASIcs)},
ISBN = {978-3-939897-48-4},
ISSN = {2190-6807},
year = {2012},
volume = {28},
editor = {Andrew V. Jones},
publisher = {Schloss Dagstuhl--Leibniz-Zentrum fuer Informatik},
address = {Dagstuhl, Germany},
URL = {http://drops.dagstuhl.de/opus/volltexte/2012/3761},
URN = {urn:nbn:de:0030-drops-37613},
doi = {http://dx.doi.org/10.4230/OASIcs.ICCSW.2012.29},
annote = {Keywords: hidden Markov model, Baum-Welch algorithm, Backward algorithm, discrete Markov arrival process, incremental workload model}

Keywords: hidden Markov model, Baum-Welch algorithm, Backward algorithm, discrete Markov arrival process, incremental workload model
Seminar: 2012 Imperial College Computing Student Workshop
Issue date: 2012
Date of publication: 2012
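For readers unfamiliar with the algorithm being approximated, here is a minimal batch Baum-Welch re-estimation step for a discrete HMM. This is the classical (non-incremental) version, sketched in plain Python; it is not the paper's IncHMM variant, and all parameter values in it are illustrative.

```python
# One batch Baum-Welch (EM) re-estimation step for a discrete-observation HMM.
# A: state transition matrix, B: emission matrix, pi: initial distribution.

def forward(obs, A, B, pi):
    n = len(A)
    alpha = [[pi[i] * B[i][obs[0]] for i in range(n)]]
    for t in range(1, len(obs)):
        alpha.append([B[j][obs[t]] * sum(alpha[-1][i] * A[i][j] for i in range(n))
                      for j in range(n)])
    return alpha

def backward(obs, A, B):
    n = len(A)
    beta = [[1.0] * n]
    for t in range(len(obs) - 2, -1, -1):
        # beta[0] currently holds the row for time t+1.
        beta.insert(0, [sum(A[i][j] * B[j][obs[t + 1]] * beta[0][j] for j in range(n))
                        for i in range(n)])
    return beta

def baum_welch_step(obs, A, B, pi):
    n, T = len(A), len(obs)
    al, be = forward(obs, A, B, pi), backward(obs, A, B)
    Z = sum(al[T - 1])  # likelihood of the observation sequence
    # gamma[t][i] = P(state i at time t | obs)
    gamma = [[al[t][i] * be[t][i] / Z for i in range(n)] for t in range(T)]
    # Expected transition counts; row-normalising gives the new A.
    new_A = [[sum(al[t][i] * A[i][j] * B[j][obs[t + 1]] * be[t + 1][j]
                  for t in range(T - 1)) / Z for j in range(n)] for i in range(n)]
    for i in range(n):
        s = sum(new_A[i])
        new_A[i] = [x / s for x in new_A[i]]
    m = len(B[0])
    new_B = [[sum(g[i] for t, g in enumerate(gamma) if obs[t] == k) /
              sum(g[i] for g in gamma) for k in range(m)] for i in range(n)]
    return new_A, new_B, gamma[0]
```

The IncHMM's contribution is precisely avoiding this full forward-backward pass over the entire trace each time new observations arrive.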
{"url":"http://drops.dagstuhl.de/opus/volltexte/2012/3761/","timestamp":"2014-04-19T05:12:32Z","content_type":null,"content_length":"9048","record_id":"<urn:uuid:67c626ed-f7b4-4798-8698-c40bce480dfd>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00655-ip-10-147-4-33.ec2.internal.warc.gz"}
Summary: Com S 633: Randomness in Computation Lecture 12 Scribe: Aaron Sterling

Today's Topic: Randomized Rounding Algorithm for MAX-SAT

Problem Statement: Given a CNF formula φ = c_1 ∧ c_2 ∧ ... ∧ c_m, find an assignment that satisfies the maximum number of clauses. Exact solution of this problem is known to be NP-hard. We will improve on the randomized approximation algorithm of Lecture 2 by using the randomized rounding technique introduced in the previous lecture.

1 A randomized algorithm for MAX-SAT

MAX-SAT is a generalization of the MAX-3CNF problem that we discussed in Lecture 2. In MAX-3CNF, each formula is a conjunction of clauses c_i, such that each c_i is a disjunction of three literals over variables x_{i1}, x_{i2}, x_{i3}, where either x_{ij} or its negation ¬x_{ij} appears in c_i. In the case of MAX-SAT, each c_i is a disjunction of finitely many literals (or their negations), but the number of literals in each clause may vary.
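The baseline algorithm from Lecture 2 is simply a uniform random assignment: a clause with k distinct literals is falsified with probability 2^-k. A runnable sketch (the integer-list clause encoding and the assumption that no clause contains both a variable and its negation are my conventions, not the lecture's):

```python
# Uniform-random-assignment approximation for MAX-SAT.
# A clause is a list of nonzero ints: +i means x_i, -i means NOT x_i.
import random

def expected_satisfied(clauses):
    # Under a uniform random assignment, a clause with k distinct literals
    # is falsified with probability 2^-k (assuming no clause contains both
    # a variable and its negation).
    return sum(1 - 2 ** -len(set(c)) for c in clauses)

def random_assignment_satisfied(clauses, n, seed=0):
    rng = random.Random(seed)
    assign = {i: rng.random() < 0.5 for i in range(1, n + 1)}
    sat = 0
    for c in clauses:
        # A literal l is true when the variable's truth value matches its sign.
        if any(assign[abs(l)] == (l > 0) for l in c):
            sat += 1
    return sat
```

Randomized rounding improves on this by biasing each variable's coin according to an LP relaxation instead of using fair coins.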
{"url":"http://www.osti.gov/eprints/topicpages/documents/record/112/3971470.html","timestamp":"2014-04-20T07:31:53Z","content_type":null,"content_length":"8027","record_id":"<urn:uuid:fee49c39-a6ac-4b08-af66-6999d000e402>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00432-ip-10-147-4-33.ec2.internal.warc.gz"}
Arithmetic Operators

7.4 Arithmetic Operators

The arithmetic operators take numeric operands and produce numeric results.

a + b
Yields the sum of a and b.

a - b
Subtracts b from a and yields the difference.

a * b
Yields the product of a and b. If either a or b is 0, then the result is 0, even if the other operand is missing.

a / b
Divides a by b and yields the quotient. If a is 0, then the result is 0, even if b is missing. If b is zero, the result is system-missing.

a ** b
Yields the result of raising a to the power b. If a is negative and b is not an integer, the result is system-missing. The result of 0**0 is system-missing as well.

- a
Reverses the sign of a.
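The missing-value rules for * and / are easy to mis-remember, so here is a small executable model of them. SYSMIS is a stand-in for PSPP's system-missing value; this is an illustrative re-implementation, not PSPP source code, and the behaviour of 0/0 (where the two rules overlap) follows the order the manual states them in.

```python
# Toy model of PSPP's missing-value semantics for * and /.
SYSMIS = None  # stand-in for the system-missing value

def pspp_mul(a, b):
    if a == 0 or b == 0:              # 0 * missing is still 0
        return 0
    if a is SYSMIS or b is SYSMIS:
        return SYSMIS
    return a * b

def pspp_div(a, b):
    if a == 0:                        # 0 / missing is still 0
        return 0
    if a is SYSMIS or b is SYSMIS or b == 0:
        return SYSMIS                 # division by zero is system-missing
    return a / b
```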
{"url":"http://www.gnu.org/software/pspp/manual/html_node/Arithmetic-Operators.html","timestamp":"2014-04-18T15:53:23Z","content_type":null,"content_length":"4314","record_id":"<urn:uuid:040fb6a6-03f1-4f5a-b2ff-f9d0e54bbf2d>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00290-ip-10-147-4-33.ec2.internal.warc.gz"}
Looking at Compound Interest Date: 01/08/2007 at 11:34:09 From: Ken Subject: Nature of compounding interest. I was playing with compounding interest. In the process of looking at how interest works I noticed something that seemed counter to what I initially would have expected. Then when I looked at it another way it acted yet differently. Say you have $100. You invest it and over the next four years you get 20% returns for two years and 10% returns for two years. I don’t know why, but I just assumed that if you earned the higher interest first you would make more in the long run. It turns out that it’s the same result either way: Low Interest then higher interest $100 x 1.1 = $110 $110 x 1.1 = $121 $121 x 1.2 = $145.20 $145.20 x 1.2 = $174.24 High interest then low interest $100 x 1.2 = $120 $120 x 1.2 = $144 $144 x 1.1 = $158.40 $158.40 x 1.1 = $174.24 Next, I tried averaging the interest to see what would happen. 20 + 20 + 10 + 10 = 60 60/4 = 15 $100 x 1.15 = $115 $115 x 1.15 = $132.25 $132.25 x 1.15 = $152.0875 $152.0875 x 1.15 = $174.90 It appears as though there is a slight gain by using a constant amount rather than a rate that fluctuates but averages the constant amount. None of this makes sense to me. I would have assumed that earning a higher percentage early would pay off, but obviously I was wrong. Then I assumed that being there is no difference if the rate fluctuates that there would be no difference if the rate was averaged, and again I was wrong. Date: 01/08/2007 at 16:29:53 From: Doctor Peterson Subject: Re: Nature of compounding interest. Hi, Ken. In your work you came very close to seeing WHY the order doesn't matter. 
If you write out the work for the first two cases (low first, and high first) in one line each, it looks like this: $100 * 1.1 * 1.1 * 1.2 * 1.2 = $174.24 $100 * 1.2 * 1.2 * 1.1 * 1.1 = $174.24 You're doing the same four multiplications in a different order, so you get the same result--that's called the commutative property of multiplication, and you probably never thought it would show up in such a practical way! When you averaged, you used what is called the "arithmetic mean": AM = 1/n * sum of n numbers This gives the number with which each number could be replaced if you wanted them to give the same SUM. But here, you are MULTIPLYING by the numbers, so the "average" that works is what we call the "geometric mean": GM = nth root of product of n numbers There is a theorem that says the GM is always less than the AM: Arithmetic/Geometric Mean Inequality Theorem So when you took the arithmetic mean, you used an interest rate slightly larger than that which would give the same total interest over the four years. This shows that if you want to find the average effect of a fluctuating interest rate, you should take the geometric mean of 100% plus the rate. You're discovering some interesting practical facts based on some basic (and some not-so-basic) math! If you have any further questions, feel free to write back. - Doctor Peterson, The Math Forum Date: 01/08/2007 at 20:02:29 From: Ken Subject: Thank you (Nature of compounding interest.) Thank you so very much for your explanation. You answered all aspects of my question in a very thorough and professional manner. Your response should be used as an example in customer service and customer support training. I am extremely impressed and will pass word about this site to all my friends, co-workers and online programmer communities.
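Both facts in the answer are easy to check numerically: the order of returns does not matter (commutativity of multiplication), and the arithmetic mean of the rates slightly overstates the true average growth, while the geometric mean reproduces it exactly. A quick sketch:

```python
# Numerical check: order-independence of compound growth, and AM vs. GM.

def grow(principal, rates):
    for r in rates:
        principal *= 1 + r
    return principal

rates = [0.2, 0.2, 0.1, 0.1]
low_first = grow(100, sorted(rates))                 # 1.1, 1.1, 1.2, 1.2
high_first = grow(100, sorted(rates, reverse=True))  # 1.2, 1.2, 1.1, 1.1

am = sum(rates) / len(rates)                         # 0.15 (arithmetic mean)
gm = (1.1 * 1.1 * 1.2 * 1.2) ** 0.25 - 1             # geometric mean of growth
```

Growing at the constant rate `am` beats the fluctuating sequence (AM > GM), while growing at `gm` matches it exactly.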
{"url":"http://mathforum.org/library/drmath/view/69824.html","timestamp":"2014-04-16T16:35:39Z","content_type":null,"content_length":"8885","record_id":"<urn:uuid:bc343177-8508-4086-b213-2fda38630e1f>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00120-ip-10-147-4-33.ec2.internal.warc.gz"}
Authors: Alessandro Chiesa Eran Tromer Bibliographic information: Proceedings of the 1st Symposium on Innovations in Computer Science. (BibTeX) The security of systems can often be expressed as ensuring that some property is maintained at every step of a distributed computation conducted by untrusted parties. Special cases include integrity of programs running on untrusted platforms, various forms of confidentiality and side-channel resilience, and domain-specific invariants. We propose a new approach, proof-carrying data (PCD), which sidesteps the threat of faults and leakage by reasoning about properties of a computation’s output data, regardless of the process that produced it. In PCD, the system designer prescribes the desired properties of a computation’s outputs. Corresponding proofs are attached to every message flowing through the system, and are mutually verified by the system’s components. Each such proof attests that the message’s data and all of its history comply with the prescribed properties. We construct a general protocol compiler that generates, propagates, and verifies such proofs of compliance, while preserving the dynamics and efficiency of the original computation. Our main technical tool is the cryptographic construction of short non-interactive arguments (computationally-sound proofs) for statements whose truth depends on "hearsay evidence": previous arguments about other statements. To this end, we attain a particularly strong proof-of-knowledge property. We realize the above, under standard cryptographic assumptions, in a model where the prover has black- box access to some simple functionality --- essentially, a signature card. Online material:
{"url":"http://people.csail.mit.edu/alexch/research/pcd/pcd-ics.html","timestamp":"2014-04-19T12:22:23Z","content_type":null,"content_length":"4395","record_id":"<urn:uuid:a66a36af-2bc9-49c7-8cab-8cdd0cd12ca6>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00217-ip-10-147-4-33.ec2.internal.warc.gz"}
Some asymptotic approximation theorems in real and complex analysis

Abstract (Summary)

Abstract of thesis titled "Some Asymptotic Approximation Theorems in Real and Complex Analysis", submitted by LIU Hing-Chit for the degree of Doctor of Philosophy at the University of Hong Kong in February 1973.

Two parts, totalling six chapters, comprise the thesis.

The main result of the first part concerns majorant functions of power series. Dr. Y. M. Chen, the author's supervisor, and the author obtained the following. Let f(z) be defined by the power series

f(z) = A z^h + a_{h+1} z^{h+1} + a_{h+2} z^{h+2} + ...,   |z| < 1,

where h is some integer >= 0. Let F_0(A) be the family of analytic functions of this form such that |f(z)| <= 1 in |z| < 1, and let

M(f; r) = A r^h + |a_{h+1}| r^{h+1} + |a_{h+2}| r^{h+2} + ...

be the majorant function of f(z) with |z| = r. We obtained estimates on the upper bound of the function B_0(A), where B_0(A) is defined by: M(f; r) <= 1 when r <= B_0(A) and f(z) is in F_0(A); and M(f; r) > 1 for r > B_0(A) for some f(z) in F_0(A).

The main result of the second part concerns simultaneous approximation of real numbers. The author proved the following.

(1) For every real θ and every positive integer N, there is an integer n satisfying 1 <= n <= N and

||θ n^2|| <= A N^{-1/2 + ε(N)},

where A is an absolute constant, ε(N) = 1/log log N, and ||ξ|| means the distance from ξ to the nearest integer. For each N sufficiently large, the above inequality is true with A = 1.

(2) For an integer k >= 2, let K = 2^{k-1} and let δ be an arbitrary positive number. For any ε > 0 there exist positive constants C(k, ε) and C(δ, ε) such that for any real numbers θ, φ and any integer N >= 1 there exists an integer n satisfying 1 <= n <= N and

||θ n^2|| < C(δ, ε) N^{-1/(7+δ) + ε},   ||φ n^2|| < C(δ, ε) N^{-1/(7+δ) + ε},
||θ n^k|| < C(k, ε) N^{-1/(3K+1) + ε},   ||φ n^k|| < C(k, ε) N^{-1/(3K+1) + ε}.

Bibliographical Information:
School: The University of Hong Kong
School Location: China - Hong Kong SAR
Source Type: Master's Thesis
Keywords: approximation theory, asymptotes, functional analysis
Date of Publication: 01/01/1973
{"url":"http://www.openthesis.org/documents/Some-asymptotic-approximation-theorems-in-509655.html","timestamp":"2014-04-16T19:01:33Z","content_type":null,"content_length":"10056","record_id":"<urn:uuid:32022d51-968b-439a-ab98-c79b078dbbfe>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00156-ip-10-147-4-33.ec2.internal.warc.gz"}
Section: LAPACK auxiliary routine (version 1.5) (l) Updated: 12 May 1997

NAME
PDLAUU2 - compute the product U * U' or L' * L, where the triangular factor U or L is stored in the upper or lower triangular part of the matrix sub( A ) = A(IA:IA+N-1,JA:JA+N-1)

SYNOPSIS
SUBROUTINE PDLAUU2( UPLO, N, A, IA, JA, DESCA )

CHARACTER UPLO
INTEGER IA, JA, N
INTEGER DESCA( * )
DOUBLE PRECISION A( * )

PURPOSE
PDLAUU2 computes the product U * U' or L' * L, where the triangular factor U or L is stored in the upper or lower triangular part of the matrix sub( A ) = A(IA:IA+N-1,JA:JA+N-1).

If UPLO = 'U' or 'u' then the upper triangle of the result is stored, overwriting the factor U in sub( A ). If UPLO = 'L' or 'l' then the lower triangle of the result is stored, overwriting the factor L in sub( A ).

This is the unblocked form of the algorithm, calling Level 2 BLAS. No communication is performed by this routine; the matrix to operate on should be strictly local to one process.

NOTES
Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location.

Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array".

NOTATION        STORED IN       EXPLANATION
--------------- --------------- --------------------------------------
DTYPE_A(global) DESCA( DTYPE_ ) The descriptor type. In this case, DTYPE_A = 1.
CTXT_A (global) DESCA( CTXT_ )  The BLACS context handle, indicating the BLACS process grid A is distributed over. The context itself is global, but the handle (the integer value) may vary.
M_A (global)    DESCA( M_ )     The number of rows in the global array A.
N_A (global)    DESCA( N_ )     The number of columns in the global array A.
MB_A (global)   DESCA( MB_ )    The blocking factor used to distribute the rows of the array.
NB_A (global)   DESCA( NB_ )    The blocking factor used to distribute the columns of the array.
RSRC_A (global) DESCA( RSRC_ )  The process row over which the first row of the array A is distributed.
CSRC_A (global) DESCA( CSRC_ )  The process column over which the first column of the array A is distributed.
LLD_A (local)   DESCA( LLD_ )   The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)).

Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC:
LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ),
LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ).
An upper bound for these quantities may be computed by:
LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A
LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A

ARGUMENTS
UPLO (global input) CHARACTER*1
Specifies whether the triangular factor stored in the matrix sub( A ) is upper or lower triangular:
= 'U': Upper triangular,
= 'L': Lower triangular.

N (global input) INTEGER
The number of rows and columns to be operated on, i.e. the order of the triangular factor U or L. N >= 0.

A (local input/local output) DOUBLE PRECISION
Pointer into the local memory to an array of dimension (LLD_A, LOCc(JA+N-1)). On entry, the local pieces of the triangular factor L or U. On exit, if UPLO = 'U', the upper triangle of the distributed matrix sub( A ) is overwritten with the upper triangle of the product U * U'; if UPLO = 'L', the lower triangle of sub( A ) is overwritten with the lower triangle of the product L' * L.
IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. This document was created by man2html, using the manual pages. Time: 21:52:13 GMT, April 16, 2011
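To make the UPLO = 'L' case concrete, here is a plain-Python reference for the serial computation PDLAUU2 performs (no block-cyclic distribution): overwrite the lower triangle of L with the lower triangle of L' * L. This is purely illustrative; the real routine operates on a distributed ScaLAPACK matrix in place.

```python
# Serial reference for the UPLO = 'L' case: lower triangle of L' * L.

def lauu2_lower(L):
    n = len(L)
    out = [row[:] for row in L]
    for i in range(n):
        for j in range(i + 1):  # only the lower triangle (j <= i) is stored
            # (L' * L)[i][j] = sum_k L[k][i] * L[k][j]; since L is lower
            # triangular and j <= i, only k >= i contributes.
            out[i][j] = sum(L[k][i] * L[k][j] for k in range(i, n))
    return out
```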
{"url":"http://www.makelinux.net/man/3/P/pdlauu2","timestamp":"2014-04-20T11:44:14Z","content_type":null,"content_length":"15524","record_id":"<urn:uuid:8e74e353-d79e-448c-9b02-36fe6109b786>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00399-ip-10-147-4-33.ec2.internal.warc.gz"}
Writing Equations

Horizontal Lines
Horizontal lines have a slope of 0. Thus, in the slope-intercept equation y = mx + b, m = 0. The equation becomes y = b, where b is the y-coordinate of the y-intercept.

Example 1: Write an equation for the following line:
(graph omitted)
Since y always takes the value -1, an equation for the line is y = -1.

Example 2: Write an equation for the horizontal line that passes through (6, 2).
Since the line is horizontal, y is constant--that is, y always takes the same value. Since y takes a value of 2 at the point (6, 2), y always takes the value 2. Thus, the equation is y = 2.

Vertical Lines
Similarly, in the graph of a vertical line, x only takes one value. Thus, the equation for a vertical line is x = a, where a is the value that x takes.

Example 3: Write an equation for the following line:
(graph omitted)
Since x always takes a single value in the graph, the equation for the line is x = that value.

Example 4: Write an equation for the vertical line that passes through (6, 2).
Since the line is vertical, x is constant--that is, x always takes the same value. Since x takes a value of 6 at the point (6, 2), x always takes the value 6. Thus, the equation is x = 6.

Incidentally, the lines y = 2 and x = 6 are perpendicular to each other. In fact, all horizontal lines y = b are perpendicular to all vertical lines x = a. The usual relationship between the slopes of perpendicular lines does not work here, as we might expect, because the slope of a horizontal line is 0, which has no reciprocal.
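The two rules reduce to reading off a single coordinate, as this tiny illustrative helper (not part of the original lesson) shows:

```python
# Equation of a horizontal or vertical line through a point (x, y):
# only one of the two coordinates matters.

def horizontal_through(x, y):
    # x is irrelevant for a horizontal line; y is constant.
    return f"y = {y}"

def vertical_through(x, y):
    # y is irrelevant for a vertical line; x is constant.
    return f"x = {x}"
```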
{"url":"http://www.sparknotes.com/math/algebra1/writingequations/section4.rhtml","timestamp":"2014-04-20T16:18:15Z","content_type":null,"content_length":"57199","record_id":"<urn:uuid:2ba4d1f0-542c-4897-acd7-b6791242de40>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00299-ip-10-147-4-33.ec2.internal.warc.gz"}
Bladensburg, MD ACT Tutor Find a Bladensburg, MD ACT Tutor ...All levels of ecology can be taught with extensive knowledge base and research-based techniques through my educational supplemental guidance and support. Current news and resources are utilized by me to keep you abreast of the most intriguing topics of the year, as well! You CAN become an excellent speller and not have to reference a book or dictionary so much! 64 Subjects: including ACT Math, chemistry, English, reading ...I can help with subject mastery and test taking tips and strategies. I have particular strengths in verbal reasoning and writing (13 on MCAT Verbal Reasoning, 780 writing on SAT, and 36 writing on ACT). Please note that due to the technical nature of MCAT tutoring I charge $75 per hour for MCAT prep. My tutoring style involves coaching the student into reaching the answers on their own. 39 Subjects: including ACT Math, Spanish, chemistry, writing ...By learning these study skills and more, students are much more likely to achieve their goals. In addition to going through the college admissions process myself, I was in charge of my younger (home-schooled) sister's application process, including SAT/ACT prep, finding colleges based on her fin... 26 Subjects: including ACT Math, English, reading, ESL/ESOL ...As an undergraduate, I took a course in linear algebra and a course in differential equations that involved applications of it. As a graduate student, I took several applied mathematics and statistics courses that involved applications of linear algebra. In fact, an understanding of linear algebra was a requirement for the master's program that I attended. 15 Subjects: including ACT Math, calculus, geometry, statistics John received his Bachelor's Degree in Computer Science from Morehouse College and a Master of Business Administration (MBA) from Georgia Tech with concentrations in Finance and Information Technology. 
He has served as a Life Leadership Adviser for the NBMBAA Leaders of Tomorrow Program (LOT) for the past 7 years, and has provided students with instruction for financial literacy.
18 Subjects: including ACT Math, statistics, geometry, algebra 1
{"url":"http://www.purplemath.com/Bladensburg_MD_ACT_tutors.php","timestamp":"2014-04-16T10:11:51Z","content_type":null,"content_length":"24420","record_id":"<urn:uuid:9c0be4dd-aa78-4e04-97f1-d15937173422>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00306-ip-10-147-4-33.ec2.internal.warc.gz"}
1160 -- Post Office

Post Office

Time Limit: 1000MS    Memory Limit: 10000K
Total Submissions: 15093    Accepted: 8182

Description

There is a straight highway with villages alongside the highway. The highway is represented as an integer axis, and the position of each village is identified with a single integer coordinate. There are no two villages in the same position. The distance between two positions is the absolute value of the difference of their integer coordinates. Post offices will be built in some, but not necessarily all of the villages. A village and the post office in it have the same position. For building the post offices, their positions should be chosen so that the total sum of all distances between each village and its nearest post office is minimum. You are to write a program which, given the positions of the villages and the number of post offices, computes the least possible sum of all distances between each village and its nearest post office.

Input

Your program is to read from standard input. The first line contains two integers: the first is the number of villages V, 1 <= V <= 300, and the second is the number of post offices P, 1 <= P <= 30, P <= V. The second line contains V integers in increasing order. These V integers are the positions of the villages. For each position X it holds that 1 <= X <= 10000.

Output

The first line contains one integer S, which is the sum of all distances between each village and its nearest post office.

Sample Input

Sample Output

Source
IOI 2000
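The stated bounds (V <= 300, P <= 30) make the classic interval dynamic program comfortably fast: precompute the cost of serving each contiguous run of villages with one office (placed at the median village), then choose the best split into P runs. The sketch below is one standard approach, not the judge's reference solution, and it omits the stdin/stdout plumbing the judge expects.

```python
# O(P * V^2) dynamic program for the post-office problem.

def min_total_distance(positions, p):
    v = len(positions)
    xs = sorted(positions)
    # cost[i][j]: minimal distance sum if one office serves villages i..j;
    # placing the office at the median village is optimal.  Extending the
    # interval by one village adds the distance from the new endpoint to
    # the median of i..j.
    cost = [[0] * v for _ in range(v)]
    for i in range(v):
        for j in range(i + 1, v):
            cost[i][j] = cost[i][j - 1] + xs[j] - xs[(i + j) // 2]
    INF = float("inf")
    # dp[k][j]: minimal cost covering the first j villages with k offices.
    dp = [[INF] * (v + 1) for _ in range(p + 1)]
    dp[0][0] = 0
    for k in range(1, p + 1):
        for j in range(1, v + 1):
            for i in range(j):
                if dp[k - 1][i] + cost[i][j - 1] < dp[k][j]:
                    dp[k][j] = dp[k - 1][i] + cost[i][j - 1]
    return dp[p][v]
```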
{"url":"http://poj.org/problem?id=1160","timestamp":"2014-04-19T12:03:03Z","content_type":null,"content_length":"6820","record_id":"<urn:uuid:411949a0-da90-464d-b219-c45c14055218>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00245-ip-10-147-4-33.ec2.internal.warc.gz"}
Output negative digits Next: Extending division to the Up: Division Previous: Output positive digits Negative digits are generated in the output if the numerator starts The strategy here is identical to the one used when we output positive digits. We require three cases as before, and output either -1, Martin Escardo
{"url":"http://www.dcs.ed.ac.uk/home/mhe/plume/node68.html","timestamp":"2014-04-17T01:48:54Z","content_type":null,"content_length":"3102","record_id":"<urn:uuid:25cb4d01-0c21-484a-b335-f9029f7b358c>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00212-ip-10-147-4-33.ec2.internal.warc.gz"}
East Newark, NJ Calculus Tutor Find an East Newark, NJ Calculus Tutor ...I will come to your home or meet you at a mutually convenient location (such as the library). I am happy to work with individuals or groups. Group rates can be negotiated. I am available on weekends and some evenings. 10 Subjects: including calculus, statistics, algebra 2, geometry ...I'm now a PhD student at Columbia in Astronomy (have completed two Masters by now) and will be done in a year. I have a lot of experience tutoring physics and math at all levels. I have been tutoring since high school so I have more than 10 years of experience, having tutored students of all ages, starting from elementary school all the way to college-level. 11 Subjects: including calculus, Spanish, physics, geometry ...My tutoring services mainly focus on test preparation. Scoring in the 99th percentile on the math and chemistry sections of the GRE, GMAT, MCAT, SAT, SAT Subject and AP exams, I have also obtained high combined scores in the math and verbal sections of the SAT (99th percentile) and GRE (98th per... 24 Subjects: including calculus, chemistry, physics, biology ...I participate in NaNoWriMo every year! ** NOTE: I can't travel farther than 10 miles to meet with you, due to an increase in tutees. Sorry! **I got 5s in the following AP tests: Physics B, Physics C Mechanics, Physics C E&M. I have been designing websites in HTML and CSS for several years. (I ... 26 Subjects: including calculus, English, physics, writing ...I can speak from experience as a student that poor tutoring could not come in any worse form than this. I would like you to gain as much knowledge and appreciation for the subject to the extent that you are capable of! I can almost guarantee you will actually take notice of your personal growth! 
28 Subjects: including calculus, chemistry, writing, geometry
{"url":"http://www.purplemath.com/East_Newark_NJ_Calculus_tutors.php","timestamp":"2014-04-18T05:39:38Z","content_type":null,"content_length":"24237","record_id":"<urn:uuid:2e1d7444-7f8f-480b-8d77-db536084efa7>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00364-ip-10-147-4-33.ec2.internal.warc.gz"}
From Encyclopedia of Mathematics

A finite n-dimensional simplicial complex with the following properties:

a) it is non-branching: each (n-1)-dimensional simplex is a face of exactly two n-dimensional simplices;

b) it is strongly connected: any two n-dimensional simplices can be joined by a "chain" of n-dimensional simplices in which each pair of neighbouring simplices has a common (n-1)-dimensional face;

c) it has dimensional homogeneity: each simplex is a face of some n-dimensional simplex.

If a certain triangulation of a topological space is a pseudo-manifold, then any of its triangulations is a pseudo-manifold. Therefore one can talk about the property of a topological space being (or not being) a pseudo-manifold. Examples of pseudo-manifolds: triangulable, compact connected homology manifolds over the integers (cf. Homology manifold); complex algebraic varieties (even with singularities); and Thom spaces (cf. Thom space) of vector bundles over triangulable compact manifolds. Intuitively, a pseudo-manifold can be considered as a combinatorial realization of the general idea of a manifold with singularities, the latter forming a set of codimension two. The concepts of orientability, orientation and degree of a mapping make sense for pseudo-manifolds and moreover, within the combinatorial approach, pseudo-manifolds form the natural domain of definition for these concepts (especially as, formally, the definition of a pseudo-manifold is simpler than the definition of a combinatorial manifold). Cycles in a manifold can in a certain sense be realized by means of pseudo-manifolds (see Steenrod problem).

How to Cite This Entry: Pseudo-manifold. Encyclopedia of Mathematics. URL: http://www.encyclopediaofmath.org/index.php?title=Pseudo-manifold&oldid=24541 This article was adapted from an original article by D.V.
Anosov (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098. See original article
{"url":"http://www.encyclopediaofmath.org/index.php/Pseudo-manifold","timestamp":"2014-04-20T18:23:19Z","content_type":null,"content_length":"19592","record_id":"<urn:uuid:02161216-276a-4572-b34e-dcaa71ab3352>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00098-ip-10-147-4-33.ec2.internal.warc.gz"}
New Systolic Architectures for Inversion and Division in GF(2^m)

Zhiyuan Yan and Dilip V. Sarwate, IEEE Transactions on Computers, vol. 52, no. 11, pp. 1514-1519, November 2003. doi:10.1109/TC.2003.1244950

Abstract—We present two systolic architectures for inversion and division in GF(2^m) based on a modified extended Euclidean algorithm. Our architectures are similar to those proposed by others in that they consist of two-dimensional arrays of computing cells and control cells with only local intercell connections and have O(m^2) area-time product. However, in comparison to similar architectures, both our architectures have critical path delays that are smaller, gate counts that range from being considerably smaller to only slightly larger, and latencies that are identical for inversion but somewhat larger for division.
One architecture uses an adder or an (m + 1)-bit ring counter inside each control cell, while the other architecture distributes the ring counters into the computing cells, thereby reducing each control cell to just two gates. [1] R.E. Blahut, Theory and Practice of Error-Control Codes. Reading, Mass.: Addison-Wesley, 1983. [2] W. Diffie and M.E. Hellman, New Directions in Cryptography IEEE Trans. Information Theory, vol. 22, pp. 644-654, 1976. [3] D.E.R. Denning, Cryptography and Data Security. Addison-Wesley, 1983. [4] I.S. Reed and T.K. Truong, The Use of Finite Fields to Compute Convolutions IEEE Trans. Information Theory, vol. 21, pp. 208-213, Mar. 1975. [5] K.K. Parhi, VLSI Digital Signal Processing Systems. New York: John Wiley&Sons, 1999. [6] K. Araki, I. Fujita, and M. Morisue, Fast Inverters over Finite Field Based on Euclid's Algorithm Trans. IEICE, vol. 72E, no. 11, pp. 1230-1234, Nov. 1989. [7] E.D. Mastrovito, VLSI Architectures for Computations in Galois Fields PhD thesis, Linköping Univ., 1991. [8] H. Brunner, A. Curiger, and M. Hofstetter, On Computing Multiplicative Inverses in${\rm GF}(2^m)$ IEEE Trans. Computers, vol. 42, no. 8, pp. 1010-1015, Aug. 1993. [9] C.-T. Huang and C.-W. Wu, High-Speed C-Testable Systolic Array Design for Galois-Field Inversion Proc. European Design and Test Conf., pp. 342-346, Mar. 1997. [10] J.H. Guo and C.L. Wang, Systolic Array Implementation of Euclid's Algorithm for Inversion and Division in$GF(2^m)$ IEEE Trans. Computers, vol. 47, no. 10, pp. 1161-1167, Oct. 1998. [11] J.-H. Guo and C.-L. Wang, Hardware-Efficient Systolic Architecture for Inversion and Division in${\rm GF}(2^m)$ IEE Proc. Computers and Digital Techniques, pp. 272-278, 1998. [12] S.-W. Wei, VLSI Architectures for Computing Exponentiations, Multiplicative Inverses, and Divisions in${\rm GF}(2^m)$ Proc. Int'l Symp. Circuits and Systems (ISCAS '94), pp. 203-206, 1994. [13] S.-W. 
Wei, VLSI Architectures for Computing Exponentiations, Multiplicative Inverses, and Divisions in GF(2^m), IEEE Trans. Circuits and Systems-II: Analog and Digital Signal Processing, vol. 44, no. 10, pp. 847-855, Oct. 1997.
[14] C.-L. Wang and J.-H. Guo, New Systolic Arrays for C+AB^2, Inversion, and Division in GF(2^m), IEEE Trans. Computers, vol. 49, no. 10, pp. 1120-1125, Oct. 2000.
[15] E.R. Berlekamp, G. Seroussi, and P. Tong, A Hypersystolic Reed-Solomon Decoder, Reed-Solomon Codes and Their Applications, S.B. Wicker and V.K. Bhargava, eds., chapter 10, Piscataway, N.J.: IEEE Press, 1994.
[16] C. Paar, Some Remarks on Efficient Inversion in Finite Fields, Proc. 1995 Int'l Symp. Information Theory, 1995.
[17] C. Paar, Fast Inversion in Composite Galois Fields GF(2^m), Proc. 1998 Int'l Symp. Information Theory, 1998.
[18] M.A. Hasan and V.K. Bhargava, Bit-Serial Systolic Divider and Multiplier for Finite Fields GF(2^m), IEEE Trans. Computers, vol. 41, no. 8, pp. 972-980, Aug. 1992.
[19] M.A. Hasan, Double-Basis Multiplicative Inversion over GF(2^m), IEEE Trans. Computers, vol. 47, no. 9, pp. 960-970, Sept. 1998.
[20] D.V. Sarwate and N.R. Shanbhag, High-Speed Architectures for Reed-Solomon Decoders, IEEE Trans. VLSI Systems, vol. 9, no. 5, pp. 941-955, Oct. 2001.
[21] R.P. Brent and H.T. Kung, Systolic VLSI Arrays for Polynomial GCD Computation, IEEE Trans. Computers, vol. 33, pp. 731-736, 1984.
[22] N. Takagi, A VLSI Algorithm for Modular Division Based on the Binary GCD Algorithm, IEICE Trans. Fundamentals of Electronics, Comm., and Computer Sciences, vol. E81-A, no. 5, pp. 724-728, May 1998.
[23] C.H. Wu, C.M. Wu, M.D. Shieh, and Y.T. Wang, Systolic VLSI Realization of a Novel Iterative Division Algorithm over GF(2^m): A High-Speed, Low-Complexity Design, Proc. Int'l Symp. Circuits and Systems (ISCAS '01), pp. 33-36, 2001.
[24] C.H. Wu, C.M. Wu, M.D. Shieh, and Y.T.
Wang, An Area-Efficient Systolic Division Circuit over GF(2^m) for Secure Communication, Proc. Int'l Symp. Circuits and Systems (ISCAS '02), pp. 733-736, 2002.
[25] Y. Watanabe, N. Takagi, and K. Takagi, A VLSI Algorithm for Division in GF(2^m) Based on Extended Binary GCD Algorithm, IEICE Trans. Fundamentals of Electronics, Comm., and Computer Sciences, vol. E85-A, no. 5, pp. 994-999, May 2002.
[26] Z. Yan and D.V. Sarwate, Systolic Architectures for Finite Field Inversion and Division, Proc. Int'l Symp. Circuits and Systems (ISCAS '02), pp. 789-792, 2002.
[27] H.T. Kung, Why Systolic Architectures?, Computer, vol. 15, no. 1, pp. 37-46, Jan. 1982.
[28] E.R. Berlekamp, Algebraic Coding Theory. New York: McGraw-Hill, 1968.
[29] H.T. Kung, C.-C. Wang, C.-J. Huang, and K.-C. Tsai, A 1.00 GHz 0.6-µm 8-Bit Carry Lookahead Adder Using PLA-Styled All-n-Transistor Logic, IEEE Trans. Circuits and Systems-II: Analog and Digital Signal Processing, vol. 47, pp. 133-135, 2000.
[30] C.-C. Wang, P.-M. Lee, R.-C. Lee, and C.-J. Huang, A 1.25 GHz 32-Bit Tree-Structured Carry Lookahead Adder, Proc. Int'l Symp. Circuits and Systems (ISCAS '01), vol. 4, pp. 80-83, 2001.

Index Terms: Finite fields, field arithmetic, inversion, division, systolic, extended Euclidean algorithm.
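For context, the "extended Euclidean algorithm" the abstract builds on can be sketched in plain software. This is a bit-vector sketch of the algorithm itself, not of the paper's systolic architecture, and the example field polynomials below are chosen here purely for illustration:

```python
def polymul(a, b):
    """Carry-less product of two GF(2) polynomials (bitmask encoded)."""
    out = 0
    while b:
        if b & 1:
            out ^= a
        a <<= 1
        b >>= 1
    return out

def polydivmod(a, b):
    """Quotient and remainder of a divided by b over GF(2)."""
    q = 0
    while a.bit_length() >= b.bit_length():
        shift = a.bit_length() - b.bit_length()
        q ^= 1 << shift
        a ^= b << shift
    return q, a

def gf_inverse(a, mod_poly):
    """Inverse of a nonzero element modulo an irreducible polynomial.

    Maintains the invariant s_i * a == r_i (mod mod_poly), so when the
    remainder reaches gcd = 1, s0 is the inverse of a.
    """
    r0, r1 = mod_poly, a
    s0, s1 = 0, 1
    while r1:
        q, r = polydivmod(r0, r1)
        r0, r1 = r1, r
        s0, s1 = s1, s0 ^ polymul(q, s1)
    assert r0 == 1, "a is not invertible modulo mod_poly"
    return s0
```

For example, in GF(2^3) built with x^3 + x + 1 (0b1011), the inverse of x (0b010) is x^2 + 1 (0b101), since x * (x^2 + 1) = x^3 + x = 1.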
{"url":"http://www.computer.org/csdl/trans/tc/2003/11/t1514-abs.html","timestamp":"2014-04-16T19:11:02Z","content_type":null,"content_length":"57577","record_id":"<urn:uuid:78ffa06d-3a0b-4890-b468-98187f6644e8>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00010-ip-10-147-4-33.ec2.internal.warc.gz"}
ACM SIGGRAPH Education Committee Online

The 'Progressive' Radiosity Algorithm.

The 'progressive' radiosity solution is an incremental method, yielding intermediate results at much lower computation and storage costs. Each iteration of the algorithm requires the calculation of form factors between a point on a single surface and all other surfaces, rather than all n^2 form factors (where n is the number of surfaces in the environment). After the form factor calculation, radiosity values for the surfaces of the environment are updated. This method will eventually produce the same complete solution as the 'full matrix' method, though, unlike the 'full matrix' method, it will also produce intermediate results, each more accurate than the last. It can be halted when the desired approximation is reached. It also exacts no large (n^2) storage cost.
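The shooting iteration described above can be sketched as follows. This is a toy version under stated assumptions: patches have equal areas, and a `form_factor_row(i)` callback (a name invented here) supplies the single row of form factors each iteration needs, computed on demand, which is exactly how the method avoids the n^2 storage cost:

```python
def progressive_radiosity(emission, reflectance, form_factor_row, iterations):
    """Progressive ('shooting') radiosity for n equal-area patches."""
    n = len(emission)
    radiosity = list(emission)
    unshot = list(emission)
    for _ in range(iterations):
        i = max(range(n), key=lambda k: unshot[k])  # most unshot energy
        if unshot[i] <= 0.0:
            break                                   # converged early
        row = form_factor_row(i)   # only one row of form factors is needed
        shoot, unshot[i] = unshot[i], 0.0
        for j in range(n):
            if j != i:
                delta = reflectance[j] * row[j] * shoot
                radiosity[j] += delta   # patch j brightens now...
                unshot[j] += delta      # ...and must later re-shoot this
    return radiosity
```

Each pass is one "intermediate result": stopping after any number of iterations yields a usable, progressively more accurate image.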
{"url":"http://education.siggraph.org/resources/cgsource/instructional-materials/slide-sets/data/1993_radiosity/13.txt/view","timestamp":"2014-04-19T17:19:38Z","content_type":null,"content_length":"37068","record_id":"<urn:uuid:78fe7811-800f-466b-a4d0-e9251f1b70db>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00181-ip-10-147-4-33.ec2.internal.warc.gz"}
Excel Pivot Table Calculated Field

Pivot Table Calculated Field
Video: Add a Simple Calculated Field
Add a Simple Calculated Field
Add a Complex Calculated Field
Remove a Pivot Table Calculated Field
Programmatically Remove Pivot Table Calculated Field
Create List of Pivot Table Formulas
Video: Create List of Pivot Table Formulas
List All Formulas For All Pivot Tables
Download the Sample File
Pivot Table Tutorials and Videos

Pivot Table Calculated Field

In a pivot table, you can create a new field that performs a calculation on the sum of other pivot fields. For example, in the screen shot below, a calculated field, named Bonus, has been created, and it will calculate 3% of the Total, if the sum of Units is greater than 100.

About Calculated Fields
- For calculated fields, the individual amounts in the other fields are summed, and then the calculation is performed on the total amount.
- Calculated field formulas cannot refer to the pivot table totals or subtotals.
- Calculated field formulas cannot refer to worksheet cells by address or by name.
- Sum is the only function available for a calculated field.
- Calculated fields are not available in an OLAP-based pivot table.

Video: Add a Simple Calculated Field

Watch this video to see the steps for creating a simple calculated field. The written instructions are below the video. Click here to download the sample file for this video: Simple Calculated Field

Add a Simple Calculated Field

In this example, the pivot table shows the total sales for each sales representative per product, and the Units field summarizes the number of units sold. Click here to download the sample file for this tutorial: Simple Calculated Field

The sales reps will earn a 3 percent bonus on their Total Sales. To show the bonuses, you can add a calculated field to the pivot table. In this example, the formula will multiply the Total field by 3%. To add a calculated field: 1.
Select a cell in the pivot table, and on the Excel Ribbon, under the PivotTable Tools tab, click the Options tab.
2. In the Calculations group, click Fields, Items, & Sets, and then click Calculated Field.
3. Type a name for the calculated field, for example, RepBonus.
4. In the Formula box, type =Total * 3%
5. Click Add to save the calculated field, and click Close.
6. The RepBonus field appears in the Values area of the pivot table, and in the field list in the PivotTable Field List.

Add a Complex Calculated Field

In this example, the pivot table shows the total sales for each sales representative per product, and the Units field summarizes the number of units sold. The sales reps will earn a 3 percent bonus if they have sold more than 100 units of any product. To show the bonuses, you can add a calculated field to the pivot table. In this example, the formula will test the Units field, to see if more than 100 units were sold, and multiply the Total field by 3%.

To add a calculated field:
1. Select a cell in the pivot table, and on the Excel Ribbon, under the PivotTable Tools tab, click the Options tab.
2. In the Tools group, click Formulas, and then click Calculated Field.
3. Type a name for the calculated field, for example, Bonus.
4. In the Formula box, type =IF(Units>100,Total*3%,0).
5. Click Add to save the calculated field, and click Close.
The Bonus field appears in the Values area of the pivot table, and in the field list in the PivotTable Field List.

Remove a Pivot Table Calculated Field

In this example, the pivot table has a calculated field named Bonus. It appears in the Values area as Sum of Bonus. You could temporarily hide the Bonus calculated field, or permanently delete it from the pivot table.

Temporarily Remove a Calculated Field

To temporarily remove a calculated field from a pivot table, follow these steps:
1. In the pivot table, right-click a cell in the calculated field. In this example, we'll right-click the Bonus field.
2. In the popup menu, click the Remove command that shows the name of the calculated field.
The calculated field is removed from the pivot table layout, but remains in the PivotTable Field List. Later, you can add a check mark to the calculated field in the PivotTable Field List, to return it to the pivot table layout.

Permanently Remove a Calculated Field

To permanently remove a calculated field, follow these steps to delete it:
1. Select any cell in the pivot table.
2. On the Ribbon, under the PivotTable Tools tab, click the Options tab.
3. In the Tools group, click Formulas, and then click Calculated Field.
4. From the Name drop down list, select the name of the calculated field you want to delete.
5. Click Delete, and then click OK to close the dialog box.

Programmatically Remove Pivot Table Calculated Field

In Excel VBA, if you try to change the Orientation for a calculated field, Excel displays the error message "Run-time error '1004': Unable to set the Orientation property of the PivotField class". You can manually uncheck the calculated field boxes, and remove them from the pivot table, then check the box again, to put it back into the layout. However, if you record code while removing the calculated field, that recorded code shows the same error message when you try to run it. So, I wrote the following code that deletes each calculated field, then immediately adds it back to the pivot table field list, but not into the pivot table layout. If you've been having the same trouble with calculated fields, I hope this helps!
Sub RemoveCalculatedFields()
    Dim pt As PivotTable
    Dim pf As PivotField
    Dim pfNew As PivotField
    Dim strSource As String
    Dim strFormula As String
    Set pt = ActiveSheet.PivotTables(1)
    For Each pf In pt.CalculatedFields
        strSource = pf.SourceName
        strFormula = pf.Formula
        pf.Delete  'delete, then add straight back, as described above
        Set pfNew = pt.CalculatedFields.Add(strSource, strFormula)
    Next pf
End Sub

Create List of Pivot Table Formulas

With a built-in pivot table command, you can quickly create a list of the calculated fields and calculated items in the selected pivot table.

List the Pivot Table Formulas in Excel 2010 and Excel 2013
1. Select any cell in the pivot table.
2. On the Ribbon, under the PivotTable Tools tab, click the Options tab.
3. In the Calculations group, click Fields, Items & Sets.
4. Click List Formulas.

List the Pivot Table Formulas in Excel 2007
1. Select any cell in the pivot table.
2. On the Ribbon, under the PivotTable Tools tab, click the Options tab.
3. In the Tools group, click Formulas.
4. Click List Formulas.
A new sheet is inserted in the workbook, with a list of the calculated fields and a list of the calculated items.

List the Pivot Table Formulas in Excel 2003
1. Select any cell in the pivot table.
2. On the Pivot toolbar, click PivotTable.
3. Click Formulas, then click List Formulas.
A new sheet is inserted in the workbook, with a list of the calculated fields and calculated items (see the Excel 2007 example above).

Video: Create a List of Pivot Table Formulas

With a built-in command, you can quickly create a list of the calculated fields and calculated items in the selected pivot table. To see the steps, please watch this short video tutorial.

List All Formulas For All Pivot Tables

To create a list of all the formulas in a specific pivot table, you can use the List Formulas command, as shown above. There is no built-in command that will list the formulas for all of the pivot tables in a workbook, but you can use programming to do that.
In the sample code shown below, a new worksheet is added to the active workbook, with a list of all the calculated items and calculated fields, in all of the pivot tables. To download the sample file, which contains the code, go to the Download section, below.

Sub ListAllPivotFormulas()
'print all the pivot table formulas
'in the active workbook
Dim lRow As Long
Dim wb As Workbook
Dim ws As Worksheet
Dim wsFP As Worksheet
Dim pt As PivotTable
Dim pf As PivotField
Dim cf As Variant 'calculated field
Dim ci As Variant 'calculated item
Dim strSh As String
Dim lPI As Long

Application.DisplayAlerts = False
Set wb = ActiveWorkbook
strSh = "FP_" & Format(Date, "yyyymmdd")
On Error GoTo exitHandler

Set wsFP = Worksheets.Add
With wsFP
    .Name = strSh
    .Columns("A:E").NumberFormat = "@" 'text format
    .Range(.Cells(1, 1), .Cells(1, 7)).Value _
        = Array("ID", "Sheet", "PivotTable", _
            "Type", "Field", "Name", "Formula")
    .Rows(1).Font.Bold = True
End With
lRow = 2

For Each ws In wb.Worksheets
    If ws.PivotTables.Count > 0 Then
        For Each pt In ws.PivotTables
            For Each cf In pt.CalculatedFields
                wsFP.Range(wsFP.Cells(lRow, 1), _
                    wsFP.Cells(lRow, 7)).Value _
                    = Array(lRow - 1, _
                        ws.Name, pt.Name, _
                        "Calc Field", , cf.Name, _
                        " " & cf.Formula)
                lRow = lRow + 1
            Next cf
            For Each pf In pt.PivotFields
                On Error Resume Next
                lPI = 0
                lPI = pf.CalculatedItems.Count
                On Error GoTo errHandler
                If lPI > 0 Then
                    For Each ci In pf.CalculatedItems
                        wsFP.Range(wsFP.Cells(lRow, 1), _
                            wsFP.Cells(lRow, 7)).Value _
                            = Array(lRow - 1, _
                                ws.Name, pt.Name, _
                                "Calc Item", pf.Name, _
                                ci.Name, " " & ci.Formula)
                        lRow = lRow + 1
                    Next ci
                End If
            Next pf
        Next pt
    End If
Next ws

exitHandler:
Application.DisplayAlerts = True
Exit Sub

errHandler:
MsgBox "Could not list formulas"
Resume exitHandler
End Sub

Download the Sample File

You can download the Calculated Field sample file which has pivot tables with a calculated field and calculated items, and the code to create a list of all pivot table formulas.
The file is zipped, and is in Excel 2007 / 2010 format (xlsm). The file contains macros, so enable them to test the macro.
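Outside Excel, the same two-stage behaviour of a calculated field, summing the base fields first and only then applying the formula to the sums, can be mimicked with pandas (hypothetical sample data; pandas is not part of this tutorial):

```python
import pandas as pd

# hypothetical sales records, mirroring the tutorial's Units and Total fields
sales = pd.DataFrame({
    "Rep":   ["Ann", "Ann", "Bob", "Cal"],
    "Units": [60, 70, 150, 40],
    "Total": [600.0, 700.0, 2000.0, 400.0],
})

# like a pivot table: sum Units and Total per Rep first...
pivot = sales.groupby("Rep")[["Units", "Total"]].sum()

# ...then apply the calculated-field formula =IF(Units>100, Total*3%, 0)
pivot["Bonus"] = (pivot["Total"] * 0.03).where(pivot["Units"] > 100, 0.0)
```

Ann illustrates why the summing order matters: neither of her rows alone exceeds 100 units, but her summed 130 units do, so she earns the bonus, just as an Excel calculated field would compute it.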
{"url":"http://www.contextures.com/excel-pivot-table-calculated-field.html","timestamp":"2014-04-19T19:34:19Z","content_type":null,"content_length":"40789","record_id":"<urn:uuid:dffee816-c7b7-4ca1-8fed-d8fc3badf85c>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00036-ip-10-147-4-33.ec2.internal.warc.gz"}
41 - Hunter agility per crit wrong

The agility to crit calculations seem to be very off: my ranged weapon is showing as adding a total of 11.61% crit, with 10.73% of that coming from agility; in reality this weapon only increases my crit percentage by a total of 1.75% crit. Is there any way I can help assist in getting these values correct? I'm a level 90 Pandaren Hunter

mikari Oct 05, 2012 at 15:52 UTC

Reply to #993435: i checked in a change for "Crit from Agility" for a level 90 hunter.

mikari Oct 06, 2012 at 18:14 UTC - 0 likes
All the rating stuff, including haste added in r173, checks out perfectly in game. Once the agility per crit is in, the addon should be pretty much 100% accurate for level 90 Hunters. Thanks so much for all the work you're putting in updating the library.

The relevant Hunter Critical Strike values at level 90 are:
critChance = -1.53 + agility/1259.51809 + critRating/600
If agility and crit rating are not enough to get you at least 1.53% Crit Chance (making the above formula negative), then the value caps at zero:
critChance = max(-1.53 + agility/1259.51809 + critRating/600, 0)
Now i just have to find the spot in LibStatLogic that deals with
- agility per crit
- base crit chance being negative
- lower cap at 0%
and work it all in.

Reply to #993100: Does this make sense?
- Crit Rating: 880 (Increases Critical Chance by 1.47%)
- Crit Chance: 0.96%
That sounds very strange on Blizzard's part.

mikari Oct 06, 2012 at 14:29 UTC - 0 likes
I have the data from Tank Points but I don't have Excel on this machine and don't seem to be able to save the data as a CSV; trying to import the data into Open Office isn't giving me columns. Here's the data directly from the LUA file with the leading and trailing stuff stripped though. Think I may have managed to do it properly now.
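The capped level-90 formula quoted in the comments above can be checked in isolation (constants copied verbatim from the ticket; the function name is ours):

```python
def hunter_crit_chance(agility, crit_rating):
    # level-90 values quoted in the ticket: base -1.53%,
    # 1259.51809 agility per 1% crit, 600 rating per 1% crit
    raw = -1.53 + agility / 1259.51809 + crit_rating / 600.0
    return max(raw, 0.0)   # the value caps at zero, never negative
```

For example, the "Crit Rating: 880 (Increases Critical Chance by 1.47%)" tooltip in the thread matches 880/600 ≈ 1.47, and low-gear characters hit the 0% floor rather than going negative.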
Last edited Oct 06, 2012

Reply to #992653: Something is wrong. Mixed in with the rows of data is a "header" row:
PlayerLevel PlayerClass PlayerRace SpecializationIndex MasterySpell ShapeshiftForm Strength BaseStrength Agility BaseAgility Stamina BaseStamina Intellect BaseIntellect Armor DodgeRating DodgeRatingBonus DodgeChance Free Dodge DR Dodge Dodge Theo ParryRating ParryRatingBonus ParryChance Free Parry CritRating CritRatingBonus CritChance BlockRating BlockRatingBonus BlockChance MasteryRating MasteryRatingBonus Mastery MasteryEffect MasteryFactor MeleeHitRating MeleeHitRatingBonus MeleeHitChance SpellHitRating SpellHitRatingBonus SpellHitChance MeleeHasteRating MeleeHasteRatingBonus MeleeHaste DR Parry Parry Residual
Ideally you would cut/paste that row to the top. But if i take the header from anyone else, it looks like columns are missing, e.g.
- the first column should be playerLevel, but it's not there
If i ignore that, and assume only that column is missing, then other values make no sense: there's no way you have a crit chance of over 1000 percent.
Also, and this is a google spreadsheet issue, your spreadsheet has 50 rows - exactly 50 rows. If you want to add more you have to scroll to the bottom and click to add more.
The way i usually transfer the stuff into a google spreadsheet is to
- strip off the leading and trailing lua stuff, leaving only comma separated values
- first save it as CSV
- open the CSV in Excel (as Google doesn't know how to open a CSV file)
- move the header row to the top
- select all
- paste into google
- if google complains that it can only paste 50k at a time, then i copy 5 or 10 columns at a time.

mikari Oct 06, 2012 at 07:18 UTC - 0 likes

There definitely is a way you can help. What we need are a lot of numbers giving your:
- Agility (e.g. 1172)
- base Agility (e.g. 97)
- Crit Rating (e.g. 2174)
- Crit Rating Bonus (e.g. 1.75%)
- Crit Chance (e.g. 21.62%)
And we need those for as many different items equipped as you can stand, i.e.:
- over a large range of Agility (all your agility gear on, down to naked and just your base agility)
- over a large range of Crit Rating (all your crit gear on, down to naked and zero crit rating)
The additional problem is that the percentages shown in your tooltip only show two decimal places. While this is better than nothing, it's not good enough. So you'll have to type a command to output the full value:
/dump GetCombatRatingBonus(CR_CRIT_RANGED)
/dump GetCritChance()
i got sick of typing these commands, and typing them out into Excel, so i added the capability to another addon i wrote. Read about it here and follow those instructions.

Oct 05, 2012
New - Issue has not had initial review yet.
Defect - A shortcoming, fault, or imperfection
Medium - Normal priority.
{"url":"http://www.wowace.com/addons/libstatlogic-1-2/tickets/41-hunter-agility-per-crit-wrong/","timestamp":"2014-04-17T13:10:29Z","content_type":null,"content_length":"44198","record_id":"<urn:uuid:f6f52820-c069-438d-8032-5006c695fc8a>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00483-ip-10-147-4-33.ec2.internal.warc.gz"}
What is the fastest growing function? August 22nd 2013, 11:51 AM What is the fastest growing function? I heard somewhere that it was $e^x$ but if I compare $e^x$ and $x^{100}$ I get this: Attachment 29054 With $e^x$ being the one on the far right. So it seems to me that $x^{100}$ is growing faster...? (has a higher slope). August 22nd 2013, 06:06 PM Re: What is the fastest growing function? Hey Paze. If you allow discontinuities then the fastest growing function at a particular point is the delta function at x = 0. Dirac delta function - Wikipedia, the free encyclopedia August 23rd 2013, 01:37 AM Re: What is the fastest growing function? Sure, it grows faster at first. But what happens for $x = 1000$? $e^{1000}>e^{900}=(e^3)^{300} >10^{300}=(10^3)^{100}=1000^{100}$. Then what happens when we increase x by 1? $e^{x+1}/e^x=e$ for all x. In contrast, $(x+1)^{100}/x^{100}\to 1$ as $x\to\infty$ because the numerator and the denominator are polynomials of the same degree with the same leading coefficient 1. For example, $1001 ^{100}/1000^{100}\approx 1.1<e\approx 2.7$. Thus, when x is increased by 1, $e^x$ is always multiplied by $e$, while $x^{100}$ is multiplied by smaller and smaller numbers that tend to 1. This is clever, though delta function is not really a function. Even the piecewise function $\begin{cases}0&x<0\\ 1&x\ge0\end{cases}$ grows infinitely fast at 0. If we restrict ourselves to continuous functions or to functions on natural numbers, suppose we have a candidate for the fastest-growing function $f(x)$. Then what about $2^{f(x)}$? August 23rd 2013, 05:39 AM Re: What is the fastest growing function? I don't believe there is such a thing as "the fastest growing function," as you could always multiply whatever you think is the fastest function by 2 to get one that has a greater slope. However, in thinking about "normal" functions that grow very fast $x^x$ is a pretty good one. Once you get past about x=2.1 it grows much faster than $e^x$. 
Of course the next logical extension of this is to consider $x^{(x^x)}$, which grows so fast that it exceeds one googol around x= 3.84, then $x^{(x^{(x^x)})}$, etc, etc. Attachment 29056
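The claims in this thread are easy to check numerically by comparing logarithms, which avoids overflowing floats; the integer crossover below is computed here, not taken from the posts:

```python
import math

def exp_beats_power(x, k=100):
    """True when e**x > x**k, compared via logs: x > k*ln(x)."""
    return x > k * math.log(x)

# x**100 dominates for moderate x, but e**x wins again eventually:
crossover = next(x for x in range(2, 10_000) if exp_beats_power(x))
```

This matches the second post's argument: each step x → x+1 multiplies e^x by e but x^100 by a factor tending to 1, so e^x must eventually overtake; the scan puts that integer crossover at x = 648 (e^x is also briefly larger very close to x = 1, where x^100 ≈ 1).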
{"url":"http://mathhelpforum.com/algebra/221348-what-fastest-growing-function-print.html","timestamp":"2014-04-19T22:17:02Z","content_type":null,"content_length":"11313","record_id":"<urn:uuid:20344742-6060-4ec8-a868-29342c12ab17>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00106-ip-10-147-4-33.ec2.internal.warc.gz"}
This Python script calculates divergence, curl magnitude and speed variations along flow lines for a 2D vector field. Input data must be provided as two Arc/Info ASCII grids, storing the x- and y-components of the vector field. The results consist of a series of three Arc/Info ASCII grids, with the resulting curl, divergence and speed variations, as well as a VTK file, storing the input data and the results. The names of input and output files are in a text (parameter) file. See Glacial flow analysis with open source tools: the case of the Reeves Glacier grounding zone, East Antarctica for script rationale.

Code and use

vector_field_par.py - vers. 2010-10-12
Created and tested with Python 2.6 and numpy in Python(x,y) under Windows Vista. Released under GNU General Public License v. 3.
Example command line to run the script: vector_field_par.py par.txt
See Input and output section below for parameter file content specifications.

Divergence and curl are two vector operators widely used in physics and engineering, for example, for the study of water flows. The nabla differential operator allows one to calculate these two operators from a vector field (partial derivatives are written here as d/dx, etc.):

nabla = (d/dx, d/dy, d/dz)

The divergence is a scalar value derived from the scalar product of nabla and the vector field in the neighborhood of a point (x, y, z):

div v = dvx/dx + dvy/dy + dvz/dz

while the curl is the vector product of nabla and the vector field:

curl v = (dvz/dy - dvy/dz) i + (dvx/dz - dvz/dx) j + (dvy/dx - dvx/dy) k

With GIS data, we usually consider 2D dimensionality. Therefore, we only consider partial derivatives with respect to x and y axes. For the curl, we will have a vertical vector (parallel to k), whose magnitude is equal to the last component of the curl formula:

curl magnitude = dvy/dx - dvx/dy

To determine the velocity variations per unit length along flow directions, we modify the equation for a DEM directional slope (Neteler & Mitasova, 2008, eq. A.27):

velocity change per unit length = (dz/dx) * sin(alpha) + (dz/dy) * cos(alpha)

where alpha is the local orientation of the flow line.
All results will consist of scalar fields, because only curl magnitude is of interest.

Input and output

Input and output file names are in a text parameter file:
input vx grid: ASCII grid with x components of the vector field
input vy grid: ASCII grid with y components of the vector field
output divergence grid: ASCII grid with divergence values
output curl grid: ASCII grid with curl magnitudes
output speed variation grid: ASCII grid with speed variation along flow lines
output VTK file: VTK file storing input and output data (null value: -99.0)
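The grid operations described above can be sketched with numpy. This mirrors the formulas, not necessarily the script's exact finite-difference stencil; note that numpy.gradient returns the derivative along rows first, and that ASCII grids whose rows run top-down would need the sign of the y-derivatives flipped:

```python
import numpy as np

def flow_fields(vx, vy, cellsize):
    """Divergence, curl magnitude (k component) and speed change along flow.

    vx, vy: 2D arrays of vector components on a square grid; rows are
    assumed to run along +y here.
    """
    dvx_dy, dvx_dx = np.gradient(vx, cellsize)   # axis 0 = y, axis 1 = x
    dvy_dy, dvy_dx = np.gradient(vy, cellsize)
    divergence = dvx_dx + dvy_dy
    curl_k = dvy_dx - dvx_dy
    speed = np.hypot(vx, vy)
    ds_dy, ds_dx = np.gradient(speed, cellsize)
    # directional derivative of speed along the unit flow vector
    # (equivalent to the sin(alpha)/cos(alpha) form in the text)
    with np.errstate(invalid="ignore", divide="ignore"):
        along_flow = (ds_dx * vx + ds_dy * vy) / speed
    return divergence, curl_k, along_flow
```

A quick sanity check: for the purely radial field v = (x, y), the divergence is 2 and the curl magnitude 0 everywhere.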
{"url":"http://www.malg.eu/vector_field_params.php","timestamp":"2014-04-20T01:07:00Z","content_type":null,"content_length":"9940","record_id":"<urn:uuid:e3848570-755a-4ce4-a340-2bba4291fdf2>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00561-ip-10-147-4-33.ec2.internal.warc.gz"}
Why do the centripetal and gravitational force equal each other in orbit?? Also...

1. The problem statement, all variables and given/known data
Say for example, a problem wants us to find the mass of a planet. It gives us a satellite that orbits that planet with a radius of R and a period T. Now, I know how to solve this problem. You must set Fc = Fg. But what I do not know is why the centripetal and gravitational force of these two objects must equal each other.
Also, a similar problem to that is one like this: When you take your 1200 kg car out for a spin, you go around a corner of radius 57.6 m with a speed of 15.2 m/s. The coefficient of static friction between the car and the road is 0.84. Assuming your car doesn't skid, what is the force exerted on it by static friction?
Again, I already know how to solve this. You must set Fc = Ff ... mv^2 / r = Ff, and then you just plug in the given values into the mv^2 / r and that is your answer. I do not know why in this case the centripetal force and the static friction must equal each other. If someone could please explain this to me, I would feel so much better while taking the test tomorrow... My teacher goes through this stuff extremely fast.

2. Relevant equations
V = 2piR / T
Fc = mv^2 / r
Fg = Gm1m2 / r^2

The simplest answer is this: the gravitational force is the ONLY force acting on the satellite. The centripetal force is the force we would need to have if the satellite is to travel in a circular path. The satellite DOES travel in a circular path, so the gravitational force that exists happily equals the centripetal force we need. Gravitational force is a real force. Centripetal force is a desired/necessary force.
Another example: Why CAN'T you ride a motorbike in a circle of radius 10m at 150 km/h on flat ground? Simple, you could calculate the centripetal force NEEDED for that to happen, but when you add up [as vectors] all the forces acting - gravity, reaction force, friction,...
They just don't add up to the necessary force, so the situation just can't happen. In the case of a satellite, the available force [gravity] happens to provide the required force, so its circular motion is maintained. EDIT: Oh, and in the case of the car - it must have been on flat ground also. Most roads have a small degree of banking so that would have contributed, and if the banking was steep enough - like at a velodrome - you mightn't need friction at all.
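Both homework problems above collapse to one line once Fc is set equal to the real force acting; the Moon/Earth figures in the check below are standard textbook values, not from the thread:

```python
import math

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def planet_mass(R, T):
    # Fg = Fc:  G*M*m/R**2 = m*v**2/R  with  v = 2*pi*R/T
    # the satellite mass m cancels, leaving M = 4*pi**2*R**3 / (G*T**2)
    return 4 * math.pi ** 2 * R ** 3 / (G * T ** 2)

def friction_force(m, v, r):
    # the road's static friction IS the centripetal force here: Ff = m*v**2/r
    return m * v ** 2 / r
```

Plugging in the Moon's orbit (R ≈ 3.844e8 m, T ≈ 27.32 days) gives roughly 6e24 kg for the Earth, and the 1200 kg car in the thread needs about 4.8 kN of friction, safely below the available static-friction limit μ·m·g ≈ 9.9 kN, which is why it doesn't skid.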
{"url":"http://www.physicsforums.com/showthread.php?t=542162","timestamp":"2014-04-17T07:35:06Z","content_type":null,"content_length":"29132","record_id":"<urn:uuid:43ff19a3-6331-49cb-8782-2fc6143f5235>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00542-ip-10-147-4-33.ec2.internal.warc.gz"}
Luitzen Egbertus Jan Brouwer

Born: 27 February 1881 in Overschie (now a suburb of Rotterdam), Netherlands
Died: 2 December 1966 in Blaricum, Netherlands

L E J Brouwer is usually known by this form of his name with full initials, but he was known to his friends as Bertus, an abbreviation of the second of his three forenames. He attended high school in Hoorn, a town on the Zuiderzee north of Amsterdam. His performance there was outstanding and he completed his studies by the age of fourteen. He had not studied Greek or Latin at high school but both were required for entry into university, so Brouwer spent the next two years studying these topics. During this time his family moved to Haarlem, just west of Amsterdam, and it was in the Gymnasium there in 1897 that he sat the entrance examinations for the University of Amsterdam.

Korteweg was the professor of mathematics at the University of Amsterdam when Brouwer began his studies, and he quickly realised that in Brouwer he had an outstanding student. While still an undergraduate Brouwer proved original results on continuous motions in four dimensional space and Korteweg encouraged him to present them for publication. This he did, and it became his first paper published by the Royal Academy of Science in Amsterdam in 1904. Other topics which interested Brouwer were topology and the foundations of mathematics. He learnt something of these topics from lectures at the university but he also read many works on the topics on his own. He obtained his master's degree in 1904 and in the same year married Lize de Holl who was eleven years older than Brouwer and had a daughter from a previous marriage. After the marriage, which would produce no children, the couple moved to Blaricum, near Amsterdam.
Three years later Lize qualified as a pharmacist and Brouwer helped her in many ways, from doing bookkeeping to serving in the chemist's shop. However, Brouwer did not gain the affection of his step-daughter and relations between them were strained. From an early stage Brouwer was interested in the philosophy of mathematics, but he was also fascinated by mysticism and other philosophical questions relating to human society. He published his own ideas on this topic in 1905 in his treatise Leven, Kunst, en Mystiek (Life, art, and mysticism). In this work he [1]:-

... considers as one of the important moving principles in human activity the transition from goal to means, which after some repetitions may result in activities opposed to the original goal.

Brouwer's doctoral dissertation, published in 1907, made a major contribution to the ongoing debate between Russell and Poincaré on the logical foundations of mathematics. His doctoral thesis [13]:-

... revealed the twin interests in mathematics that dominated his entire career; his fundamental concern with critically assessing the foundations of mathematics, which led to his creation of intuitionism, and his deep interest in geometry, which led to his seminal work in topology ...

He quickly discovered that his ideas on the foundations of mathematics would not be readily accepted [13]:-
Brouwer continued to develop the ideas of his thesis in The Unreliability of the Logical Principles, published in 1908. The research which Brouwer now undertook was in two areas. He continued his study of the logical foundations of mathematics, and he also put a very large effort into studying various problems which he attacked because they appeared on Hilbert's list of problems proposed at the Paris International Congress of Mathematicians in 1900. In particular Brouwer attacked Hilbert's fifth problem concerning the theory of continuous groups. He addressed the International Congress of Mathematicians in Rome in 1908 on the topological foundations of Lie groups. However, after studying Schönflies' report on set theory, he wrote to Hilbert:-

I discovered all of a sudden that the Schoenfliesian investigations concerning topology of the plane, on which I had relied in the fullest way, could not be taken as correct in all parts, so that my group-theoretic results also became doubtful.

In 1909 he was appointed as a privatdocent at the University of Amsterdam. He gave his inaugural lecture on 12 October 1909 on 'The nature of geometry', in which he outlined his research programme. A couple of months later, around Christmas 1909, he made an important visit to Paris, where he met Poincaré, Hadamard and Borel. Prompted by discussions in Paris, he began working on the problem of the invariance of dimension.

Brouwer was elected to the Royal Academy of Sciences in 1912 and, in the same year, was appointed extraordinary professor of set theory, function theory and axiomatics at the University of Amsterdam; he would hold the post until he retired in 1951. Hilbert wrote a warm letter of recommendation which helped Brouwer to gain his chair in 1912. Despite the substantial contributions he had made to topology by this time, Brouwer chose to give his inaugural professorial lecture on intuitionism and formalism.
In the following year Korteweg resigned his chair so that Brouwer could be appointed as ordinary professor. Although Hilbert had helped Brouwer to obtain his chair in Amsterdam, in 1919 he tried to tempt him away with an offer of a chair in Göttingen. He was also offered the chair at Berlin in the same year. These must have been tempting offers, but despite their attractions Brouwer turned them down. Perhaps the exceptional way he was treated by Amsterdam, mentioned in the following quote by Van der Waerden, helped him make these decisions.

Van der Waerden, who studied at Amsterdam from 1919 to 1923, wrote about Brouwer as a lecturer (see for example [14]):-

Brouwer came [to the university] to give his courses but lived in Laren. He came only once a week. In general that would have not been permitted - he should have lived in Amsterdam - but for him an exception was made. ... I once interrupted him during a lecture to ask a question. Before the next week's lesson, his assistant came to me to say that Brouwer did not want questions put to him in class. He just did not want them, he was always looking at the blackboard, never towards the students. ... Even though his most important research contributions were in topology, Brouwer never gave courses on topology, but always on -- and only on -- the foundations of intuitionism. It seemed that he was no longer convinced of his results in topology because they were not correct from the point of view of intuitionism, and he judged everything he had done before, his greatest output, false according to his philosophy. He was a very strange person, crazy in love with his philosophy.

As is mentioned in this quotation, Brouwer was a major contributor to the theory of topology and he is considered by many to be its founder. The status of the subject when he began his research is well described in [13]:-

When Brouwer was beginning his career as a mathematician, set-theoretic topology was in a primitive state.
Controversy surrounded Cantor's general set theory because of the set-theoretic paradoxes or contradictions. Point set theory was widely applied in analysis and somewhat less widely applied in geometry, but it did not have the character of a unified theory. There were some perceived benchmarks, for example the generally held view that dimension was invariant under one-to-one continuous mappings ...

He did almost all his work in topology early in his career, between 1909 and 1913. He discovered characterisations of topological mappings of the Cartesian plane and a number of fixed point theorems. His first fixed point theorem, which showed that an orientation-preserving continuous one-one mapping of the sphere to itself always fixes at least one point, came out of his researches on Hilbert's fifth problem. Originally proved for a 2-dimensional sphere, Brouwer later generalised the result to spheres in n dimensions. Another result of exceptional importance was proving the invariance of dimension.

As well as proving theorems of major importance in topology, Brouwer also developed methods which have become standard tools in the subject. In particular he used simplicial approximation, which approximated continuous mappings by piecewise linear ones. He also introduced the idea of the degree of a mapping, generalised the Jordan curve theorem to n-dimensional space, and defined topological spaces in 1913.

Van der Waerden, in the above quote, said that Brouwer would not lecture on his own topological results since they did not fit with mathematical intuitionism. In fact Brouwer is best known to many mathematicians as the founder of the doctrine of mathematical intuitionism, which views mathematics as the formulation of mental constructions that are governed by self-evident laws. His doctrine differed substantially from the formalism of Hilbert and the logicism of Russell.
His doctoral thesis in 1907 attacked the logical foundations of mathematics and marks the beginning of the Intuitionist School. His views had more in common with those of Poincaré, and if one asks which side of the debate between Russell and Poincaré he came down on, then it would have been with the latter. In his 1908 paper The Unreliability of the Logical Principles, Brouwer rejected the use in mathematical proofs of the Principle of the Excluded Middle, which states that any mathematical statement is either true or false. In 1918 he published a set theory developed without using the Principle of the Excluded Middle: Founding Set Theory Independently of the Principle of the Excluded Middle. Part One, General Set Theory. His 1920 lecture Does Every Real Number Have a Decimal Expansion? was published in the following year. The answer to the question of the title which Brouwer gives is "no". Also in 1920 he published Intuitionistic Set Theory, then in 1927, in On the Domains of Definition of Functions, he developed a theory of functions without the use of the Principle of the Excluded Middle.

His constructive theories were not easy to set up, since the notion of a set could not be taken as a basic concept but had to be built up using more basic notions which, in Brouwer's case, were choice sequences. Loosely speaking, to say that the elements of a set had property p meant to Brouwer that he had a construction which allowed him to decide after a finite number of steps whether each element of the set had property p. Such ideas are fundamental to theoretical computer science today.

The later part of Brouwer's career contains some controversial episodes. He had been appointed to the editorial board of Mathematische Annalen in 1914, but in 1928 Hilbert decided that Brouwer was becoming too powerful, particularly since Hilbert felt that he himself did not have long to live (in fact he lived until 1943).
He tried to remove Brouwer from the board in a way which was not compatible with the way the board was set up. Brouwer vigorously opposed the move and he was strongly supported by other board members such as Einstein and Carathéodory. In the end Hilbert managed to get his own way, but it was a devastating episode for Brouwer, who was left mentally broken; see [26] for details.

In 1935 Brouwer entered local politics when he was elected as a Neutral Party candidate for the municipal council of Blaricum. He continued to serve on the council until 1941. He was also active in setting up a new journal, and he became a founding editor of Compositio Mathematica, which began publication in 1934.

Further controversy arose due to his actions in World War II. Brouwer was active in helping the Dutch resistance, and in particular he supported Jewish students during this difficult period. However, in 1943 the Germans insisted that the students sign a declaration of loyalty to Germany and Brouwer encouraged his students to do so. He afterwards said that he did so in order that his students might have a chance to complete their studies and to work for the Dutch resistance against the Germans. However, after Amsterdam was liberated, Brouwer was suspended from his post for a few months because of his actions. Again he was deeply hurt and considered emigration.

After retiring in 1951, Brouwer lectured in South Africa in 1952, and in the United States and Canada in 1953. His wife died in 1959 at the age of 89, and Brouwer, who was himself 78, was offered a one-year post at the University of British Columbia in Vancouver; he declined. In 1962, despite being well into his 80s, he was offered a post in Montana. He died in 1966 in Blaricum as the result of a traffic accident.

Kneebone writes in [3] about Brouwer's contributions to the philosophy of mathematics:-

Brouwer is most famous ...
for his contribution to the philosophy of mathematics and his attempt to build up mathematics anew on an Intuitionist foundation, in order to meet his own searching criticism of hitherto unquestioned assumptions. Brouwer was somewhat like Nietzsche in his ability to step outside the established cultural tradition in order to subject its most hallowed presuppositions to cool and objective scrutiny; and his questioning of principles of thought led him to a Nietzschean revolution in the domain of logic. He in fact rejected the universally accepted logic of deductive reasoning which had been codified initially by Aristotle, handed down with very little change into modern times, and very recently extended and generalised out of all recognition with the aid of mathematical symbolism.

Kneebone also writes in [3] about the influence that Brouwer's views on the foundations of mathematics had on his fellow mathematicians:-

Brouwer's projected reconstruction of the whole edifice of mathematics remained a dream, but his ideal of constructivism is now woven into our whole fabric of mathematical thought, and it has inspired, as it still continues to inspire, a wide variety of inquiries in the constructivist spirit which have led to major advances in mathematical knowledge.

Despite failing to convert mathematicians to his way of thinking, Brouwer received many honours for his outstanding contributions. We mentioned his election to the Royal Dutch Academy of Sciences above. Other honours included election to the Royal Society of London, the Berlin Academy of Sciences, and the Göttingen Academy of Sciences. He was awarded honorary doctorates by the University of Oslo in 1929 and the University of Cambridge in 1954. He was made Knight in the Order of the Dutch Lion in 1932.
Article by: J J O'Connor and E F Robertson

Honours awarded to L E J Brouwer:
- Fellow of the Royal Society, 1948
- Honorary Fellow of the Edinburgh Maths Society, 1954
- Fellow of the Royal Society of Edinburgh, 1955
- Lunar features: Crater Brouwer

JOC/EFR © October 2003, School of Mathematics and Statistics, University of St Andrews, Scotland
PDF Ebooks for Search word 'engineering mathematics by vp mishra'

- MATHEMATICS III & IV - Tndte. DIPLOMA COURSE IN ENGINEERING ... Associate Professor, Dept of Mathematics ... We take great pleasure in presenting this book of mathematics to. File type: PDF. http://www.tndte.com/TEXT%2520BOOKS/Complete%2520Books/Engineering%2520Mathematics-I,%2520II,%2520III%2520%26%2520IV/Sem%2520-2%2520-%2520Engineering%2520Maths%2520-III%2520%26%2520IV.pdf
- ENGINEERING MATHEMATICS-I L T P Credit. I. B. V. Ramana, Higher Engineering Mathematics, Tata Mc Graw-Hill Publishing Company Ltd., 2008. 2. B.S. Grewal, Higher Engineering Mathematics, Khanna ... File type: PDF. http://www.hcst.edu.in/uploads/file/Mathematics%2520(ASM-101,%2520ASM-201).pdf
- Essentials of Mathematical Methods in Science and Engineering. A complete introduction to the multidisciplinary applications of mathematical methods ... text for courses in physics, science, mathematics, and engineering at the ... File type: PDF. http://www.researchandmarkets.com/reports/2173830/essentials_of_mathematical_methods_in_science_and.pdf
- MA 101 Mathematics-I. Reference Books: 1. ... 3. Elementary Engineering Mathematics, B.S. Grewal, Khanna Publisher. 4. Engineering Mathematics-II, Santi Narayan, S. Chand & Co. File type: DOC. http://www.nits.ac.in/academics/info/Syllabii/B.%2520Tech/9.%2520Mathematics/Maths%2520all%2520combined.doc
- Handbook of Mathematics for Engineers and Scientists. HANDBOOK OF ... Chapman & Hall/CRC is an imprint of Taylor & Francis Group, an Informa business ..... Table of Derivatives and Differentiation Rules. File type: PDF. http://tailieuhoctap.files.wordpress.com/2009/09/handbook-of-mathematics-for-engineers-and-scientists.pdf
- Presentation - Mathematical Engineering. ... new domain in electrical engineering - a mathematics-based systems side, ... 1942 monograph by Norbert Wiener of MIT (released as a book in 1949) and a ... File type: PPT. http://www.stanford.edu/~tkailath/docs/IEEEFinalPresentation.ppt
- Mathematics is the language of science and engineering. The ... introduction of such areas of current research in Engineering and science, so that the Mathematics and Physics faculties of the Engineering colleges ... File type: PDF. http://www.nitrkl.ac.in/shorttermcourse/ma/AICTE-MHRD_SDP_RAMMES_brochure.pdf
- Department of Curriculum and Instruction. A perception exists that physics and engineering students experience difficulty in applying mathematics to physics and ... File type: PDF. http://www.physics.umd.edu/perg/dissertations/SJones/SJonesDissertation.pdf
- Mathematical Methods of Engineering Analysis. Feb 2, 2000 ... Book by Erhan Çinlar and Robert J. Vanderbei. Topics covered: functions on metric spaces, differential and integral equations, convex analysis, ... File type: PDF. http://www.princeton.edu/~rvdb/506book/book.pdf
Limitations Of Linear Programming Help for Learning - Transtutors

Linear programming suffers from certain limitations, which are given below:

1. In reality, objective functions and constraints cannot always be expressed in linear form.
2. In a linear programming problem, fractional values are permitted for the decision variables. However, many decision problems require that the decision variables be obtained in non-fractional values.
3. The coefficients of the basic variables cannot be determined with certainty; they can be stated only with probability.
4. Where a problem involves conflicting multiple objectives, this technique cannot provide a solution.
5. Linear programming does not take into consideration the effect of time and uncertainty.
6. Parameters appearing in the LP model are assumed to be constant, but in real-life situations they are frequently neither known nor constant.
7. In the case of large, complex and constrained problems, computational demands are enormous.

Live Online 24*7 Homework Assignment Help in Limitations Of Linear Programming

Transtutors provides online homework help or assignment help in the limitations of linear programming. Transtutors has a vast panel of highly experienced financial management tutors, who can explain different concepts of linear programming to you effectively. Our financial management tutors are available round the clock to help you out in any way with the limitations of linear programming.
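Limitation 2 above (fractional values) is easy to demonstrate concretely. The sketch below uses a hypothetical two-variable problem (the data are invented for illustration, not taken from any text): since an LP optimum always occurs at a vertex of the feasible region, in two variables we can simply intersect the constraint boundary lines pairwise, keep the feasible intersection points, and pick the best one. The optimum here comes out fractional, even though a real decision problem might demand whole units.

```python
from itertools import combinations

# Hypothetical LP: maximize 5x + 4y subject to
#   6x + 4y <= 24,  x + 2y <= 6,  x >= 0,  y >= 0.
# Each constraint is stored as (a, b, c), meaning a*x + b*y <= c.
constraints = [
    (6, 4, 24),
    (1, 2, 6),
    (-1, 0, 0),   # x >= 0
    (0, -1, 0),   # y >= 0
]

def intersect(r1, r2):
    """Intersection of the two boundary lines, or None if parallel."""
    a1, b1, c1 = r1
    a2, b2, c2 = r2
    det = a1 * b2 - a2 * b1
    if det == 0:
        return None
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

def feasible(p):
    return all(a * p[0] + b * p[1] <= c + 1e-9 for a, b, c in constraints)

# An LP optimum always sits at a vertex of the feasible region,
# so enumerating pairwise intersections suffices in two variables.
vertices = [p for r1, r2 in combinations(constraints, 2)
            if (p := intersect(r1, r2)) is not None and feasible(p)]
best = max(vertices, key=lambda p: 5 * p[0] + 4 * p[1])
print(best)  # (3.0, 1.5): the optimum is fractional
```

If the variables had to be integers (say, whole machines to buy), the LP answer (3, 1.5) could not be used directly, which is exactly the limitation the list describes.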
Hyperbolic Identities

Post #1 (September 15th 2009, 07:03 PM):

I cannot for the life of me work this one out. I am getting very close, but I can't take it further, so I must have made a mistake somewhere down the line. Anyone mind checking my work? Couldn't get a hold of my prof today.

Question: Show that $\frac{\sinh 3t}{\sinh t} = 1 + 2\cosh 2t$.

Here's what I did:

$\frac{\sinh(2t)\cosh(t) + \cosh(2t)\sinh(t)}{\sinh(t)} = 1 + 2\cosh(2t)$

$\frac{2\sinh(t)\cosh^2(t) + \cosh(2t)\sinh(t)}{\sinh(t)} = 1 + 2\cosh(2t)$

I've been trying all sorts of ways to simplify it from here, but none have worked. Thanks for your time!

Post #2 (September 15th 2009, 09:47 PM), quoting the above:

Identity: $\cosh(2t) = \cosh^2 t + \sinh^2 t = 2\cosh^2 t - 1 \Rightarrow \cosh^2 t = \frac{\cosh(2t) + 1}{2}$.

Post #3 (September 16th 2009, 06:25 AM):

Awesome, got it. Thanks!
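For completeness, substituting the identity from the reply, $2\cosh^2 t = \cosh 2t + 1$, into the original poster's last line finishes the proof in one chain:

```latex
\frac{\sinh 3t}{\sinh t}
  = \frac{2\sinh t\,\cosh^2 t + \cosh 2t\,\sinh t}{\sinh t}
  = 2\cosh^2 t + \cosh 2t
  = \bigl(\cosh 2t + 1\bigr) + \cosh 2t
  = 1 + 2\cosh 2t .
```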
int atomic_cmpxchg (volatile __global int *p, int cmp, int val)
unsigned int atomic_cmpxchg (volatile __global unsigned int *p, unsigned int cmp, unsigned int val)
int atomic_cmpxchg (volatile __local int *p, int cmp, int val)
unsigned int atomic_cmpxchg (volatile __local unsigned int *p, unsigned int cmp, unsigned int val)

Read the 32-bit value (referred to as old) stored at the location pointed to by p. Compute (old == cmp) ? val : old and store the result at the location pointed to by p. The function returns old.

A 64-bit version of this function, atom_cmpxchg, is enabled by cl_khr_int64_base_atomics.

Copyright © 2007-2013 The Khronos Group Inc. Permission is hereby granted, free of charge, to any person obtaining a copy of this software and/or associated documentation files (the "Materials"), to deal in the Materials without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Materials, and to permit persons to whom the Materials are furnished to do so, subject to the condition that this copyright notice and permission notice shall be included in all copies or substantial portions of the Materials.
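A typical use of atomic_cmpxchg is building read-modify-write operations that the core set of atomic builtins does not provide, via a compare-and-swap retry loop. The helper below is an illustrative sketch, not part of the specification: it adds to a float atomically by reinterpreting its 32-bit pattern with as_int / as_float and retrying until the swap succeeds.

```c
// Sketch only: atomic floating-point add built on atomic_cmpxchg.
// Assumes 32-bit float and int; as_int / as_float reinterpret bits.
void atomic_add_float(volatile __global float *p, float val)
{
    int expected, old = as_int(*p);
    do {
        expected = old;
        float sum = as_float(expected) + val;
        old = atomic_cmpxchg((volatile __global int *)p,
                             expected, as_int(sum));
    } while (old != expected);  // another work-item wrote first; retry
}
```

If no other work-item modified *p between the read and the swap, atomic_cmpxchg returns expected and the loop exits; otherwise the returned value seeds the next attempt.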
MathFiction: The Object (Alex Kasman)

This is a mathematical horror story, written by someone who doesn't like horror stories. Since I'm the author, I can honestly (and humbly) admit that the result is kind of weird. The plot concerns Alice, a young woman who drops out of college and starts a company based on the clever molecular modeling techniques that she's invented. Since she's really good with math, the algorithms she uses are really cutting edge and quite clever (incorporating everything, including relativistic and quantum effects). Unfortunately for her, the ideas turn out to have been a bit too clever. They produce some sort of bridge between our universe and another, and some nasty little biting creatures get into her office.

There are various aspects of geometry in the story. I quite explicitly discuss the Platonic solids and the role of the Euler characteristic in categorizing them, non-Euclidean geometry (which is at least implied by the existence of "the object" itself and also the unusual metric properties that allow the "things" to grow in size as they move further from it), and also instantons, which are special shapes that 4-dimensional space can take (which interestingly are impossible in any other dimension).

The story also briefly touches on the notion of a "soliton". From one point of view, an instanton is a special kind of soliton, and so this ties in with the previously mentioned geometry. But, there is more to it than just that. Although most people think of a soliton in terms of waves (like a tsunami), the so-called "topological soliton" is sometimes described as interpolating between two "vacua" ... a sudden connection between two different sorts of "emptiness". (See, for example, the article at this location.) It was this notion that brought me to the idea that a soliton in reality could be a physical bridge to another universe.
Also important to the story is the fact that such topological solitons always have an "anti-particle", which takes the form of a bridge going in the other direction.

This story is one of those that appear in my collection Reality Conditions, published by the MAA.
Model category structures on categories of complexes in abelian categories

Section 2.3 of Hovey's Model Categories book defines a model category structure on Ch(R-Mod), the category of chain complexes of R-modules, where R is a ring. Lemma 2.3.6 then essentially states (I think) that taking projective resolutions of a module corresponds to taking cofibrant replacements of the module, at least in nice cases (e.g. when the projective resolution is bounded below). There is of course also a "dual" model category structure which gives the "dual" result for injective resolutions and fibrant replacements (Theorem 2.3.13).

1. I think the results in Hovey are proven for not-necessarily-commutative rings. Do things become nicer if we restrict our attention to commutative rings only?

2. Do these results generalize? For example, is there an analogous model category structure and an analogous result for Ch(O[X]-Mod), the category of chain complexes of O[X]-modules, where X is a scheme? More generally, how about for Ch(A), where A is an abelian category?

If the answers to these questions are known, then I assume they would be "standard", but I don't know a reference. I've re-asked my question in a different form here.

Tags: ct.category-theory, model-categories, homological-algebra

Comments:
- I assume so as well, though I likewise can't provide a reference. – Ben Webster♦ Oct 6 '09 at 18:25
- The link to the reask of the question is broken (... well, it redirects to a question about numerical methods for calculating digits of pi ...). – cdouglas Oct 27 '11 at 11:53

2 Answers

First answer (accepted):

I don't think the existence of the dual "injective" model structure merits an "of course," since its generators are much less obvious to construct. However, it turns out that injective model structures actually exist in more generality than projective ones; for instance they exist for most categories of sheaves.
I believe this was originally proven by Joyal, but it was put in an abstract context by Hovey and Gillespie.

The basic idea is that model structures on Ch(A) correspond to well-behaved "cotorsion pairs" on A itself. The projective model structure comes from the (projective objects, all objects) cotorsion pair (which is well-behaved for R-modules, but not for sheaves), and the injective one comes from (all objects, injective objects). There is also e.g. a flat model structure coming from (flat objects, cotorsion objects) which is monoidal and thus useful for deriving tensor products.

A good introduction, which I believe has references to most of the literature, is Hovey's paper Cotorsion pairs and model categories.

Comments:
- Awesome, thanks for the reference! – Kevin H. Lin Oct 13 '09 at 21:30
- I would also add the following references: J.D. Christensen and M. Hovey, Quillen model structures for relative homological algebra, Math. Proc. Camb. Phil. Soc. 133 (2002), no. 2, 231-293. The following paper should answer your questions 1) and 2) quite precisely. intlpress.com/HHA/v11/n1/a11 – Denis-Charles Cisinski Oct 25 '09 at 21:36
- The paper of Hovey that I linked to includes a reference to the Christensen-Hovey paper, and to a number of others; I didn't want to give a huge list of references here. Thanks for the other link. – Mike Shulman Oct 27 '09 at 0:32

Second answer:

I don't know the "standard" answer, but the exact same construction should work for any abelian category with a small projective generator, where "small" means that any map into a sufficiently large well-ordered colimit factors through some stage. This is exactly what is needed to make the small object argument work. Just replace the ring with your small projective generator in the "sphere" and "disk" objects.
For the dual (injective) model structure, cosmall injectives don't tend to exist in practice (for example, they don't exist in abelian groups), so you have to use a more complicated set of maps that I don't understand well and don't know how to generalize. In particular, in the case of quasicoherent sheaves one would need to generalize the injective model structure, and I don't know anything about that.

I don't know of any reason to expect commutative rings to give nicer results.

Comment:
- This answer also fits into the framework of Gillespie's work. Asking for a small projective generator is related to asking for the cotorsion pair $(A,B)$ of Mike's answer to be cogenerated by a one-element set (cogenerated by a set means there is a set $S\subset A$ such that $b\in B$ iff $Ext^1(S,b)=0$). In a Grothendieck category with enough projectives, being cogenerated by a set forces $(A,B)$ to be a complete cotorsion pair. You'll also get the dual complete cotorsion pair, and thence the Hovey model structure described by Mike. – David White Aug 8 '12 at 21:41
Teaching Artifacts

The purpose of this page is to illustrate some of the various teaching artifacts I have acquired throughout the PTP experience. In the fall semester, I observed Dr. Chertock's section of Applied Differential Equations II (MA 401) and developed a greater mastery of the material through grading homework, preparing online lecture notes, creating tests, and teaching a few classes and review sessions. Reflecting upon how the fall semester went, I realized that the students would benefit greatly from more in-depth visualization of the majority of topics covered. Moreover, since the graphs in this course are very difficult to plot by hand, I decided to use my spring teaching semester as an opportunity to experiment with incorporating technology into the classroom in various ways. As such, several of my artifacts will be related to how this implementation of technology was realized. Below is a syllabus from my course, and the remaining artifacts follow:

PTP Seminar Implementation and Lesson Plan

One example of how I used technology in the classroom was in a lesson on the Maximum Principle. My goals for the lesson were for the students to understand the meaning of the Maximum Principle and how it specifically applied to some of the famous equations we had previously been learning how to solve. Taking a cue from Dr. Bryce Lane in his PTP seminar on Motivating Students, I decided to first "romance" the students and create a positive learning environment by providing some fun historical background on Laplace and Poisson, two mathematicians whose equations we were currently studying. I then proceeded with the lesson, providing the statements of theorems, proofs, and examples (or as Dr. Lane would refer to it, the "drudgery"). Then, as an "application" of that new-found knowledge, I asked my students to use the Maximum Principle to tell me some information about the solution to a specific case of Laplace's equation.
Once they were convinced, I loaded up the mathematical software package Maple on the classroom computer, turned on the overhead display, and created a 3-dimensional plot of the solution. We were then able to visually confirm what the students had already conjectured on their own. Of course, I then proceeded to romance the students all over again by ooh-ing and aah-ing over how neat of a program Maple was, leading to many groans and head shakes. Below are some notes from the lesson plan as well as the plot we studied.

[Figure: Laplace's Equation example plot]

Overall, I was quite satisfied with the level of student engagement during this lesson and felt that the visualization from the plot added another layer of understanding that otherwise would have been missed. I also feel that using Dr. Lane's "tripod" approach to teaching is an effective tool that creates a lot of balance in my lesson plans and improves the motivation of my students. In fact, whenever I now create lesson plans, I ask myself each time if I'm doing enough romancing and providing enough applications to balance and split up the drier lecture portions of the class.

Writing Assignment/Course Project

Based on the success I found with using technology in the classroom, I decided to push the idea further and required the students to create some of their own plots and animations using Maple as a project to supplement their homework assignments. Many of my students had a predisposition against Maple coming into the course based on experience with the program in previous calculus classes, but after creating some very nice 3-dimensional pictures on their own with minimal syntax problems, they warmed up to the idea. Below is a copy of one of these Maple assignments along with a sample of one student's plots:
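The linked assignment and student plots are not reproduced in this page's text. Purely as a hypothetical illustration of the kind of Maple session such a lesson involves (not the actual assignment, and assuming a simple harmonic function rather than the lesson's specific boundary-value problem), a few commands suffice:

```maple
# Hypothetical sketch, not the actual assignment:
u := (x, y) -> x^2 - y^2:   # harmonic: u_xx + u_yy = 0
plot3d(u(x, y), x = 0 .. 1, y = 0 .. 1, axes = boxed);
```

On the resulting surface, the largest and smallest values occur along the boundary of the square, which is exactly what the Maximum Principle predicts and what a 3-d plot lets students confirm visually.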
Once the students bought in to how “cool” the plots looked, they were highly motivated to get their syntax right and obtain the correct plots to watch their solutions vibrate and come to life. These assignments were one of the highlights of the course as far as teaching tools went, and for me they are a must-have the next time I teach this class. If anything, I would improve these assignments by using them more frequently, as my class got to the point where they were disappointed if there weren’t 3-d plots to create for a given homework. Moreover, based on the positive student response to these assignments, I have begun looking for similar ways to get my other math classes involved in using Maple to facilitate their learning.

Lecture Notes

Upon planning out the course in the fall, Dr. Chertock and I thought it would be nice to provide lecture notes to the students on a website, particularly for material that was not adequately covered in the textbook. I have included a sample of these notes below.

While my students seemed appreciative of me providing lecture notes on material not covered in the textbook, they found it somewhat awkward that lecture notes were not provided for the entire semester. One student mentioned that having lecture notes posted online for every class would make it a lot easier to get caught up on material missed due to an absence. While creating the lecture notes outside of class is quite time-consuming, I completely agree that it would be a useful tool to provide, and I hope to expand the amount of notes I am able to post in future semesters.

Assessment Tool

I recently wrote this test key to post on my class website. All of the steps have been written in full detail so that the students may use it as a review tool for future studying. Also, near the top of each question I have indicated the point distribution for achieving various steps in the solution.
Note that the majority of the credit is awarded for appropriately setting up each problem, with minimal points allocated for the calculation of the correct final answer. Throughout my teaching career, I have found that students have been very receptive to my grading policies. One of the earliest observations I made as a teacher was that students were very turned off by professors who graded their tests and quizzes with an all-or-nothing mindset. I always tell my students that while getting the correct final answer is nice, I am more concerned with their overall thought process. A minor algebraic mistake should not imply that an entire problem is counted incorrect. This is a fundamental part of my approach to teaching, and I feel that student motivation improves when they are not overly anxious about making a small mistake in their work.

Faculty Mentor Feedback

Throughout both the observation and teaching semesters of the PTP program, Dr. Chertock observed my teaching and provided her comments:

PTP Observation Form for Zach Abernathy

Dr. Chertock’s comments and evaluations over the past two semesters have for the most part been very positive, and they have reaffirmed my confidence in my ability to teach an upper-level math major course. While she did not provide any areas to work on in her above observation form, through many conversations with her over the past year she has given me several tips and pointers on small corrections I could make during my teaching. For example, during one observation when I was covering the heat equation, she caught me describing the movement of heat along an insulated rod as “dispersing” rather than the correct term, “diffusing.” Up to that point, I had been using the words interchangeably. Dr. Chertock explained the context in which each word should be used, and I now know the appropriate setting for each.
Relying on her expertise to provide these types of suggestions has been a welcome experience and has improved my attention to detail when lesson planning.

Peer Mentor Feedback

Another PTP fellow, Helen Melito, observed my class on a day when I was incorporating technology into the lesson. I specifically wanted her to gauge students’ reactions to the use of technology and provide feedback on how effective a learning tool it was. Her observation report is included below:

PTP Peer Observation Report for Zach Abernathy

Helen’s comments provided some great insight into my level of teaching effectiveness for the class. I learned that my use of Maple in supplementing the lecture was well-received by my students, as was my ability to ask a lot of questions to help keep them engaged. However, there were also some surprising observations made by Helen that I would not have noticed without this experience. First, she noted that there were a few periods throughout the class where I fell into a habit of talking to the board rather than facing the class. While some of my students requested that I say what I write (see my student comments below), this can lead to a bad habit of always talking to the board if you’re not self-aware of it. I appreciated Helen pointing this out and immediately corrected it.

Helen also noted that a couple of my students were off-task throughout portions of the lecture, and with the classroom map she had created, she was able to tell me exactly which students they were. I made a note of these students and was able to keep a closer eye on them for the remainder of the semester. Overall, I very much enjoyed this peer mentoring experience as a way to provide a different window into my classroom and improve my teaching.

Student Feedback

In addition to feedback from my faculty mentor and other PTP peers, I also conducted a mid-semester evaluation to gather student comments on how they felt the class was going so far.
I asked them both to point out areas they enjoyed and areas that could be improved. Below are some of their comments:

“I like that you post test solutions and homework solutions on the course website. I think that it would benefit most students if you gave a list of additional problems for us to practice before each test.”

“This is probably the most challenging math course I’ve taken, but I enjoy it.”

“Use of Maple in-class and in homeworks is awesome and very helpful for making the classical equations feel real. I really like that you spend so much time deriving the formulas we use; it really enhances our understanding of the methods we employ.”

“It’s helpful to me when you say what you are writing on the board as you are writing it so I don’t have to keep looking up and down to copy it.”

“Shorter and more frequent homeworks would be more enjoyable.”

“I appreciate your teaching style. I don’t always remember things from past classes and you thoroughly explain problems that clarify the material for me.”

I was happy to get an overall positive response from my students in the mid-semester evaluation (especially on the use of technology in the course), but they also provided some great feedback on easy ways to improve the class. One common request was to make the homework assignments shorter and more frequent. In the early parts of the semester, I spaced out the homework assignments a bit too much, and the amount of content in each one overwhelmed some of the students. This was a quick fix and one that I was happy to make to adjust to the specific needs of the class. Another interesting comment above was the one about saying what I’m writing to facilitate note-taking. While this was viewed as a positive among students, it sometimes leads to the habit of talking to the board for extended periods of time, as noted by my peer mentor above.
Therefore, while I continued to say what I wrote, I became much more conscious of breaking away from the board, facing the class, and providing an alternate explanation of the material. To contrast these comments with student responses from previous semesters, please refer to my feedback page.

Overall, I am very satisfied with my experience teaching this course. Upon teaching the class again, I would certainly continue to require computer plots and animations in the homework assignments and to use these tools throughout course lectures. I felt that the use of technology provided nice breaks in the lecture and led to a great deal of student motivation. I would also continue providing online lecture notes and a course website with homework/test solutions, as these features are universally viewed in a positive light by students. I would still like to find other hands-on ways to get the students involved during lectures, perhaps bringing in a vibrating drum head or some other physical application of the equations we study. I would also like to experiment with group work during class as another way to stimulate student activity and break out of the traditional lecture format. I look forward to the next opportunity I have to teach this course, but I am also excited to borrow many of the ideas that were developed over this past year and use them in all of the classes I teach!
Wolfram Demonstrations Project

Simulating the Simple Random Walk

This Demonstration shows simulated paths of the simple random walk, so you can see how a path evolves with time. The Demonstration also shows approximate confidence intervals (the green curves), which are based on the normal approximation.

Snapshot 1: some of the 10 paths go outside of the 95% confidence interval
Snapshot 2: all 10 paths stay within the 99.9% confidence interval
Snapshot 3: 10 paths, each of 10,000 steps, 99.9% confidence interval

The simple random walk starts at 0. At each time step, 1 is added to or subtracted from the current value; addition and subtraction are done with equal probabilities. In the plots, the values of the walk are plotted on the vertical axis and the time axis is horizontal.

The confidence intervals can be obtained from the following result. Let X_n be the position of the walk at step n. The probability that X_n / sqrt(n) is greater than x approaches, as n approaches infinity, the probability that a standard normal variable is greater than x; see [1], p. 76. For the simple random walk, see [1], pp. 67–97. For simulation of the simple random walk and other stochastic processes with Mathematica, see [2], pp. 987–1002.

[1] W. Feller, An Introduction to Probability Theory and Its Applications, Vol. 1, 3rd ed., revised printing, New York: Wiley, 1968.

[2] H. Ruskeepää, Mathematica Navigator: Mathematics, Statistics, and Graphics, 3rd ed., San Diego, CA: Elsevier Academic Press, 2009.
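The setup described above is easy to reproduce outside Mathematica. Below is a minimal Python sketch (using only the standard library; the seed, path count, and path length are arbitrary choices, not taken from the Demonstration) that simulates 10 paths and compares them to the pointwise normal-approximation band of half-width z * sqrt(t).

```python
import math
import random

random.seed(42)  # arbitrary seed so the sketch is reproducible

def simple_walk(n_steps):
    """One path of the simple random walk started at 0."""
    pos, path = 0, [0]
    for _ in range(n_steps):
        pos += random.choice((-1, 1))  # +1 or -1 with equal probability
        path.append(pos)
    return path

paths = [simple_walk(1000) for _ in range(10)]

# The band |X_t| <= z * sqrt(t) is a confidence interval at each fixed
# time t, so a whole path can still cross it at some step, as Snapshot 1
# of the Demonstration shows.
def inside_band(path, z=1.96):
    return all(abs(x) <= z * math.sqrt(t) for t, x in enumerate(path) if t > 0)

coverage = sum(inside_band(p) for p in paths) / len(paths)
print(coverage)
```

Raising z (e.g. to 3.29 for the 99.9% band) widens the green curves, which is why all 10 paths stay inside in Snapshot 2.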
Cubic Equation

October 22nd 2008, 09:23 PM, #1 (Kaynight, joined Jun 2008)

I know cubic equations must have the form $Ax^3 + Bx^2 + Cx + D = 0$, and I know how to solve that type of equation by doing long division etc. However I got this cubic question, and it doesn't have the same form as above, which put me right off knowing what to do. I tried to simplify it as $x(x^2+3x)+4$ but I got nowhere fast. So any starters on where I should begin to factorize would be very helpful!

Last edited by Kaynight; October 22nd 2008 at 09:34 PM.

October 22nd 2008, 10:30 PM, #2 (MHF Contributor, Paris, France, joined Aug 2008)

It has the same form as above, with $C=0$. You can notice that $-1$ is a solution, hence $x+1$ divides your polynomial. It remains to compute $Q(x)$ such that $x^3-3x^2+4=(x+1)Q(x)$.
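The reply's hint, that -1 is a root so (x + 1) divides the cubic, can be carried out mechanically by synthetic division. A short Python sketch (illustrative, not from the thread):

```python
def synthetic_division(coeffs, r):
    """Divide a polynomial (coefficients listed from highest degree down)
    by (x - r); returns (quotient coefficients, remainder)."""
    out = [coeffs[0]]
    for c in coeffs[1:]:
        out.append(c + r * out[-1])
    return out[:-1], out[-1]

# x^3 - 3x^2 + 0x + 4 divided by (x + 1), i.e. r = -1:
quotient, remainder = synthetic_division([1, -3, 0, 4], -1)
print(quotient, remainder)  # quotient is x^2 - 4x + 4, remainder is 0
```

The quotient x^2 - 4x + 4 factors as (x - 2)^2, so x^3 - 3x^2 + 4 = (x + 1)(x - 2)^2, giving roots -1 and 2 (a double root).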
Physics 201: General Physics

Physics 201 is the first semester of a two-semester introduction to physics for students preparing for careers in engineering, science, and medicine. The main topics are mechanics, including kinematics, dynamics, statics, fluids, and oscillations. There are two lectures, two discussions, and one three-hour lab per week.

Our goal is for you to develop an understanding of and intuition for physics so that you can solve practical problems. We feel that the only way to accomplish this goal is by thinking about and solving lots of problems and experimenting in the lab. We hope that Physics 201 will develop the critical thinking and collaborative skills you will need in your future career. Please see the course information page for details.

Prerequisite: Calculus (Math 221 or equivalent)
Borromean rings

In mathematics, the Borromean rings^[1] consist of three topological circles which are linked and form a Brunnian link; i.e., removing any one ring results in two unlinked rings. In other words, no two of the three rings are linked with each other, but nonetheless all three are linked.

Mathematical properties

Although the typical picture of the Borromean rings (above right picture) may lead one to think the link can be formed from geometrically round circles, they cannot be. (Freedman & Skora 1987) proves that a certain class of links, including the Borromean links, cannot be exactly circular. Alternatively, this can be seen from considering the link diagram: if one assumes that circles 1 and 2 touch at their two crossing points, then they lie either in a plane or a sphere. In either case, the third circle must pass through this plane or sphere four times, without lying in it, which is impossible; see (Lindström & Zetterström 1991).

It is, however, true that one can use ellipses (right picture). These may be taken to be of arbitrarily small eccentricity: no matter how close to being circular their shape may be, as long as they are not perfectly circular, they can form Borromean links if suitably positioned. For example, Borromean rings made from thin circles of elastic metal wire will bend.

In knot theory, the Borromean rings are a simple example of a Brunnian link: although each pair of rings is unlinked, the whole link cannot be unlinked. There are a number of ways of seeing this. The simplest is that the fundamental group of the complement of two unlinked circles is the free group on two generators, a and b, by the Seifert–van Kampen theorem, and then the third loop has the class of the commutator, [a, b] = aba^−1b^−1, as one can see from the link diagram: over one, over the next, back under the first, back under the second. This is non-trivial in the fundamental group, and thus the Borromean rings are linked.
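The commutator computation can be checked mechanically. In the small Python sketch below (an illustration, not from the article), a word in the free group on a and b is freely reduced, with an uppercase letter standing for the inverse of the corresponding generator. The commutator aba^-1b^-1 does not reduce to the identity, while killing either generator, which corresponds to removing one of the other two rings, makes it collapse to the empty word:

```python
def free_reduce(word):
    """Freely reduce a word in a free group; 'A' denotes the inverse
    of 'a', etc. A stack handles all cancellations in one pass."""
    out = []
    for ch in word:
        if out and out[-1] == ch.swapcase():
            out.pop()  # cancel an adjacent x x^-1 or x^-1 x pair
        else:
            out.append(ch)
    return "".join(out)

commutator = "abAB"  # a b a^-1 b^-1

print(free_reduce(commutator))  # prints "abAB": nontrivial, so linked
# Setting a = identity (removing a ring) leaves "bB", which cancels:
print(free_reduce(commutator.replace("a", "").replace("A", "")))  # prints ""
```

This is exactly the Brunnian property in algebraic form: the third loop is nontrivial in the free group, but becomes trivial as soon as either generator is deleted.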
Another way is that the cohomology of the complement supports a non-trivial Massey product, which is not the case for the unlink. This is a simple example of the Massey product, and further, the algebra corresponds to the geometry: a 3-fold Massey product is only defined if all the 2-fold products vanish, which corresponds to the Borromean rings being pairwise unlinked (the 2-fold products vanish) but linked overall (the 3-fold product does not vanish).

In arithmetic topology, there is an analogy between knots and prime numbers in which one considers links between primes. The triple of primes (13, 61, 937) is linked modulo 2 (the Rédei symbol is −1) but pairwise unlinked modulo 2 (the Legendre symbols are all 1). Therefore these primes have been called a "proper Borromean triple modulo 2"^[2] or "mod 2 Borromean primes".^[3]

Hyperbolic geometry

The Borromean rings are a hyperbolic link: the complement of the Borromean rings in the 3-sphere admits a complete hyperbolic metric of finite volume. The canonical (Epstein–Penner) polyhedral decomposition of the complement consists of two regular ideal octahedra. The volume is 16Л(π/4) = 7.32772…, where Л is the Lobachevsky function.^[4]

Connection with braids

If one cuts the Borromean rings, one obtains one iteration of the standard braid; conversely, if one ties together the ends of (one iteration of) a standard braid, one obtains the Borromean rings. Just as removing one Borromean ring unlinks the remaining two, removing one strand of the standard braid unbraids the other two: they are the basic Brunnian link and Brunnian braid, respectively.

In the standard link diagram, the Borromean rings are ordered non-transitively, in a cyclic order. Using the colors above, these are red over yellow, yellow over blue, blue over red – and thus after removing any one ring, for the remaining two, one is above the other and they can be unlinked.
Similarly, in the standard braid, each strand is above one of the others and below the other.

History

The name "Borromean rings" comes from their use in the coat of arms of the aristocratic Borromeo family in Italy. The link itself is much older and has appeared in Gandhara (Afghan) Buddhist art from around the 2nd century^[citation needed], and in the form of the valknut on Norse image stones dating back to the 7th century.

The Borromean rings have been used in different contexts to indicate strength in unity, e.g., in religion or art. In particular, some have used the design to symbolize the Trinity. The psychoanalyst Jacques Lacan famously found inspiration in the Borromean rings as a model for his topology of human subjectivity, with each ring representing a fundamental Lacanian component of reality (the "real", the "imaginary", and the "symbolic").

The Borromean rings were formerly used as the logo of the German Krupp industrial concern and are used as part of the logo for its successor ThyssenKrupp. The rings were used as the logo of Ballantine beer and are still used by the Ballantine brand beer, now produced by successor Falstaff.^[5] In 2006, the International Mathematical Union decided at the 25th International Congress of Mathematicians in Madrid, Spain to use a new logo based on the Borromean rings.^[6] A stone pillar at Marundeeswarar Temple in Thiruvanmiyur, Chennai, Tamil Nadu, India, has such a figure dating to before the 6th century.^[citation needed]

Partial rings

In medieval and renaissance Europe, a number of visual signs are found that consist of three elements interlaced together in the same way that the Borromean rings are shown interlaced (in their conventional two-dimensional depiction), but the individual elements are not closed loops. Examples of such symbols are the Snoldelev stone horns and the Diana of Poitiers crescents. An example with three distinct elements is the logo of Sport Club Internacional.
Less-related visual signs include the Gankyil and the Venn diagram on three sets. Similarly, a monkey's fist knot is essentially a 3-dimensional representation of the Borromean rings, albeit with three layers in most cases. Using the pattern in the incomplete Borromean rings, one can balance three knives on three supports, such as three bottles or glasses, providing a support in the middle for a fourth bottle or glass.

Multiple rings

Some knot-theoretic links contain multiple Borromean rings configurations; one five-loop link of this type is used as a symbol in Discordianism, based on a depiction in the Principia Discordia.

Molecular Borromean rings are the molecular counterparts of Borromean rings, which are mechanically-interlocked molecular architectures. In 1997, biologists Chengde Mao and coworkers of New York University succeeded in constructing a set of rings from DNA.^[8] In 2003, chemist Fraser Stoddart and coworkers at UCLA utilised coordination chemistry to construct a set of rings in one step from 18 components.^[9] A quantum-mechanical analog of Borromean rings, called an Efimov state, was predicted by physicist Vitaly Efimov in 1970. A team of physicists led by Randall Hulet of Rice University in Houston achieved this with a set of three bound lithium atoms and published their findings in the online journal Science Express.^[10] In 2010, a team led by K. Tanaka created an Efimov state within a nucleus.

References

• Freedman, Michael H.; Skora, Richard (1987), "Strange Actions of Groups on Spheres", Journal of Differential Geometry 25: 75–98.
• Lindström, Bernt; Zetterström, Hans-Olov (1991), "Borromean Circles are Impossible", American Mathematical Monthly 98 (4): 340–341, doi:10.2307/2323803, JSTOR 2323803 (subscription required). This article explains why Borromean links cannot be exactly circular.
• Brown, R.; Robinson, J., "Borromean circles", Letter, American Mathematical Monthly, April 1992, 376–377.
This article shows how Borromean squares exist, and have been made by John Robinson (sculptor), who has also given other forms of this structure.
• Chernoff, W. W., "Interwoven polygonal frames" (English summary), 15th British Combinatorial Conference (Stirling, 1995), Discrete Mathematics 167/168 (1997), 197–204. This article gives more general interwoven polygons.
Significant Figures

Date: 02/25/99 at 04:14:53
From: Em
Subject: Significant Figures

I am having trouble with significant figures. I do not understand why:
a) 62.3 multiplied by 5.7 = 360, but 62.30 multiplied by 5.70 = 355.
The question says to express your answer with an appropriate number of significant figures. Please help me, as I do not understand why we get 360 and 355.

Date: 02/25/99 at 12:15:12
From: Doctor Peterson
Subject: Re: Significant Figures

The idea here is that if one of the numbers you are multiplying is only accurate to two significant digits, you can only trust two significant digits of the result, so you round to that accuracy. When the numbers being multiplied are given as 62.30 and 5.70, there are 4 and 3 significant digits respectively, so you can keep 3 digits in your answer, 355. But when you are only given 62.3 and 5.7, you should only keep 2 significant digits, so you round it up to 360.

Here's one way to see why this is. (I'll use a different example, and explain why below.) The multiplication of 12.30 by 5.70 looks like this:

         1 2.3 0
       *   5.7 0
       ---------
     7 0.1 1 0 0

If we don't know the last digit of each number, but represent each unknown digit by X, your multiplication looks like this:

         1 2.3 X
       *   5.7 X
       ---------
         X X X X
       8 6 1 X
     6 1 5 X
     -----------
     7 0.X X X X

I've written X wherever I don't know what a digit is, because I'm multiplying or adding an unknown digit. The X's show that I can't trust the last digits, and should round off to 70. If you look closely, you'll see that the significant digits in the answer come from the significant digits of the 5.7, the number with the fewest significant digits. So that's the rule: you keep as many significant digits in the product as there are in the factor with the fewest significant digits.

Now here's your original problem:

           6 2.3 X
         *   5.7 X
         ---------
           X X X X
       4 3 6 1 X
     3 1 1 5 X
     -------------
     3 5 5.X X X X

You'll notice that it looks as if we have more valid digits than the rule says!
That's because the first digits of both numbers are relatively large, so that you get an extra digit. The rule is an approximation, and is a little on the conservative side, assuming that it's better to keep too few digits than to trust too many in some cases. We could probably modify the rule slightly to take the extra digit into account, but the simple rule has been found to be good.

The important thing, of course, is that whether you call the answer 355 or 360, you are rounding off several more digits from the actual product, 355.11, that are really meaningless. In this age of calculators, when you can get many digits in any calculation with no trouble, it is important not to keep all those digits and get a false sense of the precision of your results. We don't want to pretend we know seven digits when we really only know 2 or 3.

- Doctor Peterson, The Math Forum
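The rounding rule discussed above is easy to automate. Here is a short Python helper (an illustration, not part of the original answer) that rounds a value to a given number of significant digits, applied to the 62.3 × 5.7 example:

```python
import math

def round_sig(x, sig):
    """Round x to sig significant digits."""
    if x == 0:
        return 0.0
    # How many decimal places correspond to sig significant digits:
    digits = sig - 1 - int(math.floor(math.log10(abs(x))))
    return round(x, digits)

product = 62.3 * 5.7  # 355.11 exactly, up to floating-point noise
print(round_sig(product, 2))  # 2 significant digits -> 360.0
print(round_sig(product, 3))  # 3 significant digits -> 355.0
```

So the same product is reported as 360 when only two digits are trusted and 355 when three are, matching the two answers in the original question.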
Summary: A HERMITIAN ANALOGUE OF THE BRÖCKER–PRESTEL THEOREM

ABSTRACT. The Bröcker–Prestel local-global principle characterizes weak isotropy of quadratic forms over a formally real field in terms of weak isotropy over the henselizations and isotropy over the real closures of that field. A hermitian analogue of this principle is presented for algebras of index at most two. An improved result is also presented for algebras with a decomposable involution, algebras of pythagorean index at most two, and algebras over SAP and ED fields.

In the algebraic theory of quadratic forms over fields, the problem of determining whether a form is isotropic (i.e., has a non-trivial zero) has led to the development of several powerful local-global principles. They allow one to test the isotropy of a form over the original field ("global" situation) by testing it over a collection of other fields where the original problem is potentially easier to solve ("local" situation). The most celebrated local-global principle is of course the Hasse–Minkowski theorem, which gives a test for isotropy over the rational numbers Q in terms of isotropy over the p-adic numbers Q_p for each prime p and the real numbers R. More generally, Q may be replaced by any global field F and the collection
A question on semi-stratifiable space

This question is also posted here. A space $X$ is called a semi-stratifiable space if it has a $g$-function such that: for any point $x$ of $X$ and any sequence $\{x_n\}$ of $X$, if $x \in g(n,x_n)$ for all $n$, then $x_n \to x$. Note that every Moore space is semi-stratifiable.

We know the cardinality of a star countable Moore space is not greater than $\mathfrak c$. A topological space $X$ is said to be star countable if whenever $\mathscr{U}$ is an open cover of $X$, there is a countable subspace $K$ of $X$ such that $X = \operatorname{St}(K,\mathscr{U})$.

Is there a star countable semi-stratifiable space $X$ with $|X| > \mathfrak c$? Thanks for your help.

Comments:

Could you kindly tell us what a $g$-function is? – Joel David Hamkins, May 29 '13 at 11:32

$g: \mathbb N \times X \to \tau_X$ is a $g$-function of $X$ if for any $x$ and $n \in \mathbb N$, $x \in g(n+1,x) \subset g(n,x)$. – Paul, May 29 '13 at 11:40

It seems that you want to impose some separation axiom, since otherwise the indiscrete space (of any cardinality) would seem to be trivially semi-stratifiable and star-countable. – Joel David Hamkins, May 29 '13 at 15:00

It's customary in generalised metric spaces to assume at least $T_3$ ($T_1$ plus regular). This is also customary for stratifiable and semi-stratifiable spaces, AFAIK. – Henno Brandsma, May 29 '13 at 18:12

@Joel: maybe I should have mentioned it. – Paul, May 30 '13 at 0:03
As I've noted a couple of times, the prices on TEMP.2009.HIGH stock are very high when compared to any reasonable forecasting model. In previous posts, I used a variety of methods to forecast the 2009 temperature anomaly based only on the time series. A model using only a nonlinear time trend (year and year squared) and lagged temperature variables explains about 84% of the variation in the time series. Based on that model, it looked like there was no more than a 5% chance that 2009 would be warm enough to beat 1998.

We now have the global average temperature anomaly for January. Let's add it to the model and see what comes up. The first way I did it was to use my model to predict January temperatures. That model provides a much worse fit than the one predicting annual temperatures, but that's to be expected. The model's predicted January temperature is spot-on: the model predicted a temperature anomaly of 0.37 and that's what we got. We would have needed an actual January temperature much higher than predicted to start me worrying about my call on this contract. Instead, I'm right on the money.

The next approach is to add January temperatures in to help predict average annual temperatures. I updated the forecasting model I previously was using to have January temperatures as an explanatory variable. I then took the regression residuals and checked how many times in the data series we've had residuals large enough that, if we had that bad a model fit this year, 2009 would beat 1998. It's happened 6 times in the 154-year series. That's 3.8%. The best-fit model says we have a 3.8% chance that 2009 is warmer than 1998. Last time, we had a 7/154 chance. So, the addition of the January data makes it LESS likely that we'll beat 1998. Why is that, given that January temps are right on target?
The addition of January data to the model tightens up the precision of the estimates. Long story short, the January data suggests that the chances of 2009 beating 1998 are even lower than I'd previously thought. A fair price on this contract is $0.038.

Full disclosure: I have a very large short position on this contract, based largely on analyses like those conducted and linked to here. I'm also slightly long on the TEMP.2009 contract as my model suggests 2009 will be warmer than 2008 (74% chance), just not warm enough to beat 1998.

If you want to check my work and try it for yourself, get a copy of Stata. Get the data series and import it. Add the obvious column headings. Then do the following:

  gen year2 = year^2
  tsset year
  reg temp year year2 L1.temp L4.temp jan
  predict temp_hat
  predict resid, residuals
  sum resid if resid > (0.543-0.3867909)

The numbers are the 1998 anomaly and the predicted 2009 anomaly. The last line will then tell you the number of times that the model has been so far out that it generated a residual sufficiently large that, if encountered again, 2009 will be warmer than 1998. 3.8%. I don't know who or what is keeping the prices up at $0.18. Current price is five times higher than it ought to be based on fundamentals.

14 new stocks are now underway, all relating to interest rates and what they'll be doing over the next six months. For the first time, iPredict is pointing at Australia and asking you to consider what the Reserve Bank of Australia will do with its cash rate when the RBA board next meets on 3 February, next Tuesday. Yes, a short sharp set of stocks this one, but we expect some healthy trading.

In another first, iPredict is also asking you to forecast retail mortgage rates for home owners by the middle of this year. How low will variable rate mortgages be in June? Below 7%? Below 6%? Below 5.5%? This should be a stock of interest to home owners considering whether to break their agreements with their banks and move to a lower interest rate.
Finally, we launch another set of OCR stocks, this time for the RBNZ's 12 March announcement. Of course, we'll be keeping an eye on these stocks as tomorrow's OCR news comes through.

iPredict's forum is underway. Here's the link. Well done Aaron and Simon (our developers) for pulling this together and fighting the monster that is IE6. If you have any trouble, please tell us by posting into the bug reports section of the forum. If you're having so much trouble that you can't see the forum, well just post it here :-)

Today sees the launch of our first petrol price stocks, aimed at forecasting the price of 91 unleaded petrol at the pump. Five binary stocks are designed to measure this, each asking whether the price of 91 unleaded will be within the following ranges:

91.FEB09.VLOW: price less than 132 cents
91.FEB09.LOW: price between 132 and 139.9 cents
91.FEB09.MID: price between 140 and 148 cents
91.FEB09.HIGH: price between 148.1 and 156 cents
91.FEB09.VHIGH: price above 156 cents

How do we measure price? We use the Ministry of Economic Development's Regular Petrol Price series, available from here. Although the Ministry does not make it clear on the page, this series is measuring the GST-inclusive, all-other-taxes-inclusive price of 91 unleaded petrol in the Wellington region. I had to call the Ministry to figure that out. We have also launched a new bundle called BUNDLE.91.FEB09 which combines all five stocks and sells for $1.

Since folks seem to be discounting at 100% the previous post on why the TEMP.2009.HIGH stock seems massively overvalued, I started worrying that traders out there know something I don't. So, I threw the data into Stata instead and ran a few prediction models with lag effects and a time trend. The best-fit model predicts a 2009 temperature anomaly of 0.38821, which is higher than I was previously estimating. But the probability of exceeding 1998 seems largely unaffected because my confidence bands are tighter.
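For readers without Stata, here is a rough pure-Python sketch of the same kind of model: a time trend plus first and fourth lags of the series, fitted by ordinary least squares. The series below is synthetic and purely illustrative, not the real anomaly data; plug in the actual series to replicate the post's numbers.

```python
import random
from statistics import fmean

# Illustrative stand-in for the anomaly series (NOT the real data)
random.seed(0)
years = list(range(1850, 2009))
temp = [0.000025 * (y - 1850) ** 2 + random.gauss(0, 0.1) for y in years]

# Regressors as in the Stata model: trend, trend squared, lags 1 and 4.
# Years are centered to keep the normal equations well conditioned.
rows, ys = [], []
for i in range(4, len(years)):
    t = years[i] - 1929
    rows.append([1.0, t, t * t, temp[i - 1], temp[i - 4]])
    ys.append(temp[i])

# Solve the normal equations (X'X) b = X'y by Gaussian elimination
k = 5
xtx = [[sum(r[a] * r[b] for r in rows) for b in range(k)] for a in range(k)]
xty = [sum(r[a] * y for r, y in zip(rows, ys)) for a in range(k)]
for c in range(k):  # forward elimination with partial pivoting
    piv = max(range(c, k), key=lambda r: abs(xtx[r][c]))
    xtx[c], xtx[piv] = xtx[piv], xtx[c]
    xty[c], xty[piv] = xty[piv], xty[c]
    for r in range(c + 1, k):
        f = xtx[r][c] / xtx[c][c]
        for cc in range(c, k):
            xtx[r][cc] -= f * xtx[c][cc]
        xty[r] -= f * xty[c]
beta = [0.0] * k
for c in reversed(range(k)):
    beta[c] = (xty[c] - sum(xtx[c][cc] * beta[cc]
                            for cc in range(c + 1, k))) / xtx[c][c]

# Residuals, and the share beyond a cutpoint, as in `sum resid if resid > c`
resid = [y - sum(b * x for b, x in zip(beta, r)) for r, y in zip(rows, ys)]
cut = 0.543 - 0.3867909  # 1998 anomaly minus the predicted 2009 anomaly
share = sum(1 for e in resid if e > cut) / len(resid)
# residual mean should be ~0 for OLS with an intercept
print(round(fmean(resid), 6), round(share, 3))
```

On the real data, `share` is the post's tail fraction; here it only demonstrates the mechanics of building lagged regressors and counting large residuals.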
The standard deviation of the regression residuals is 0.100832. The residual tells you the difference between the actual observation and the estimate predicted by the model for any particular year. So, if the actual temperature anomaly turns out to be 0.546, the residual would be -0.15779: the prediction minus the observation gives you that as the residual. How often do we observe residuals at least that large on the down side? 7 times in 154 observations: 4.5% of the time. The simple model explains 84% of the variance in the time series using only lagged dependent variables and a time trend. Want to replicate this? Get the dataset, add in a null 2009 observation, throw it into Stata, and type:

gen year2 = year^2
tsset year
reg deviation L1.deviation L4.deviation year year2
predict y_hat
predict resid, residuals

Then just take (y_hat - 0.546) for 2009, and see how often the residual is smaller than that value:

sum resid if resid < -0.15778

7 observations in 154. I wonder what the folks buying at 0.22 know that I don't. No way anybody's using this for hedging against a hot year, not at these stakes....

* Note: I kept the first and fourth lags after a general-to-specific reduction starting with 7 lags.

Heaps of great feedback in earlier posts on new features ideas. This morning we compiled a list of new features that we've been bouncing around for a while. Part of the list we came up with is below; much of it is from ideas given to us by you. We've left out some administrative features ideas and ideas only for our internal prediction market customers. Please do two things in the comments. One, list your priorities starting with most preferred, e.g. F, D, E, B, Q. Prioritise as many as you like; I'll compile your opinions on as many ideas as you are willing to give. Two, add new suggestions.
I apologise in advance for all the features already submitted in the last few months not on the list - the list was prepared in a rush. Please submit them again and I will update the list with your new ideas. I may reverse the list's order so that newest ideas are put at the top. For option D, what minimum, if any, is appropriate? $20? Some of these may require clarification - if so please ask. Here is what we have so far, in no particular order.

A. Forum
B. Continued API development
C. Confirm trade button disabled after one click
D. Option to display trader earnings by percentage (minimum deposit $20?) + make named appearance on list optional
E. Funds frozen following a withdrawal request + option to cancel request if not processed
F. User's cash, portfolio value and net worth plotted over time
G. User-added events/news on stock details, editable/removable/flaggable by other users via wiki, added items plotted on stock price chart
H. User stock board – lists all stocks in one place, current price/volume/day's high,low/lifetime high,low/high buy,low sell. Ajaxed and sortable.
I. Daily stock high/low prices added to stock details
J. Multiple stock trading system - allows buying and selling of multiple stocks at one time from single page – facilitates arbitraging
K. Stock details page widget which converts price to prediction for that stock taking into account any special rules (e.g. price floors) for that stock + remove probability from browse stock page and replace with "Avg. Cost" or "Rev/Stock Received At Sale" values if position taken, otherwise blank
L. Home page public notification – alerts to users arriving on the home page, warning of approaching downtime for maintenance, for example, or new stocks launch time
M. User alerts system - user can get emails/text messages sent if prices move outside defined bands
N. Basic look/feel site adjustments (colours, some page design changes)
O. Re-work trading interfaces - reduce steps, make clearer, ajax-enable
P. Re-work short selling to make clearer to user what funds have been transferred and why
Q. Post-trade suggested stocks "Users who bought this stock also traded …" displayed on the trade confirmed screen
R. Browse closed stocks page
S. Stocks vary between $0 and $max, where $max can be <> $1.
T. Connect forum and stock details pages - allow forum browsing (for posts relating to that stock only) and posting from stock details
U. Download (anonymous) raw trading data interface (currently available via API only)
V. Add Google news items to stock details pages using stock-specific search terms. Plot news item flags on stock price line.
W. One-click trading interface: buy and sell price can be clicked for a standard parcel size (eg 25 contracts) subject to there being at least the std number available at that price (if not the lesser number)
X. Either streamline browse stocks to load only selected categories or set up a browse stocks for mobile/PDA only
Y. Remove the division into 'page 1, page 2' of stocks in the lists on My Portfolio, e.g. the Watch list
Z. Show current bid and offer prices in the portfolio lists

At current prices, the markets are forecasting a 60% chance that 2009 temperatures will be above 2008 temperatures, with a 24% chance that 2009 temperatures will beat the 1998 high of a 0.546 degree anomaly. The 2008 anomaly was 0.324, so traders must reckon 60% of the temperature probability distribution lies above 0.324 and 24% lies above 0.546. If you go to the underlying data series, which goes back to 1850, we find a mean change from year to year of close to zero (because of course it's an anomaly relative to an average baseline) with a standard deviation of the year-on-year change of 0.115. So, in any given year, the next year's anomaly will fall within +/- 0.115 of the prior year's anomaly about 68% of the time.
For the 2009 anomaly to be greater than 0.546 would require an increase equivalent to at least 1.93 standard deviations of the year-on-year change. A jump that large shouldn't happen more than about 2.5% of the time in a normally distributed series. You could try arguing that the variance of the series has increased, but the standard deviation of year-on-year changes over the last decade is almost identical to that of the whole 159-year series. You might also say that I ought to account for the warming trend. Let's try that. From 1980 through 2008, the average change in the anomaly has been 0.009483. We can add that to the 2008 temperature to get a prediction for 2009. The 2009 observation would need to be 1.83 SD above the expected temperature increase to beat the 1998 anomaly. We would expect that to happen 3.36% of the time.

I've included a graph with the underlying temperature series and bands showing changes 1.83 standard deviations above and below the prior year's observation. How often in the series' 158-year* history do we see the subsequent year's anomaly outside the 1.83 SD band? 14 times. Or, 8.8% of the time, with 4.4% above the band and 4.4% below the band. That's a bit more than we'd expect from the standard normal distribution (6.7% of observations, 3.35% each side), but not a ton more. I've marked these with big red dots: 1863, 1865, 1877, 1879, 1890, 1916, 1930, 1954, 1957, 1964, 1974, 1977, 1997 and 1999. Those are years with temperatures that varied by more than 1.83 standard deviations of the normal year-on-year change from the prior year's observation. You'll probably need to click on the graph to enlarge it to see things properly.

I've been shorting this stock since it launched. I plan to continue shorting this stock. I'm not saying that 2009 can't be warmer than 1998, I just can't see how it's more than 20% likely to happen. I can't see that it's more than 10% likely to happen either.
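The standard-normal tail areas quoted above are easy to check with Python's standard library (the 1.93 and 1.83 SD cutpoints are the ones derived in the post):

```python
from statistics import NormalDist

z = NormalDist()  # standard normal

# Size of the jump needed to beat 1998, in SDs of the year-on-year change
print(round((0.546 - 0.324) / 0.115, 2))  # -> 1.93

# One-sided tail areas at the two cutpoints
print(round(1 - z.cdf(1.93), 3))  # roughly the "about 2.5%" figure
print(round(1 - z.cdf(1.83), 4))  # the 3.36% figure

# Two-sided band at 1.83 SD: about 6.7% of observations expected outside
print(round(2 * (1 - z.cdf(1.83)), 3))
```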
The analysis above should give an upper bound estimate of the likelihood of 2009 exceeding 1998: for a midpoint estimate, I'd not account for a warming trend and instead evaluate at the mean zero change. At what price would I start covering my shorts? Well, I think a fair price wouldn't be higher than $0.05, and I'm a bit risk averse. Of course, feel free to follow the links provided in the stock description to get the underlying data and have your own play with it. I'm not doing anything high tech here. *Yes, the series is 159 years long, but you can't look at the year-ahead for the most recent year. That's the one we're trading on! Note: Post updated to provide analysis based on 1.83SD cutpoints (the upperbound case) rather than the midpoint case provided earlier.
Proof for average velocity
August 30th 2012, 03:30 AM #1 | Junior Member | Aug 2009 | Brisbane, Australia

Hi all, I want to prove the following algebraically (that is, NOT with calculus):

If an object with initial velocity $v_i$ is accelerated at some constant acceleration over a time $t$, and finishes with a final velocity of $v_f$, then the average velocity of the object is ${v_i + v_f} \over 2$. This average velocity is the equivalent velocity that this object would have to be travelling (without acceleration) to travel the same distance as the object experiencing acceleration as above.

I need to somehow prove this in order to derive the kinematic equations. But how? It is easy to derive the kinematic equations with a knowledge of calculus, but if you are teaching this to someone without a knowledge of calculus, how can this be proven rigorously?

Re: Proof for average velocity

We have to choose a couple of equations that don't depend on your answer for the average speed. (There's no real difference between velocity and speed in this case. I'm just picky that way.) So first, always set up a coordinate system.
I will choose an origin to be at the starting point of the measurement ( $x_i = 0$ ) and the final point to be at $x_f$. I am choosing the positive x direction to be in the same direction as the acceleration, and we might as well choose that to be horizontal and to the right. Okay, now we need three equations and put them together. I'm going to choose:

1. $v_f^2 - v_i^2 = 2a(x_f - x_i) \implies v_f^2 - v_i^2 = 2ax_f$ (remembering that $x_i = 0$)

2. $v_f - v_i = at$

So we have for the average speed:

$v_{ave} = \frac{x_f - x_i}{t} \implies v_{ave} = \frac{x_f}{t}$

Solving equation 1 for $x_f$ and putting it into the expression for $v_{ave}$:

$v_{ave} = \frac{v_f^2 - v_i^2}{2at}$

I'll leave the rest to you. You need to factor the numerator, plug in $v_f - v_i = at$ and do some simplifying. If you need help to finish it, just ask.

Re: Proof for average velocity

Hi topsquark,

The problem I have with this is that Equation 1 has not been derived yet. I know it is one of the common kinematic equations, but we cannot use this formula until it has been derived, if you know what I mean? And to derive it requires proving the formula for average velocity, sort of a chicken-and-egg problem! At this point, all I have to work with is the formula $v_f = v_i + at$. I don't think it is possible to prove the average velocity formula without a knowledge of calculus, in particular integration and what the area under a curve means.

Re: Proof for average velocity

As I have said above, knowing that the area under the graph means total displacement implies a knowledge of calculus. I have come to the conclusion that the average velocity formula cannot be proved without a knowledge of calculus.

Re: Proof for average velocity

I have replied to this thread twice now but it seems to be disappearing for some reason.
The way you need to do it is using graphs: the area under a velocity-time graph is the distance, and you can compare a trapezium shape with a rectangular one, which will give you an equation relating average and changing velocity. Hope this helps.

Re: Proof for average velocity

I agree. I can't find a way to do it either.
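For the record, both routes sketched in this thread land on the same result. Completing topsquark's factoring, using equation 2 ($v_f - v_i = at$):

$v_{ave} = \frac{v_f^2 - v_i^2}{2at} = \frac{(v_f - v_i)(v_f + v_i)}{2at} = \frac{at(v_f + v_i)}{2at} = \frac{v_i + v_f}{2}$

And the trapezium comparison: under constant acceleration the velocity-time graph is a straight line from $v_i$ to $v_f$, so the distance travelled is the area of a trapezium, $s = \frac{v_i + v_f}{2} \, t$. Dividing by $t$ gives $v_{ave} = \frac{v_i + v_f}{2}$, though as noted above, this still leans on "area under the graph = distance".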
Associating Definitions with Different Symbols

When you make a definition in the form f[args]=value or f[args]:=value, Mathematica associates your definition with the object f. This means, for example, that such definitions are displayed when you type ?f. In general, definitions for expressions in which the symbol f appears as the head are termed downvalues of f. Mathematica however also supports upvalues, which allow definitions to be associated with symbols that do not appear directly as their head.

Consider for example a definition like Exp[g[x_]]:=rhs. One possibility is that this definition could be associated with the symbol Exp, and considered as a downvalue of Exp. This is however probably not the best thing either from the point of view of organization or efficiency. Better is to consider Exp[g[x_]]:=rhs to be associated with g, and to correspond to an upvalue of g.

In simple cases, you will get the same answers to calculations whether you give a definition for f[g[x]] as a downvalue for f or an upvalue for g. However, one of the two choices is usually much more natural and efficient than the other. A good rule of thumb is that a definition for f[g[x]] should be given as an upvalue for g in cases where the function f is more common than g. Thus, for example, in the case of Exp[g[x]], Exp is a built-in Mathematica function, while g is presumably a function you have added. In such a case, you will typically think of definitions for Exp[g[x]] as giving relations satisfied by g. As a result, it is more natural to treat the definitions as upvalues for g than as downvalues for Exp.

Since the full form of the pattern g[x_]+g[y_] is Plus[g[x_], g[y_]], a definition for this pattern could be given as a downvalue for Plus. It is almost always better, however, to give the definition as an upvalue for g. In general, whenever Mathematica encounters a particular function, it tries all the definitions you have given for that function.
If you had made the definition for g[x]+g[y] a downvalue for Plus, then Mathematica would have tried this definition whenever Plus occurs. The definition would thus be tested every time Mathematica added expressions together, making this very common operation slower in all cases. However, by giving a definition for g[x]+g[y] as an upvalue for g, you associate the definition with g. In this case, Mathematica only tries the definition when it finds a g inside a function such as Plus. Since g presumably occurs much less frequently than Plus, this is a much more efficient procedure.

f[g]^=value or f[g[args]]^=value: make assignments to be associated with g, rather than f
f[g]^:=value or f[g[args]]^:=value: make delayed assignments associated with g
f[arg1,arg2,...]^=value: make assignments associated with the heads of all the arg_i

Shorter ways to define upvalues.

A typical use of upvalues is in setting up a "database" of properties of a particular object. With upvalues, you can associate each definition you make with the object that it concerns, rather than with the property you are specifying.

In general, you can associate definitions for an expression with any symbol that occurs at a sufficiently high level in the expression. With an expression of the form f[args], you can define an upvalue for a symbol g so long as either g itself, or an object with head g, occurs in args. If g occurs at a lower level in an expression, however, you cannot associate definitions with it.

f[...]:=rhs: downvalue for f
f/:f[g[...]][...]:=rhs: downvalue for f
g/:f[...,g,...]:=rhs: upvalue for g
g/:f[...,g[...],...]:=rhs: upvalue for g

Possible positions for symbols in definitions.

As discussed in "The Meaning of Expressions", you can use Mathematica symbols as "tags", to indicate the "type" of an expression. For example, complex numbers in Mathematica are represented internally in the form Complex[x, y], where the symbol Complex serves as a tag to indicate that the object is a complex number.
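As a small sketch of the shorthand forms above (gExp here is a hypothetical symbol introduced only for illustration):

```wolfram
(* The ^:= form attaches the rule to g, not to Exp *)
Exp[g[x_]] ^:= gExp[x]

UpValues[g]
(* the rule is stored with g, e.g. {HoldPattern[Exp[g[x_]]] :> gExp[x]} *)
```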
Upvalues provide a convenient mechanism for specifying how operations act on objects that are tagged to have a certain type. For example, you might want to introduce a class of abstract mathematical objects of type quat. You can represent each object of this type by a Mathematica expression of the form quat[data]. In a typical case, you might want quat objects to have special properties with respect to arithmetic operations such as addition and multiplication. You can set up such properties by defining upvalues for quat with respect to Plus and Times.

When you define an upvalue for quat with respect to an operation like Plus, what you are effectively doing is to extend the domain of the Plus operation to include quat objects. You are telling Mathematica to use special rules for addition in the case where the things to be added together are quat objects.

In defining addition for quat objects, you could always have a special addition operation of your own, to which you assign an appropriate downvalue. It is usually much more convenient, however, to use the standard Mathematica Plus operation to represent addition, but then to "overload" this operation by specifying special behavior when quat objects are encountered.

You can think of upvalues as a way to implement certain aspects of object-oriented programming. A symbol like quat represents a particular type of object. Then the various upvalues for quat specify "methods" that define how quat objects should behave under certain operations, or on receipt of certain "messages".
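A minimal sketch of such definitions for quat (the componentwise rules here are illustrative assumptions, not necessarily the original example):

```wolfram
quat /: quat[u_] + quat[v_] := quat[u + v]
quat /: quat[u_] * quat[v_] := quat[u v]

quat[a] + quat[b]
(* evaluates via the upvalue to quat[a + b] *)
```

Because the rules are stored as upvalues of quat, Plus itself is untouched; the special behavior fires only when quat objects actually appear in a sum or product.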
Standard ML

Paradigm(s): multi-paradigm: functional, imperative
Typing discipline: strong, static, inferred
Major implementations: MLKit, MLton, MLWorks, Moscow ML, Poly/ML, SML/NJ, MLj, SML.NET
Dialects: Alice, Dependent ML
Influenced by: ML, Hope
Influenced: Rust
Filename extension(s): .sml

Standard ML (SML) is a general-purpose, modular, functional programming language with compile-time type checking and type inference. It is popular among compiler writers and programming language researchers, as well as in the development of theorem provers.

SML is a modern descendant of the ML programming language used in the Logic for Computable Functions (LCF) theorem-proving project. It is distinctive among widely used languages in that it has a formal specification, given as typing rules and operational semantics in The Definition of Standard ML (1990, revised and simplified as The Definition of Standard ML (Revised) in 1997).^[1]

Standard ML is a functional programming language with some impure features. Programs written in Standard ML consist of expressions to be evaluated, as opposed to statements or commands, although some expressions return a trivial "unit" value and are only evaluated for their side-effects.

Like all functional programming languages, a key feature of Standard ML is the function, which is used for abstraction. For instance, the factorial function can be expressed as:

fun factorial n = if n = 0 then 1 else n * factorial (n-1)

A Standard ML compiler is required to infer the static type int -> int of this function without user-supplied type annotations. I.e., it has to deduce that n is only used with integer expressions, and must therefore itself be an integer, and that all value-producing expressions within the function return integers.
The same function can be expressed with clausal function definitions where the if-then-else conditional is replaced by a sequence of templates of the factorial function evaluated for specific values, separated by '|', which are tried one by one in the order written until a match is found:

fun factorial 0 = 1
  | factorial n = n * factorial (n - 1)

This can be rewritten using a case statement like this:

val rec factorial =
  fn n => case n of
      0 => 1
    | n => n * factorial (n - 1)

or as a lambda function:

val rec factorial = fn 0 => 1 | n => n * factorial (n - 1)

Here, the keyword val introduces a binding of an identifier to a value, fn introduces the definition of an anonymous function, and case introduces a sequence of patterns and corresponding results.

Using a local function, this function can be rewritten in a more efficient tail-recursive style.

fun factorial n =
  let
    fun lp (0, acc) = acc
      | lp (m, acc) = lp (m-1, m*acc)
  in
    lp (n, 1)
  end

(The value of a let-expression is that of the expression between in and end.) The encapsulation of an invariant-preserving tail-recursive tight loop with one or more accumulator parameters inside an invariant-free outer function, as seen here, is a common idiom in Standard ML, and appears with great frequency in SML code.

Type synonyms

A type synonym is defined with the type keyword. Here is a type synonym for points in the plane, and functions computing the distances between two points, and the area of a triangle with the given corners as per Heron's formula.

type loc = real * real

fun dist ((x0, y0), (x1, y1)) =
  let
    val dx = x1 - x0
    val dy = y1 - y0
  in
    Math.sqrt (dx * dx + dy * dy)
  end

fun heron (a, b, c) =
  let
    val ab = dist (a, b)
    val bc = dist (b, c)
    val ac = dist (a, c)
    val perim = ab + bc + ac
    val s = perim / 2.0
  in
    Math.sqrt (s * (s - ab) * (s - bc) * (s - ac))
  end

Algebraic datatypes and pattern matching

Standard ML provides strong support for algebraic datatypes. An ML datatype can be thought of as a disjoint union.
They are easy to define and easy to program with, in large part because of Standard ML's pattern matching as well as most Standard ML implementations' pattern exhaustiveness checking and pattern redundancy checking. A datatype is defined with the datatype keyword, as in

datatype shape
  = Circle   of loc * real      (* center and radius *)
  | Square   of loc * real      (* upper-left corner and side length; axis-aligned *)
  | Triangle of loc * loc * loc (* corners *)

(See above for the definition of loc.) Note: datatypes, not type synonyms, are necessary to define recursive constructors. (This is not at issue in the present example.)

Order matters in pattern matching; patterns that are textually first are tried first. Pattern matching can be syntactically embedded in function definitions as follows:

fun area (Circle (_, r)) = 3.14 * r * r
  | area (Square (_, s)) = s * s
  | area (Triangle (a, b, c)) = heron (a, b, c) (* see above *)

Note that subcomponents whose values are not needed in a particular computation are elided with underscores, or so-called wildcard patterns.

The so-called "clausal form" style function definition, where patterns appear immediately after the function name, is merely syntactic sugar for

fun area shape =
  case shape of
      Circle (_, r) => 3.14 * r * r
    | Square (_, s) => s * s
    | Triangle (a, b, c) => heron (a, b, c)

Pattern exhaustiveness checking will make sure each case of the datatype has been accounted for, and will produce a warning if not. The following pattern is inexhaustive:

fun center (Circle (c, _)) = c
  | center (Square ((x, y), s)) = (x + s / 2.0, y + s / 2.0)

There is no pattern for the Triangle case in the center function. The compiler will issue a warning that the pattern is inexhaustive, and if, at runtime, a Triangle is passed to this function, the exception Match will be raised.
The set of clauses in the following function definition is exhaustive and not redundant:

fun hasCorners (Circle _) = false
  | hasCorners _ = true

If control gets past the first pattern (the Circle), we know the value must be either a Square or a Triangle. In either of those cases, we know the shape has corners, so we can return true without discriminating which case we are in.

The pattern in the second clause of the following (meaningless) function is redundant:

fun f (Circle ((x, y), r)) = x + y
  | f (Circle _) = 1.0
  | f _ = 0.0

Any value that matches the pattern in the second clause will also match the pattern in the first clause, so the second clause is unreachable. Therefore this definition as a whole exhibits redundancy, and causes a compile-time warning.

C programmers will often use tagged unions, dispatching on tag values, to accomplish what ML accomplishes with datatypes and pattern matching. Nevertheless, while a C program decorated with appropriate checks will be in a sense as robust as the corresponding ML program, those checks will of necessity be dynamic; ML provides a set of static checks that give the programmer a high degree of confidence in the correctness of the program at compile time.

Note that in object-oriented programming languages, such as Java, a disjoint union can be expressed by designing class hierarchies. However, as opposed to class hierarchies, ADTs are closed. This makes ADTs extensible in a way that is orthogonal to the extensibility of class hierarchies. Class hierarchies can be extended with new subclasses but no new methods, while ADTs can be extended to provide new behavior for all existing constructors, but do not allow defining new constructors.
Higher-order functions

Functions can consume functions as arguments:

fun applyToBoth f x y = (f x, f y)

Functions can produce functions as return values:

fun constantFn k =
  let
    fun const anything = k
  in
    const
  end

fun constantFn k = (fn anything => k)

Functions can also both consume and produce functions:

fun compose (f, g) =
  let
    fun h x = f (g x)
  in
    h
  end

fun compose (f, g) = (fn x => f (g x))

The function List.map from the basis library is one of the most commonly used higher-order functions in Standard ML:

fun map _ [] = []
  | map f (x::xs) = f x :: map f xs

(A more efficient implementation of map would define a tail-recursive inner loop as follows:)

fun map f xs =
  let
    fun m ([], acc) = List.rev acc
      | m (x::xs, acc) = m (xs, f x :: acc)
  in
    m (xs, [])
  end

Exceptions

Exceptions are raised with the raise keyword, and handled with pattern-matching handle constructs.

exception Undefined

fun max [x] = x
  | max (x::xs) =
      let
        val m = max xs
      in
        if x > m then x else m
      end
  | max [] = raise Undefined

fun main xs =
  let
    val msg = (Int.toString (max xs))
              handle Undefined => "empty list...there is no max!"
  in
    print (msg ^ "\n")
  end

The exception system can be exploited to implement non-local exit, an optimization technique suitable for functions like the following.

exception Zero

fun listProd ns =
  let
    fun p [] = 1
      | p (0::_) = raise Zero
      | p (h::t) = h * p t
  in
    (p ns) handle Zero => 0
  end

When the exception Zero is raised in the 0 case, control leaves the function p altogether. Consider the alternative: the value 0 would be returned to the most recent awaiting frame, it would be multiplied by the local value of h, the resulting value (inevitably 0) would be returned in turn to the next awaiting frame, and so on. The raising of the exception allows control to leapfrog directly over the entire chain of frames and avoid the associated computation. Note that the same optimization could have been obtained by using tail recursion for this example.
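A sketch of that tail-recursive alternative: the inner loop carries an accumulator and returns 0 directly when a zero is seen, with no exception needed.

```sml
fun listProd ns =
  let
    fun p ([], acc) = acc
      | p (0 :: _, _) = 0
      | p (h :: t, acc) = p (t, h * acc)
  in
    p (ns, 1)
  end
```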
Module system

Standard ML has an advanced module system, allowing programs to be decomposed into hierarchically organized structures of logically related type and value declarations. SML modules provide not only namespace control but also abstraction, in the sense that they allow programmers to define abstract data types.

Three main syntactic constructs comprise the SML module system: signatures, structures and functors. A structure is a module; it consists of a collection of types, exceptions, values and structures (called substructures) packaged together into a logical unit. A signature is an interface, usually thought of as a type for a structure: it specifies the names of all the entities provided by the structure as well as the arities of type components, the types of value components, and signatures for substructures. The definitions of type components may or may not be exported; type components whose definitions are hidden are abstract types. Finally, a functor is a function from structures to structures; that is, a functor accepts one or more arguments, which are usually structures of a given signature, and produces a structure as its result. Functors are used to implement generic data structures and algorithms.

For example, the signature for a queue data structure might be:

signature QUEUE =
sig
  type 'a queue
  exception Queue
  val empty     : 'a queue
  val isEmpty   : 'a queue -> bool
  val singleton : 'a -> 'a queue
  val insert    : 'a * 'a queue -> 'a queue
  val peek      : 'a queue -> 'a
  val remove    : 'a queue -> 'a * 'a queue
end

This signature describes a module that provides a parameterized type queue of queues, an exception called Queue, and six values (five of which are functions) providing the basic operations on queues.
One can now implement the queue data structure by writing a structure with this signature: structure TwoListQueue :> QUEUE = struct type 'a queue = 'a list * 'a list exception Queue val empty = ([],[]) fun isEmpty ([],[]) = true | isEmpty _ = false fun singleton a = ([], [a]) fun insert (a, ([], [])) = ([], [a]) | insert (a, (ins, outs)) = (a::ins, outs) fun peek (_,[]) = raise Queue | peek (ins, a::outs) = a fun remove (_,[]) = raise Queue | remove (ins, [a]) = (a, ([], rev ins)) | remove (ins, a::outs) = (a, (ins,outs)) end This definition declares that TwoListQueue is an implementation of the QUEUE signature. Furthermore, the opaque ascription (denoted by :>) states that any type components whose definitions are not provided in the signature (i.e., queue) should be treated as abstract, meaning that the definition of a queue as a pair of lists is not visible outside the module. The body of the structure provides bindings for all of the components listed in the signature. To use a structure, one can access its type and value members using "dot notation". For instance, a queue of strings would have type string TwoListQueue.queue, the empty queue is TwoListQueue.empty, and to remove the first element from a queue called q one would write TwoListQueue.remove q. One popular algorithm^[2] for breadth-first traversal of trees makes use of queues. Here we present a version of that algorithm parameterized over an abstract queue structure: functor BFT (Q: QUEUE) = (* after Okasaki, ICFP, 2000 *) struct datatype 'a tree = E | T of 'a * 'a tree * 'a tree fun bftQ (q : 'a tree Q.queue) : 'a list = if Q.isEmpty q then [] else let val (t, q') = Q.remove q in case t of E => bftQ q' | T (x, l, r) => let val q'' = Q.insert (r, Q.insert (l, q')) in x :: bftQ q'' end end fun bft t = bftQ (Q.singleton t) end Note that inside the BFT structure, the program has no access to the particular queue representation in play. More concretely, there is no way for the program to, say,
select the first list in the two-list queue representation, if that is indeed the representation being used. This data abstraction mechanism makes the breadth-first code truly agnostic to the choice of queue representation. This is in general desirable; in the present case, the queue structure can safely maintain any of the various logical invariants on which its correctness depends behind the bulletproof wall of abstraction. Code examples[edit] Snippets of SML code are most easily studied by entering them into a "top-level", also known as a read-eval-print loop. This is an interactive session that prints the inferred types of resulting or defined expressions. Many SML implementations provide an interactive top-level, including SML/NJ: $ sml Standard ML of New Jersey v110.52 [built: Fri Jan 21 16:42:10 2005] - Code can then be entered at the "-" prompt. For example, to calculate 1+2*3: - 1 + 2 * 3; val it = 7 : int The top-level infers the type of the expression to be "int" and gives the result "7". Hello world[edit] The following program "hello.sml": print "Hello world!\n"; can be compiled with MLton: $ mlton hello.sml and executed: $ ./hello Hello world! Insertion sort[edit] Insertion sort for lists of integers (ascending) is expressed concisely as follows: fun ins (n, []) = [n] | ins (n, ns as h::t) = if (n<h) then n::ns else h::(ins (n, t)) val insertionSort = List.foldr ins [] This can be made polymorphic by abstracting over the ordering operator. Here we use the symbolic name << for that operator. fun ins' << (num, nums) = let fun i (n, []) = [n] | i (n, ns as h::t) = if <<(n,h) then n::ns else h::i(n,t) in i (num, nums) end fun insertionSort' << = List.foldr (ins' <<) [] The type of insertionSort' is ('a * 'a -> bool) -> ('a list) -> ('a list). Mergesort[edit] Here, the classic mergesort algorithm is implemented in three functions: split, merge and mergesort.
The function split is implemented with a local function named loop, which has two additional parameters. The local function loop is written in a tail-recursive style; as such it can be compiled efficiently. This function makes use of SML's pattern matching syntax to differentiate between non-empty list (x::xs) and empty list ([]) cases. For stability, the input list ns is reversed before being passed to loop. (* Split list into two near-halves, returned as a pair. * The “halves” will either be the same size, * or the first will have one more element than the second. * Runs in O(n) time, where n = |xs|. *) local fun loop (x::y::zs, xs, ys) = loop (zs, x::xs, y::ys) | loop (x::[], xs, ys) = (x::xs, ys) | loop ([], xs, ys) = (xs, ys) in fun split ns = loop (List.rev ns, [], []) end The local-in-end syntax could be replaced with a let-in-end syntax, yielding the equivalent definition: fun split ns = let fun loop (x::y::zs, xs, ys) = loop (zs, x::xs, y::ys) | loop (x::[], xs, ys) = (x::xs, ys) | loop ([], xs, ys) = (xs, ys) in loop (List.rev ns, [], []) end As with split, merge also uses a local function loop for efficiency. The inner loop is defined in terms of cases: when two non-empty lists are passed, when one non-empty list is passed, and when two empty lists are passed. This function merges two "ascending" lists into one ascending list. Note how the accumulator out is built "backwards", then reversed with List.rev before being returned. This is a common technique: build a list backwards, then reverse it before returning it. In SML, lists are represented as singly linked lists, and thus it is efficient to prepend an element to a list, but inefficient to append an element to a list. The extra pass over the list is a linear time operation, so while this technique requires more wall clock time, the asymptotics are not any worse. (* Merge two ordered lists using the order lt.
* Pre: the given lists xs and ys must already be ordered per lt. * Runs in O(n) time, where n = |xs| + |ys|. *) fun merge lt (xs, ys) = let fun loop (out, left as x::xs, right as y::ys) = if lt (x, y) then loop (x::out, xs, right) else loop (y::out, left, ys) | loop (out, x::xs, []) = loop (x::out, xs, []) | loop (out, [], y::ys) = loop (y::out, [], ys) | loop (out, [], []) = List.rev out in loop ([], xs, ys) end The main function: (* Sort a list according to the given ordering operation lt. * Runs in O(n log n) time, where n = |xs|. *) fun mergesort lt xs = let val merge' = merge lt fun ms [] = [] | ms [x] = [x] | ms xs = let val (left, right) = split xs in merge' (ms left, ms right) end in ms xs end Also note that the code makes no mention of variable types, with the exception of the :: and [] syntax which signify lists. This code will sort lists of any type, so long as a consistent ordering function lt can be defined. Using Hindley–Milner type inference, the compiler is capable of inferring the types of all variables, even complicated types such as that of the lt function. Quicksort[edit] Quicksort can be expressed as follows. This generic quicksort consumes an order operator <<. fun quicksort << xs = let fun qs [] = [] | qs [x] = [x] | qs (p::xs) = let val (less, more) = List.partition (fn x => << (x, p)) xs in qs less @ p :: qs more end in qs xs end Expression language[edit] Note the relative ease with which a small expression language is defined and processed.
exception Err datatype ty = IntTy | BoolTy datatype exp = True | False | Int of int | Not of exp | Add of exp * exp | If of exp * exp * exp fun typeOf (True) = BoolTy | typeOf (False) = BoolTy | typeOf (Int _) = IntTy | typeOf (Not e) = if typeOf e = BoolTy then BoolTy else raise Err | typeOf (Add (e1, e2)) = if (typeOf e1 = IntTy) andalso (typeOf e2 = IntTy) then IntTy else raise Err | typeOf (If (e1, e2, e3)) = if typeOf e1 <> BoolTy then raise Err else if typeOf e2 <> typeOf e3 then raise Err else typeOf e2 fun eval (True) = True | eval (False) = False | eval (Int n) = Int n | eval (Not e) = (case eval e of True => False | False => True | _ => raise Fail "type-checking is broken") | eval (Add (e1, e2)) = let val (Int n1) = eval e1 val (Int n2) = eval e2 in Int (n1 + n2) end | eval (If (e1, e2, e3)) = if eval e1 = True then eval e2 else eval e3 fun chkEval e = (ignore (typeOf e); eval e) (* will raise Err on type error *) Arbitrary-precision factorial function (libraries)[edit] In SML, the IntInf module provides arbitrary-precision integer arithmetic. Moreover, integer literals may be used as arbitrary-precision integers without the programmer having to do anything. The following program "fact.sml" implements an arbitrary-precision factorial function and prints the factorial of 120: fun fact n : IntInf.int = if n=0 then 1 else n * fact(n - 1) val () = print (IntInf.toString (fact 120) ^ "\n") and can be compiled and run with: $ mlton fact.sml $ ./fact 66895029134491270575881180540903725867527463331380298102956713523016335 57244962989366874165271984981308157637893214090552534408589408121859898 481114389650005964960521256960000000000000000000000000000 Numerical derivative (higher-order functions)[edit] Since SML is a functional programming language, it is easy to create and pass around functions in SML programs. This capability has an enormous number of applications. Calculating the numerical derivative of a function is one such application. 
The following SML function "d" computes the numerical derivative of a given function "f" at a given point "x": - fun d delta f x = (f (x + delta) - f (x - delta)) / (2.0 * delta); val d = fn : real -> (real -> real) -> real -> real This function requires a small value "delta". A good choice for delta when using this algorithm is the cube root of the machine epsilon.^[citation needed] The type of the function "d" indicates that it maps a "real" onto another function with the type "(real -> real) -> real -> real". This allows us to partially apply arguments. This functional style is known as currying. In this case, it is useful to partially apply the first argument "delta" to "d", to obtain a more specialised function: - val d = d 1E~8; val d = fn : (real -> real) -> real -> real Note that the inferred type indicates that the replacement "d" is expecting a function with the type "real -> real" as its first argument. We can compute a numerical approximation to the derivative of $f(x) = x^3-x-1$ at $x=3$ with: - d (fn x => x * x * x - x - 1.0) 3.0; val it = 25.9999996644 : real The correct answer is $f'(x) = 3x^2-1$, so $f'(3) = 27-1 = 26$. The function "d" is called a "higher-order function" because it accepts another function ("f") as an argument. Curried and higher-order functions can be used to eliminate redundant code. For example, a library may require functions of type a -> b, but it is more convenient to write functions of type a * c -> b where there is a fixed relationship between the objects of type a and c. A higher order function of type (a * c -> b) -> (a -> b) can factor out this commonality.
This is an example of the adapter pattern. Discrete wavelet transform (pattern matching)[edit] The 1D Haar wavelet transform of an integer-power-of-two-length list of numbers can be implemented very succinctly in SML and is an excellent example of the use of pattern matching over lists, taking pairs of elements ("h1" and "h2") off the front and storing their sums and differences on the lists "s" and "d", respectively: - fun haar l = let fun aux [s] [] d = s :: d | aux [] s d = aux s [] d | aux (h1::h2::t) s d = aux t (h1+h2 :: s) (h1-h2 :: d) | aux _ _ _ = raise Empty in aux l [] [] end; val haar = fn : int list -> int list For example: - haar [1, 2, 3, 4, ~4, ~3, ~2, ~1]; val it = [0,20,4,4,~1,~1,~1,~1] : int list Pattern matching is a useful construct that allows complicated transformations to be represented clearly and succinctly. Moreover, SML compilers turn pattern matches into efficient code, resulting in programs that are not only shorter but also faster. Implementations[edit] Many SML implementations exist, including: • MLton is a whole-program optimizing compiler that produces very fast code compared to other ML implementations. [1] • Poly/ML is a full implementation of Standard ML that produces fast code and supports multicore hardware (via Posix threads); its runtime system performs parallel garbage collection and online sharing of immutable substructures. • Isabelle/ML integrates parallel Poly/ML into an interactive theorem prover, with a sophisticated IDE (based on jEdit) both for ML and the proof language. • Standard ML of New Jersey (abbreviated SML/NJ) is a full compiler, with associated libraries, tools, an interactive shell, and documentation. [2] • Moscow ML is a light-weight implementation, based on the CAML Light runtime engine. It implements the full SML language, including SML Modules, and much of the SML Basis Library. [3] • TILT is a full certifying compiler for SML.
It uses typed intermediate languages to optimize code and ensure correctness, and can compile to typed Assembly language. • HaMLet is an SML interpreter that aims to be an accurate and accessible reference implementation of the standard. • The ML Kit integrates a garbage collector (which can be disabled) and region-based memory management with automatic inference of regions, aiming at real-time applications. Its implementation is based very closely on the Definition. • SML.NET allows compiling to the Microsoft CLR and has extensions for linking with other .NET code. • SML2c is a batch compiler and compiles only module-level declarations (i.e. signatures, structures, functors) into C. It is based on SML/NJ version 0.67 and shares the front end, and most of its run-time system, but does not support SML/NJ style debugging and profiling. Module-level programs that run on SML/NJ can be compiled by sml2c with no changes. • The Poplog system implements a version of SML, with POP-11, and optionally Common Lisp, and Prolog, allowing mixed language programming. In all cases, the implementation language is POP-11, which is compiled incrementally. It also has an integrated Emacs-like editor that communicates with the compiler. • SML# is an extension of SML providing record polymorphism and C language interoperability. It is a conventional native compiler and its name is not an allusion to running on the .NET framework. • Alice: an interpreter for Standard ML by Saarland University adding features for lazy evaluation, concurrency (multithreading and distributed computing via remote procedure calls) and constraint programming. All of these implementations are open-source and freely available. Most are implemented themselves in SML. There are no longer any commercial SML implementations. Harlequin once produced a commercial IDE and compiler for SML called MLWorks. The company is now defunct. MLWorks passed on to Xanalys and was later acquired by Ravenbrook Limited on 2013-04-26 and open sourced.
See also[edit] References[edit] 1. ^ Milner, R.; Tofte, M.; Harper, R.; MacQueen, D. (1997). The Definition of Standard ML (Revised). MIT Press. ISBN 0-262-63181-4. 2. ^ Okasaki, Chris (2000). "Breadth-First Numbering: Lessons from a Small Exercise in Algorithm Design". International Conference on Functional Programming 2000. ACM. External links[edit]
Welding black pipe (recently salvaged gas pipe) Not Recommended Originally posted by Bob61 View Post This is a request for information and safety concerns around welding salvaged black pipe from a residential renovation. It was being used to pipe natural gas from the city meter through a basement to various fixtures. I have an old but good AIRCO transformer welder with plenty of amps and was thinking 6010, 6011 rods. My idea is to cut the gas pipe into 15" lengths and weld them as rungs between 3"x2" angle to form a set of ramps that will get my car up a 16" incline needed to get it inside my shop. Any thoughts or suggestions would be most appreciated. While you may get away with this, a back of an envelope calculation says otherwise. I would not put a 4000 lb vehicle on 15" lengths of 3/4" SCH 40 black iron pipe. The load on each pipe will be approximate 1/4 of the total weight or 1000 lbs. If you, conservatively, consider the pipe to be a simple beam with a point load in the middle, the bending moment on it will be 3750 in-lbs. The moment of inertia of 3/4" SCH 40 pipe is 0.037 in^4. The bending stress on the pipe is then 53209 psi. The yield stress for black iron is around 35000 psi! (Please verify this.) Now, if you consider that the pipe is not really a simple beam and then do a more sophisticated analysis by considering the entire system, then I surmise that the stress on the pipe would be closer to 1/2 of 53209. Still, that doesn't leave much design margin. Lastly, keep in mind that the above calculation is for a static load. However, this ramp will have to handle a dynamic load which will be greater than a static load both vertically and longitudinally. Note that statically, the longitudinal load is very small. Dynamically, it is much larger. So beware. In summary, I think that in the end of the day, after a lot of work, you will have a structure that is both heavy and weak. I gave away my ramps and have no use for such. 
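Arizona Joe's numbers above check out. Here is a quick sketch of the same back-of-the-envelope calculation; every input is an assumption taken from his post (1000 lb per rung, 15 in simple span, handbook section properties for 3/4 in SCH 40 pipe, and the nominal 35,000 psi yield figure he himself asks readers to verify):

```python
# Sanity-check of the bending-stress estimate from the post above.
# Assumptions (from the thread, not measured): 4000 lb vehicle, one wheel
# per rung -> 1000 lb point load at midspan of a 15 in simply supported rung.
# Section properties for 3/4 in SCH 40 pipe: OD = 1.050 in, I = 0.037 in^4.

P = 1000.0        # load per rung, lb
L = 15.0          # span between angle rails, in
OD = 1.050        # pipe outside diameter, in
I = 0.037         # area moment of inertia, in^4
YIELD = 35000.0   # assumed yield stress for black iron, psi

M = P * L / 4.0        # max moment of a simple beam, point load at midspan, in-lb
c = OD / 2.0           # distance from neutral axis to outer fiber, in
sigma = M * c / I      # max bending stress, psi

print(f"moment = {M:.0f} in-lb")      # 3750 in-lb, as in the post
print(f"stress = {sigma:.0f} psi")    # ~53209 psi, as in the post
print(f"exceeds assumed yield: {sigma > YIELD}")
```

As the post says, the simple-beam model is conservative; welded ends and load sharing between adjacent rungs would lower the real stress, but not by enough to leave a comfortable margin, and dynamic loading pushes the other way.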
I've been wrenching on vehicles for decades. Instead, I use a floor jack to raise the vehicle and keep rims and 4x4s handy to support it safely. Lift with jack, toss rim under frame or appropriately reinforced unibody area, repeat. Ramps require wheels be installed on the vehicle so they interfere with maintenance. BTW I also worked at a used car lot shop where we pulled and stuffed many drivetrains while supporting the vehicle using the rim method. Jack stands aren't as stable as rims, which are even stable on earth or sand. We used rims in salvage yards for that reason. A really useful project (if you have or plan on having torches) is a cutting table. When in doubt, make equipment and make it mobile so it will be easy to take with when you move. Vise stands are handy, and vises live outdoors just fine for decades if you keep the screw greased. Actually the tread width will spread the load out well. I wouldn't hesitate to use 3/4" black pipe....especially if the legs are facing inwards on the angle leaving only a span of 9-11" depending if it's 2 or 3" angle. Just put a couple verticals along the way and you will be fine. Just be confident in your welds...Dave Originally posted by Arizona Joe View Post While you may get away with this, a back of an envelope calculation says otherwise. [...] In summary, I think that in the end of the day, after a lot of work, you will have a structure that is both heavy and weak. Originally posted by Bob61 View Post Thanks to everyone for the input but one thing no one has weighed in about is the last question on my second post. Can anyone tell me whether or not a long-used gas pipe, having contained flammable material over several decades, qualifies as a "flammable container" and is therefore risky to weld? Thanks again. If this pipe is disconnected and has been open for a while, there should be no problem welding on it. Cut it to length and get after it. Post pix of the build and finished results. About the pipe bending, being spaced only 6" apart, if a pipe started to bend the tire would then hit the pipe before it, after it, or both, and would then stop bending. This is why I said "The worst is it will bend slightly. I don't see it failing completely to cause damage." I agree it is not ideal material to use but I don't see any harm coming from it. 3" leg vertical & 2" leg horizontal facing in. I would not hesitate to weld on the pipe.
Even if it was flammable (which I would say it isn't but I'm not there to see it) both ends are open so it would not explode. The way you describe the ramps "to get in my shop" I'm guessing you need a short "bridge" to get over a slab of concrete and in the door rather than a set of ramps to change oil and the like. IF that is the case, I doubt you'll be happy using pipe for the ramp. Rolling your steer tires up will not be an issue but the drive tires will not work well climbing up something round. Look at car trailer ramps, nobody uses round. If that pipe gets the slightest bit damp all you will do is spin. Make it an angle instead, or even expanded metal. Give the tires something to grab a hold of. If this pipe is disconnected and has been open for a while, there should be no problem welding on it. Cut it to length and get after it. Post pix of the build and finished results. Is there going to be any pics of the build when you're done with it? I would like to know how they work out after you drive over them. Before and after pics maybe. Is there going to be any pics of the build when you're done with it? I would like to know how they work out after you drive over them. Before and after pics maybe. Thanks for the input. I'll let you know how they turn out and put up pics when I can. Do a little trial and error. Build your ramps and set them on a 4x4 (wood) on solid ground and drive said vehicles on them. If they fail then you haven't damaged your car and you can build bigger. I doubt the pipe would be saturated to the point of explosion. I probably wouldn't be making ramps out of said pipe but would use it for projects. Weld on... Just checking in to see if you started this project yet. If so, how is it turning out?
Even IF it is not successful, post the results so someone else may learn about it later. Just checking in to see if you started this project yet. If so, how is it turning out? Even IF it is not successful, post the results so someone else may learn about it later. Sometimes the posters get tied up in the moment and forget to respond. Not saying that about this one. But, some get 20 responses and don't even come back to their own post. Well, I guess it sometimes is hard to find time in today's world. MERRY CHRISTMAS Originally posted by BD1 View Post Sometimes the posters get tied up in the moment and forget to respond. Not saying that about this one. But, some get 20 responses and don't even come back to their own post. Well, I guess it sometimes is hard to find time in today's world. MERRY CHRISTMAS Well he has been on here today and did not say a word on this thread. I'm thinking the black iron pipe folded when he tried to drive up on it, and now he doesn't want to admit it. Either that or he has not tried it yet. I will keep checking back to find out the outcome, IF he posts it. Newbie with an idea. I'm new to welding and metal working but have plenty of mechanical and building experience. A thought I had to spread the weight over more than one piece of pipe and also to keep the car from bumping up and down each rung is to add a piece of flat bar up the middle of the rungs. Idk, maybe 1/4 or 3/16 by 2" wide. That should spread the load and make a smoother drive up, no? Originally posted by Bob61 View Post Thanks for the input. I'll let you know how they turn out and put up pics when I can. Is there any update on this project? I am getting curious as to whether this worked out or not. Have you even attempted to make these ramps or not? Have you scrapped the idea of making these? I've been thinking the same thing. I'd like to know how it worked out.
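The load-sharing idea raised earlier in the thread (tread width spanning the 6" rung spacing, or the newbie's flat-bar suggestion) can be sanity-checked the same way. The two-rung sharing factor below is purely an assumption; nobody in the thread measured a contact patch:

```python
# Hypothetical load-sharing estimate: same simple-beam model as before,
# but with each 1000 lb wheel load split across two rungs (assumed).
P_WHEEL = 1000.0                 # lb per wheel (poster's 4000 lb car / 4)
N_RUNGS = 2                      # rungs assumed to share one wheel load
L, OD, I = 15.0, 1.050, 0.037    # span (in), pipe OD (in), moment of inertia (in^4)

P = P_WHEEL / N_RUNGS                    # load per rung, lb
sigma = (P * L / 4.0) * (OD / 2.0) / I   # max bending stress, psi
print(f"stress per rung = {sigma:.0f} psi")
```

At roughly 26,600 psi this comes in under the ~35,000 psi yield figure quoted earlier, but with little margin and no allowance for dynamic loading, which matches the thread's consensus of "might bend, probably won't collapse".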
East Newark, NJ Calculus Tutor Find an East Newark, NJ Calculus Tutor ...I will come to your home or meet you at a mutually convenient location (such as the library). I am happy to work with individuals or groups. Group rates can be negotiated. I am available on weekends and some evenings. 10 Subjects: including calculus, statistics, algebra 2, geometry ...I'm now a PhD student at Columbia in Astronomy (have completed two Masters by now) and will be done in a year. I have a lot of experience tutoring physics and math at all levels. I have been tutoring since high school so I have more than 10 years of experience, having tutored students of all ages, starting from elementary school all the way to college-level. 11 Subjects: including calculus, Spanish, physics, geometry ...My tutoring services mainly focus on test preparation. Scoring in the 99th percentile on the math and chemistry sections of the GRE, GMAT, MCAT, SAT, SAT Subject and AP exams, I have also obtained high combined scores in the math and verbal sections of the SAT (99th percentile) and GRE (98th per... 24 Subjects: including calculus, chemistry, physics, biology ...I participate in NaNoWriMo every year! ** NOTE: I can't travel farther than 10 miles to meet with you, due to an increase in tutees. Sorry! **I got 5s in the following AP tests: Physics B, Physics C Mechanics, Physics C E&M. I have been designing websites in HTML and CSS for several years. (I ... 26 Subjects: including calculus, English, physics, writing ...I can speak from experience as a student that poor tutoring could not come in any worse form than this. I would like you to gain as much knowledge and appreciation for the subject to the extent that you are capable of! I can almost guarantee you will actually take notice of your personal growth! 
28 Subjects: including calculus, chemistry, writing, geometry
PIRSA - Perimeter Institute Recorded Seminar Archive A Proposed Test of the Local Causality of Spacetime Abstract: A theory governing the metric and matter fields in spacetime is locally causal if the probability distribution for the fields in any region is determined solely by physical data in its past, i.e. it is independent of events at space-like separated points. This is the case according to general relativity, and it is natural to hypothesise that it should also hold true in any theory in which the fundamental description of space-time is classical and geometric --- for instance, some hypothetical theory which stochastically couples a classical spacetime geometry to a quantum field theory of matter. On the other hand, a quantum theory of gravity should allow the creation of spacetimes which macroscopically violate local causality. I describe a feasible experiment to test the local causality of spacetime, and hence to test whether gravity is better described, in this respect, by general relativity or by quantum theory. The experiment will either identify a definite limit to the domain of validity of quantum theory or else produce significant evidence for the hypothesis that gravity is described by a quantum theory. Date: 21/07/2006 - 10:15 am
Thomas Calculus Solve any calculus differentiation problem with this calculus tutorial software. Calculus differentiation and calculus tutorial. Calculus Problem Solver can solve differentiation of any arbitrary equation and output the result. It can provide detailed step-by-step solutions to given differentiation problems in a tutorial-like format. Freeware download of Calculus Problem Solver 1.0, size 10.11 Mb. A series of solved calculus exercises, with step-by-step solutions shown. Freeware download of Calculus Help 1.0, size 387.07 Kb. Visual Calculus is an easy-to-use calculus grapher for graphing limits, derivative functions, integrals, 3D vectors, double integrals, triple integrals, series, ODEs etc. It can create 2D, 2.5D and 3D function graphs, animations and table graphs. 2D Features: explicit, implicit, parametric, and inequality; Cartesian and polar coordinate systems; curve. Free download of Visual Calculus For Academic 3.9, size 3.93 Mb. Infinite Calculus is a professional application designed to generate mathematics tests in just a few minutes. It was created by a teacher for teachers. Infinite Calculus includes a rich set of predefined questions that can be further customized to match your needs. Subjects such as limits, integrals, approximations and others are. Free download of Infinite Calculus 1.03.00, size 0 b. KeatsSoft Quick Calculus is a professional application designed to calculate derivatives, integrals, Taylor series, and limits of functions. Quick Calculus helps you quickly find the solutions of calculus problems, freeing you from the tedious manipulation of mathematical equations. Through an intuitive graphical interface, Quick Calculus. Free download of Quick Calculus 1.0.1, size 0 b. Cleantouch Calculus Solutions was developed to help students understand complicated mathematical calculations. Features: Graphic Representation of any f(x), Graphic Representation of any f(x,y), Surfaces given by parametric equations, Integrator. Freeware download of Cleantouch Calculus Solutions 1.0, size 6.14 Mb.
Freeware download of Cleantouch Calculus Solutions 1.0, size 6.14 Mb. If you are a math student or teacher or just a person who is interested in high-school algebra or college calculus, I would recommend you this program.Graphmatica presents an interactive algebraic equation grapher that can be used as an aide to plotting mathematical curves. Graphmatica remembers up to the last 999 equations you typed in or loaded. Free download of Graphmatica 2.0f, size 384.00 Thomas is a young wizard who lives with his uncle Artem in his magical pet shop. One day Thomas gets an F in 'Magical Spell Words' so he is ordered to study. But while carrying the book down the stairs, he accidentally falls and hits his uncle and makes a mess of things. His uncle is angry and so casts a spell on Thomas and now Thomas finds himself. Free download of Thomas and the Magical Words 1.10, size 16.61 Mb. Calculus Victus is an an advanced dietary tool, designed to help you follow the optimal diet with minimum impact on the variety and quality of food that you can eat . Calculus Victus lets you control what foods you want to eat and has a number of outstanding features that make shopping, cooking and monitoring dietary intake very simple.. Free download of Calculus Victus 2.0.3.1, size 3.77 Mb. OneStone Math is a revolutionary calculus program incorporating stunning 2 and 3D graphics and a powerful symbolic math engine in an easy to use format. An extensive and expandable feature set provides tools for graphical and numerical analysis of functions, derivatives, integrals and vectors. OneStone Math builds on the technology showcased in our. Free download of OneStone Math 1.4.1.1, size 1.97 Mb. SKF13's Thomas Fan Fiction Layouts Toolbar is a nice and easy to use browser tool. Using this powerful search engine, you can find important and fresh news, announcements, all instantly. The SKF13's Thomas Fan Fiction Layouts toolbar is compatible with the following Internet browsers: - Microsoft Internet. 
Freeware download of SKF13's Thomas Fan Fiction Layouts 6 3, size 0 b.

Visual Calculus is a grapher to compute and graph limits, derivatives, integrals, 3D vectors, partial derivative functions, series, ODEs, etc. Pre-calculus: functions, piecewise defined functions, even and odd functions, polynomials, rational functions. The program has the ability to set and modify the properties of coordinate graphs, animations and table. Free download of Visual Calculus 3. 7. 2001, size 3.54 Mb.

A powerful, easy-to-use equation plotter with numerical and calculus features: - Graph Cartesian functions, relations, and inequalities, plus polar, parametric, and ordinary differential equations. - Up to 999 graphs on screen at once. - New data plotting and curve-fitting features. - Numerically solve and graphically display tangent lines. Free download of Graphmatica 2.0g 1.0, size 398.34 Kb.

LCI is an interpreter for the lambda calculus. It supports many advanced features like integers, recursion, user-defined operators and multiple evaluation strategies. Freeware download of LCI - A lambda calculus interpeter 32, size 1.05 Mb.

This project will provide tools that leverage WS-CDL and Pi Calculus to build more robust Service Oriented Architectures (SOA). The tools are now released (and supported) through JBoss Tools (version > 3.2): http://www.jboss.org/tools. Freeware download of Pi Calculus for SOA 1.0, size 29.85 Mb.

The b-calculus musical analysis assistant and data miner. The b-calculus assistant 1.0 License - GNU General Public License (GPL). Freeware download of The b-calculus assistant 1.0, size 0 b.

This project will provide tools that leverage Pi Calculus to build more robust service implementations in Java, that can be verified against a global model description (as defined in the pi4soa project). Pi Calculus for Java 1.0 License - Apache License V2.0. Freeware download of Pi Calculus for Java 1.0, size 0 b.

CWC Simulator is a C++ implementation of CWC (Calculus of Wrapped Compartments). This package is basically a rewriting-based calculus for the representation and simulation of biological systems. Freeware download of CWC Simulator 0.6.1, size 0 b.

Brave Thomas roams around colorful fairy tale worlds. He meets many fantastic creatures who may look attractive but have hostile minds. You must help Thomas escape the treacherous enemies and traps, collect all magic chips and pass to another part of the fairy land. In other parts of the fairy land you will not have any time to rest, either. There. Free download of PacLands 1.2, size 5.75 Mb.

Fortran Calculus Compiler: Calculus-level computer languages are Fortran Calculus and PROSE. Both languages are based on what is called 'Automatic Differentiation' (AD). Calculus languages simplify computer coding to an absolute minimum; i.e., a mathematical model, constraints, and the objective function. Minimizing the amount of code allows the. Free download of Fortran Calculus Compiler 1, size 2.63 Mb.

Thomas Calculus Web Results

Includes latest news and all dates about Modern Talking and Thomas Anders's activities outside of the band.

Essays on Thomas' life and works, a biography, and links to related sites and resources.

Calculus-based problems range from easy to difficult.

Tutorials on calculus subjects ranging from precalculus to differential equations. Math tools and resource links.

An overview of calculus ideas. Covered are derivative rules and formulas as well as some basic integration rules.

This is an applied calculus tutorial. Some prior calculus knowledge might be helpful.

A basic calculus tutorial covering limits, derivatives and integrals. Uses a PDF format.
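Several of the tools listed above advertise symbolic differentiation with step-by-step output. None of their code is shown here, but the core idea for polynomials is small enough to sketch. The function below is illustrative only and is not taken from any of the listed products; it applies the power rule to a coefficient-list representation:

```python
def differentiate_poly(coeffs):
    """Differentiate a polynomial given as [c0, c1, c2, ...],
    meaning c0 + c1*x + c2*x**2 + ...
    Power rule: d/dx (c_n * x**n) = n * c_n * x**(n-1)."""
    return [n * c for n, c in enumerate(coeffs)][1:] or [0]

# d/dx (5 + 3x + 2x^3) = 3 + 6x^2
print(differentiate_poly([5, 3, 0, 2]))  # → [3, 0, 6]
```

A full "tutorial-like" solver of the kind described would additionally record which rule fired at each step, but the representation above is the usual starting point.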
{"url":"http://www.fileguru.com/apps/thomas_calculus","timestamp":"2014-04-18T13:16:26Z","content_type":null,"content_length":"26971","record_id":"<urn:uuid:8fb31e19-fdc2-465a-a256-71591ffef3b5>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00087-ip-10-147-4-33.ec2.internal.warc.gz"}
[Numpy-discussion] NumPy re-factoring project
Francesc Alted faltet@pytables....
Fri Jun 11 03:08:40 CDT 2010

On Friday 11 June 2010 02:27:18, Sturla Molden wrote:
> >> Another thing I did when reimplementing lfilter was "copy-in copy-out"
> >> for strided arrays.
> >
> > What is copy-in copy-out? I am not familiar with this term.
>
> Strided memory access is slow. So it often helps to make a temporary
> copy that is contiguous.

In my experience, this technique will only work well with strided arrays if you are going to re-use the data of these temporaries in cache, or your data is unaligned. But if you are going to use the data only once (and this is very common in NumPy element-wise operations), it is rather counter-productive for strided arrays.

For example, in numexpr we made a lot of different tests comparing "copy-in copy-out" and direct-access techniques for strided arrays. The result was that operations with direct access showed significantly better performance with strided arrays. On the contrary, for unaligned arrays the copy-in copy-out technique gave better results. Look at these times, where the arrays were unidimensional with a length of 1 million elements each, but the results can be extrapolated to larger, multidimensional arrays (the original benchmark file is bench/vml_timing.py):

Numexpr version: 1.3.2.dev169
NumPy version: 1.4.1rc2
Python version: 2.6.1 (r261:67515, Feb 3 2009, 17:34:37) [GCC 4.3.2 [gcc-4_3-branch revision 141291]]
Platform: linux2-x86_64
AMD/Intel CPU? True
VML available? True
VML/MKL version: Intel(R) Math Kernel Library Version 10.1.0 Product Build 081809.14 for Intel(R) 64 architecture applications

To start with, times between numpy and numexpr are very similar for very simple expressions (except for unaligned arrays, where "copy-in copy-out" works pretty well for numexpr):

******************* Expression: i2 > 0
numpy:             0.0016
numpy strided:     0.0037
numpy unaligned:   0.0086
numexpr:           0.0016   Speed-up of numexpr over numpy: 0.9512
numexpr strided:   0.0039   Speed-up of numexpr over numpy: 0.964
numexpr unaligned: 0.0042   Speed-up of numexpr over numpy: 2.0598

When doing some basic operations (mind that there are no temporaries here, so numpy should not be at a great disadvantage), direct access to strided data goes between 2x and 3x faster than numpy:

******************* Expression: f3+f4
numpy:             0.0060
numpy strided:     0.0176
numpy unaligned:   0.0166
numexpr:           0.0052   Speed-up of numexpr over numpy: 1.1609
numexpr strided:   0.0086   Speed-up of numexpr over numpy: 2.0584
numexpr unaligned: 0.0099   Speed-up of numexpr over numpy: 1.6785

******************* Expression: f3+i2
numpy:             0.0060
numpy strided:     0.0176
numpy unaligned:   0.0176
numexpr:           0.0031   Speed-up of numexpr over numpy: 1.9137
numexpr strided:   0.0061   Speed-up of numexpr over numpy: 2.8789
numexpr unaligned: 0.0078   Speed-up of numexpr over numpy: 2.2411

Notice how, until now, the absolute numexpr times for strided arrays (using the direct technique) are better than for the unaligned case (copy-in copy-out).

Also, when evaluating transcendental expressions (numexpr uses Intel's Vector Math Library, VML, here), direct access is again faster than NumPy:

******************* Expression: exp(f3)
numpy:             0.0150
numpy strided:     0.0155
numpy unaligned:   0.0222
numexpr:           0.0030   Speed-up of numexpr over numpy: 5.0268
numexpr strided:   0.0081   Speed-up of numexpr over numpy: 1.9086
numexpr unaligned: 0.0066   Speed-up of numexpr over numpy: 3.3454

******************* Expression: log(exp(f3)+1)/f4
numpy:             0.0486
numpy strided:     0.0563
numpy unaligned:   0.0639
numexpr:           0.0121   Speed-up of numexpr over numpy: 4.0332
numexpr strided:   0.0170   Speed-up of numexpr over numpy: 3.3067
numexpr unaligned: 0.0164   Speed-up of numexpr over numpy: 3.8833

However, now that I see the latter figures, I don't remember whether we checked if a copy-in copy-out technique would work faster in combination with VML. Judging by the better absolute times for unaligned arrays, I'd say chances are that performance in the strided scenario *might* benefit from using copy-in/copy-out. Mmh, that's worth a try...

-- Francesc Alted

More information about the NumPy-Discussion mailing list
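The two strategies under discussion are easy to reproduce outside numexpr. The sketch below uses plain NumPy (not the numexpr machinery itself) to contrast operating directly on a strided view against copying it into a contiguous buffer first. Both must give the same values; which one is faster is exactly the question the benchmarks above address.

```python
import numpy as np

def direct(view):
    # Direct access: the ufunc walks the strided view element by element.
    return np.log(np.exp(view) + 1.0)

def copy_in_copy_out(view):
    # Copy-in: materialize a contiguous temporary first, then operate on it.
    return np.log(np.exp(np.ascontiguousarray(view)) + 1.0)

base = np.linspace(0.0, 1.0, 1_000_000)
view = base[::2]                      # strided view: every other element, no copy
assert not view.flags['C_CONTIGUOUS']

r1, r2 = direct(view), copy_in_copy_out(view)
assert np.allclose(r1, r2)            # same result either way
```

Wrapping each call in `timeit` reproduces the trade-off Alted describes: the copy only pays off when the temporary is re-used while still in cache, or when the input is unaligned.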
{"url":"http://mail.scipy.org/pipermail/numpy-discussion/2010-June/050915.html","timestamp":"2014-04-18T13:37:53Z","content_type":null,"content_length":"8499","record_id":"<urn:uuid:c2048e39-43de-4249-814d-36cd3e75a254>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00186-ip-10-147-4-33.ec2.internal.warc.gz"}
Sausalito Math Tutor

Find a Sausalito Math Tutor

...In addition, I apply the rule of four to successfully understand the material: graphing and verbalizing the material, applying data tables, and manipulating the equations. I have worked with an instructor at city college grading papers. My teaching methods with calculus will be to focus on manip...
5 Subjects: including algebra 1, algebra 2, calculus, precalculus

...I have been tutoring for 3 years at the community college level. The subjects have ranged from pre-algebra to Calculus II. Along with taking my classes, I am teaching Algebra 1 this Fall at
8 Subjects: including algebra 1, algebra 2, vocabulary, prealgebra

...I have been using Excel for over a decade for classwork, lab work, and fun. I have used Excel as a database for organizing contact information, analyzing large scientific data sets, and creating clear visuals for presentations. I have helped other people learn how to use the many different formula functions, shortcut commands, and graphic capabilities.
13 Subjects: including algebra 1, biology, chemistry, prealgebra

...Right now I'm finishing my bachelor's in Pure Math from UC Berkeley. I've tutored family, students, and friends alike. Being a student, I haven't had the experience in teaching that others can offer, but I have a love of math that will hopefully make up for any shortcomings.
19 Subjects: including algebra 2, basketball, tennis, discrete math

With a BA in Economics from the University of Chicago, and an MFA in Creative Writing from the University of Georgia, I can tutor a wide variety of subjects. I have worked with kids of all ages through 826 Valencia, and I currently teach undergraduate writing at the University of San Francisco. I was also a Research Fellow at Stanford Law School, where I did empirical economics research.
39 Subjects: including algebra 2, calculus, chemistry, prealgebra
{"url":"http://www.purplemath.com/Sausalito_Math_tutors.php","timestamp":"2014-04-17T19:18:17Z","content_type":null,"content_length":"23823","record_id":"<urn:uuid:bd405db0-675d-408a-9c60-4c113c6f7fcd>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00140-ip-10-147-4-33.ec2.internal.warc.gz"}
SymMath Application

Effect of Activity Coefficients on Excess Functions and Phase Equilibria©

Baudilio Coto
Department of Chemical and Environmental Technology, ESCET, Rey Juan Carlos University, C/ Tulipan, Móstoles (Madrid) 28933
email: baudilio.coto@urjc.es

Immaculada Suárez
Department of Chemical and Environmental Technology, ESCET, Rey Juan Carlos University, Móstoles (Madrid) 28933

The comparison between ideal and non-ideal behavior in liquid mixtures in terms of experimental properties such as vapor pressures or boiling temperatures is easy to show. However, it is quite complex for students to understand such behavior in terms of activity coefficients, activities, or excess functions. In addition, a numerical problem arises because the expressions for the activity coefficients, although not difficult to manage even in quite simple models, are very cumbersome. Applying such models to calculate vapor-liquid equilibrium conditions in some cases introduces a numerical problem which has to be solved iteratively. As a consequence, from the academic point of view, there are two possibilities: have the students carry out the calculations by hand using very simple models or even ideal-solution behavior, or have them carry out very complex calculations with the help of commercial simulation software used as a black box, where the thermodynamic equations are far away from the view of the student.

In this work we reverse the point of view. We start from the activity coefficient as a known property. We examine how, from activity coefficients, some functions change from the "ideal solution" value, the excess functions differ from zero, and vapor-liquid equilibrium diagrams can show very complex behavior. The use of Maple to evaluate such functions, to solve the equilibrium equations, and to display calculated values allows the students both to obtain numerical values and to plot the functions in order to understand the meaning of the calculations.

The model used is a simple one in order to avoid the use of a very complex iterative procedure to solve the equilibrium equations. Activity and fugacity are calculated from activity-coefficient values, and some theoretical aspects of the Henry and Raoult limit laws are shown. Excess and mixing Gibbs energies are calculated and plotted. Finally, vapor-liquid equilibrium equations are solved to compute both isobaric and isothermal binary diagrams; calculations for given mixtures are carried out and clearly shown in diagrams.

Audiences: Upper-Division Undergraduate
Pedagogies: Computer-Based Learning
Domains: Physical Chemistry
Topics: Equilibrium, Mathematics / Symbolic Mathematics, Thermodynamics

Files:
- ExcessFunctions12.mw (Maple 12 computational document)
- ExcessFunctions12.pdf (read-only document)

JCE Subscribers only: name and password or institutional IP number access required.
Comments to: Baudilio Coto at baudilio.coto@urjc.es
©Copyright 2009 Division of Chemical Education, Inc., American Chemical Society.
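The abstract does not say which activity-coefficient model the Maple worksheet implements, so the sketch below is only a hypothetical illustration of the kind of calculation described: activity coefficients feed the excess Gibbs energy, and a bubble-point pressure follows from modified Raoult's law (ideal vapor phase). The two-parameter Margules model and the parameter values are assumptions, not taken from the article.

```python
import math

def margules_ln_gamma(x1, A12, A21):
    """ln(activity coefficients) from the two-parameter Margules model."""
    x2 = 1.0 - x1
    ln_g1 = x2**2 * (A12 + 2.0 * (A21 - A12) * x1)
    ln_g2 = x1**2 * (A21 + 2.0 * (A12 - A21) * x2)
    return ln_g1, ln_g2

def excess_gibbs_RT(x1, A12, A21):
    """Dimensionless excess Gibbs energy: G^E/RT = x1*ln(g1) + x2*ln(g2)."""
    ln_g1, ln_g2 = margules_ln_gamma(x1, A12, A21)
    return x1 * ln_g1 + (1.0 - x1) * ln_g2

def bubble_pressure(x1, A12, A21, P1sat, P2sat):
    """Modified Raoult's law: P = x1*g1*P1sat + x2*g2*P2sat."""
    ln_g1, ln_g2 = margules_ln_gamma(x1, A12, A21)
    return x1 * math.exp(ln_g1) * P1sat + (1.0 - x1) * math.exp(ln_g2) * P2sat
```

Two checks the students could make mirror the limit laws mentioned above: the activity coefficients go to 1 at the pure-component limits (Raoult), and with A12 = A21 = 0 the model collapses to an ideal solution, so the bubble pressure becomes the linear Raoult's-law line.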
{"url":"http://www.chemeddl.org/alfresco/service/org/chemeddl/symmath/app?app_id=141&guest=true","timestamp":"2014-04-18T13:34:18Z","content_type":null,"content_length":"9843","record_id":"<urn:uuid:98bdce59-ed91-47eb-aeb8-8a6d8ecf9513>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00476-ip-10-147-4-33.ec2.internal.warc.gz"}
Number of results: 1,232

If the physical quantity you want to measure is discrete (i.e., an integer multiple of some number), then you can measure it exactly. An example is electric charge. The electric charge is always a multiple of the elementary charge (minus the charge of the electron). So, if you ...
Saturday, September 1, 2007 at 8:12pm by Count Iblis

AED 201
In Michigan, a person can teach 7th and 8th grade with either an elementary or secondary certificate. I don't know of any elementary teachers who frequent this board.
Monday, April 7, 2008 at 5:10pm by Ms. Sue

Elementary Math
I don't know what C.G.W. is -- but this looks like elementary math. Divide the numerator by the denominator. 18/11 = 1.636363
Wednesday, July 4, 2012 at 8:07pm by Ms. Sue

Okay, would the answer be 6.40 V? I found the potential difference by dividing the work by the elementary charge and I got 4 x 10^-57. And multiply that by the elementary charge.
Friday, February 5, 2010 at 5:09pm by Priscilla

It must be an elementary question because I too have the same question on my homework. I was thinking the shape they are looking for is a trapezoid but still having problems getting to the .95 out of .30, 1.25 and 1.00
Thursday, October 22, 2009 at 12:24pm by Patrick

You can become a scientist by liking and doing well in science in elementary school. Take all the science classes in high school that you can. Then go to college.
Sunday, May 10, 2009 at 8:46pm by Ms. Sue

Women comprise 80.3% of all elementary school teachers. In a random sample of 300 elementary teachers, what is the probability that more than 3/4 are women?
Saturday, October 9, 2010 at 12:14am by Lori

AED 201
I need info. on the daily activities of a typical day for an educator (at the elementary or secondary level). The list and discussion of three time management tips should be attached as separate documents.
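The sampling question above (80.3% of elementary teachers are women; sample of n = 300; probability that more than 3/4 of the sample are women) is a standard normal-approximation-to-the-binomial exercise. No answer key accompanies the question, so the number below is simply what the approximation yields:

```python
import math

p, n = 0.803, 300
mu = n * p                           # binomial mean: 240.9
sigma = math.sqrt(n * p * (1 - p))   # binomial std dev: ~6.89
threshold = 0.75 * n                 # "more than 3/4" of 300 = 225

z = (threshold - mu) / sigma         # ~ -2.31
# P(X > 225) ≈ P(Z > z) = 0.5 * (1 - erf(z / sqrt(2)))
prob = 0.5 * (1 - math.erf(z / math.sqrt(2)))
print(prob)
```

Since 225 is well below the mean of 240.9, the probability comes out close to 1 (about 0.99); adding a continuity correction at 225.5 changes it only slightly.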
I am going to be an elementary teacher.Please help Monday, April 7, 2008 at 5:10pm by Dawn elementary algebra if its elementary algebra dont you draw some columns and guess and test? otherwise, these are the equations: x(3.00) + y(6.50) = $125.50 x+y = 29 tickets Sunday, December 14, 2008 at 11:43pm by s 1st grade The letters need to be capitalized I go to Scott Elementary School or I attend Scott Elementary School Monday, November 22, 2010 at 4:50pm by Ayana I have never had a student post an entire lab online. Do you want me to do it for you? I will be happy to critique your thinking or work. This is a very elementary lab, I have done this for elementary school teachers. Sunday, February 1, 2009 at 11:02am by bobpursley Elementary ????? Math Elementary math? Really???? Monday, November 30, 2009 at 8:27pm by Ms. Sue elementary statistics • Resources: Ch.1 & 2 of Elementary Statistics • Complete the following problems from Ch. 1 of Elementary Statistics: o Case Study on p. 15 o Real Statistics/Real Decisions on p. 28 • Complete the following problems from Ch. 2 of Elementary Statistics: o Case Study on p. 92 o ... Friday, April 16, 2010 at 6:10pm by Anonymous The first thing I'd do is set up magnet schools with each emphasizing an educational specialty. Parents can choose any elementary school in the district for their child to attend. In my city, we have an elementary school that teaches some classes in both Spanish and English. ... Sunday, November 28, 2010 at 7:11pm by Ms. Sue That sounds pretty advanced for an elementary class. One way you could start to solve this is to use a few different values for n and see what you get. Using n=1,2,3 we get p=7,13,19. Do that for a bunch of numbers (a spreadsheet program like Excel or Openoffice Calc will help... Wednesday, October 20, 2010 at 3:17pm by Gray chemistry kinetics 1. Statement-I : Fractional order reactions are not elementary reactions. 
Statement-II : For an elementary reaction, order must be the same as molecularity. Are these statements true? Give reason.
Thursday, May 24, 2012 at 1:06am by swa

Suppose a newly found elementary particle has an average lifetime, T = 30 ns, when at rest. In order for this new elementary particle to travel 15 m in one lifetime, how fast must it be moving relative to the velocity of light? In other words, find v/c.
Thursday, October 12, 2006 at 10:56pm by slien

PHYSICS-Help required
I personally think you need a tutor, if in fact you are lost on such simple problems. None of these are difficult, very elementary. If you do not understand such a wide array of elementary questions, you need a tutor. I am not going to do the work so you can copy. Show your ...
Wednesday, July 21, 2010 at 10:47pm by bobpursley

Polk Elementary School has three classes for each grade level, kindergarten through fifth grade. If there is an average of 27 students per classroom, about how many students attend Polk Elementary?
Wednesday, September 14, 2011 at 2:55pm by felix

The equation A + 2 B = C + D describes an elementary reaction, which takes place in a single step. Thus, the rate law must be a. rate = k[A]^2 b. rate = k[B]^2 c. rate = k[A][B] d. rate = k[A][B]^2 e. none of the above represents the rate law for this elementary reaction.
Friday, February 24, 2012 at 8:19pm by bob

1st grade
i go to scott elementary school
That will be changed to: I go to Scott Elementary School.
Monday, November 22, 2010 at 4:50pm by Tati

Just realized it said "elementary" so I don't know if you know how to work with variables. Here is another way:

Bld A   Bld B   Total
3       2       5
6       4       10
12      8       20
24      16      40
30      20      50
36      24      60

Notice 3/2 = 6/4 = ... = 36/24
Wednesday, December 16, 2009 at 12:12am by Reiny

Both sentences are correct. But perhaps a better sentence would be -- "The capacious room was in one of the nine elementary schools in the area."
I am writing a personal narrative for my AP English class. One sentence I had written went as follows: The capacious room belonged...
Wednesday, May 30, 2007 at 8:56pm by Ms. Sue

Dawn, you are on the wrong path for a teacher of children. I suggest you immediately re-evaluate your career goals. You won't change your thinking nor habits, nor will the need for books and the love of books ever be eliminated in young minds. You are wasting time and money ...
Tuesday, February 17, 2009 at 7:15pm by bobpursley

lesson planning
A simple Google search for lesson plans elementary yielded this page. Sort through them to see which plans meet your criteria. Please note that your plans will need to meet your state and district objectives as well as tie in with the textbooks you'll need to use. http://www....
Wednesday, January 30, 2008 at 9:27pm by Ms. Sue

A small droplet of oil with mass of 1.84×10^-15 kg is held suspended in a region of uniform electric field directed upward with a magnitude of 6625 N/C. Is the excess charge on the droplet positive or negative? NEGATIVE
b. How many excess elementary charges reside on the droplet...
Thursday, September 1, 2011 at 10:47pm by TP

Identify and analyze the different crimes for which students are most at risk for in K-12, include some of the differences in victimization found across elementary, middle school, high schools, and college. Provide reasons why you think these crimes occur within the schools. ...
Monday, February 20, 2012 at 12:12am by sharday Every day, a bakery owner buys either butter(B) or ghee(G).The type of item purchased in consecutive days is to be recorded.List the sample space.If a different type of item is purchased than in the previous day, we say that there is a switch.Let X denote the number of ... Saturday, August 13, 2011 at 5:23am by Zinnia I look for reliability, details and accuracy to judge whether or not information is reliable for example, the education reporter for the Fort Worth Star-Telegram wrote a short item directing residents of a suburban neighborhood to a school board meeting at Parkview Elementary ... Wednesday, March 26, 2008 at 6:09pm by Jean elementary brainteaser My college teacher gave us a brainteaser for elementary kids and I can not figure it out! next letter: s,m,h,d,w,m,_? ok seconds, minutes, hours, days, weeks, months, ?? then what? next is years years? Ok, yes. Years is the next term in the sequence I gave, but the letter "y" ... Sunday, August 27, 2006 at 5:10pm by elizabeth While wearing uniforms does not necessarily guarantee academic success; not wearing uniforms deters students from concentrating on their studies, dress codes can have a positive and negative effect on students. The statement above is my thesis statement on dress codes in ... Tuesday, May 20, 2008 at 11:58pm by Phil Identify and analyze the different crimes for which students are most at risk for in K-12, include some of the differences in victimization found across elementary, middle school, high schools, and college. why you think these crimes occur within the schools. Mention at least ... Tuesday, February 21, 2012 at 7:07pm by sharday Thursday, February 5, 2009 at 2:25pm by al Monday, May 2, 2011 at 2:55pm by elementary kid 2*3 = 6, not 5 Thursday, October 25, 2012 at 10:30pm by PsyDAG Tuesday, October 16, 2012 at 6:43pm by Erick Wednesday, March 5, 2014 at 8:19pm by elementary How can WE answer that for YOU? 
We have not been in your classes. You might try some of the following links for ideas: http://search.yahoo.com/search?fr=mcafee&p= what+one+would+learn+in+elementary+science Sra Wednesday, June 8, 2011 at 4:42am by SraJMcGin Since this is not my area of expertise, I searched Google under the key words "elementary math games": http://www.google.com/search?client=safari&rls=en&q=elementary+math+games&ie=UTF-8&oe=UTF-8 In the future, you can find the information you desire more quickly, if you use ... Wednesday, January 27, 2010 at 6:52pm by PsyDAG ok now let me try and put it together and see if i can get it Sharp Elementary School Principal Ellen Hayes said her school failed to meet state education requirements because she was continually undermined by the teacher’s union. Sharp elementary school principal ellen hayes ... Tuesday, May 28, 2013 at 2:24pm by Afranko Elementary Math Monday, October 2, 2006 at 5:02pm by Anonymous Elementary Math . . . Thursday, October 19, 2006 at 4:56pm by Anonymous Wednesday, October 8, 2008 at 7:59pm by Victoria And your question is? Thursday, October 30, 2008 at 5:34pm by DrBob222 What's the question? Thursday, October 30, 2008 at 5:34pm by Cecilia Yes, you should! Wednesday, October 8, 2008 at 7:59pm by Cecilia Elementary Math Thank you Thursday, September 3, 2009 at 4:12pm by Lisa buckeye elementary 4 and 3...5 and 2....6 and 1 Wednesday, March 2, 2011 at 5:02pm by sam elementary functions Tuesday, November 2, 2010 at 3:00pm by Hannah Elementary Math Monday, August 17, 2009 at 6:51pm by dsfg Elementary Statistics Wednesday, June 8, 2011 at 7:44pm by Mgraph 6 + 5 = 5 + 6 Wednesday, October 31, 2012 at 7:56pm by Ms. Sue Wednesday, October 31, 2012 at 9:13pm by swetha thank u so much Wednesday, October 31, 2012 at 9:13pm by swetha That chocolate is from me. Why can't YOU do this? Tuesday, October 30, 2012 at 6:49pm by Caitlyn elementary math thank you.. 
Friday, January 25, 2013 at 1:29am by anna WOW Math Having no idea what you meant by that, I googled "partial sums method" and found this: http://www.cheney268.com/Math/Math%201/Elementary%20School/Computational%20Strategies/PartialSumsMethod.htm I sure hope they don't teach you that !!!! Monday, October 20, 2008 at 8:55pm by Reiny English Expression In the U.S., first grade means the first grade of only elementary school. Children in first grade are 6 or 7 years old. The last two are common responses for 7th graders. We usually use grade, rather than class for elementary and middle school students. Class is more common in... Friday, April 18, 2008 at 12:25pm by Ms. Sue Elementary Math You are not smart! :O Thursday, October 19, 2006 at 4:56pm by memememe yes i would but you do not have too Wednesday, October 8, 2008 at 7:59pm by Anonymous elementary algebra thank u guys! Sunday, December 14, 2008 at 11:43pm by lisa how were YOU in elementary school? Friday, May 1, 2009 at 2:27pm by y912f how becoming a scientist Sunday, May 10, 2009 at 8:46pm by jose I want to teach in Elementary Saturday, July 18, 2009 at 4:46pm by Anonymous What is the rule in math? Wednesday, September 9, 2009 at 3:58pm by jamie explain regrouping Wednesday, September 9, 2009 at 3:58pm by Anonymous what is the definition of quality Tuesday, September 15, 2009 at 4:54pm by liz Elementary Statistics help Monday, October 29, 2007 at 3:34pm by Anonymous Elementary Statistics help Monday, October 29, 2007 at 3:34pm by asdddddd elementary algebra -2+4(x-1)= -7-4x Sunday, December 14, 2008 at 11:43pm by Teri elementary math 1 1/2 = 3/2 1/2 * 3/2 = 3/4 Monday, August 16, 2010 at 6:23pm by Ms. Sue If P(B)=1/5, compute P(not B) Please Help?? Sunday, December 5, 2010 at 4:29pm by Shan elementary math P = (1/5)B Sunday, December 5, 2010 at 4:29pm by Ms. 
Sue banyan elementary Wednesday, January 19, 2011 at 6:30pm by Vanessa Elementary Math Thursday, January 27, 2011 at 9:48am by Anonymous elementary math Friday, February 18, 2011 at 8:57pm by devin 300 and 200000 Thursday, March 11, 2010 at 1:47pm by Anonymous Math: Elementary why is it obvious that 3/4 x 8/5 = 6/5? Monday, March 17, 2008 at 11:39pm by doris Elementary math College Tuesday, March 30, 2010 at 1:33pm by Anonymous elementary stats 1/4 * 11 = ? Thursday, February 23, 2012 at 11:18pm by PsyDAG How can i get on xtramath Monday, October 22, 2012 at 7:23pm by elementary girly elementary (incomplete) What was question 1? Thursday, October 25, 2012 at 10:55pm by PsyDAG elementary (Incomplete) What is question 1? Thursday, October 25, 2012 at 10:34pm by PsyDAG A pentagon has 5 sides. 5 + 3 = ? Monday, October 29, 2012 at 5:43pm by PsyDAG Elementary Statistics 5*4*3= 60 Wednesday, June 8, 2011 at 7:44pm by jason elementary math what is a partial sum? Sunday, September 30, 2007 at 7:58pm by Casey Elementary Math what is defintion for median Monday, October 2, 2006 at 5:02pm by Anonymous estimated quotient is too small? Thursday, October 30, 2008 at 5:34pm by kylie Saturday, January 31, 2009 at 3:20pm by MAXWELL Elementary Math True or False if 3/n,then 9/n Friday, August 7, 2009 at 11:22pm by Shantel word problem elementary reposed Thank You Saturday, August 8, 2009 at 9:06pm by Heather Elementary Math Why isn't the answer B??? Sunday, August 9, 2009 at 8:25pm by Ms. 
Sue Elementary Math is it true 3|n,then9|n Friday, August 7, 2009 at 11:22pm by Anonymous elementary math for teachers clearly B Thursday, October 15, 2009 at 10:52pm by Reiny Elementary Math MTH 213 Monday, November 30, 2009 at 8:27pm by Shanta bridgedale elementary I love a pas okay Friday, January 30, 2009 at 11:20am by emely gregg elementary what is intervles on a numberline Tuesday, January 30, 2007 at 4:22pm by lydia Elementary Math dam confusing Monday, October 2, 2006 at 5:02pm by Anonymous
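Two of the physics questions in the search results above have closed-form answers worth checking. For the particle with proper lifetime T = 30 ns that must travel 15 m, the lab-frame range is d = βc·γT, so βγ = d/(cT). For the suspended oil droplet, balancing weight against the electric force gives q = mg/E, and dividing by the elementary charge counts the excess charges. A quick numerical check (g = 9.81 m/s² and e = 1.602×10⁻¹⁹ C are assumed constants, not given in the original posts):

```python
import math

# Relativistic lifetime: d = beta*c * gamma * T0  =>  beta*gamma = d / (c*T0)
c, T0, d = 3.0e8, 30e-9, 15.0
bg = d / (c * T0)                     # beta*gamma = 5/3
beta = bg / math.sqrt(1.0 + bg**2)    # v/c ≈ 0.857

# Oil droplet: q*E = m*g  =>  number of elementary charges n = m*g / (E*e)
m, E_field, g, e = 1.84e-15, 6625.0, 9.81, 1.602e-19
n = m * g / (E_field * e)             # ≈ 17 excess electrons
print(beta, round(n))
```

The upward field with a negative charge giving a downward force is consistent with the "NEGATIVE" answer quoted in the thread, since the electric force must oppose gravity only in magnitude here if the field points up and the droplet hangs in equilibrium.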
{"url":"http://www.jiskha.com/search/index.cgi?query=elementary","timestamp":"2014-04-18T09:52:47Z","content_type":null,"content_length":"29653","record_id":"<urn:uuid:eb1a54ec-7c75-4080-935b-b77ebb8d95f2>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00170-ip-10-147-4-33.ec2.internal.warc.gz"}
Deligne, Pierre René (born October 3, 1944, Brussels, Belgium), Belgian mathematician who was awarded the Fields Medal at the International Congress of Mathematicians in Helsinki, Finland, in 1978, and the Abel Prize in 2013, for his work in algebraic geometry. Deligne received a bachelor's degree in mathematics (1966) and a doctorate (1968) from the Free University of Brussels. After a year at the National Foundation for Scientific Research, Brussels, he joined the Institute of Advanced Scientific Studies, Bures-sur-Yvette, France, in 1968. In 1984 he became a professor at the Institute for Advanced Study, Princeton, New Jersey, U.S.

In 1949 the French mathematician André Weil made a series of conjectures concerning the zeta functions of algebraic varieties. One of these was the equivalent of the Riemann hypothesis for varieties over finite fields. Deligne used a new theory of cohomology, called étale cohomology, drawing on ideas originally developed by Alexandre Grothendieck some 15 years earlier, and applied it with great success to solve the deepest of the Weil conjectures. Deligne's work provided important insights into the relationship between algebraic geometry and algebraic number theory. He also developed an area of mathematics called weight theory, which has applications in the solution of differential equations. Later he proved some conjectures named for the British topologist Sir William Vallance Douglas Hodge.

Deligne's publications include Équations différentielles à points singuliers réguliers (1970; "Differential Equations with Regular Singular Points"); Groupes de monodromie en géométrie algébrique (1973; "Monodromy Groups in Algebraic Geometry"); Modular Functions of One Variable (1973); with Jean-François Boutot et al., Cohomologie étale (1977; "Étale Cohomology"); and, with J. Milne, A. Ogus, and K. Shih, Hodge Cycles, Motives, and Shimura Varieties (1982).
The Quantum Pontiff

Delta X Delta P

The science blogosphere is abuzz about Lisa Randall's op-ed article in the New York Times. See comments at Hogg's Universe, Not Even Wrong, Lubos Motl's Reference Frame, and Cosmic Variance. The article just made me happy: read the following paragraph:

"The uncertainty principle" is another frequently abused term. It is sometimes interpreted as a limitation on observers and their ability to make measurements. But it is not about intrinsic limitations on any one particular measurement; it is about the inability to precisely measure particular pairs of quantities simultaneously. The first interpretation is perhaps more engaging from a philosophical or political perspective. It's just not what the science is about.

There is nothing that makes my Monday mornings brighter than a correct popular explanation of the uncertainty principle.

6 Responses to Delta X Delta P

1. I assume it would be the least to expect from a practising physicist to understand the Uncertainty Principle correctly, and be able to communicate said comprehension in a legible fashion. In other news, I'm not allowed to tell you my advisor's prize until Tuesday.

2. Well, you'd be amazed how many "practicing physicists" get the uncertainty principle wrong! I won't name names.

3. That's an excellent article. There needs to be more like it in the unintelligent design debate... Oh yea, Dave, I was happy to see her correctly describe the uncertainty principle!

4. As far as I know the interpretation was messy since the very beginning, as Pauli pointed out in his book about 'wave mechanics' (circa 1933). According to Pauli, quantum theorists (and also Heisenberg) used different terms, depending on the specific meaning (or interpretation) they were talking about.
Ungenauigkeit = inexactness
Unbekanntheit = unknowability
Unsicherheit = uncertainty
Unbestimmtheit = indeterminacy

I do not know if, at present, we know the ultimate answers to questions like these:
- Do uncertainty relations apply to a single system or to ensembles (all in the same state)?
- Do u.r. imply a mere limitation on making certain kinds of measurements simultaneously?
- Do u.r. imply a limitation on the possible knowledge obtainable about a system?
- Do u.r. imply a limitation on the properties that can be ascribed to a quantum system?

5. scerir, interesting to hear about the uncertainty in the uncertainty principle! Here are my answers to your questions (for fun, not religion). I'm sure these answers are very naive, and I'd love to hear others' comments on these questions!

- Do uncertainty relations apply to a single system or to ensembles (all in the same state)?

The uncertainty relations apply to the statistics of multiple experiments with the same preparation. They have no meaning for a single experiment, since, from a single experiment, I can never even compute these statistics.

- Do u.r. imply a mere limitation on making certain kinds of measurements simultaneously?

Yes. Well, you ask whether they imply other things; certainly one can go from uncertainty relations to other interesting statements about quantum theory. But this is almost as broad as saying "what else does quantum theory imply?" Also, while I like Lisa's statement of the uncertainty principle, I really also have a problem with phrasing the principle as about "simultaneous measurements." Certainly I have no idea what this concept means. For example, consider experiments for a measurement of position and for a measurement of momentum. I can do one before the other and then take the limit as the time between these measurements goes to zero. But is it really possible to measure these two things at once?
The problem, of course, is that most people think about the uncertainty principle in terms of limits on the values we ascribe to a system (like: in classical theory we can ascribe position and momentum). But this interpretation seems to me to be way off the mark. Still, I'm much more content with the simultaneously-measurable language than with most other language I hear about the uncertainty principle.

- Do u.r. imply a limitation on the possible knowledge obtainable about a system?

Certainly there is the notion of information disturbance in learning about the value of the amplitudes of a quantum state.

- Do u.r. imply a limitation on the properties that can be ascribed to a quantum system?

Not as far as I understand. This question is the realm of the Kochen-Specker theorem or even Bell inequalities. But, like I said above, certainly non-commutativity is central to the KS theorem. It would be interesting to try to go from a theory with only uncertainty relations (quantum theory is not the only one!) and then derive a Kochen-Specker violation.

One problem with all these questions is that the uncertainty principle is a simple consequence of the non-commutativity of operators on our Hilbert space. And pretty much everything in quantum theory that is interesting is a result of non-commutativity. Indeed, everything that is interesting in our classical world is also non-commutative (turning the steering wheel before I push the accelerator yields a far different consequence for the grandma in front of the car than if I had done these things in the opposite order!). Thus it seems arbitrary to say that the uncertainty principle has a fundamental status for helping answer the questions you ask. This would be like trying to use the fact that orbits in Newtonian gravity are ellipses to try to explain deep properties of Newtonian gravity. Like I said, my answers are naive.
One question I've always found fascinating is the limits of the arguments put forth by Heisenberg in his microscope thought experiment.

6. Why do physicists like Michio Kaku make provocative statements as they do? In the Science Channel series "Atom", he states, "If you want to see a physicist turn green, ask about the problem of measurement". Is he one of the "practicing physicists" who's got it wrong?
Differential equation in implicit form

January 25th 2009, 09:56 AM

Differential equation in implicit form

I was wondering if someone could give me some idea as to where to start with the following question?

Find in implicit form the general solution of the differential equation $dy/dx = -3 y^3 (1 + e^{-3x} )(3x - e^{-3x} + 3)$

Any help would be greatly appreciated.

January 25th 2009, 10:29 AM

Your equation is separable: $\frac{dy}{y^3} = -3 (1 + e^{-3x} )(3x - e^{-3x} + 3) dx$. Expand the RHS and integrate term by term.

January 25th 2009, 10:34 AM

Divide through by -y^3 and integrate: $-\int\frac{dy}{y^3} = \int 3(1 + e^{-3x} )(3x - e^{-3x} + 3)dx$. The x-integral is easy, because that function is the derivative of $\tfrac12(3x - e^{-3x} + 3)^2$.

January 25th 2009, 11:51 AM

Thanks to you both, it's clear now.

January 26th 2009, 04:15 AM

I take it $-\int\frac{dy}{y^3}$ is the same as $-\int\frac{1}{y^3}dy$, in which case I find the explicit form and solve for y=1/2 and x=0. But the next part asks for the explicit form, which I have had a go at; when I use y=1/2 and x=0 I get a different answer. Is this correct?

January 26th 2009, 10:01 AM

Quote:

I take it $-\int\frac{dy}{y^3}$ is the same as $-\int\frac{1}{y^3}dy$, in which case I find the explicit form and solve for y=1/2 and x=0. But the next part asks for the explicit form, which I have had a go at; when I use y=1/2 and x=0 I get a different answer. Is this correct?

When you do the integration you should get the implicit form of the solution as $y^{-2} = (3x-e^{-3x}+3)^2 +$ const. If the initial condition is that y=1/2 when x=0, then the constant is 0. You can then take the square root of both sides to get the explicit solution as $y = \pm\frac1{3x-e^{-3x}+3}$. Finally, check the initial condition again to see that you need the + sign, not the − sign.

January 26th 2009, 12:26 PM

Thanks, I'd been looking at it for ages and going round and round in circles.
March 15th 2009, 01:48 PM

Quote:

When you do the integration you should get the implicit form of the solution as $y^{-2} = (3x-e^{-3x}+3)^2 +$ const. If the initial condition is that y=1/2 when x=0, then the constant is 0. You can then take the square root of both sides to get the explicit solution as $y = \pm\frac1{3x-e^{-3x}+3}$. Finally, check the initial condition again to see that you need the + sign, not the − sign.

I might be being totally blonde here, but isn't the square root of y^-2 = y^-1?
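For completeness, the solution discussed in this thread is easy to double-check symbolically. The following is a quick sketch (using Python's SymPy; it is not part of the original thread) verifying that the explicit solution with the + sign satisfies both the ODE and the initial condition y(0) = 1/2:

```python
import sympy as sp

x = sp.symbols('x')
u = 3*x - sp.exp(-3*x) + 3   # the expression whose square appears in the implicit form
y = 1/u                      # explicit solution with the + sign

# dy/dx should equal the right-hand side of the ODE evaluated at this y
lhs = sp.diff(y, x)
rhs = -3*y**3*(1 + sp.exp(-3*x))*u

print(sp.simplify(lhs - rhs))  # 0, so y solves the equation
print(y.subs(x, 0))            # 1/2, matching the initial condition
```

The difference simplifies to 0 because dy/dx = -u'/u² = -3(1 + e^{-3x})/u², which is exactly -3y³(1 + e^{-3x})·u.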
Literal values may be given for any of the data types supported in SQL statements, wherever the term "literal" appears in the syntax diagrams.

String Literals

String literals may be represented in two ways, as character strings or hexadecimal strings.

Note: An empty string (i.e. '') is a defined value. It is not NULL.

□ Hexadecimal string literal

A hexadecimal string literal is a string specified as a sequence of hexadecimal values, enclosed in apostrophes and preceded by the letter X. The sequence of values must contain an even number of positions (every character in the string literal is represented by a two-position value), and may not contain any characters other than the digits 0-9 and the letters A-F. The case of letters (and of the preceding X) is irrelevant. The code values for characters are those which apply in the host system.

For character and hexadecimal string literals, a separator may be used within the literal to join two or more substrings. Separators are described in Characters. This is particularly useful when a string literal extends over more than one physical line, or when control codes are to be combined with character sequences.

ASCII codes are used for the hexadecimal literals:

│ String           │ Value         │
│ 'ABCD'           │ ABCD          │
│ 'Mimer''s'       │ Mimer's       │
│ 'data'<LF>'base' │ database      │
│ X'0D0A09'        │ <CR><LF><TAB> │
│ X'0D0A'<LF>'09'  │ <CR><LF><TAB> │

Note: Since the SQL92 standard states that a hexadecimal string is a bit-string and Mimer SQL currently does not support a BIT data type, it is advisable to explicitly type cast hexadecimal strings to the CHARACTER data type to assure forward compatibility. This is done with the CAST specification described in Assignments.

Numerical Integer Literals

A numerical integer literal is a signed or unsigned number that does not include a decimal point. The sign is a plus (+) or minus (-) sign immediately preceding the first digit. In determining the precision of an integer literal, leading zeros are significant (i.e.
the literal 007 has precision 3).

Numerical Decimal Literals

A numerical decimal literal is a signed or unsigned number containing exactly one decimal point. In determining the precision and scale of a decimal literal, both leading and trailing zeros are significant (i.e. the literal 003.1400 has precision 7, scale 4).

Numerical Floating Point Literals

Floating point literals are represented in exponential notation, with a signed or unsigned integer or decimal mantissa, followed by the letter E, followed in turn by a signed or unsigned integer exponent. The base for the exponent is always 10. The exponent zero may be used. The case of the letter E is irrelevant.

In determining the precision of a floating point literal, leading zeros in the mantissa are significant (i.e. the literal 007E4 has precision 3).

DATE, TIME and TIMESTAMP Literals

A literal that represents a DATE, TIME or TIMESTAMP value consists of the corresponding keyword shown below, followed by text enclosed in single quotes (''). The following formats are allowed:

DATE 'date-value'
TIME 'time-value'
TIMESTAMP 'date-value <space> time-value'

A date-value has the following format: year-value - month-value - day-value

A time-value has the following format: hour-value : minute-value : second-value

where second-value has the following format: whole-seconds-value [. fractional-seconds-value]

The year-value, month-value, day-value, hour-value, minute-value, whole-seconds-value and fractional-seconds-value are all unsigned integers. A year-value contains exactly 4 digits, a fractional-seconds-value may contain up to 9 digits and all the other components each contain exactly 2 digits.
TIMESTAMP '1997-02-14 10:59:23.4567'
TIMESTAMP '1928-12-25 23:59:30'

Interval Literals

An INTERVAL literal represents an INTERVAL value and consists of the keyword INTERVAL followed by text enclosed in single quotes, in the following format:

INTERVAL '[+ | -] interval-value' interval-qualifier

The interval-value text must be a valid representation of a value compatible with the interval data type specified by the interval-qualifier, see Named Interval Data Types.

□ If the interval precision includes the YEAR and MONTH fields, the values of these fields should be separated by a minus sign.
□ If the interval precision includes the DAY and HOUR fields, the values of these fields should be separated by a space.
□ If the interval precision includes the HOUR field and another field of lower significance (MINUTE and/or SECOND), the values of these fields should be separated by a colon.
□ The number of digits in the most significant field must not exceed the leading precision defined by the interval-qualifier. If a leading precision is not explicitly specified in the interval-qualifier, the default (2) applies.
□ The SECOND field may have a fractional part, whose maximum length is defined by the interval-qualifier.

INTERVAL '1:30' HOUR TO MINUTE
INTERVAL '1000 10:20:30.123' DAY(4) TO SECOND(3)
INTERVAL '-199' YEAR(3)          **evaluates to -199
INTERVAL '199' YEAR              **Invalid : default leading precision is 2
INTERVAL '5.555' SECOND(1,2)     **evaluates to 5.55
INTERVAL '-5.555' SECOND(1,2)    **evaluates to -5.55
INTERVAL '19 23' DAY TO MINUTE   **Invalid : no minutes in literal

Standard Compliance

This section summarizes standard compliance concerning literals.

│ Standard  │ Compliance │ Comments │
│ X/Open-95 │            │ The presence of a newline character (<LF>) between substrings in a character or hexadecimal string literal is not mandatory in Mimer SQL. │
│ SQL92     │ EXTENDED   │ Hexadecimal string literals are of type BINARY because Mimer SQL does not support the BIT data type. │
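The date/time literal grammar above can be made concrete with a quick pattern check. This is an illustrative Python regex sketch of the formats described in this section; the regex and names are mine, not part of Mimer SQL:

```python
import re

# TIMESTAMP 'date-value <space> time-value', where date-value is yyyy-mm-dd
# and time-value is hh:mm:ss with up to 9 fractional-second digits.
DATE_PART = r"\d{4}-\d{2}-\d{2}"
TIME_PART = r"\d{2}:\d{2}:\d{2}(?:\.\d{1,9})?"
TIMESTAMP_RE = re.compile(rf"{DATE_PART} {TIME_PART}")

print(bool(TIMESTAMP_RE.fullmatch("1997-02-14 10:59:23.4567")))  # True
print(bool(TIMESTAMP_RE.fullmatch("1928-12-25 23:59:30")))       # True
print(bool(TIMESTAMP_RE.fullmatch("97-2-14 10:59")))             # False
```

Note that this only checks the digit layout, not semantic validity (month ranges, leap years, etc.), which the database itself enforces.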
dia_matrix.toarray(order=None, out=None)[source]

Return a dense ndarray representation of this matrix.

Parameters:

order : {'C', 'F'}, optional
    Whether to store multi-dimensional data in C (row-major) or Fortran (column-major) order in memory. The default is 'None', indicating the NumPy default of C-ordered. Cannot be specified in conjunction with the out argument.

out : ndarray, 2-dimensional, optional
    If specified, uses this array as the output buffer instead of allocating a new array to return. The provided array must have the same shape and dtype as the sparse matrix on which you are calling the method. For most sparse types, out is required to be memory contiguous (either C or Fortran ordered).

Returns:

arr : ndarray, 2-dimensional
    An array with the same shape and containing the same data represented by the sparse matrix, with the requested memory order. If out was passed, the same object is returned after being modified in-place to contain the appropriate values.
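A short usage sketch (the matrix and values are illustrative, not taken from the SciPy docs):

```python
import numpy as np
from scipy.sparse import dia_matrix

# A small diagonal-storage matrix: main diagonal and first superdiagonal.
# In the DIA format, data[i, j] is placed at position (j - offsets[i], j).
data = np.array([[1, 2, 3],
                 [4, 5, 6]])
offsets = np.array([0, 1])
m = dia_matrix((data, offsets), shape=(3, 3))

dense = m.toarray()          # C-ordered ndarray by default
print(dense)
# [[1 5 0]
#  [0 2 6]
#  [0 0 3]]

f = m.toarray(order='F')     # same values, Fortran-ordered in memory

# Reuse a preallocated buffer (must match shape and dtype):
buf = np.zeros((3, 3), dtype=m.dtype)
res = m.toarray(out=buf)     # res is buf, filled in place
```

Passing out avoids an allocation in tight loops; note that order and out cannot be combined, per the parameter description above.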
Thorofare Statistics Tutors

...Thank you for your interest and I hope to hear from you soon! I currently sing Sop.1 in PVOP. I teach reading music in both treble and bass clefs, time and key signatures.

58 Subjects: including statistics, reading, geometry, biology

...Routinely score 800/800 on practice tests. Able to help students improve reading comprehension through specific test-taking strategies and pinpoint necessary areas of vocabulary improvement. Scored 800/800 on January 26, 2013 SAT Writing exam, with a 12 on the essay.

19 Subjects: including statistics, calculus, geometry, algebra 1

...My background is in engineering and business, so I use an applied math approach to teaching. I find knowing why the math is important goes a long way towards helping students retain information. After all, math IS fun! In the past 5 years, I have taught differential equations at a local university.

13 Subjects: including statistics, calculus, geometry, algebra 1

...If you don't get that grade, I will refund your money, minus any commission I paid to this website. Please note that I only tutor college students, advanced high school students, returning adult students, and those studying for standardized tests such as SAT, GRE, and professional licensure exam...

11 Subjects: including statistics, calculus, ACT Math, precalculus

...I am a graduate of the College of William & Mary (BA - Mathematics) and the NJ Institute of Technology (MS - Applied Science). But, my greatest qualifications come from years of experience in the real world. I look forward to meeting you and helping you to achieve your educational goals. Learn Alg...

23 Subjects: including statistics, English, calculus, algebra 1
Photoionization Theory for Coherent and Incoherent Light

In the following, the semi-classical approach for photoionization formulated in my paper Scattering of Radio Waves by High Atomic Rydberg States is developed further such as to be applicable not only for coherent light but also for light which is not sufficiently coherent in order to ionize an atom within the coherence time. In the paper, the latter case was (although not being significant for the results) taken into account by applying a corresponding reduction factor to the photoionization cross section, but this was somewhat of an ad-hoc procedure and not consistently derived from an appropriate interaction model. On this page, the 'low coherency' case is instead being developed in a straightforward manner from the coherent case by assuming the photoionization to be due to a stepwise accumulation of (pseudo)-energy before the actual transition from the ground state into the continuum can take place.

Coherent Photo Ionization

The interaction of light with atoms can generally be described as a forced oscillator. If the oscillation is in phase with the driving field of the electromagnetic wave throughout, this is exactly equivalent to the acceleration of an electron by a constant electric field E (where E is the amplitude of the wave). The electron will therefore be accelerated to the velocity v within a time

(1) T[c] = v/a = v/(eE/m) ,

where e is the elementary charge. Since v is related to the energy ε by

(2) v = √(2ε/m)   (m = electron mass) ,

and ε has to be taken as identical to hν (ν = wave frequency), this yields

(3) T[c] = √(2hν·m)/(eE) .

After evaluating the constants, this gives the numerical expression

(4) T[c] = 7·10^-18·√ν /E [sec]   (in Gaussian cgs-units, i.e. ν [Hz], E [statvolt/cm] (= 3·10^4 V/m)) .

For sunlight (assuming E = 10^-2 statvolt/cm), this amounts to about 10^-8 sec, i.e.
the almost instantaneous release of electrons in the photoeffect can well be explained within the wave theory of light if a proper interaction model is used (of course, the wave frequency has to be high enough here to enable photoionization in the first place). (Note: the assumption E = 10^-2 statvolt/cm should be merely considered to be exemplary here, as the electric field strength E of the radiation field is in fact unknown; it has been derived here by equating the well-known 'energy' flux of the sun with cE^2/4π, which however (as pointed out in the introduction to my page Wave and Particle Theory of Light applied to the Photoelectric Effect on my site physicsmyths.org.uk) is theoretically flawed, and ambiguous anyway due to other physical parameters affecting the measured intensity as addressed on this page.)

However, it is obvious that only if the acceleration is uniform is the actual ionization time equal to T[c]. This requires that T[c] is short compared to the coherence time τ[c] of the electromagnetic wave or the time between particle collisions disturbing the ionization process. Otherwise one has to modify the argument (see below).

Incoherent Photo Ionization

If the time T[c] required to ionize the atom is not shorter than the coherence time of the electromagnetic wave or the time between particle collisions disturbing the ionization process, the above consideration can not be applied anymore as the acceleration of the electron can not be uniform due to phase jumps occurring. In this case the necessary ionization energy can only be reached in a stepwise manner. If τ[c] is the coherence time of the wave field, then the associated increase in velocity Δv during this time interval is

(5) Δv = (eE/m)·τ[c] .
Associating the related energy increase in a correspondence-like fashion by

(6) Δε = m/2·(Δv)^2 = (τ[c]·eE)^2/(2m) ,

the total time required for incoherent photoionization is then

(7) T[i] = τ[c]·hν/Δε = 2hν·m/(eE)^2/τ[c] ,

and after evaluating the constants

(8) T[i] = 5·10^-35·ν /E^2/τ[c] [sec]   (in Gaussian cgs-units, i.e. ν [Hz], E [statvolt/cm] (= 3·10^4 V/m)) .

In general, Δε and thus T[i] will of course not have a fixed value but merely represent statistical averages, as both E and τ[c] will show statistical variations.

Applied to the sunlight example, the incoherent photoionization still yields ionization times of the order of 10^-8 sec (assuming a coherence time τ[c] = 10^-8 sec (this value is based on the coherence length of an individual atomic emission in the solar photosphere (which is determined by the electronic collision rate), and this is also assumed to be the coherence time for the total radiation field; this assumption is supported by corresponding numerical computations (see Coherence Length of Wave Field Formed by Superposition))). It is important to note however that T[i] should, in contrast to T[c], in general merely be considered as the time required to establish a statistical equilibrium situation rather than the time required for a particular light pulse to release a photoelectron. The point is that in the course of the interaction with the sequence of coherent light pulses with frequency ν, the energy of the (pseudo-)oscillator increases stepwise close to the threshold value hν, and it will then just take one further light pulse of duration <τ[c] to ionize the atom and release the photoelectron (at which point the energy hν−ε (where ε is the ionization energy) is turned into the kinetic energy of the photoelectron) (see the schematic diagram below).
Schematic illustration of photoionization by coherent and incoherent light

Due to the circumstance that in this way the electrons can be brought to a 'pre-ionized' state close to the threshold energy hν by sufficiently incoherent radiation, the apparent 'reaction' time between arrival of the light pulse and release of the photoelectron can in general actually appear to be even faster than for the case of coherent photoionization (if τ[c] < T[c]). This 'seeding' of the photoionization process could well explain measurements that have been made in the past (e.g. by Lawrence and Beams) which show the release of photoelectrons to be instantaneous to within a few nanoseconds (for presumably very small light intensities).

One should also note that, although the incoherency of light leads to a reduced amount of photoionization, the electromagnetic wave field should still be fully absorbed in the process as it is still doing (pseudo-)work on the atomic electron.

In any case, the apparent intensity of the observed object will be inversely proportional to either T[c] or T[i] (dependent on whether the light has to be considered as coherent or incoherent), because a faster ionization obviously enables more electrons to be ionized within a given time. For incoherent light, this would therefore recover the E^2 dependence of the intensity of light in classical electrodynamics mentioned at the beginning. The dependence on the coherence time of the light τ[c] in this case also means that the radiation would not produce any ionization at all if τ[c] = 0, as T[i] = ∞ then. This circumstance could for instance well resolve 'Olbers' Paradox' for a steady-state universe. One has to bear in mind however that the field strength E is not directly known in most cases but is in fact derived from the photoionization rate over the classical relationship mentioned above (see the paragraph below Eq.(4)).
This means that any reduction of the photoionization rate due to incoherency will already (wrongly) be interpreted as a corresponding reduction of the field strength E. In these cases one consequently has to use the formula for the coherent photoionization, as otherwise the reduction would effectively be applied twice (see also my paper about Scattering of Radio Waves by High Atomic Rydberg States (Chpt. 2.4), where the theory for the coherent photoionization was formulated in the first place).
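As a numerical cross-check, the formulas (4) and (8) can be evaluated directly for the sunlight example (a Python sketch; the frequency, field strength and coherence time are the illustrative values assumed in the text, not measured quantities):

```python
import math

# Gaussian cgs units, per Eqs. (4) and (8)
nu = 1.0e15      # wave frequency [Hz], visible light
E = 1.0e-2       # field amplitude [statvolt/cm], the solar estimate used above
tau_c = 1.0e-8   # assumed coherence time [sec]

T_c = 7e-18 * math.sqrt(nu) / E       # Eq. (4): coherent ionization time
T_i = 5e-35 * nu / (E**2 * tau_c)     # Eq. (8): incoherent ionization time

print(f"T_c = {T_c:.1e} sec")  # ~2e-08 sec
print(f"T_i = {T_i:.1e} sec")  # ~5e-08 sec
```

With these inputs both times come out in the 10^-8 sec range, consistent with the order-of-magnitude statements in the text.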
What exactly is current?

I can't answer the first question, but as to "current flow" versus "current", the former is just bad grammar. It can be useful to specify the direction in which the current flows. This probably comes from the usual way people are introduced to current, as one of three objects in the Ohm/Kirchhoff laws. Often, an understanding that electrical current is a flow of electrons comes after the basic formulae, and after we have learned to talk about it.
Determinants - for matrices greater than 3x3, or non-square

March 1st 2013, 04:56 AM

Determinants - for matrices greater than 3x3, or non-square

The other algebra forum stated that it was NOT the place for questions about matrices that were larger than 2x2; however, this question does not really seem to belong here either, so let me apologize in advance. That being said, my textbook, after explaining determinants, minors, and cofactors for 3x3 matrices, just drops the topic. I bet that is not the end of the story though. I'm a little curious about non-square matrices, or matrices larger than 3x3. Does anyone know of any sources or links where I could continue along this vein?

March 1st 2013, 05:45 AM

Re: Determinants - for matrices greater than 3x3, or non-square

Quote:

The other algebra forum stated that it was NOT the place for questions about matrices that were larger than 2x2; however, this question does not really seem to belong here either, so let me apologize in advance. That being said, my textbook, after explaining determinants, minors, and cofactors for 3x3 matrices, just drops the topic. I bet that is not the end of the story though. I'm a little curious about non-square matrices, or matrices larger than 3x3. Does anyone know of any sources or links where I could continue along this vein?

As far as I'm aware, finding the determinant of a matrix is only defined for square matrices. For matrices larger than 3x3, you can simply extend the process which you used for 3x3 matrices. It shouldn't be too difficult (although it can be very tedious...). Just as for 3x3 matrices you reduced to a set of three 2x2 matrices (with cofactors), you can do the same for an nxn matrix (reducing to a set of (n-1)x(n-1) matrices, each of which you reduce to a set of (n-2)x(n-2) matrices, and so on, until you arrive at (many!) 2x2 matrices which you can evaluate directly). In fact, the definition of the determinant is often given inductively.
March 1st 2013, 07:37 AM

Re: Determinants - for matrices greater than 3x3, or non-square

That is very helpful. Many thanks.
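The inductive reduction described in the reply can be written down directly. Here is a small illustrative Python sketch (not from the thread) of the cofactor expansion along the first row; note it runs in O(n!) time and is only meant to show the definition, not to be a practical method (LU decomposition is used in practice):

```python
def det(m):
    """Determinant by cofactor (Laplace) expansion along the first row."""
    n = len(m)
    if n == 1:
        return m[0][0]
    if n == 2:
        return m[0][0]*m[1][1] - m[0][1]*m[1][0]
    total = 0
    for j in range(n):
        # Minor: delete row 0 and column j, then recurse on the smaller matrix
        minor = [row[:j] + row[j+1:] for row in m[1:]]
        total += (-1)**j * m[0][j] * det(minor)
    return total

# A 4x4 example
a = [[2, 0, 1, 3],
     [1, 1, 0, 2],
     [0, 3, 1, 1],
     [1, 0, 2, 0]]
print(det(a))  # -6
```

Each level of recursion reduces an nxn matrix to n matrices of size (n-1)x(n-1), exactly as the reply describes.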
In Brief ERCIM News No.40 - January 2000

The challenge took place in a set of about 10^29 points on an elliptic curve chosen by Certicom. To solve the problem, the participants first computed 119,248,522,782,547 (more than 10^14) points using open-source software developed by Harley. Among these points, they screened 127,492 distinguished points and collected them on an Alpha Linux workstation at INRIA, where further processing revealed two twin points. Finally Harley computed the solution using information associated with these two points, thus nailing the problem. The team struck it lucky, finding the solution in less than a third of the expected time. The distributed computation was run by 195 volunteers, on a total of 740 computers, over 40 days. Nevertheless the computing power used, around 16,000 MIPS-years, was twice as much as that used for the factorization of RSA-155 announced by Herman te Riele of CWI and his colleagues on 26 August 1999 (see ERCIM News No. 39).

This result strengthens the case of those who contend that a crypto system based on ECDL (Elliptic Curve Discrete Logarithms) is stronger, even when using short keys, than RSA with much longer keys, although it does not prove that assertion. Rather, it indicates that at the current state of the art, the best mathematical tools and algorithms known for cracking ECDL take longer to run than the best tools known for cracking RSA.

Out of the $5000 prize money, the team members gave $4000 to the Free Software Foundation.

Further information at http://cristal.inria.fr/~harley/ecdl/.
Class: LinearModel Linear hypothesis test on linear regression model coefficients p = coefTest(mdl) p = coefTest(mdl,H) p = coefTest(mdl,H,C) [p,F] = coefTest(mdl,...) [p,F,r] = coefTest(mdl,...) p = coefTest(mdl) computes the p-value for an F test that all coefficient estimates in mdl are zero, except for the intercept term. p = coefTest(mdl,H) performs an F test that H*B = 0, where B represents the coefficient vector. p = coefTest(mdl,H,C) performs an F test that H*B = C. [p,F] = coefTest(mdl,...) returns the F test statistic. [p,F,r] = coefTest(mdl,...) returns the numerator degrees of freedom for the test. Input Arguments mdl Linear model, as constructed by fitlm or stepwiselm. H Numeric matrix having one column for each coefficient in the model. When H is an input, the output p is the p-value for an F test that H*B = 0, where B represents the coefficient vector. C Numeric vector with the same number of rows as H. When C is an input, the output p is the p-value for an F test that H*B = C, where B represents the coefficient vector. Output Arguments p p-value of the F test (see Definitions). F Value of the test statistic for the F test (see Definitions). r Numerator degrees of freedom for the F test (see Definitions). The F statistic has r degrees of freedom in the numerator and mdl.DFE degrees of freedom in the denominator. Test Statistics The p-value, F statistic, and numerator degrees of freedom are valid under these assumptions: ● The data comes from a model represented by the formula mdl.Formula. ● The observations are independent conditional on the predictor values. Suppose these assumptions hold. Let β represent the (unknown) coefficient vector of the linear regression. Suppose H is a full-rank matrix of size r-by-s, where s is the number of terms in β. Let v be a vector the same size as β. 
The following is a test statistic for the hypothesis that Hβ = v:

F = (Hβ̂ − v)′ (H C H′)⁻¹ (Hβ̂ − v) / r

Here β̂ is the estimate of the coefficient vector β in mdl.Coefs, and C is the estimated covariance of the coefficient estimates in mdl.CoefCov. When the hypothesis is true, the test statistic F has an F distribution with r and u degrees of freedom, where u = mdl.DFE.

Make a linear model of mileage as a function of the weight, weight squared, and model year from the carsmall data set. Test the coefficients to see if all should be zero. Load the data and make a table, where the model year is an ordinal variable.

load carsmall
tbl = table(MPG,Weight);
tbl.Year = ordinal(Model_Year);
mdl = fitlm(tbl,'MPG ~ Year + Weight + Weight^2');

Test the model for significant differences from a constant model.

p = coefTest(mdl)

There is no doubt that the model contains more than the intercept term.

Test the Weight^2 coefficient in a linear model of mileage as a function of the weight, weight squared, and model year. Load the data and make a table, where the model year is an ordinal variable.

load carsmall
tbl = table(MPG,Weight);
tbl.Year = ordinal(Model_Year);
mdl = fitlm(tbl,'MPG ~ Year + Weight + Weight^2');

Test the significance of the Weight^2 coefficient. To do so, find the coefficient corresponding to Weight^2.

ans =
    '(Intercept)'    'Weight'    'Year_76'    'Year_82'    'Weight^2'

Weight^2 is the fifth (final) coefficient. Test the significance of the Weight^2 coefficient.

p = coefTest(mdl,[0 0 0 0 1])

The values of commonly used test statistics are available in the mdl.Coefficients table. anova provides a test for each model term.

See Also
anova | LinearModel | linhyptest
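The test described on this page — a quadratic form in Hβ̂ − v weighted by the inverse of H·C·H′ and referred to an F distribution with r and mdl.DFE degrees of freedom — can be reproduced outside MATLAB. A minimal NumPy/SciPy sketch (the function name `coef_test` and the toy regression are mine, not part of the MATLAB API):

```python
import numpy as np
from scipy import stats

def coef_test(beta_hat, C, dfe, H, v=None):
    """F test of H @ beta = v, given coefficient estimates beta_hat,
    their estimated covariance C, and residual degrees of freedom dfe.
    Returns (p, F, r) like the three outputs described above."""
    H = np.atleast_2d(np.asarray(H, dtype=float))
    r = H.shape[0]                      # numerator degrees of freedom
    v = np.zeros(r) if v is None else np.asarray(v, dtype=float)
    d = H @ beta_hat - v
    F = float(d @ np.linalg.solve(H @ C @ H.T, d)) / r
    p = stats.f.sf(F, r, dfe)           # upper tail of F(r, dfe)
    return p, F, r

# A tiny OLS fit to exercise the test: y = 1 + 2*x + noise.
rng = np.random.default_rng(0)
X = np.column_stack([np.ones(30), rng.normal(size=30)])
y = X @ np.array([1.0, 2.0]) + rng.normal(size=30)
beta = np.linalg.lstsq(X, y, rcond=None)[0]
resid = y - X @ beta
dfe = 30 - 2
C = (resid @ resid / dfe) * np.linalg.inv(X.T @ X)
p, F, r = coef_test(beta, C, dfe, [0, 1])   # H picks out the slope
```

For a single restriction (r = 1) this F statistic equals the square of the usual t statistic for that coefficient, which gives a quick consistency check.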
{"url":"http://www.mathworks.com/help/stats/linearmodel.coeftest.html?nocookie=true","timestamp":"2014-04-19T07:02:16Z","content_type":null,"content_length":"52425","record_id":"<urn:uuid:cf497606-3757-470d-8440-db490db122fe>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00339-ip-10-147-4-33.ec2.internal.warc.gz"}
Classical Dynamics of Particles and Systems, by Jerry Marion and Stephen Thornton • Author: Stephen T. Thornton (Author), Jerry B. Marion (Author) • Title: Classical Dynamics of Particles and Systems • Amazon Link: http://www.amazon.com/Classical-Dyna...8796417&sr=8-1 • Prerequisites: Calculus, Ordinary and Partial Differential Equations, Introductory Physics • Level: Undergraduate Upper Level Table of Contents: 5th Ed 1. Matrices, Vectors, and Vector Calculus. 2. Newtonian Mechanics--Single Particle. 3. Oscillations. 4. Nonlinear Oscillations and Chaos. 5. Gravitation. 6. Some Methods in the Calculus of Variations. 7. Hamilton's Principle--Lagrangian and Hamiltonian Dynamics. 8. Central-Force Motion. 9. Dynamics of a System of Particles. 10. Motion in a Noninertial Reference Frame. 11. Dynamics of Rigid Bodies. 12. Coupled Oscillations. 13. Continuous Systems: Waves. 14. The Special Theory of Relativity. Selected References.
{"url":"http://www.physicsforums.com/showthread.php?p=4248938","timestamp":"2014-04-17T18:31:21Z","content_type":null,"content_length":"52631","record_id":"<urn:uuid:252abc5d-c8ae-4342-ad47-2f5d885ef661>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00548-ip-10-147-4-33.ec2.internal.warc.gz"}
Braingle: '`t` to the 1/8th power' Brain Teaser

`t` to the 1/8th power

Math brain teasers require computations to solve.

Puzzle ID: #187
Category: Math
Submitted By: trojan5x
Corrected By: cnmne

How might a mathematician describe a number `t` held to the following condition: When (t+1) is subtracted from t and the result is raised to the 1/8th power.

Answer: Imaginary Number. Whenever (t+1) is subtracted from `t`, you will simply be left with -1. -1 raised to the 1/8th power is the same as taking the positive root of something. When taking the positive root of any negative number, you are left with an imaginary number.

Phyllis: What is a rooth? (Feb 01, 2001)

jmanheim: Your second sentence is syntactically incorrect. (Feb 02, 2001)

thephirm: Your claim that taking the positive root of any negative number results in an imaginary number is incorrect. For example, the cube root of -1 is -1 (-1 * -1 * -1 = -1). However the even root of any negative number will be imaginary. (Oct 18, 2001)

canu: The words in the teaser look like English words, but put together they have no meaning in English or in math. (Jul 13, 2004)

Sane: I did the subtraction wrong and came out with: 1/100 000 000 (Mar 20, 2005)

stephiesd: i read it wrond after i did the subtraction, i read it as the -8th power, resulting in -1. anyhow, we haven't covered imaginary numbers yet. i think they're next chapter. (Dec 09, 2005)

mr_brainiac: I don't think that the answer is really an imaginary number, I think it's more likely an imaginary imaginary number, or maybe it's an imaginary imaginary imaginary number, or maybe it's ... (Jan 11, 2006)

lessthanjake789: wrong... all of you. the number t is a positive, real number. let t = 100, t+1 = 101. t-(t+1) = -1, raised to the 1/8th is, truly an imaginary number, but as you can see, "t" is ANY real number, positive or negative. sorry, but poorly thought out teaser (Jan 29, 2006)

Methlos: I thing i might put my head under a pillow for a while (Mar 12, 2006)

MadDog72: I see four problems with this teaser: 1) It asks for the number t, not the value of (t-(t+1))^(1/8). 2) Why bother with t? Isn't it obvious that if t+1 is subtracted from t, the result is -1? 3) The answer is vague. I actually computed the answer, only to find that all you wanted was 'imaginary'. 4) It's not an imaginary number! An imaginary number is a number of the form b*i, where i^2=-1. The answer is of the form a + b*i, where a is nonzero (there are actually 8 answers, but they are all of this form). The answer is complex and not real, but not imaginary either. (Mar 23, 2006)

Krystle: wow, i'm not good at math at all (Jul 23, 2006)

Qrystal: I figured that a mathematician would call 't' TRIVIAL. After all, it got subtracted out of the situation right away. There must be a way this teaser could be improved so that it asks what it means to ask... although of course MadDog72 is absolutely correct in stating that [-1]^[1/8] is technically considered complex, not imaginary. Does anyone care that [-1]^[1/8] has 8 answers? Let A = cos(pi/ Then [-1]^[1/8] = ( A + B*i, B + A*i, -B + A*i, -A + B*i, -A - B*i, -B - A*i, B - A*i, A - B*i ) Anyways, I don't care if anyone else doesn't care; I wrote it because I care. So there. (Jul 29, 2006)

Qrystal: eeek my answer got invaded by sunglass dudes! That should say: Let A = cos[pi/8]. Let B = sin[pi/8]. (Jul 29, 2006)

dimez_00: i figured this: (t+1)-t=? ?^1/8 therefore i got t+1-t=1 1^1/8=the 8th root of 1 which is 1 (Oct 22, 2006)

ChristheGreat: Hmm.. you did't include the fact that pi to the 3rd power minus the radius of a duck's butt plus the deepness of a toilet = 5 times the 3rd trigonometric function plus the amount of time it takes for the final star to impact the earth causing free cake for everyone! (Nov 12, 2006)

EA_KLEIN: whoever wrote this has some loose marbles in his keppie (Mar 15, 2007)

jamesbond: ya ryt (Apr 19, 2007)

SRB_1807: I love 2 eat ducks.. (Aug 17, 2011)
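The eight complex eighth roots of −1 that MadDog72 and Qrystal discuss can be checked numerically; a quick sketch:

```python
import cmath

# The eight complex solutions of z**8 == -1, as in Qrystal's comment:
# z_k = cos((2k+1)*pi/8) + i*sin((2k+1)*pi/8), for k = 0..7.
roots = [cmath.exp(1j * (2 * k + 1) * cmath.pi / 8) for k in range(8)]
for z in roots:
    assert abs(z**8 + 1) < 1e-9   # each really is an eighth root of -1

# Python's own choice for (-1)**(1/8) is the principal root, which is
# complex with nonzero real AND imaginary parts -- so, as MadDog72 says,
# the answer is complex but not (purely) imaginary.
principal = (-1) ** (1 / 8)
assert abs(principal - roots[0]) < 1e-9
assert principal.real > 0 and principal.imag > 0
```

This also confirms MadDog72's point 4: every root has the form a + b·i with a ≠ 0.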
{"url":"http://www.braingle.com/brainteasers/teaser.php?id=187&op=2&comm=1","timestamp":"2014-04-20T21:09:05Z","content_type":null,"content_length":"40815","record_id":"<urn:uuid:52357b6c-58c8-49d8-ba63-ed4f9df49ba3>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00517-ip-10-147-4-33.ec2.internal.warc.gz"}
Minimum-Width Grid Drawings of Plane Graphs Results 1 - 10 of 27 , 2002 "... This paper investigates the following question: Given an integer grid phi, where phi is a proper subset of the integer plane or a proper subset of the integer 3d space, which graphs admit straight-line crossing-free drawings with vertices located at the grid points of phi? We characterize the trees t ..." Cited by 38 (4 self) This paper investigates the following question: Given an integer grid phi, where phi is a proper subset of the integer plane or a proper subset of the integer 3d space, which graphs admit straight-line crossing-free drawings with vertices located at the grid points of phi? We characterize the trees that can be drawn on a two dimensional c * n × k grid, where k and c are given integer constants, and on a two dimensional grid consisting of k parallel horizontal lines of infinite length. Motivated by the results on the plane we investigate restrictions of the integer grid in 3 dimensions and show that every outerplanar graph with n vertices can be drawn crossing-free with straight lines in linear volume on a grid called a prism. This prism consists of 3n integer grid points and is universal -- it supports all outerplanar graphs of n vertices. This is the first algorithm that computes crossing-free straight line 3d drawings in linear volume for a non-trivial family of planar graphs. We also show that there exist planar graphs that cannot be drawn on the prism and that extension to an n × 2 × 2 integer grid, called a box, does not admit the entire class of planar graphs. , 1996 "... We provide O(n)-time algorithms for constructing the following types of drawings of n-vertex 3-connected planar graphs: • 2D convex grid drawings with (3n) × (3n/2) area under the edge L1-resolution rule; • 2D strictly convex grid drawings with O(n³) × O(n³) area under the edge resolution ru ..."
Cited by 29 (10 self) We provide O(n)-time algorithms for constructing the following types of drawings of n-vertex 3-connected planar graphs: • 2D convex grid drawings with (3n) × (3n/2) area under the edge L1-resolution rule; • 2D strictly convex grid drawings with O(n³) × O(n³) area under the edge resolution rule; • 2D strictly convex drawings with O(1) × O(n) area under the vertex-resolution rule, and with vertex coordinates represented by O(n log n)-bit rational numbers; • 3D convex drawings with O(1) × O(1) × O(n) volume under the vertex-resolution rule, and with vertex coordinates represented by O(n log n)-bit rational numbers. We also , 2000 "... this paper first we review known two methods to find such drawings, then explain a hidden relation between them, and finally survey related results. ..." Cited by 13 (3 self) this paper first we review known two methods to find such drawings, then explain a hidden relation between them, and finally survey related results. - COMPUTATIONAL GEOMETRY: THEORY AND APPLICATIONS , 2009 "... ..." - SIAM Journal on Computing , 2005 "... Abstract. We introduce and study orderly spanning trees of plane graphs. This algorithmic tool generalizes canonical orderings, which exist only for triconnected plane graphs. Although not every plane graph admits an orderly spanning tree, we provide an algorithm to compute an orderly pair for any c ..." Cited by 12 (1 self) Abstract. We introduce and study orderly spanning trees of plane graphs. This algorithmic tool generalizes canonical orderings, which exist only for triconnected plane graphs. Although not every plane graph admits an orderly spanning tree, we provide an algorithm to compute an orderly pair for any connected planar graph G, consisting of an embedded planar graph H isomorphic to G, and an orderly spanning tree of H.
We also present several applications of orderly spanning trees: (1) a new constructive proof for Schnyder’s realizer theorem, (2) the first algorithm for computing an area-optimal 2-visibility drawing of a planar graph, and (3) the most compact known encoding of a planar graph with O(1)-time query support. All algorithms in this paper run in linear time. , 1996 "... A k-path query on a graph consists of computing k vertex-disjoint paths between two given vertices of the graph, whenever they exist. In this paper, we study the problem of performing k-path queries, with k ≤ 3, in a graph G with n vertices. We denote with ℓ the total length of the paths reported. For ..." Cited by 11 (2 self) A k-path query on a graph consists of computing k vertex-disjoint paths between two given vertices of the graph, whenever they exist. In this paper, we study the problem of performing k-path queries, with k ≤ 3, in a graph G with n vertices. We denote with ℓ the total length of the paths reported. For k ≤ 3, we present an optimal data structure for G that uses O(n) space and executes k-path queries in output-sensitive O(ℓ) time. For triconnected planar graphs, our results make use of a new combinatorial structure that plays the same role as bipolar (st) orientations for biconnected planar graphs. This combinatorial structure also yields an alternative construction of convex grid drawings of triconnected planar graphs. - Proc. 12th International Symp. on Graph Drawing (GD ’04 , 2004 "... We study straight-line drawings of graphs with few segments and few slopes. Optimal results are obtained for all trees. Tight bounds are obtained for outerplanar graphs, 2-trees, and planar 3-trees. We prove that every 3-connected plane graph on n vertices has a plane drawing with at most 5n/2 segme ..." Cited by 10 (3 self) We study straight-line drawings of graphs with few segments and few slopes. Optimal results are obtained for all trees.
Tight bounds are obtained for outerplanar graphs, 2-trees, and planar 3-trees. We prove that every 3-connected plane graph on n vertices has a plane drawing with at most 5n/2 segments and at most 2n slopes. We prove that every cubic 3-connected plane graph has a plane drawing with three slopes (and three bends on the outerface). Drawings of non-planar graphs with few slopes are also considered. For example, interval graphs, co-comparability graphs and AT-free graphs are shown to have drawings in which the number of slopes is bounded by the maximum degree. We prove that graphs of bounded degree and bounded treewidth have drawings with O(log n) slopes. Finally we prove that every graph has a drawing with one bend per edge, in which the number of slopes is at most one more than the - Journal of Algorithms , 2000 "... In this paper we introduce a new drawing style of a plane graph G, called proper box rectangular (PBR) drawing. It is defined to be a drawing of G such that every vertex is drawn as a rectangle, called a box, each edge is drawn as either a horizontal or a vertical line segment, and each face is dra ..." Cited by 7 (0 self) In this paper we introduce a new drawing style of a plane graph G, called proper box rectangular (PBR) drawing. It is defined to be a drawing of G such that every vertex is drawn as a rectangle, called a box, each edge is drawn as either a horizontal or a vertical line segment, and each face is drawn as a rectangle. We establish necessary and sufficient conditions for G to have a PBR drawing. We also give a simple linear time algorithm for finding such drawings. The PBR drawing is closely related to the box rectangular (BR) drawing defined by Rahman, Nakano and Nishizeki [17]. Our method can be adapted to provide a new simpler algorithm for solving the BR drawing problem. 1 Introduction The problem of "nicely" drawing a graph G has received increasing attention [5].
Typically, we want to draw the edges and the vertices of G on the plane so that certain aesthetic quality conditions and/or optimization measures are met. Such drawings are very useful in visualizing planar graphs and fi... "... We provide O(n)-time algorithms for constructing the following types of drawings of n-vertex 3-connected planar graphs: • 2D convex grid drawings with (3n) × (3n/2) area under the edge L1-resolution rule; • 2D strictly convex grid drawings with O(n³) × O(n³) area under the e ..." Cited by 6 (0 self) We provide O(n)-time algorithms for constructing the following types of drawings of n-vertex 3-connected planar graphs: • 2D convex grid drawings with (3n) × (3n/2) area under the edge L1-resolution rule; • 2D strictly convex grid drawings with O(n³) × O(n³) area under the edge resolution rule; • 2D strictly convex drawings with O(1) × O(n) area under the vertex-resolution rule, and with vertex coordinates represented by O(n log n)-bit rational numbers; • 3D convex drawings with O(1) × O(1) × O(n) volume under the vertex-resolution rule, and with vertex coordinates represented by O(n log n)-bit rational numbers. We also show the following lower bounds: • For infinitely many n-vertex graphs G, if G has a straight-line 2D convex drawing in a w × h grid satisfying the edge L1-resolution rule then w, h ≥ 5n/6 + Ω(…) and w + h ≥ 8n/3 + Ω(…); • For infinitely many bounded-degree triconnected planar graphs G with n ver... , 2010 "... We give an algorithm to create orthogonal drawings of 3-connected 3-regular planar graphs such that each interior face of the graph is drawn with a prescribed area. This algorithm produces a drawing with at most 12 corners per face and 4 bends per edge, which improves the previous known result of 34 ..."
Cited by 4 (1 self) We give an algorithm to create orthogonal drawings of 3-connected 3-regular planar graphs such that each interior face of the graph is drawn with a prescribed area. This algorithm produces a drawing with at most 12 corners per face and 4 bends per edge, which improves the previous known result of 34 corners per face.
{"url":"http://citeseerx.ist.psu.edu/showciting?doi=10.1.1.54.7830","timestamp":"2014-04-19T13:01:19Z","content_type":null,"content_length":"36570","record_id":"<urn:uuid:2d9cd16f-11c8-4326-aa6d-090841624d32>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00051-ip-10-147-4-33.ec2.internal.warc.gz"}
Tensor products

Next: About this document ... Up: Differential Geometry. Honours 1996 Previous: Vector fields and derivations. Contents

[The mathematics of this section was rendered as inline images and did not survive extraction. The surviving prose indicates that the section: constructs the free vector space over a set and states its special property; defines a map from a pair of vector spaces into their tensor product (Proposition D.2, checked one factor at a time using Proposition D.1 and a linear-independence argument); iterates tensor products and identifies the resulting spaces so that brackets can be ignored; and defines tensor products of maps via the factorization of bilinear maps (Proposition D.4).]

Next: About this document ... Up: Differential Geometry. Honours 1996 Previous: Vector fields and derivations. Contents

Michael Murray
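For reference, the universal property that notes like these standardly establish — supplied here as the textbook statement, since the original inline formulas were lost, not as a reconstruction of Murray's exact wording:

```latex
% Universal property of the tensor product V \otimes W:
% every bilinear map factors uniquely through \otimes.
\text{For every bilinear } b\colon V \times W \to U
\text{ there is a unique linear map } \tilde{b}\colon V \otimes W \to U
\text{ with } \tilde{b}(v \otimes w) = b(v, w).

% If \{e_i\} is a basis of V and \{f_j\} a basis of W, then
% \{e_i \otimes f_j\} is a basis of V \otimes W, hence
\dim(V \otimes W) = (\dim V)(\dim W).

% Tensor product of linear maps S\colon V \to V' and T\colon W \to W':
(S \otimes T)(v \otimes w) = S(v) \otimes T(w).
```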
{"url":"http://www.maths.adelaide.edu.au/michael.murray/dg_hons/node37.html","timestamp":"2014-04-18T18:10:15Z","content_type":null,"content_length":"22950","record_id":"<urn:uuid:adb69c84-fdd6-40fe-b038-c89d12303268>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00487-ip-10-147-4-33.ec2.internal.warc.gz"}
47-XX OPERATOR THEORY • 47-00 General reference works (handbooks, dictionaries, bibliographies, etc.) • 47-01 Instructional exposition (textbooks, tutorial papers, etc.) • 47-02 Research exposition (monographs, survey articles) • 47-03 Historical (must also be assigned at least one classification number from Section 01) • 47-04 Explicit machine computation and programs (not the theory of computation or programming) • 47-06 Proceedings, conferences, collections, etc. • 47Axx General theory of linear operators • 47Bxx Special classes of linear operators • 47Cxx Individual linear operators as elements of algebraic systems • 47Dxx Groups and semigroups of linear operators, their generalizations and applications • 47Exx Ordinary differential operators [See also 34Bxx, 34Lxx] • 47Fxx Partial differential operators [See also 35Pxx, 58Jxx] • 47Gxx Integral, integro-differential, and pseudodifferential operators [See also 58Jxx] • 47Hxx Nonlinear operators and their properties (For global and geometric aspects, see 58-XX, especially 58Cxx) • 47Jxx Equations and inequalities involving nonlinear operators [See also 46Txx] (For global and geometric aspects, see 58-XX) • 47Lxx Linear spaces and algebras of operators [See also 46Lxx] • 47Nxx Miscellaneous applications of operator theory [See also 46Nxx] • 47Sxx Other (nonclassical) types of operator theory [See also 46Sxx]
{"url":"http://publikationen.ub.uni-frankfurt.de/solrsearch/index/search/searchtype/collection/id/11059","timestamp":"2014-04-18T19:44:43Z","content_type":null,"content_length":"14999","record_id":"<urn:uuid:fb99b7f4-da4b-4f5c-a219-9d145d914921>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00200-ip-10-147-4-33.ec2.internal.warc.gz"}
United Kingdom Mathematics Trust Junior Mathematical Olympiad Around 1,200 of the highest scorers in the JMC are invited to participate in the Junior Mathematical Olympiad. It consists of a two-hour paper of more in-depth mathematical problems, divided into two sections: Section A requires answers only, whereas full written solutions are required for Section B. For some pupils this may be an unfamiliar exercise and an enjoyable introduction to this kind of mathematical activity. Papers are set and marked by the UKMT, as for the Intermediate Mathematical Olympiad. Note that papers are marked almost immediately after the JMO date, as we aim to return them to all candidates before the end of the summer term. The top 25% of scorers receive a Certificate of Distinction; candidates who score below this and who qualified automatically for the JMO via the JMC receive a Certificate of Participation. Medals are allocated on the following basis: in each category, a competent performance in Section A is required; in addition, a gold medal requires full, mathematically accurate solutions to at least four questions in Section B, a silver medal requires good solutions to four Section B questions, and a bronze requires three substantially correct solutions. As the criteria are performance-related, the number of medals awarded each year is variable but is usually of the order of 30 gold, 60 silver and 120 bronze. A book prize is awarded to the top 50 students in each paper. The title varies from year to year. Questions and Solutions Booklet You may buy collections of past papers. For this year's dates, click here.
{"url":"http://www.ukmt.org.uk/individual-competitions/junior-mathematical-olympiad/","timestamp":"2014-04-20T21:29:26Z","content_type":null,"content_length":"9407","record_id":"<urn:uuid:ad86cb31-7be3-4383-a6da-d069021c7ea7>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00548-ip-10-147-4-33.ec2.internal.warc.gz"}
This Article
Multimodal Estimation of Discontinuous Optical Flow using Markov Random Fields
F. Heitz and P. Bouthemy
IEEE Transactions on Pattern Analysis and Machine Intelligence, December 1993 (vol. 15, no. 12), pp. 1217-1232, doi:10.1109/34.250841

The estimation of dense velocity fields from image sequences is basically an ill-posed problem, primarily because the data only partially constrain the solution.
It is rendered especially difficult by the presence of motion boundaries and occlusion regions which are not taken into account by standard regularization approaches. In this paper, the authors present a multimodal approach to the problem of motion estimation in which the computation of visual motion is based on several complementary constraints. It is shown that multiple constraints can provide more accurate flow estimation in a wide range of circumstances. The theoretical framework relies on Bayesian estimation associated with global statistical models, namely, Markov random fields. The constraints introduced here aim to address the following issues: optical flow estimation while preserving motion boundaries, processing of occlusion regions, fusion between gradient and feature-based motion constraint equations. Deterministic relaxation algorithms are used to merge information and to provide a solution to the maximum a posteriori estimation of the unknown dense motion field. The algorithm is well suited to a multiresolution implementation which brings an appreciable speed-up as well as a significant improvement of estimation when large displacements are present in the scene. Experiments on synthetic and real world image sequences are reported. [1] G. Adiv, "Determining three-dimensional motion and structure from optical flow generated by several moving objects,"IEEE Trans. Pattern Anal. Machine Intell., vol. 7, pp. 384-401, July 1985. [2] J. K. Aggarwal and N. Nandhakumar, "On the computation of motion from sequences of images--A review,"Proc. IEEE, vol. 76, no. 8, pp. 917-935, 1988. [3] S. T. Barnard, "Stochastic stereo matching over scale,"Int. J. Comput. Vision, vol. 3, pp. 17-32, 1989. [4] J. Bergen, P. Burt, R. Hingorani, and S. Peleg, "Computing two motions from three frames," inProc. 3rd Int. Conf. Comput. Vision, Osaka, Dec. 1990, pp. 27-32. [5] J. Besag, "On the statistical analysis of dirty pictures,"J. Roy. Statist. Soc., vol. 48, ser. B, no 3, pp. 
Index Terms: multimodal estimation; discontinuous optical flow; Markov random fields; dense velocity fields; ill-posed problem; motion boundaries; occlusion regions; motion estimation; visual motion; flow estimation; Bayesian estimation; global statistical models; gradient-based motion constraint equations; feature-based motion constraint equations; deterministic relaxation algorithms; real world image sequences; synthetic image sequences; Bayes methods; image sequences; Markov processes; motion estimation; statistics

F. Heitz, P. Bouthemy, "Multimodal Estimation of Discontinuous Optical Flow using Markov Random Fields," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 15, no. 12, pp. 1217-1232, Dec. 1993, doi:10.1109/34.250841
The Mathematics of Voting and Elections: A Hands-On Approach In 1987, the Consortium for Mathematics and Its Applications Project (COMAP) changed forever the liberal arts math course with its book For All Practical Purposes. Many applications of mathematics to nontraditional areas, such as political science, first became widely known through this book. It was soon followed by the similar Excursions in Modern Mathematics, by Peter Tannenbaum. I have used the first four chapters of Tannenbaum as the basis for a course on mathematics and political science, but I found these chapters to be insufficient for an entire semester course. Fortunately, there are entire books written for a liberal arts course on the mathematics of political science, such as Mathematics and Politics, by Alan D. Taylor. The book by Hodge and Klima is an excellent entry into this field. It is based on a course taught by the authors at Grand Valley State University to students with a wide variety of mathematical backgrounds. Chapter 1 considers the virtues of majority rule in an election with just two candidates. Chapters 2 through 4 review some of the methods for resolving elections with more than two candidates. Chapter 5 goes through a proof of Arrow’s Theorem, which essentially says that a perfectly fair voting method is impossible. Chapter 6 studies weighted voting systems and how one can determine whether a voting system is weighted or not. Chapter 7 discusses measures of power in a weighted voting system. Chapter 8 takes a close look at one very important weighted voting system: the Electoral College. Chapter 9 looks at referendum elections, in which voters can vote on various related referenda, in which their opinion on whether or not one proposition should pass might depend on whether or not another proposition passes. The most interesting result here is that the least preferred outcome by all the voters might be the one that wins! 
Chapter 10 is on the various methods that have been used or considered for congressional apportionment. The book has plenty of material for a one-semester course. With more time to devote to each topic than either the COMAP or the Tannenbaum text, the text provides a broader and deeper coverage. The down side is that this occasionally makes the presentation less compelling. For example, devoting the entire first chapter to a mathematical analysis of why majority rule is best (and what this means) in an election with two candidates may seem irrelevant to a student who regards this as obvious. In contrast, the COMAP video The Impossible Dream: Election Theory presents a humorous and disturbing scenario of an election with 5 candidates in which 5 different voting methods produce 5 different winners. It then briefly presents Arrow’s Theorem, all in just a half hour. I have found that this approach convinces students on the first day of class that mathematics is relevant. Hodge and Klima lose some of the excitement by slowly releasing this information over three chapters. It’s also curious that with so much detail, one of the most common methods for resolving an election with more than two candidates, plurality with a runoff, isn’t mentioned. Hodge and Klima have a friendly and clear style that students will appreciate. This doesn’t mean that it’s all easy reading; the proof of Arrow’s Theorem will challenge most students at this level. The authors have tried to capture the spirit of a Moore method course, one consequence of which is the complete absence of worked-out examples. To compensate, many of the exercises (labeled “Questions”), which are scattered throughout each section, have solutions at the end of the section. These questions are denoted with a star. For example, Chapter 10 has 47 questions, 9 with solutions. This may not be enough for some instructors. 
For example, Chapter 10 introduces the new-states paradox but provides no exercises allowing students to play around with this idea. In contrast, the corresponding chapter in Tannenbaum has 10 worked-out examples and 59 exercises. Not all the questions are mathematical. For example, Question 2.30 asks the reader to speculate on the result of the 2000 presidential election if John McCain had run as an independent candidate. Question 8.35 asks the reader to summarize the 12th Amendment to the Constitution and to investigate the historical events behind it. Question 8.17 would be a significant project to answer; it asks the student to fill in the blank and justify that number for the following statement: “In an actual U.S. presidential election with only two candidates, it would be virtually impossible for a candidate to win the election without receiving at least ___% of the popular vote.” A satisfactory answer to Question 9.34 might be worthy of publication; it asks for a new method for solving something called the separability problem in a referendum election, an area in which the research is recent and sparse. Despite this book’s shortcomings, it is still a fine book. It is well-written and well-edited, with virtually no errors. (The errata page on the book’s web site lists two minor corrections.) Every instructor teaching this topic should consider this as the textbook, and should have this book regardless of what textbook is chosen. For All Practical Purposes: Mathematical Literacy in Today’s World (6th ed.), W. H. Freeman, 2003, ISBN 0-7167-4783-9. For All Practical Purposes: Social choice: The Impossible Dream: Election Theory , Annenberg/CPB Project, 1986. Alan D. Taylor, Mathematics and Politics: Strategy, Voting, Power and Proof , Springer-Verlag, 1995, ISBN 0-387-94391-9. Peter Tannenbaum, Excursions in Modern Mathematics (5th ed.), Prentice Hall, 2004, ISBN 0-13-100191-4. Raymond N. 
Greenwell (matrng@hofstra.edu) is a Professor of Mathematics at Hofstra University in Hempstead, New York. His research interests include applied mathematics and statistics, and he is coauthor of the texts Finite Mathematics and Calculus with Applications, published by Addison Wesley.
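The divergence the review highlights, where different counting rules crown different winners on the same ballots, is easy to exhibit concretely. Below is a minimal Python sketch; the seven-voter profile is invented for illustration and is not taken from the book or the COMAP video:

```python
from collections import Counter

def plurality(ballots):
    """Winner = candidate with the most first-place votes."""
    firsts = Counter(ballot[0] for ballot in ballots)
    return max(firsts, key=firsts.get)

def borda(ballots):
    """Each ballot awards n-1, n-2, ..., 0 points from top to bottom."""
    scores = Counter()
    for ballot in ballots:
        n = len(ballot)
        for rank, candidate in enumerate(ballot):
            scores[candidate] += n - 1 - rank
    return max(scores, key=scores.get)

# An illustrative 7-voter profile over candidates A, B, C:
ballots = (
    [["A", "B", "C"]] * 3 +  # 3 voters rank A > B > C
    [["B", "C", "A"]] * 2 +  # 2 voters rank B > C > A
    [["C", "B", "A"]] * 2    # 2 voters rank C > B > A
)

print(plurality(ballots))  # A  (3 first-place votes vs. 2 and 2)
print(borda(ballots))      # B  (Borda scores: A=6, B=9, C=6)
```

Plurality elects A while the Borda count elects B on the very same ballots, which is the kind of disagreement Arrow's Theorem shows cannot be engineered away.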
Google Ngram Viewer
Posted by: Alexandre Borovik | January 9, 2011
Google Ngram Viewer
Google released a powerful tool for analysis of long-term cultural trends: Google Ngram Viewer, a database of 500 billion words – mainly in English – and their occurrence in books over the last 2 centuries.
Here is a graph for “logarithm, square root, exponent, cosine” from 1880 to 2008. From the 1960s, all mathematical terms appear to show a steady and significant decline in frequency of their occurrence in books. What could this mean? If you wish to run proper statistical analysis, Google kindly provides files with raw data.
If you add the word “calculator”, you may get a partial answer to your question ;-) I hesitate whether to suggest including also the word “challenged” or “challenge”, as it may be considered
By: Sergei Yakovenko on January 9, 2011 at 7:26 am
Indeed, as simple as that. Adding after that “slide rule” is even more illuminating.
By: Alexandre Borovik on January 9, 2011 at 8:29 am
Alas, it is not that simple at all. When looking at terms from the domain of economic (or financial) numeracy: marginal rate, percentage change (which are lay synonyms of logarithmic derivative), compound interest (which is, of course, a manifestation of exponent), you get a picture which raises even more questions.
By: Alexandre Borovik on January 9, 2011 at 8:56 am
• I did not experiment with the economic terms, but the general tendency should persist across the spectrum. Except for a handful few who actually understand the links and relationship between different numeric indicators, the “silent” (earning) majority learns a few tidbits and sound bites that substitute for the understanding.
Once you learn that four legs good, two legs bad, it is very difficult to address the questions of stability of kinematic mechanisms with different number of joints and support points… By: Sergei Yakovenko on January 9, 2011 at 3:42 pm I also recommend a “Science” article “Quantitative Analysis of Culture Using Millions of Digitized Books”, by Jean-Baptiste Michel et al. http://www.sciencemag.org/content/early/2010/12/15/ science.1199644, and on-line supplementary material, http://www.sciencemag.org/content/suppl/2010/12/16/science.1199644.DC1/Michel.SOM.pdf for various caveats and disclaimers — but also for methodological advice and some examples. For serious analysis, if I will ever need some, I would perhaps go to the level of raw data. By: Alexandre Borovik on January 9, 2011 at 10:09 am [...] This post was mentioned on Twitter by CW, Google News US. Google News US said: [wikio.com] Google Ngram Viewer (Mathematics under the Microscope): Google released a powerful tool for … http:// bit.ly/ea46B5 #google [...] By: Tweets that mention Google Ngram Viewer « Mathematics under the Microscope -- Topsy.com on January 10, 2011 at 6:33 am I wonder how much it simply reflects in the increase in publication – with many new areas opening up (both within and outside mathematics) so that the frequency of existing terms should be expected to decline. By: Tom Franklin on January 10, 2011 at 2:09 pm • My thoughts exactly! And the effect would probably be dominated by areas outside mathematics. Dips in many areas should be correlated with increases in the variety of what’s published. A pure quantity-based approach would clear out some of these problems, but since the Google n-gram viewer is based on a sample of about 4% of books each year, that would be impossible with the By: Dranorter on January 18, 2011 at 5:54 pm @Tom Franklin: I believe you are right. Look at this graphs: There are new mathematical terms which compete with classical one like logarithm. 
By: Alexandre Borovik on January 10, 2011 at 5:45 pm
Looks like people have stopped loving mathematics recently, especially from the 1970′s
By: science and math on January 14, 2011 at 7:03 pm
that last idea seems pretty good — relative frequencies are just diminishing. i dont really know cosmology, but i’ve heard information is never lost in the universe, except maybe in black holes, so maybe the elementary functions are just moving to outerspace as evolutionary mathematical succession occurs. ET’s might be catching up on trig now. i also think, given some forms of math platonism (eg tegmark’s ‘shut up and calculate’ — all there is, is physics, and the idea that math exists is just a form of ‘false consciousness’ used to justify textbook sales) possibly, following the us Supreme Court’s re-affirmation that corporations are people (as well as current discussions about ‘grammar’) that given this is natural law (us constitution->newton->etc.) maybe math terms (like others) are as real as quarks and jaguars, and hence (following s Jay gould) may have periods of existence as ‘species of thought’ (to use a term of biologist d s wilson). so maybe they are just going into the fossil record, so that future archaeologists (who of course will be actually google algorithms implemented on dell computers) will have some data to mine. perhaps a jurassic park can be created for entertainment, in which people visit to see ancient operations. (more likely though due to strict finitism (eg edward nelson of princeton) there won’t be enough time. )
By: ishi on January 16, 2011 at 2:46 pm
4% of books can make a VERY representative sample — but it depends on criteria for their selection.
By: Alexandre Borovik on January 18, 2011 at 6:26 pm
Posted in Uncategorized
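Several commenters above suspect the decline is relative: new vocabulary competes with the classical terms as total publication grows. With the raw per-year count files Google provides, that is easy to probe by normalizing each n-gram's count against the yearly corpus total. A sketch with toy numbers (the real distribution files are large tab-separated tables; the layout here is deliberately simplified):

```python
from collections import defaultdict

def relative_frequency(ngram_rows, totals):
    """ngram_rows: iterable of (ngram, year, match_count) tuples;
    totals: dict mapping year -> total word count that year.
    Returns {ngram: {year: match_count / total}}."""
    freq = defaultdict(dict)
    for ngram, year, count in ngram_rows:
        total = totals.get(year, 0)
        if total > 0:
            freq[ngram][year] = count / total
    return freq

# Toy numbers for illustration only: the raw count of "logarithm"
# doubles, but the corpus grows fivefold, so its frequency falls.
totals = {1960: 1_000_000, 2000: 5_000_000}
rows = [("logarithm", 1960, 50), ("logarithm", 2000, 100)]

f = relative_frequency(rows, totals)
print(f["logarithm"][1960])  # 5e-05
print(f["logarithm"][2000])  # 2e-05
```

Here the raw count doubles while the relative frequency drops by more than half, exactly the pattern a growing corpus can produce on its own.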
Are x and y both positive? (1) 2x-2y = 1 (2) x/y > 1

Author | Message

Re: Are x and y both positive? (1) 2x-2y = 1 (2) x/y > 1 [#permalink] 16 Aug 2012, 08:35
[Current Student | Joined: 25 Jun 2012 | Posts: 71 | Location: India | WE: General Management (Energy and Utilities) | Followers: 2 | Kudos [?]: 30 [0], given: 15]
I got it correct but took approx 2.5 minutes.
stmnt 1: insufficient by plugging numbers
stmnt 2: x/y > 1 => not sufficient, as both x and y can be -ve or both +ve.
combined: from stmnt 1 we have x = y + 1/2 => x/y = 1 + 1/(2y) => suppose x/y is 2, as x/y > 1 => 2 = 1 + 1/(2y) => y = 1/2, hence x = 1, both positive.
Ans C. Am I right in my approach?

Re: Are x and y both positive? (1) 2x-2y = 1 (2) x/y > 1 [#permalink] 18 Aug 2012, 20:01
[Joined: 01 Jan 2011 | Posts: 22 | Location: Kansas, USA | Schools: INSEAD | Followers: 2 | Kudos [?]: 5 [0], given: 9]
Bunuel... For sure, this problem can be easily solved using algebra, just as you have explained before. However, I was trying to use geometry to solve this problem. The equation of the line is x/0.5 + y/(-0.5) = 1, which means that the x and y intercepts are 0.5 and -0.5 respectively. Hence, the line passes through the 1st, 3rd, and 4th quadrants. In Q1, both x and y are +ve. In Q4, y is negative and hence this is out. Where I am getting confused is Q3: how do I verify whether x/y > 1?

Re: Are x and y both positive? (1) 2x-2y = 1 (2) x/y > 1 [#permalink] 24 Aug 2012, 12:30
This post received KUDOS
[chris558 | Manager | Joined: 07 Sep 2011 | Posts: 74 | GMAT 1: 660 Q41 V40 | GMAT 2: 720 Q49 V39 | WE: Analyst (Mutual Funds and Brokerage) | Followers: 1 | Kudos [?]: 21 [7], given: 13]
Are x and y both positive?
1) 2x-2y=1
2(x-y)=1
x-y=1/2
--> 3/4-1/4=1/2 ... YES
--> -1/4-(-3/4)=1/2 ... NO
INSUFFICIENT
2) x/y>1
This just means that x and y have the same sign. They're either both positive or both negative.
INSUFFICIENT
1&2)
x=1/2+y
y/2 + 1 > 1
y/2 > 0, which means that Y is greater than 0. And since both x and y have the same sign, both x and y are Positive. YES.
Answer is C.

Re: Are X and Y both positive?
GMAT PREP CAT [#permalink] 02 Oct 2012, 18:42
This post received Expert's post
[carcass | Moderator | Joined: 01 Sep 2010 | Posts: 2175 | Followers: 172 | Kudos [?]: 1517 [1], given: 610]
Bunuel wrote:
Are x and y both positive?
(1) 2x-2y=1
(2) x/y>1
(1) 2x-2y=1. Well this one is clearly insufficient. You can do it with number plugging OR consider the following: x and y both positive means that point (x,y) is in the I quadrant. 2x-2y=1 --> y=x-1/2, we know it's an equation of a line and basically the question asks whether this line (all (x,y) points of this line) is only in the I quadrant. It's just not possible. Not sufficient.
(2) x/y>1 --> x and y have the same sign. But we don't know whether they are both positive or both negative. Not sufficient.
(1)+(2) Again it can be done with different approaches. You should just find the one which is the less time-consuming and comfortable for you personally. One of the approaches: 2x-2y=1 --> x=y+\frac{1}{2}; \frac{x}{y}>1 --> \frac{x-y}{y}>0 --> substitute x --> \frac{1}{y}>0 --> y is positive, and as x=y+\frac{1}{2}, x is positive too. Sufficient.
Answer: C.
Discussed here: and also here along with other hard inequality problems:
Hope it helps.

Bunuel, I would like to know how you have this: if I have (y + 1/2 - y)/y > 0, the result should be \frac{1}{2y} > 0 and not \frac{1}{y} > 0. Can you please explain?
@edited ............ I have seen the explanation in another answer by you.
KUDOS is the good manner to help the entire community.

Re: Are x and y both positive? (1) 2x-2y = 1 (2) x/y > 1 [#permalink] 17 Jan 2013, 03:55
[Senior Manager | Joined: 13 Aug 2012 | Posts: 465 | Concentration: Marketing, Finance | GMAT 1: Q V0]
Manbehindthecurtain wrote:
Are x and y both positive?
(1) 2x-2y = 1
(2) x/y > 1

1. x-y = 1/2
This means that the distance between x and y is 1/2 unit and that x is greater than y.
But x and y could be positive, such as x=5 and y=4.5, OR x and y could be both negative, such as x=-4 and y=-4.5.
2. x/y > 1
This shows that x/y must be positive, meaning x and y are either both (+) or both (-).
[GPA: 3.23 | Followers: 14 | Kudos [?]: 152 [0], given: 11]
ex) x/y = 5/2 OR x/y = -5/-2 = 5/2, still > 1.
INSUFFICIENT.
Combine.
Let x = 5 and y = 9/2: 5/(9/2) = 10/9 > 1 - This means when x and y are both positive it could be a solution to x/y > 1.
Let x = -4 and y = -9/2: -4/(-9/2) = 8/9 < 1 - This means when x and y are negative it could not be a solution to x/y > 1.
Thus, SUFFICIENT that x and y are both positive.
Answer: C
Impossible is nothing to God.

Re: Are x and y both positive? (1) 2x-2y = 1 (2) x/y > 1 [#permalink] 02 Apr 2013, 07:25
[RRLambrecht | Intern | Joined: 02 Apr 2013 | Posts: 2 | Followers: 0 | Kudos [?]: 0 [0], given: 3]
chris558 wrote:
Are x and y both positive?
1) 2x-2y=1
2(x-y)=1
x-y=1/2
--> 3/4-1/4=1/2 ... YES
--> -1/4-(-3/4)=1/2 ... NO
INSUFFICIENT
2) x/y>1
This just means that x and y have the same sign. They're either both positive or both negative.
INSUFFICIENT
1&2)
x=1/2+y
y/2 + 1 > 1
y/2 > 0, which means that Y is greater than 0. And since both x and y have the same sign, both x and y are Positive. YES.
Answer is C.

Shouldn't (1/2+y)/y > 1 simplify to 1/(2y) + 1 > 1? Or am I missing something? Still get the right answer following this logic but I believe this step is off.

Re: Are X and Y both positive? GMAT PREP CAT [#permalink] 11 Apr 2013, 09:06
Bunuel wrote:
Are x and y both positive?
(1) 2x-2y=1
(2) x/y>1
(1) 2x-2y=1. Well this one is clearly insufficient. You can do it with number plugging OR consider the following: x and y both positive means that point (x,y) is in the I quadrant. 2x-2y=1 --> y=x-1/2, we know it's an equation of a line and basically the question asks whether this line (all (x,y) points of this line) is only in the I quadrant. It's just not possible. Not sufficient.
(2) x/y>1 --> x and y have the same sign. But we don't know whether they are both positive or both negative. Not sufficient.
(1)+(2) Again it can be done with different approaches. You should just find the one which is the less time-consuming and comfortable for you personally.
One of the approaches: 2x-2y=1 --> x=y+\frac{1}{2}; \frac{x}{y}>1 --> \frac{x-y}{y}>0 --> substitute x --> \frac{1}{y}>0 --> y is positive, and as x=y+\frac{1}{2}, x is positive too. Sufficient.
Answer: C.
Discussed here: and also here along with other hard inequality problems:
Hope it helps.

[score780 | Joined: 23 Jul 2010 | Posts: 91 | Followers: 0 | Kudos [?]: 6 [0], given: 43]
From 1 - X=Y+1/2. Divide both sides by Y and you get X/Y = 1 + 1/(2Y) --> 1 + 1/(2Y) > 1 --> 1/(2Y) > 0, then Y>0. Then consequently X>0. Is the reasoning sound?

Re: Are X and Y both positive? GMAT PREP CAT [#permalink] 12 Apr 2013, 01:15
This post received Expert's post
[Bunuel | Math Expert | Joined: 02 Sep 2009 | Posts: 17317 | Followers: 2874 | Kudos [?]: 18380 [2], given: 2348]
score780 wrote:
Bunuel wrote:
Are x and y both positive?
(1) 2x-2y=1
(2) x/y>1
(1) 2x-2y=1. Well this one is clearly insufficient. You can do it with number plugging OR consider the following: x and y both positive means that point (x,y) is in the I quadrant. 2x-2y=1 --> y=x-1/2, we know it's an equation of a line and basically the question asks whether this line (all (x,y) points of this line) is only in the I quadrant. It's just not possible. Not sufficient.
(2) x/y>1 --> x and y have the same sign. But we don't know whether they are both positive or both negative. Not sufficient.
(1)+(2) Again it can be done with different approaches. You should just find the one which is the less time-consuming and comfortable for you personally. One of the approaches: 2x-2y=1 --> x=y+\frac{1}{2}; \frac{x}{y}>1 --> \frac{x-y}{y}>0 --> substitute x --> \frac{1}{y}>0 --> y is positive, and as x=y+\frac{1}{2}, x is positive too. Sufficient.
Answer: C.
Discussed here: and also here along with other hard inequality problems:
Hope it helps.

From 1 - X=Y+1/2. Divide both sides by Y and you get X/Y = 1 + 1/(2Y) --> 1 + 1/(2Y) > 1 --> 1/(2Y) > 0, then Y>0. Then consequently X>0. Is the reasoning sound?

Yes, your approach is correct.
NEW TO MATH FORUM? PLEASE READ THIS: ALL YOU NEED FOR QUANT!!! PLEASE READ AND FOLLOW: 11 Rules for Posting!!!
RESOURCES: [GMAT MATH BOOK]; 1. Triangles; 2. Polygons; 3. Coordinate Geometry; 4. Factorials; 5. Circles; 6. Number Theory; 7.
Remainders; 8. Overlapping Sets; 9. PDF of Math Book; 10. Remainders; 11. GMAT Prep Software Analysis NEW!!!; 12. SEVEN SAMURAI OF 2012 (BEST DISCUSSIONS) NEW!!!; 12. Tricky questions from previous years. NEW!!!;
COLLECTION OF QUESTIONS:
PS: 1. Tough and Tricky questions; 2. Hard questions; 3. Hard questions part 2; 4. Standard deviation; 5. Tough Problem Solving Questions With Solutions; 6. Probability and Combinations Questions With Solutions; 7 Tough and tricky exponents and roots questions; 8 12 Easy Pieces (or not?); 9 Bakers' Dozen; 10 Algebra set., 11 Mixed Questions, 12 Fresh Meat
DS: 1. DS tough questions; 2. DS tough questions part 2; 3. DS tough questions part 3; 4. DS Standard deviation; 5. Inequalities; 6. 700+ GMAT Data Sufficiency Questions With Explanations; 7 Tough and tricky exponents and roots questions; 8 The Discreet Charm of the DS; 9 Devil's Dozen!!!; 10 Number Properties set., 11 New DS set.
What are GMAT Club Tests? 25 extra-hard Quant Tests

Re: Are x and y both positive? (1) 2x-2y = 1 (2) x/y > 1 [#permalink] 14 Sep 2013, 21:23
[imhimanshu | Senior Manager | Joined: 07 Sep 2010 | Posts: 341 | Followers: 2 | Kudos [?]: 49 [0], given: 136]
Hello Bunuel,
Request you to please provide your comments on the doubt posted here.
Usually, whenever I see combining an inequality and an equation, I substitute the value of one of the variables in the inequality and then analyze the effect. So, going by that approach:
x-y=1/2 ---(1)
x/y>1 ---(2)
Substituting the value of x in equation (2): (y+1/2)/y>1
Let's assume that y is positive:
(y+1/2) > y
1/2>0 -- This means that our assumption is true, since 1/2 is greater than zero. Hence, y > 0.
Now, let's assume that y is negative. Now, here I'm stuck. I know that multiplying by a negative number changes the sign of the inequality. I'm sure that the sign will be changed, but what would be the resulting equation? I mean, do we need to replace y with "-y" in the whole equation? Please clarify.
Which of the following would be correct then?
a) y+1/2 < y
b) y+1/2 < -y
c) -y+1/2 < -y
Please help.
+1 Kudos me, Help me unlocking GMAT Club Tests

Re: Are x and y both positive? (1) 2x-2y = 1 (2) x/y > 1 [#permalink] 14 Sep 2013, 21:39
This post received Expert's post
[Verbal Forum Moderator | Joined: 10 Oct 2012 | Posts: 626 | Followers: 35 | Kudos [?]: 488 [2], given: 135]
imhimanshu wrote:
Hello Bunuel,
Request you to please provide your comments on the doubt posted here.
Usually, whenever I see combining an inequality and an equation, I substitute the value of one of the variables in the inequality and then analyze the effect. So, going by that approach:
x-y=1/2 ---(1)
x/y>1 ---(2)
Substituting the value of x in equation (2): (y+1/2)/y>1
Let's assume that y is positive:
(y+1/2) > y
1/2>0 -- This means that our assumption is true, since 1/2 is greater than zero. Hence, y > 0.
Now, let's assume that y is negative. Now, here I'm stuck. I know that multiplying by a negative number changes the sign of the inequality. I'm sure that the sign will be changed, but what would be the resulting equation? I mean, do we need to replace y with "-y" in the whole equation? Please clarify.
Which of the following would be correct then?
a) y+1/2 < y
b) y+1/2 < -y
c) -y+1/2 < -y
Please help.

Refer to the highlighted portion: Actually you don't have to take 2 cases at this point. The expression you have is: \frac{y+0.5}{y}>1 \to 1+\frac{0.5}{y}>1 \to \frac{1}{y}>0 --> Hence, y>0.
As for your doubt, if y is negative, we cross-multiply and get: y+0.5<y \to 0>0.5, which is absurd. If y is negative, then -y would be positive, and when multiplying by a positive quantity you don't need to flip signs. So, yes, expression a) is correct.
All that is equal and not-Deep Dive In-equality
Hit and Trial for Integral Solutions

Re: Are x and y both positive? (1) 2x-2y = 1 (2) x/y > 1 [#permalink] 19 Nov 2013, 10:29
[Manager | Joined: 18 Oct 2013]
Hi I get confused in this question.
[Posts: 79 | Location: India | Concentration: Technology, Finance | GMAT 1: 580 Q48 V21 | WE: Information Technology (Computer | Followers: 2 | Kudos [?]: 20 [0], given: 21]
I understand A, B, D are not the answer but am confused between C and E. However, the official answer is C.
My approach:
1) x=y+(1/2) Not sufficient
2) x/y>1 Not sufficient
1+2) x=y+(1/2). So plugging in a value of y which makes x>y by statement 2. So if y=-2.5, which gives x=-2, then No; if y=2, x=2.5, then Yes. So answer is E.
Please correct me where I am wrong.

Re: Are x and y both positive? (1) 2x-2y = 1 (2) x/y > 1 [#permalink] 19 Nov 2013, 14:34
Expert's post
[Bunuel | Math Expert | Joined: 02 Sep 2009 | Posts: 17317 | Followers: 2874]

Re: Are x and y both positive? (1) 2x-2y = 1 (2) x/y > 1 [#permalink] 03 Dec 2013, 22:07
[audiogal101 | Intern | Joined: 23 Oct 2012 | Posts: 31 | Followers: 0 | Kudos [?]: 0 [0], given: 3]
St 1) 2x-2y = 1 => 2(x-y) = 1 => x-y = 1/2 => all this tells us is that x > y (could be positive or negative) == hence INSUFF
St 2) x/y > 1 => we don't know if y is (+) or (-). So we have two cases: if y positive, then x>y; if y negative, then x<y (again INSUFF)
Combining 1) and 2) we get x>y (from 1) ...which means y is positive (from 2)
Hence, if y is positive, and x > y, then x is also positive. SUFF!!
Hope this was reasoned properly.

Re: Are X and Y both positive? GMAT PREP CAT [#permalink] 21 Dec 2013, 23:27
[Joined: 16 Oct 2013 | Posts: 12 | Followers: 0 | Kudos [?]: 0 [0], given: 0]
Bunuel wrote:
Are x and y both positive?
2x-2y=1 --> x=y+\frac{1}{2}
\frac{x}{y}>1 --> \frac{x-y}{y}>0 --> substitute x --> \frac{1}{y}>0 --> y is positive, and as x=y+\frac{1}{2}, x is positive too. Sufficient.
Hope it helps.

Sorry for the bump but could you elaborate on the last part, where you go from x/y>1 to (x-y)/y>0 to 1/y>0..? I don't quite follow this algebra.

Re: Are X and Y both positive? GMAT PREP CAT [#permalink] 22 Dec 2013, 03:47
Expert's post
[Bunuel | Math Expert | Joined: 02 Sep 2009 | Posts: 17317 | Followers: 2874]
Re: Are x and y both positive?
(1) 2x-2y = 1 (2) x/y > 1 [#permalink] 23 Dec 2013, 02:53
[SravnaTestPrep | Senior Manager | Joined: 17 Dec 2012 | Posts: 359 | Location: India | Followers: 9 | Kudos [?]: 128 [0], given: 8]
Manbehindthecurtain wrote:
Are x and y both positive?
(1) 2x-2y = 1
(2) x/y > 1

Plug-in approach that can be used without thinking much and very likely arrive at the correct answer.
Values to be taken: x positive and negative, and find the corresponding values for y based on the statements.
Note: x and y cannot be of different signs and also x cannot be zero, as they will not satisfy (ii).
(i) x=10, we have y=9.5. Both positive satisfied. And now x=-10, we have y=-9.5. Both negative also satisfied. Different results. So (i) alone not sufficient.
(ii) x=10, y can be positive. Both positive satisfied. And now x=-10, y can be negative. Both negative also satisfied. So (ii) alone not sufficient.
(i) + (ii) x=10, y=9.5 satisfies both the statements. Both positive satisfied. And now x=-10. The value of y is found from (i) and is negative, but we see it does not satisfy (ii). So both cannot be negative.
So we can answer the question using (i) and (ii) together.
Srinivasan Vaidyaraman
Sravna Test Prep
Online courses and 1-on-1 Online Tutoring for the GMAT and the GRE

Re: Are X and Y both positive? GMAT PREP CAT [#permalink] 25 Jan 2014, 20:15
Bunuel wrote:
\frac{x}{y}>1 does not mean that x>y. If both x and y are positive, then x>y, BUT if both are negative, then x<y. What you are actually doing when writing x>y from \frac{x}{y}>1 is multiplying both parts of the inequality by y: never multiply (or reduce) an inequality by a variable (or by an expression with a variable) if you don't know its sign or are not certain that the variable (or expression with a variable) doesn't equal zero. So from (2) \frac{x}{y}>1, we can only deduce that x and y have the same sign (either both positive or both negative). See the complete solution of this problem in my previous post.
Hope it helps.
[ankur1901 | Manager | Joined: 23 May 2013 | Posts: 126 | Followers: 0 | Kudos [?]: 17 [0], given: 108]
Hi Bunuel,
Please can you elaborate on the below part:
"\frac{x}{y}>1 does not mean that x>y. If both x and y are positive, then x>y, BUT if both are negative, then x<y."
For this case I took (ii) as X>Y; then taking X = -3 and Y = -3.5, I can satisfy (ii) in both ways, if X and Y are -ve or X and Y are +ve. But I have read in this forum that we should first change the inequality to x/y > 1. Why do we need to do that? Any theory around this will be helpful. Thanks in advance.
"Confidence comes not from always being right but from not fearing to be wrong."

Re: Are X and Y both positive? GMAT PREP CAT [#permalink] 26 Jan 2014, 05:27
Expert's post
[Bunuel | Math Expert | Joined: 02 Sep 2009 | Posts: 17317 | Followers: 2874]
ankur1901 wrote:
Bunuel wrote:
\frac{x}{y}>1 does not mean that x>y. If both x and y are positive, then x>y, BUT if both are negative, then x<y. What you are actually doing when writing x>y from \frac{x}{y}>1 is multiplying both parts of the inequality by y: never multiply (or reduce) an inequality by a variable (or by an expression with a variable) if you don't know its sign or are not certain that the variable (or expression with a variable) doesn't equal zero. So from (2) \frac{x}{y}>1, we can only deduce that x and y have the same sign (either both positive or both negative). See the complete solution of this problem in my previous post.
Hope it helps.

Hi Bunuel,
Please can you elaborate on the below part:
"\frac{x}{y}>1 does not mean that x>y. If both x and y are positive, then x>y, BUT if both are negative, then x<y."
For this case I took (ii) as X>Y; then taking X = -3 and Y = -3.5, I can satisfy (ii) in both ways, if X and Y are -ve or X and Y are +ve. But I have read in this forum that we should first change the inequality to x/y > 1. Why do we need to do that? Any theory around this will be helpful. Thanks in advance.

Sorry but I don't follow what you mean...
PLEASE READ AND FOLLOW: 11 Rules for Posting!!! RESOURCES: [GMAT MATH BOOK]; 1. Triangles; 2. Polygons; 3. Coordinate Geometry; 4. Factorials; 5. Circles; 6. Number Theory; 7. Remainders; 8. Overlapping Sets; 9. PDF of Math Book; 10. Remainders; 11. GMAT Prep Software Analysis NEW!!!; 12. SEVEN SAMURAI OF 2012 (BEST DISCUSSIONS) NEW!!!; 12. Tricky questions from previous years. NEW!!!; COLLECTION OF QUESTIONS: PS: 1. Tough and Tricky questions; 2. Hard questions; 3. Hard questions part 2; 4. Standard deviation; 5. Tough Problem Solving Questions With Solutions; 6. Probability and Combinations Questions With Solutions; 7 Tough and tricky exponents and roots questions; 8 12 Easy Pieces (or not?); 9 Bakers' Dozen; 10 Algebra set. ,11 Mixed Questions, 12 Fresh Meat DS: 1. DS tough questions; 2. DS tough questions part 2; 3. DS tough questions part 3; 4. DS Standard deviation; 5. Inequalities; 6. 700+ GMAT Data Sufficiency Questions With Explanations; 7 Tough and tricky exponents and roots questions; 8 The Discreet Charm of the DS ; 9 Devil's Dozen!!!; 10 Number Properties set., 11 New DS set. What are GMAT Club Tests? 25 extra-hard Quant Tests Re: Are x and y both positive? (1) 2x-2y = 1 (2) x/y > 1 [#permalink] 28 Jan 2014, 22:21 my bad Bunuel. I got it now. Thanks Joined: 23 May 2013 Posts: 126 “Confidence comes not from always being right but from not fearing to be wrong.” Followers: 0 Kudos [?]: 17 [0], given: 108 Joined: 11 Jan 2014 Re: Are x and y both positive? (1) 2x-2y = 1 (2) x/y > 1 [#permalink] 29 Jan 2014, 21:28 Posts: 95 Is it safe to solve this kind of questions based on logic? Concentration: Finance, I didn't jump into calculations/plug-ins, since statement (1) is clearly insufficient. And statement (2) states that x & y both have the same sign, so combining them together, Statistics the result of subtraction is a positive number, and given from (2) that they have the same sign, then they both must be positive. 
GMAT Date: 03-04-2014 GPA: 3.77 WE: Analyst (Retail Followers: 1 Kudos [?]: 27 [0], given: 7 gmatclubot Re: Are x and y both positive? (1) 2x-2y = 1 (2) x/y > 1 [#permalink] 29 Jan 2014, 21:28
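The plug-in reasoning in the thread can be brute-forced in a few lines. This is a quick numerical sketch, not part of the original thread; the function names (`satisfies_1`, `satisfies_2`) are mine, and `Fraction` is used so the comparisons are exact.

```python
from fractions import Fraction

def satisfies_1(x, y):
    # Statement (1): 2x - 2y = 1, i.e. y = x - 1/2.
    return 2 * x - 2 * y == 1

def satisfies_2(x, y):
    # Statement (2): x/y > 1 (y = 0 is impossible under this statement).
    return y != 0 and x / y > 1

half = Fraction(1, 2)

# Statement (1) alone admits a both-positive and a both-negative pair:
print(satisfies_1(10, Fraction(19, 2)), satisfies_1(-10, Fraction(-21, 2)))

# Statements (1) + (2) together: scan many x values; every pair that
# survives both statements has x > 0 and y > 0.
survivors = [(x, x - half) for x in map(Fraction, range(-1000, 1001))
             if satisfies_2(x, x - half)]
print(all(x > 0 and y > 0 for x, y in survivors))  # → True
```

Every negative x is filtered out because y = x - 1/2 makes x/y < 1, which is exactly the "both cannot be negative" step in the post above.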
Overview Package Class Tree Deprecated Index Help PREV CLASS NEXT CLASS FRAMES NO FRAMES SUMMARY: NESTED | FIELD | CONSTR | METHOD DETAIL: FIELD | CONSTR | METHOD Class ECDSASignatureVerifier All Implemented Interfaces: public final class ECDSASignatureVerifier extends Object implements SignatureVerifier Verifies an ECDSA signature on a message. Elliptic Curve cryptography is defined in various standards including P1363 and ANSI X9.62. ECDSA is specifically defined in ANSI X9.62. See Also: Signed: This element is only accessible by signed applications. If you intend to use this element, please visit http://www.blackberry.com/go/codesigning to obtain a set of code signing keys. Code signing is only required for applications running on BlackBerry smartphones; development on BlackBerry Smartphone Simulators can occur without code signing. BlackBerry API 3.6.0 │ Constructor Summary │ │ │ ECDSASignatureVerifier(ECPublicKey key, byte[] r, int rOffset, byte[] s, int sOffset) │ │ │ │ Constructs an ECDSASignatureVerifier object using SHA-1 for the digest. │ │ │ │ ECDSASignatureVerifier(ECPublicKey key, Digest digest, byte[] r, int rOffset, byte[] s, int sOffset) │ │ │ │ Constructs an ECDSASignatureVerifier object. │ │ │ Method Summary │ │ │ String │ getAlgorithm() │ │ │ │ Returns the name of the signing algorithm used, ie "ECDSA/" + Digest.getAlgorithm(). │ │ │ void │ update(byte[] data) │ │ │ │ Adds additional message data to the signature. │ │ │ void │ update(byte[] data, int offset, int length) │ │ │ │ Adds additional message data to the signature. │ │ │ void │ update(int data) │ │ │ │ Adds additional message data to the signature. │ │ │ boolean │ verify() │ │ │ │ Returns true if the signature is valid, false otherwise. │ public ECDSASignatureVerifier(ECPublicKey key, byte[] r, int rOffset, byte[] s, int sOffset) throws CryptoTokenException, Constructs an ECDSASignatureVerifier object using SHA-1 for the digest. 
NOTE: Also, if r and s point to the same buffer, then it is assumed that the length of r and the length of s are both exactly the length of the private key.

Parameters:
key - The public key to use for verification.
r - The r part of the signature to verify. Note that r is an element of the finite field that the elliptic curve is defined over.
rOffset - The offset, or start position, of the signature data within the array r.
s - The s part of the signature to verify. Note that s is an element of the finite field that the elliptic curve is defined over.
sOffset - The offset, or start position, of the signature data within the array s.

Throws:
CryptoTokenException - Thrown if an error occurs with the crypto token or the crypto token is invalid.
CryptoUnsupportedOperationException - Thrown if a call is made to an unsupported operation.

Signed: This element is only accessible by signed applications. If you intend to use this element, please visit http://www.blackberry.com/go/codesigning to obtain a set of code signing keys. Code signing is only required for applications running on BlackBerry smartphones; development on BlackBerry Smartphone Simulators can occur without code signing.
BlackBerry API 3.6.0

public ECDSASignatureVerifier(ECPublicKey key, Digest digest, byte[] r, int rOffset, byte[] s, int sOffset) throws CryptoTokenException, CryptoUnsupportedOperationException

Constructs an ECDSASignatureVerifier object.

NOTE: If the digest has any state information in it when it is given to the signature verifier, this information will be incorporated into the signature.

NOTE: Also, if r and s point to the same buffer, then it is assumed that the length of r and the length of s are both exactly the length of the private key.

Parameters:
key - The public key to use for verification.
digest - The digest to use for verification.
r - The r part of the signature to verify. Note that r is an element of the finite field that the elliptic curve is defined over.
rOffset - The offset, or start position, of the signature data within the array r.
s - The s part of the signature to verify. Note that s is an element of the finite field that the elliptic curve is defined over. sOffset - The offset, or start position, of the signature data within the array s. CryptoTokenException - Thrown if an error occurs with the crypto token or the crypto token is invalid. CryptoUnsupportedOperationException - Thrown if a call is made to an unsupported operation. Signed: This element is only accessible by signed applications. If you intend to use this element, please visit http://www.blackberry.com/go/codesigning to obtain a set of code signing keys. Code signing is only required for applications running on BlackBerry smartphones; development on BlackBerry Smartphone Simulators can occur without code signing. BlackBerry API 3.6.0 public String getAlgorithm() Returns the name of the signing algorithm used, ie "ECDSA/" + Digest.getAlgorithm(). Specified by: getAlgorithm in interface SignatureVerifier A String representing the name of the algorithm. Signed: This element is only accessible by signed applications. If you intend to use this element, please visit http://www.blackberry.com/go/codesigning to obtain a set of code signing keys. Code signing is only required for applications running on BlackBerry smartphones; development on BlackBerry Smartphone Simulators can occur without code signing. BlackBerry API 3.6.0 public void update(int data) Description copied from interface: SignatureVerifier Adds additional message data to the signature. Specified by: update in interface SignatureVerifier data - The byte to be hashed. Signed: This element is only accessible by signed applications. If you intend to use this element, please visit http://www.blackberry.com/go/codesigning to obtain a set of code signing keys. Code signing is only required for applications running on BlackBerry smartphones; development on BlackBerry Smartphone Simulators can occur without code signing. 
BlackBerry API 3.6.0

public void update(byte[] data)

Description copied from interface: SignatureVerifier
Adds additional message data to the signature.
Specified by: update in interface SignatureVerifier
Parameters:
data - A byte array containing the message data to hash.
Signed: This element is only accessible by signed applications. If you intend to use this element, please visit http://www.blackberry.com/go/codesigning to obtain a set of code signing keys. Code signing is only required for applications running on BlackBerry smartphones; development on BlackBerry Smartphone Simulators can occur without code signing.
BlackBerry API 3.6.0

public void update(byte[] data, int offset, int length)

Description copied from interface: SignatureVerifier
Adds additional message data to the signature.
Specified by: update in interface SignatureVerifier
Parameters:
data - The message data to hash.
offset - The offset, or initial position to start reading in the data.
length - The amount of data to read.
Signed: This element is only accessible by signed applications. If you intend to use this element, please visit http://www.blackberry.com/go/codesigning to obtain a set of code signing keys. Code signing is only required for applications running on BlackBerry smartphones; development on BlackBerry Smartphone Simulators can occur without code signing.
BlackBerry API 3.6.0

public boolean verify() throws CryptoTokenException, CryptoUnsupportedOperationException

Description copied from interface: SignatureVerifier
Returns true if the signature is valid, false otherwise.
Specified by: verify in interface SignatureVerifier
Returns:
A boolean that returns true if the signature is valid, false otherwise.
Throws:
CryptoTokenException - Thrown when a problem occurs with a crypto token or the crypto token is invalid.
CryptoUnsupportedOperationException - Thrown when a call is made to an unsupported operation.
Signed: This element is only accessible by signed applications.
If you intend to use this element, please visit http://www.blackberry.com/go/codesigning to obtain a set of code signing keys. Code signing is only required for applications running on BlackBerry smartphones; development on BlackBerry Smartphone Simulators can occur without code signing.
BlackBerry API 3.6.0

Copyright 1999-2011 Research In Motion Limited. 295 Phillip Street, Waterloo, Ontario, Canada, N2L 3W8. All Rights Reserved. Java is a trademark of Oracle America Inc. in the US and other countries.
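The (r, s) pair this class consumes comes from the ECDSA scheme of ANSI X9.62. As a rough illustration of the check that a `verify()` call performs mathematically — this is not the BlackBerry API, and it uses a classroom-sized curve (y² = x³ + 2x + 2 over GF(17), base point G = (5, 1) of order n = 19, a standard textbook example assumed here purely for illustration) rather than a real one:

```python
# Toy ECDSA over a tiny curve, educational only. Real curves use
# parameters hundreds of bits long.
p, a, b = 17, 2, 2        # curve y^2 = x^3 + 2x + 2 over GF(17)
G, n = (5, 1), 19         # base point and its (prime) order

def ec_add(P, Q):
    """Add two curve points; None represents the point at infinity."""
    if P is None: return Q
    if Q is None: return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return None
    if P == Q:
        m = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p
    else:
        m = (y2 - y1) * pow(x2 - x1, -1, p) % p
    x3 = (m * m - x1 - x2) % p
    return (x3, (m * (x1 - x3) - y1) % p)

def ec_mul(k, P):
    """Double-and-add scalar multiplication k*P."""
    R = None
    while k:
        if k & 1:
            R = ec_add(R, P)
        P = ec_add(P, P)
        k >>= 1
    return R

def sign(d, h, k):
    """Produce (r, s) from private key d, message hash h, nonce k."""
    r = ec_mul(k, G)[0] % n
    s = pow(k, -1, n) * (h + r * d) % n
    return r, s

def verify(Q, h, r, s):
    """The X9.62 verification equation, in miniature."""
    if not (0 < r < n and 0 < s < n):
        return False
    w = pow(s, -1, n)
    X = ec_add(ec_mul(h * w % n, G), ec_mul(r * w % n, Q))
    return X is not None and X[0] % n == r

d = 7                 # private key (illustrative)
Q = ec_mul(d, G)      # corresponding public key
r, s = sign(d, h=11, k=10)
print(verify(Q, 11, r, s), verify(Q, 12, r, s))  # → True False
```

Changing a single bit of the hash (h = 12) breaks the check, which is why the class above hashes the message via `update()` before `verify()` is called.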
You may need to convert voltage, amperage and electrical specifications from equipment into KW, KVA and BTU information that can be used to calculate overall power and HVAC requirements. The following section addresses the process of taking basic electrical values and converting them into other types of electrical values. The specification nameplates on most pieces of computer, radio or network equipment usually list electrical values. These values are usually expressed in volts, AMPS, kilovolt-AMPS (KVA), watts or some combination of all of the above. If you are an architect or engineer using equipment nameplate specifications to compute power and cooling requirements, you will find that the total power and cooling values will exceed the true requirements of the equipment. Reason: the nameplate value is designed to ensure that the equipment will energize and run safely. Manufacturers build in a “safety factor” (sometimes called an “engineering cushion”) when developing their nameplate data. Some nameplates specify power requirements that are higher than the equipment will ever need. The most common engineering solution is to utilize only 80% of available capacity and therefore your computed results will “over engineer” the power and cooling equipment by a factor close to 20%. Develop the power and cooling budget using the nameplate specifications inserted into the formulae below and use the resultant documentation as your baseline. Document everything. There will come a day when you will need every amp of power you projected. Power budgets are notoriously consumed in a much shorter time than predicted. Don’t forget to add a “future factor” to your power and cooling budget. Power supplies double in power draw and heat every two to three years. If you don’t include these factors in your budgets, you will consume a 10 year power and cooling budget in three years (this happened to me, I know this is true). 
Three Phase Power
You will notice that all of the equations that refer to three-phase power contain the value 1.73 somewhere in the formula. The value 1.73 is the square root of 3. Intuitively, you can see how this value is applied in the formulae (3 phases, therefore 1 phase = square root of 3).

Computing Watts When Volts and AMPS are Known
POWER (WATTS) = VOLTS x AMPS
For example, a small computer has a nameplate that shows 2.5 amps. Given a normal 120 Volt, 60 Hz power source and the ampere reading from the equipment, make the following calculation:
POWER (WATTS) = 2.5 AMPS x 120 VOLTS = 300 WATTS
Generally: P = I x E, where P = Power (WATTS), I = Current (AMPS) and E = Voltage (VOLTS). So: I = P/E and E = P/I; therefore: 1 watt = 1 ampere x 1 volt

Computing Volt-AMPS (VA)
Same as above. VOLT-AMPS (VA) = VOLTS x AMPS = 300 VA

Computing kilovolt-AMPS (KVA)
KVA stands for "Thousand Volt-Amps". A 2-pole single-phase 208-240 power source requires 2 hot wires from 2 different circuits (referred to as poles) from a power distribution panel.
KILOVOLT-AMPS (KVA) = VOLTS x AMPS / 1000
Using the previous example: 120 x 2.5 = 300 VA; 300 VA / 1000 = .3 KVA

208-240 SINGLE-PHASE (2-POLE SINGLE-PHASE)
Example: An enterprise server with a 4.7 amp rating and requiring a 208-240 power source. Use 220 volts for our calculations.
KILOVOLT-AMPS (KVA) = VOLTS x AMPS / 1000
220 x 4.7 = 1034; 1034 / 1000 = 1.034 KVA

THREE-PHASE
Example: A large storage system loaded with disks. The equipment documentation shows a requirement for a 50-amp 208 VAC receptacle. For this calculation, we will use 20 amps. Do not calculate any value for the plug or receptacle.
KILOVOLT-AMPS (KVA) = VOLTS x AMPS x 1.73 / 1000
208 x 20 x 1.73 = 7,196.8; 7,196.8 / 1000 = 7.196 KVA (Generally, this would be rounded to 7.2 KVA)

Computing Kilowatts
Finding Kilowatts can be more complicated because the formula uses a value for the "power factor". The power factor represents the efficiency in the use of the electricity applied to the system.
This factor can vary widely from 60% to 95% and is never published on the equipment nameplate. It is not often supplied with product information. For purposes of these calculations, we use a power factor of .85. This random number places a slight inaccuracy into the numbers. That's OK, and it gets us very close for the work we need to do. Most UPS equipment will claim a power factor of 1.00. It is common for the power factor to be considered 1.0 for devices less than 3 years old.

Example: We have a medium-sized Intel server that draws 6.0 amps.
KILOWATT (KW) = VOLTS x AMPS x POWER FACTOR / 1000
120 x 6.0 = 720 VA; 720 VA x .85 = 612; 612 / 1000 = .612 KW

208-240 SINGLE-PHASE (2-POLE SINGLE-PHASE)
Example: An enterprise server with a 4.7 amp rating and requiring a 208-240 power source. I'll use 220 volts for our calculations.
KILOWATT (KW) = VOLTS x AMPS x POWER FACTOR x 2 / 1000
220 x 4.7 x 2 = 2068; 2068 x .85 = 1757.8; 1757.8 / 1000 = 1.76 KW

Example: A large storage system loaded with disks. The equipment documentation shows a requirement for a 50-amp 208 VAC receptacle. For this calculation, we will use 22 amps. Do not calculate any value for the plug or receptacle.
KILOWATT (KW) = (VOLTS x AMPS x POWER FACTOR x 1.73) / 1000
208 x 22 x 1.73 = 7,916.48; 7,916.48 x .85 = 6,729.008; 6,729.008 / 1000 = 6.729 KW

To Convert Between KW and KVA
The only difference between KW and KVA is the power factor. Once again, the power factor, unless ascertained from the manufacturer, is an approximation. For this example, we use a power factor of .95. The KVA value is always higher than the value for KW.
KW to KVA: KW / .95 = SAME VALUE EXPRESSED IN KVA
KVA to KW: KVA x .95 = SAME VALUE EXPRESSED IN KW

Computing BTUs
Known standard: 1 KW = 3413 BTUs (or 3.413 KBTUs). If you divide the electrical nameplate BTU value by 3413 you may not get the published KW value. If the BTU information is provided by the manufacturer, use it; otherwise use the above formula.
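The nameplate arithmetic above is easy to wrap into a few helper functions. This is an illustrative sketch (the function names and the `phases` parameter are mine, not from the original article), using the article's rounded 1.73 for the square root of 3 and its default .85 power factor:

```python
def watts(volts, amps):
    """P = E x I."""
    return volts * amps

def kva(volts, amps, phases=1):
    """Kilovolt-amps; three-phase adds the sqrt(3) ~= 1.73 factor."""
    factor = 1.73 if phases == 3 else 1
    return volts * amps * factor / 1000

def kw(volts, amps, pf=0.85, phases=1):
    """Kilowatts = KVA scaled by the power factor."""
    return kva(volts, amps, phases) * pf

def btu_per_hr(kilowatts):
    """Known standard: 1 KW = 3413 BTUs."""
    return kilowatts * 3413

print(watts(120, 2.5))         # → 300.0  (the small-computer example)
print(kva(208, 20, phases=3))  # ~7.197 KVA, the storage-system example
print(kw(208, 22, phases=3))   # ~6.729 KW, matching the text
```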
Shotgun Section
Here are conversions, short and sweet:

Convert Watts to Volts when amps are known: Voltage = Watts / AMPS (E = P / I)
Convert Watts to AMPS when volts are known: AMPS = Watts / Voltage (I = P / E). For 3-phase power, divide by 1.73.
Convert AMPS to Watts when volts are known: Watts = Voltage x Amps (P = E x I). For 3-phase power, multiply by 1.73.
Convert Horsepower to AMPS: HORSEPOWER = (E x I x EFF) / 746; EFFICIENCY = (746 x HP) / (V x A). Multiply Horsepower by 746 W (1 HP = 746 Watts).

Find Circuit Voltage and Phase: 40 HP at 480 V (3-phase)
746 multiplied by 40 = 29840
29840 divided by 480 = 62.2
62.2 divided by 1.73 = 35.95 AMPS

Convert KVA to AMPS: multiply KVA by 1000, then divide by voltage [ (KVA x 1000) / E ]. For 3-phase power, divide by 1.73 [ (KVA x 1000) / (E x 1.73) ].
Convert KW to AMPS: multiply KW by 1000, then divide by voltage and power factor [ (KW x 1000) / (E x PF) ]. For 3-phase power, divide by 1.73 [ (KW x 1000) / (E x PF x 1.73) ].

TO FIND AMPS (I)
Direct Current:
- When HP, E and EFF are known: HP x 746 / (E x EFF)
- When KW and E are known: KW x 1000 / E
Single Phase:
- When P, E and PF are known: P / (E x PF)
- When HP, E, EFF and PF are known: HP x 746 / (E x EFF x PF)
- When KW, E and PF are known: KW x 1000 / (E x PF)
- When KVA and E are known: KVA x 1000 / E
Three Phase:
- When P, E and PF are known: P / (E x PF x 1.73)
- When HP, E, EFF and PF are known: HP x 746 / (E x EFF x PF x 1.73)
- When KW, E and PF are known: KW x 1000 / (E x PF x 1.73)
- When KVA and E are known: KVA x 1000 / (E x 1.73)

TO FIND WATTS (P)
- When E and I are known: I x E
- When R and I are known: R x I^2
- When E and R are known: E^2 / R

TO FIND KILOWATTS (KW)
- Direct Current, when E and I are known: E x I / 1000
- Single Phase, when E, I and PF are known: E x I x PF / 1000
- Three Phase, when E, I and PF are known: E x I x PF x 1.73 / 1000

TO FIND KILOVOLT-AMPS (KVA)
- Single Phase, when E and I are known: E x I / 1000
- Three Phase, when E and I are known: E x I x 1.73 / 1000

TO FIND HORSEPOWER (HP)
- Direct Current, when E, I and EFF are known: E x I x EFF / 746
- Single Phase, when E, I, PF and EFF are known: E x I x PF x EFF / 746
- Three Phase, when E,
I, PF and EFF are known: E x I x PF x EFF x 1.73 / 746

Legend: E = VOLTS; P = WATTS; R = OHMS; I = AMPS; PF = POWER FACTOR; KW = KILOWATTS; KVA = KILOVOLT-AMPS; EFF = EFFICIENCY (expressed as a decimal)

Basic Horsepower Calculations
Horsepower is work done per unit of time. One HP equals 33,000 ft-lb of work per minute. When work is done by a source of torque (T) to produce (M) rotations about an axis, the work done is: radius x 2π x rpm x lb., or 2πTM. When rotation is at the rate N rpm, the HP delivered is:
HP = radius x 2π x rpm x lb. / 33,000 = TN / 5,250
For vertical or hoisting motion:
HP = (W x S) / (33,000 x E)
where:
W = total weight in lbs. to be raised by motor
S = hoisting speed in feet per minute
E = overall mechanical efficiency of hoist and gearing. For purposes of estimating, use E = .65 for the efficiency of hoist and connected gear.

Energy measurement with Joules and Dynes
Energy is measured in joules (watt-seconds) or kilowatt-hours. A power level of one watt that continues for one second equals one joule. The integrated energy from a 100-watt light that runs for 60 seconds equals 6000 joules. 4.18 joules equal 1 calorie, which is enough energy to raise the temperature of one gram of water by one degree Celsius (or Centigrade).
When it comes to energy density (watts per liter or watts per kilogram) it is difficult to beat gasoline. A lead-acid battery is good for about 125 thousand joules per kilogram. Lithium batteries can provide as much as 1.5 million joules per kilogram. Gasoline tends to run about 45 million joules per kilogram.
1 joule is exactly 10^7 ergs.
1 joule is approximately equal to:
* 6.2415 x 10^18 eV (electron volts)
* 0.2390 cal (calorie) (small calories, lower case c)
* 2.3901 x 10^-4 kilocalorie, Calories (food energy, upper case C)
* 9.4782 x 10^-4 BTU (British thermal unit)
* 0.7376 ft-lb (foot-pound force)
* 2.7778 x 10^-7 kilowatt hour
* 2.7778 x 10^-4 watt hour

Units defined in terms of the joule include:
* 1 thermochemical calorie = 4.184 J
* 1 International Table calorie = 4.1868 J
* 1 watt hour = 3600 J
* 1 kilowatt hour = 3.6 x 10^6 J (or 3.6 MJ)
* 1 ton TNT = 4.184 GJ
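The shotgun-section formulas translate directly into code. A small sketch (function names are mine) reproducing the 40 HP at 480 V three-phase example and a couple of the joule equivalences:

```python
SQRT3 = 1.73       # the article's rounded square root of 3
HP_WATTS = 746     # 1 HP = 746 watts

def amps_from_hp(hp, volts, three_phase=False):
    """Current drawn by a motor, ignoring efficiency and power factor."""
    amps = hp * HP_WATTS / volts
    return amps / SQRT3 if three_phase else amps

def joules_from_kwh(kwh):
    """1 kilowatt hour = 3.6 x 10^6 J."""
    return kwh * 3.6e6

# 40 HP at 480 V, 3-phase. The article rounds to 62.2 A first and gets
# 35.95 A; done in one pass the result is ~35.93 A.
print(round(amps_from_hp(40, 480, three_phase=True), 2))  # → 35.93
print(joules_from_kwh(1))                                 # → 3600000.0
```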
Needham, MA Geometry Tutor Find a Needham, MA Geometry Tutor ...I also have tutored the SAT and the LSAT many times. I scored 99th percentile on the SAT (perfect score on current scale) and 90th on the LSAT. In all I have over 20 years of experience with countless students in many different subjects. 29 Subjects: including geometry, reading, calculus, GED ...We are all surrounded by math and science, and a practical knowledge of them makes the world that much less mysterious. I teach and tutor because I can find no greater satisfaction than to instill the excitement I feel about the maths and sciences in others. When I tutor a student, the first thing I do is evaluate what piece of the foundation is missing. 12 Subjects: including geometry, chemistry, physics, calculus ...I make sure that students do as much as possible on their own, and I take on for myself the role as a guide rather than simply an instructor. I use many examples and problems, starting with easy ones and working up to harder ones. As they progress I help them see how these particular examples and problems fit into the big ideas they are studying. 9 Subjects: including geometry, calculus, physics, algebra 1 ...I am licensed to teach math (8-12) and the topics on the SATs are covered in the licensure. I have tutored in SAT math since 2010. I have been using American Sign Language since 2009. 9 Subjects: including geometry, algebra 1, algebra 2, SAT math ...I have been teaching C++ at North Reading High School for 7 years. I am quite proficient at C++ considering my undergraduate degree at Harvard was Computer Science, in the C++ language, and then I worked 2 years as a C++ programmer before becoming a teacher. I am qualified to teach C since it i... 
19 Subjects: including geometry, physics, calculus, SAT math
The Six Degrees of Conference Separation

Note: If you find this boring, then I apologize in advance.

The six degrees of separation is a theoretical concept that says "everyone is on average approximately six steps away, by way of introduction, from any other person on Earth, so that a chain of 'a friend of a friend' statements can be made, on average, to connect any two people in six steps or fewer" (Wikipedia). In other words, if you wanted to connect Nerd the Rebel to say... E. Honda, you could say Nerd the Rebel knows Person A who knows Person B who knows... who knows E. Honda, and you'd only need at most six people in the middle.

Well how about this: Given two BCS football teams A and B and the 2012 schedule, how many games does it take to connect A and B? For example, let's take Ole Miss and Oklahoma. Well, Ole Miss plays Texas who plays Oklahoma, so the answer is two games. But what is the maximum number of games we would need to connect any two teams? I think this is a difficult question in general, but I want to share a few thoughts I have.

We're going to model this question by using vertices to represent teams, and if two teams play each other, they will be connected by an edge. (A couple of years ago, I wrote another post with a similar construct.) As an example, here is what Ole Miss's schedule looks like. (Central Arkansas is in red, because we really only are considering BCS teams, not FCS.)

OFF TOPIC: As a side note, I think this modeling system makes an easy graphic to see a team's success overall. Just replace an edge with an arrow facing the winning team. Here's Ole Miss's 2011
: This idea may not be ideal for basketball and baseball as the number of edges would be much greater. Back on Topic: Again, we want to know what is the maximum number of games required to connect any two teams via schedules. Honestly, I don’t know the exact answer, but I’ll share some of what I have found so far. Let conf(A) mean the conference of team A. Let dis(A,B) be the distance from A to B (the number of games connecting A to B), and let DIS(conf(A),conf(B)) be the maximum dis(C,D) for any team C in conf(A) and any team D in conf(B). Makes sense? If team A and team B are in the same conference (conf(A)=conf(B)), then dis(A,B) is at least 1 and at most 2. For example, dis(Ole Miss, Florida) = 2 because Ole Miss plays Vanderbilt who plays Florida. Suppose A and B are not in the same conference then. Well, this can get kind of tricky. I think for any two conferences conf(A) and conf(B), DIS(conf(A),conf(B)) will be at most six, but I’m not sure. Let’s do an example, what is DIS(SEC,PAC12)? (again, this means the maximum number of games that connects any SEC team to any PAC12 team) Well, dis(LSU, Washington) = 1 because LSU plays Washington (and also dis(Mizzou, Arizona State) = 1). So DIS(SEC, PAC12) is at least 1. And the worse case scenario can be viewed in the picture below. Let’s figure out dis(Auburn, UCLA). Well, Auburn plays LSU who plays Washington who plays USC who plays UCLA. That is the shortest path in this process since Auburn doesn’t play Mizzou (Then the shortest path would be Auburn to Mizzou to Arizona State to UCLA). Hence dis(Auburn, UCLA) = 4. However, it may be quicker to go through a third conference. For instance dis(Alabama, UCLA) = 3 if you take a path through the Big 10. So dis(Auburn, UCLA) might be smaller than 4, and I just am not smart enough to see the path through a third conference. It gets really confusing when you look at the WAC, Sun Belt, etc. 
Anyway, what I do know is that DIS( SEC, PAC 12) is at most 4 with the worst case scenario being something like the Auburn to UCLA example I showed above. I said that I think six is the worst possible for any two teams, and the reason I said that is because not every conference plays every conference. For instance, the Big East and the Big 12 do not have a regular season meeting in 2012. The only conference the SEC avoids is MWC (and a few of the independents). If two conferences don’t play each other directly, then you’ll have to go through a third conference, and that’s very hard to spot just looking at schedules. It’s possible, but it would take some time. If any of you are thinking about writing a mathematical paper, this might be kind of an interesting question to pursue, except you would want to answer the question regardless of what the schedule looked like. Anyway, I hope you found this interesting.
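The dis(A, B) computation described above is a textbook breadth-first search. Here is a sketch over a toy subgraph built only from the games explicitly mentioned in the post (a small slice of the real 2012 schedule, so distances are upper bounds on what the full schedule would give):

```python
from collections import deque

# Games mentioned in the post (undirected edges of the schedule graph).
games = [("Ole Miss", "Texas"), ("Texas", "Oklahoma"),
         ("Ole Miss", "Vanderbilt"), ("Vanderbilt", "Florida"),
         ("Auburn", "LSU"), ("LSU", "Washington"),
         ("Washington", "USC"), ("USC", "UCLA"),
         ("Missouri", "Arizona State"), ("Arizona State", "UCLA")]

graph = {}
for a, b in games:
    graph.setdefault(a, set()).add(b)
    graph.setdefault(b, set()).add(a)

def dis(start, goal):
    """Breadth-first search: fewest games connecting two teams."""
    frontier, seen = deque([(start, 0)]), {start}
    while frontier:
        team, d = frontier.popleft()
        if team == goal:
            return d
        for nxt in graph.get(team, ()):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, d + 1))
    return None  # no schedule path within this subgraph

print(dis("Ole Miss", "Oklahoma"))  # → 2, as in the post
print(dis("Auburn", "UCLA"))        # → 4: Auburn-LSU-Washington-USC-UCLA
print(dis("Auburn", "Missouri"))    # → 6 in this subgraph
```

With the full schedule loaded into `games`, the same BFS would answer the paper-worthy question at the end: the maximum of dis over all pairs (the graph's diameter).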
Tom Cruise height in meters

You asked: Tom Cruise height in meters

Say hello to Evi

Evi is our best selling mobile app that can answer questions about local knowledge, weather, books, music, films, people and places, recipe ideas, shopping and much more. Over the next few months we will be adding all of Evi's power to this site. Until then, to experience all of the power of Evi you can download Evi for free on iOS, Android and Kindle Fire.
Math Made Visual: Creating Images for Understanding Mathematics

This wonderful book is a fitting addition to the Classroom Resource series. Indeed any teacher of mathematics at the high school level or above should have a copy. Let me take that a bit further: anyone with an interest in mathematics should have a copy! While I ended up reading it pretty much straight through, this is a wonderful reference book which can be consulted whenever one is stuck for a way to make a concept come to life or for an activity to get students involved in mathematics.

The first two-thirds of the text consists of a wonderful set of examples of how visualization can aid understanding and inspire exploration. Each section ends with a set of challenges for the reader. These problems would make wonderful projects for pre-service high school teachers — many of them can be implemented in Geometer's Sketchpad. The final section of the book consists of hints for solving these challenges. Sandwiched between these two sections is a short section providing suggestions as to how these ideas can be used in a classroom. While technology would certainly help, many of the hints involve simple paper folding and cutting. Geometer's Sketchpad would certainly suffice to create almost all of the 2-dimensional figures.

Part I consists of 20 short chapters each with several related concepts. These chapters are only loosely related to one another and assume minimal mathematical background on the part of the reader. They seem designed with browsing in mind. If you are stuck for a way to explain a concept or for nice examples, spend a few minutes with Math Made Visual. Here are a few samples. I am not going to attempt a summary as the book has no 'plot' — just a wonderful collection of short stories!

Chapters 1 through 4 demonstrate ways to represent numbers (and their sums and products) using geometric elements (triangular numbers), line segments, areas, and finally volumes.
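The identities these early chapters derive visually — the triangular-number formula among them — also lend themselves to quick numeric spot-checks, which make nice companion exercises alongside the book's picture proofs. A small sketch (not from the book; the loops and bounds are my own choices):

```python
import math
import random

# Triangular numbers: 1 + 2 + ... + n = n(n+1)/2.
for n in range(1, 200):
    assert sum(range(1, n + 1)) == n * (n + 1) // 2

# A later chapter's inequality: if e <= a < b, then a^b > b^a
# (checked here at randomly sampled points, not proved).
random.seed(1)
for _ in range(10_000):
    a = math.e + random.uniform(0.01, 10)
    b = a + random.uniform(0.5, 10)
    assert a ** b > b ** a

# In particular, e^pi beats pi^e.
print(math.e ** math.pi > math.pi ** math.e)  # → True
```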
The formula 1 + 2 + … + n = n(n+1)/2 is derived in several interesting ways, all of them, I suspect, more convincing to students than the standard proof by induction. This formula and others (sum of the odd numbers, sum of the first n squares, etc) are then illustrated using line segments, areas, and volumes in subsequent chapters. As one might expect, the Pythagorean Theorem shows up early and often. There is also a very nice proof of Herron's formula for the area of a triangle. We also encounter proofs of several of the standard trigonometric identities including Ptolemy's Theorem. Chapter 14, Moving Frames, provides some very clever techniques to illustrate such ideas as functional composition, uniform continuity, the Lipschitz Condition and others. There are some very clever inequalities lurking in here as well. For example, which is larger π^e or e^π? This is perhaps the most interesting example of a general result: if e ≤ a < b, then a^b > b^ a. Here is a sketch with all the information you need. (Need a hint? See page 11) In Chapter 8 on Area-Preserving Transformations we meet the ancient Greek problem of computing area by constructing a square of area equal to the area of the figure under study. The classic example here is squaring the circle which we now know to be impossible. It is easy to square a rectangle using the geometric mean and hence it is also possible to square a triangle. On page 35 we encounter the general result — a method which for any convex polygon with n sides constructs with straightedge and compass a convex polygon with n–1 sides with the same area as the original polygon. One last example from page 91: ‘Take a piece of paper and scissors. Can you cut a hole in the paper large enough to walk through?’ The answer, as you might have guessed is yes. Fold the paper in half and make a series of cuts (nearly the entire folded width) alternating from the folded and unfolded side. A review of this sort can't do this book justice. 
To appreciate it you have to see it. The visuals (and the wonderfully clear text which accompanies them) are wonderfully conceived and masterfully executed. This is a book you will find yourself picking up again and again. Richard Wilders is Marie and Bernice Gantzert Professor in the Liberal Arts and Sciences and Professor of Mathematics at North Central College. His primary areas of interest are the history and philosophy of mathematics and of science. He has been a member of the Illinois Section of the Mathematical Association of America for 30 years and is a recipient of its Distinguished Service Award. His email address is rjwilders@noctrl.edu.
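One of the teasers mentioned in the review, whether π^e or e^π is larger, is easy to check numerically. A quick sketch (mine, not the book's):

```python
import math

# The general result from Chapter 14: if e <= a < b, then a^b > b^a.
# The classic instance takes a = e and b = pi, i.e. e^pi > pi^e.
a, b = math.e, math.pi
print(a ** b, b ** a)  # e^pi is approximately 23.14, pi^e approximately 22.46

# One way to see it: ln(x)/x is strictly decreasing for x > e, so
# ln(a)/a > ln(b)/b, which rearranges (for positive a, b) to a^b > b^a.
assert math.log(a) / a > math.log(b) / b
assert a ** b > b ** a
```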
{"url":"http://www.maa.org/publications/maa-reviews/math-made-visual-creating-images-for-understanding-mathematics?device=desktop","timestamp":"2014-04-17T11:38:36Z","content_type":null,"content_length":"100026","record_id":"<urn:uuid:37c0cccc-d77c-43b3-80f0-778135df67e8>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00630-ip-10-147-4-33.ec2.internal.warc.gz"}
Pseudospectra of Nonsymmetric Matrices

• N. Guglielmi, M. Gürbüzbalaban and M.L. Overton, Fast Approximation of the H∞ Norm via Optimization over Spectral Value Sets, SIAM J. Matrix Anal. Appl. 34 (2013), pp. 709-737
• M. Gürbüzbalaban and M.L. Overton, Some Regularity Results for the Pseudospectral Abscissa and Pseudospectral Radius of a Matrix, SIAM J. Optimization 22 (2012), pp. 281-285
• N. Guglielmi and M.L. Overton, Fast Algorithms for the Approximation of the Pseudospectral Abscissa and Pseudospectral Radius of a Matrix, SIAM J. Matrix Anal. Appl. 32 (2011), pp. 1166-1192
• R. Alam, S. Bora, R. Byers and M.L. Overton, Characterization and Construction of the Nearest Defective Matrix via Coalescence of Pseudospectral Components, Linear Algebra and its Applications 435 (2011), pp. 494-513
• J.V. Burke, A.S. Lewis and M.L. Overton, Spectral Conditioning and Pseudospectral Growth, Numerische Mathematik 107 (2007), pp. 27-37
• J.V. Burke, A.S. Lewis and M.L. Overton, Convexity and Lipschitz Behavior of Small Pseudospectra, SIAM J. Matrix Anal. Appl. 29 (2007), pp. 586-595
• M. Gu and M.L. Overton, An Algorithm to Compute Sep_Lambda, SIAM J. Matrix Anal. Appl. 28 (2006), pp. 348-359
• E. Mengi and M.L. Overton, Algorithms for the Computation of the Pseudospectral Radius and the Numerical Radius of a Matrix, IMA Journal of Numerical Analysis 25 (2005), pp. 648-669
• J.V. Burke, A.S. Lewis and M.L. Overton, Pseudospectral Components and the Distance to Uncontrollability, SIAM J. Matrix Anal. Appl. 26 (2004), pp. 350-361
• J.V. Burke, A.S. Lewis and M.L. Overton, A Nonsmooth, Nonconvex Optimization Approach to Robust Stabilization by Static Output Feedback and Low-Order Controllers, in: S. Bittanti and P. Colaneri, eds., Proceedings of Fourth IFAC Symposium on Robust Control Design, Milan, June 2003, pp. 175-181 (Elsevier, 2004)
• J.V. Burke, A.S. Lewis and M.L. Overton, Robust Stability and a Criss-Cross Algorithm for Pseudospectra, IMA Journal of Numerical Analysis 23 (2003), pp. 359-375
• J.V. Burke, A.S. Lewis and M.L. Overton, Optimization and Pseudospectra, with Applications to Robust Stability, SIAM J. Matrix Anal. Appl. 25 (2003), pp. 80-104

Other Research Topics
{"url":"http://www.cs.nyu.edu/cs/faculty/overton/papers/pseudo.html","timestamp":"2014-04-20T03:12:28Z","content_type":null,"content_length":"5429","record_id":"<urn:uuid:e134d50d-6219-4321-99b7-7b43b9f8fb7c>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00026-ip-10-147-4-33.ec2.internal.warc.gz"}
Allston Precalculus Tutor

...My most recent student improved her score by 19 points after two months of tutoring. If you absolutely have to reach a certain score level for a professional qualification or to be admitted into a school you want to attend, I can get you there quickly. Studying philosophy is an especially rewarding activity.
55 Subjects: including precalculus, reading, English, writing

...I have had many statistics students and some have written excellent testimonials on my behalf. I also am approved in biology. I have done a post-doctoral fellowship in neuroscience so I know
47 Subjects: including precalculus, chemistry, calculus, reading

...I am currently a research associate in materials physics at Harvard, have completed a postdoc in geophysics at MIT, and received my doctorate in physics / quantitative biology at Brandeis University. I will travel throughout the area to meet in your home, library, or wherever is comfortable for ...
16 Subjects: including precalculus, calculus, physics, geometry

I have worked over 20 years as an engineer. I understand how science and math are used in industry. I like to help students understand the importance of trying to determine if answers make sense.
10 Subjects: including precalculus, physics, calculus, algebra 2

...I can teach the basics of computer programming. I have experience in C++ as a programmer for 2 years and in my undergraduate major. I also have experience in Java as I have been teaching it this past year to high school students.
19 Subjects: including precalculus, physics, algebra 2, algebra 1
{"url":"http://www.purplemath.com/allston_ma_precalculus_tutors.php","timestamp":"2014-04-20T16:03:48Z","content_type":null,"content_length":"23874","record_id":"<urn:uuid:b24891e8-c748-4be9-969c-14ca65edfea9>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00582-ip-10-147-4-33.ec2.internal.warc.gz"}
Most likely paths to error when estimating the mean of a reflected random walk

Duffy, Ken R. and Meyn, Sean P. (2010) Most likely paths to error when estimating the mean of a reflected random walk. Performance Evaluation. ISSN 0166-5316 (Submitted)

It is known that simulation of the mean position of a Reflected Random Walk (RRW) {W_n} exhibits non-standard behavior, even for light-tailed increment distributions with negative drift. The Large Deviation Principle (LDP) holds for deviations below the mean, but for deviations at the usual speed above the mean the rate function is null. This paper takes a deeper look at this phenomenon. Conditional on a large sample mean, a complete sample path LDP analysis is obtained. Let I denote the rate function for the one-dimensional increment process. If I is coercive, then given a large simulated mean position, under general conditions our results imply that the most likely asymptotic behavior, ∗, of the paths n^{-1} W_{⌊tn⌋} is to be zero apart from on an interval [T_0, T_1] ⊂ [0, 1] and to satisfy the functional equation ∇I
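The object under study, a reflected random walk with negative-drift increments, can be simulated with the Lindley-style recursion W_{k+1} = max(W_k + X_{k+1}, 0). A minimal illustrative sketch (not from the paper; the Gaussian increment distribution and the parameter values are my own choices):

```python
import random

def rrw_sample_mean(n, mu=-0.5, sigma=1.0, seed=1):
    """Simulate n steps of the reflected random walk
    W_{k+1} = max(W_k + X_{k+1}, 0) with i.i.d. Gaussian increments
    X ~ N(mu, sigma^2), mu < 0, and return the empirical mean position
    (1/n) * sum_k W_k -- the quantity whose large deviations the
    paper analyzes."""
    rng = random.Random(seed)
    w = 0.0
    total = 0.0
    for _ in range(n):
        w = max(w + rng.gauss(mu, sigma), 0.0)
        total += w
    return total / n
```

With a negative drift the walk keeps returning to zero, so the empirical mean settles near a small positive stationary value; atypically large sample means are the rare events whose most likely sample paths the abstract describes.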
{"url":"http://eprints.nuim.ie/2160/","timestamp":"2014-04-17T10:24:09Z","content_type":null,"content_length":"22113","record_id":"<urn:uuid:28f7e87a-c6f1-4d94-ad3b-64d379f6c72f>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00378-ip-10-147-4-33.ec2.internal.warc.gz"}
Trig Modelling

May 21st 2008, 02:01 AM #1
A mass on a spring moves up and down with its displacement given by x = 3 - 4cos((2π/5)t), where x is in centimetres and t is in seconds.
a) Find the initial displacement.
b) Find the displacement after 30 seconds.
c) Find the greatest displacement.
Help greatly appreciated in advance.

May 21st 2008, 02:16 AM #2
a) The initial displacement: this is $x(0)=3-4\cos\left(\frac{2\pi}{5}\cdot 0\right)= ?$
b) The displacement after 30 seconds: this is $x(30)=3-4\cos\left(\frac{2\pi}{5}\cdot 30\right)= ?$
c) The greatest displacement: find the maximum of $x(t)$ using its derivative or the properties of the cosine function.
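The three parts the reply leaves as exercises can be checked numerically. A quick sketch (the worked answers are mine, not from the thread):

```python
import math

def x(t):
    """Displacement x(t) = 3 - 4*cos((2*pi/5)*t), in centimetres."""
    return 3 - 4 * math.cos((2 * math.pi / 5) * t)

# a) Initial displacement: cos(0) = 1, so x(0) = 3 - 4 = -1 cm.
print(x(0))
# b) After 30 s: (2*pi/5)*30 = 12*pi, a whole number of periods,
#    so the cosine is again 1 and x(30) = -1 cm.
print(x(30))
# c) Greatest displacement: cos ranges over [-1, 1], so x is largest
#    when the cosine equals -1, giving x = 3 + 4 = 7 cm.
print(max(x(t / 100) for t in range(0, 501)))  # sample one period [0, 5]
```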
{"url":"http://mathhelpforum.com/trigonometry/39128-trig-modelling.html","timestamp":"2014-04-21T16:18:12Z","content_type":null,"content_length":"33654","record_id":"<urn:uuid:be796a49-d06f-4922-bfe5-fd133c779aa3>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00076-ip-10-147-4-33.ec2.internal.warc.gz"}
Math on TV: Numb3rs

You knew it had to be coming. Any self-respecting individual interested in the intersection of math with popular culture must, at some point, discuss the canonical element of said intersection: CBS’s own crime-solving math show, Numb3rs. The use of the 3 is to eliminate any ambiguity surrounding the subject matter of a show called “Numbers.” Since premiering in January of 2005, Numb3rs has been a consistent performer for CBS, in spite of (or because of, depending on your assumptions about the makeup of the show’s audience) its Friday night time slot. For those of you who may have never seen the show, the following synopsis should help give you some perspective:

Body counts, multiple criminal masterminds, and perpetrators who are likely to act again … this is the world of NUMB3RS. FBI agent Don Eppes (Rob Morrow) couldn’t be more different from his younger brother, Charlie (David Krumholtz), a brilliant math professor at a California university. Don deals in hard facts and evidence, whereas Charlie thrives in a world of mathematical probability and equations. But despite their disparate lives and career paths, Don and Charlie often combine their areas of expertise to solve a wide range of challenging crimes in Los Angeles. (Courtesy of the Numb3rs Season 2 DVD Box)

Indeed, mathematicians would be nowhere without their probabilities, or their equations. Startlingly, one could replace the word “equations” by the word “witchcraft” without at all affecting the tone of the above synopsis. Let’s take a look at the man who provides the center for the show, Professor Charlie Eppes.

When you’re this good at math, you get to wear blazers made entirely of gold.

Watch a few episodes of Numb3rs (or read this Wikipedia entry), and you will likely learn the following about this darling mathematician:

1. Charlie graduated from Princeton when he was 16, and is a young math prodigy.
2. Charlie enjoys chess.
3. Charlie loves blackboards.
4. Charlie has a beautiful girlfriend (a former graduate student, no less) named Amita Ramanujan, who may or may not be related to this famous (and awesome) mathematician.
5. Charlie enjoys socializing, and appears to shower regularly.
6. Charlie loves to explain mathematical concepts using real world examples, such as spiders (see below).

Spiders love math.

What are we to make of these observations? While some play into stereotypes of mathematicians, others fly in the face of those very same stereotypes. For the record, let it be known that you don’t have to graduate from college when you’re 16 to be a good mathematician (although it certainly doesn’t hurt). Moreover, not all mathematicians are good at, or even enjoy, chess. By and large, though, we do enjoy a good blackboard.

Points 4 and 5 signify a departure from the math nerd we all know and love. A sexy, brilliant mathematician with an equally sexy, brilliant mathematician? Neither of them even wears glasses! Not to mention the fact that Charlie has more charm than most of the other characters on the show. Is he a mathematician, or a rock star? Or, even better, is he merely a prophet for the future, in which mathematicians and rock stars will be one and the same?

Of course, when you are centering your show on a mathematician, you had better make that mathematician marketable. So in a sense, avoiding certain stereotypes becomes a necessity. Still, having a positive mathematics role model like Charlie Eppes certainly can’t be bad for the math community. In fact, if the below video is any indication, people LOVE mathematicians. Turn up the volume and press play. You won’t be disappointed.

This is not to say that Charlie is the only character who displays certain stereotypical idiosyncrasies. In fact, Charlie’s friend and colleague Larry exhibits behaviors stereotypical of mathematics savants, including a certain social awkwardness, as well as an aversion to any food that is not white. However, since Larry is technically a physicist, I will throw him to the physics camp for interpretation.

Mathematicians may be portrayed relatively favorably, but what about the math itself? Is it legit? Well, it’s hard to say, really, since when all is said and done, there’s not a whole lot of math on display. Certainly there are a lot of scenes with people waving their hands and discussing math, or scenes with chalkboards that have math on them, but these scenes are often placeholders in between scenes with guns or explosions or good-looking government employees (here I use “or” in the inclusive, mathematical sense of the word). This, however, is expected, again because of the mass market nature of the program. Overall, I think you’d find it difficult to do math any better after having watched an episode of Numb3rs.

On the plus side, they do emphasize that the story lines are based on actual cases, so viewers can take comfort in the fact that even out in the real world, math is helping to bring in the bad guys. The show certainly doesn’t do any damage to math’s reputation. By making a protagonist who is smart and has a winning smile, the creators seem to be doing their part to show that math needn’t be as scary as it’s made out to be. While there are plenty of moments where it can be hard to suspend your disbelief (he may be smart, but I don’t think even Doogie Howser can solve the Riemann Hypothesis, contrary to what Prime Suspect would have us believe), as an overall ambassador to the universe of mathematics, the show gets a pass. If nothing else, it teaches America that not all mathematicians are completely socially inept, even if we do live at home and put the moves on a few of our advisees.

Not only is her father potentially famous, but Amita Ramanujan also carried the seed of Ryan from the OC.

David Krumholtz’ charm and math savvy are clearly related to his Jewishness.
{"url":"http://www.mathgoespop.com/2008/07/math-on-tv-numb3rs.html/comment-page-1","timestamp":"2014-04-17T12:33:14Z","content_type":null,"content_length":"81820","record_id":"<urn:uuid:a911bafb-5ac1-4aa4-8a18-bfbfc84e8bad>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00589-ip-10-147-4-33.ec2.internal.warc.gz"}
Consider the functions

Posted by Kayleigh on Wednesday, April 10, 2013 at 12:03pm.

Consider the functions f(x) = (5x+4)/(x+3) and g(x) = (3x-4)/(5-x).
a) Find f(g(x)).
b) Find g(f(x)).
c) Determine whether the functions f and g are inverses of each other.

• Consider the functions - Steve, Wednesday, April 10, 2013 at 12:37pm
f(g) = (5g+4)/(g+3) = (5(3x-4)/(5-x) + 4) / ((3x-4)/(5-x) + 3) = x
g(f) = (3f-4)/(5-f) = (3((5x+4)/(x+3)) - 4) / (5 - ((5x+4)/(x+3))) = x
Since f(g(x)) = g(f(x)) = x, they are inverses.

• Consider the functions - Kayleigh, Wednesday, April 10, 2013 at 12:44pm
What are the values that need to be excluded?

• Consider the functions - Steve, Wednesday, April 10, 2013 at 12:53pm
Whatever makes a denominator zero must be excluded, since division by zero is undefined. So, for f(g), x = 5 is not allowed, since g(5) is not defined. In addition, since f(-3) is not defined, any x where g(x) = -3 must also be excluded. Luckily, there is no such x. Use similar reasoning for g(f).

• Consider the functions - Kayleigh, Wednesday, April 10, 2013 at 12:56pm
So there are no values to be excluded?

• Consider the functions - Steve, Wednesday, April 10, 2013 at 1:04pm
Read what I said. You have to exclude x = 5, because g(5) is not defined. Therefore, f(g(5)) is also not defined.
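Steve's algebra can be spot-checked numerically: if the two functions really are inverses, composing them in either order should return the input wherever both compositions are defined. A quick sketch:

```python
def f(x):
    return (5 * x + 4) / (x + 3)    # defined for x != -3

def g(x):
    return (3 * x - 4) / (5 - x)    # defined for x != 5

# If f and g are inverses, f(g(x)) = g(f(x)) = x on the common domain.
for v in [-10, -1, 0, 1, 2, 10]:
    assert abs(f(g(v)) - v) < 1e-9
    assert abs(g(f(v)) - v) < 1e-9
```

The excluded values are exactly the ones named in the thread: x = 5 (where g is undefined), x = -3 (where f is undefined), and any x sent by the inner function onto one of those points, of which there are none here.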
{"url":"http://www.jiskha.com/display.cgi?id=1365609784","timestamp":"2014-04-19T20:56:32Z","content_type":null,"content_length":"9817","record_id":"<urn:uuid:fa29d89b-5940-436d-9778-7393000ca917>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00005-ip-10-147-4-33.ec2.internal.warc.gz"}
Wood Ridge Calculus Tutors

...I am thus proficient in ordinary differential equations, though I am not competent to tutor partial differential equations. In high school, I obtained exceptional marks in the Word Knowledge, Arithmetic Reasoning, Paragraph Comprehension, Verbal Expression, and Mathematical Knowledge sections of...
32 Subjects: including calculus, physics, statistics, geometry

...Overall, I am a very experienced tutor. I worked with students in a tutoring environment for two and a half years at Montclair State University. I have also done private tutoring for several years now in subjects from algebra to physics to calculus III and differential equations.
7 Subjects: including calculus, physics, algebra 1, astronomy

...This attitude has always served me well in patience with exploring new topics and needing to explore alternate routes of explanation. I do hold high expectations on both parties, and do understand that this is a process that evolves as a deeper relationship is formed. Currently I am employed as the Physical Science and Physics teacher at St.
9 Subjects: including calculus, physics, algebra 1, algebra 2

...TEACHING STYLE: My teaching style is also very simple: teach to be taught. The ideology behind "teach to be taught" is that students obtain a certain level of subject mastery when they are able to teach the subject to another individual. To implement this teaching style, I would start by explain...
13 Subjects: including calculus, chemistry, physics, algebra 1

...I believe in a positive approach that helps you gain confidence as well as competence. I have a Bachelor's Degree in Math and a Master's Degree in Math Education. I am a certified teacher with three years of experience teaching high school math. I have also been tutoring all levels of math from elementary through college for the past two years.
9 Subjects: including calculus, geometry, algebra 1, algebra 2
{"url":"http://www.algebrahelp.com/Wood_Ridge_calculus_tutors.jsp","timestamp":"2014-04-18T16:40:01Z","content_type":null,"content_length":"25150","record_id":"<urn:uuid:55c30842-8c08-48ca-9a33-2cadcb9a6b44>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00456-ip-10-147-4-33.ec2.internal.warc.gz"}
{"url":"http://openstudy.com/users/antarazri/answered","timestamp":"2014-04-20T21:28:16Z","content_type":null,"content_length":"112899","record_id":"<urn:uuid:fe8adbc3-c361-4e7e-87da-06f36debac6d>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00153-ip-10-147-4-33.ec2.internal.warc.gz"}
2007-2008 UAF Catalog

Developmental Mathematics

Math placement information is in the front of this catalog in the Undergraduate: Applying for Admission section. No student will be permitted to enroll in a course having prerequisites if a grade lower than a C is received in the prerequisite course.

DEVM 050 (3 Credits)
Operations with whole numbers, fractions, decimals, percents and ratios, signed numbers, evaluation of algebraic expressions and evaluation of simple formulas. Metric measurement system and geometric figures. Also available via Independent Learning. (Prerequisites: Appropriate placement test scores.) (3 + 0) Offered Fall, Spring

DEVM 051 (1 Credit) Math Skills Review
Develops and reviews basic mathematical terminology, theory and operations as outlined by the Alaska State Mathematics Standards. Mathematics topics focus on reviewing the six basic "strands" of mathematical content: numeration, measurement, estimation and computation, function and relationship, geometry, and statistics and probability. Approaches to problem solving will emphasize the process of mathematical thinking, communication and reasoning. It is an appropriate course for those preparing for the High School Qualifying exam in Alaska or those needing a review of basic math skills in preparation for a math placement test at UAF. May be repeated for a total of three credits. (1 + 0) Offered As Demand Warrants

DEVM 060 (3 Credits) Elementary Algebra
First-year high school algebra. Evaluating and simplifying algebraic expressions, solving first degree equations and inequalities, integer exponents, polynomials, factoring, rational expressions, equations and graphs of lines. Also available via Independent Learning. (Prerequisite: Grade of C or better in DEVM 050, ABUS 155 or appropriate placement test scores. Prerequisite courses and/or placement exams must be taken within one calendar year prior to commencement of the course.) (3 + 0) Offered Fall, Spring

DEVM 061 (1 Credit) Review of Elementary Algebra
Designed to assist students in reviewing material covered by DEVM 060. Individuals who have not previously taken an elementary algebra course are recommended to enroll in DEVM 060. Independent Learning Only

DEVM 062 (3 Credits) Alternative Approaches to Math: Elementary Algebra
Algebraic topics. Includes operations with polynomial expressions, first- and second-degree equations, graphing, integral and rational exponents, and radicals, using alternative teaching styles. (Prerequisites: Grade of C or better in DEVM 050, or appropriate placement test scores. Prerequisite courses and/or placement exams must be taken within one calendar year prior to commencement of the course.) (3 + 0) Offered Fall, Spring

DEVM 065 (1-3 Credits) Mathematics Skills
Designed to assist students in reviewing and reinforcing course concepts covered by DEVM 050, 060, 062, 105 and 106. Consists of instruction which may include lab instruction, individual student work or group work. Recommended for students who need more time and help to master the material in Developmental Math courses. May be repeated. May be offered as pass/fail or letter grade. (Prerequisite: Placement.) (1-3 + 0) Offered Fall, Spring

DEVM 071 (1 Credit) Review of Intermediate Algebra
Course reviews material covered by DEVM 105. Individuals who have not taken an intermediate algebra course at the high-school level are recommended to enroll in DEVM 105. Independent Learning Only

DEVM 081 (1 Credit) Review of Basic Geometry
High school geometry without formal proofs. Topics include basic definitions, measurement, parallel lines, triangles, polygons, circles, area, solid figures and volume. Available via Independent Learning only. (Prerequisite: DEVM 060.)

DEVM 082 (1 Credit) Hands-On Geometry
Basic concepts and uses of geometry. Emphasis on "hands-on" and applied problems. (Prerequisite: A solid knowledge of arithmetic—no algebra required.) (1 + 0) Offered Fall, Spring

DEVM 105 (3 Credits) Intermediate Algebra
Second-year high school algebra. Operations with rational expressions, radicals, rational exponents, logarithms, inequalities, quadratic equations, linear systems, functions, the Cartesian coordinate system and graphing. To matriculate to MATH 107X from DEVM 105, a grade of B or higher is required. Also available via Independent Learning. (Prerequisite: Grade of C or better in DEVM 060, DEVM 062 or appropriate placement test scores. Prerequisite courses and/or placement exams must be taken within one calendar year prior to commencement of the course.) (3 + 0) Offered Fall, Spring

DEVM 106 (4 Credits) Intensive Intermediate Algebra
Algebraic topics. Includes exponents, radicals, graphing, systems of equations, quadratic equations and inequalities, logarithms and exponentials, and complex numbers, using alternative teaching styles. (Prerequisites: Grade of C or better in DEVM 060, 062, DEVM 105 or appropriate placement test scores. This course satisfies elective credit only. Prerequisite courses and/or placement exams must be taken within one calendar year prior to commencement of the course.) (3 + 0) Offered Fall, Spring
{"url":"http://www.uaf.edu/catalog/catalog_07-08/courses/class/devm.html","timestamp":"2014-04-18T14:37:53Z","content_type":null,"content_length":"11755","record_id":"<urn:uuid:5552b8aa-30d9-4123-80b1-92fcf2d339ea>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00553-ip-10-147-4-33.ec2.internal.warc.gz"}
BrainBashers : Puzzles and Brain Teasers

Here is a snippet of section B of the curious multiple-choice entrance exam into the exclusive BrainBashers puzzle club.

1. The answer to Question 2 is:
A. B   B. A   C. D   D. C

2. The answer to Question 3 is:
A. C   B. D   C. B   D. A

3. The answer to Question 4 is:
A. D   B. A   C. C   D. B

4. The answer to Question 1 is:
A. D   B. C   C. A   D. B

[Ref: ZVGV] © Kevin Stone

Answers: 1. D, 2. C, 3. B, 4. A

The easiest way to solve this puzzle is to use Q1 to check the logic.

If Q1's answer was A, this tells us that Q2's answer is B, which tells us Q3's is D, which tells us Q4's answer is B, which tells us Q1's answer is C. A contradiction.

If Q1's answer was B, this tells us that Q2's answer is A, which tells us Q3's is C, which tells us Q4's answer is C, which tells us Q1's answer is A. A contradiction.

If Q1's answer was C, this tells us that Q2's answer is D, which tells us Q3's is A, which tells us Q4's answer is D, which tells us Q1's answer is B. A contradiction.

If Q1's answer was D, this tells us that Q2's answer is C, which tells us Q3's is B, which tells us Q4's answer is A, which tells us Q1's answer is D. Which is consistent, and is therefore the correct answer.

Back to the puzzles...
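The case analysis in the solution is small enough to brute-force. A sketch that encodes the four questions and checks every assignment of letters (the encoding is mine, not from the puzzle page):

```python
from itertools import product

# options[i][letter] = (index of the question being described,
#                       the answer letter that option asserts)
options = [
    {'A': (1, 'B'), 'B': (1, 'A'), 'C': (1, 'D'), 'D': (1, 'C')},  # Q1 asserts Q2
    {'A': (2, 'C'), 'B': (2, 'D'), 'C': (2, 'B'), 'D': (2, 'A')},  # Q2 asserts Q3
    {'A': (3, 'D'), 'B': (3, 'A'), 'C': (3, 'C'), 'D': (3, 'B')},  # Q3 asserts Q4
    {'A': (0, 'D'), 'B': (0, 'C'), 'C': (0, 'A'), 'D': (0, 'B')},  # Q4 asserts Q1
]

def consistent(combo):
    """A combo of four letters is consistent when every chosen
    option's assertion about another question actually holds."""
    for i, letter in enumerate(combo):
        target, asserted = options[i][letter]
        if combo[target] != asserted:
            return False
    return True

solutions = [c for c in product('ABCD', repeat=4) if consistent(c)]
print(solutions)  # only ('D', 'C', 'B', 'A') survives
```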
{"url":"http://www.brainbashers.com/showanswer.asp?ref=ZVGV","timestamp":"2014-04-21T14:46:46Z","content_type":null,"content_length":"7502","record_id":"<urn:uuid:b8553b48-aa49-4c72-9139-88fc0533529d>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00361-ip-10-147-4-33.ec2.internal.warc.gz"}
Maximum likelihood blind equalization
Results 1 - 10 of 19

- Proc. IEEE, 1998. Cited by 79 (2 self).
"... this paper is to review developments in blind channel identification and estimation within the estimation theoretical framework. We have paid special attention to the issue of identifiability, which is at the center of all blind channel estimation problems. Various existing algorithms are classified into the moment-based and the maximum likelihood (ML) methods. We further divide these algorithms based on the modeling of the input signal. If input is assumed to be random with prescribed statistics (or distributions), the corresponding blind channel estimation schemes are considered to be statistical. On the other hand, if the source does not have a statistical description, or although the source is random but the statistical properties of the source are not exploited, the corresponding estimation algorithms are classified as deterministic. Fig. 2 shows a map for different classes of algorithms and the organization of the paper."

- 1996. Cited by 74 (0 self).
"A general tool for multichannel and multipath problems is given in FIR matrix algebra. With Finite Impulse Response (FIR) filters (or polynomials) assuming the role played by complex scalars in traditional matrix algebra, we adapt standard eigenvalue routines, factorizations, decompositions, and matrix algorithms for use in multichannel/multipath problems. Using abstract algebra/group theoretic concepts, information theoretic principles, and the Bussgang property, methods of single channel filtering and source separation of multipath mixtures are merged into a general FIR matrix framework. Techniques developed for equalization may be applied to source separation and vice versa. Potential applications of these results lie in neural networks with feed-forward memory connections, wideband array processing, and in problems with a multi-input, multi-output network having channels between each source and sensor, such as source separation. Particular applications of FIR polynomial matrix alg..."

- 1997. Cited by 22 (1 self).
"Blind system identification is a fundamental signal processing technology aimed to retrieve unknown information of a system from its output only. This technology has a wide range of possible applications such as mobile communications, speech reverberation cancellation and blind image restoration. This paper reviews a number of recently developed concepts and techniques for blind system identification, which include the concept of blind system identifiability in a deterministic framework, the blind techniques of maximum likelihood and subspace for estimating the system's impulse response, and other techniques for direct estimation of the system input. Keywords: System identification, Blind techniques, Multichannels, Equalization, Source separation. This work has been supported by the Australian Research Council and the Australian Cooperative Research Center for Sensor Signal and Information Processing. (Currently with Motorola Australian Research Centre, 12 Lord Street, Botany 2019, ...)"

- IEEE Trans. Signal Processing, 1997. Cited by 14 (1 self).
"In this correspondence, we propose applying the hidden Markov models (HMM) theory to the problem of blind channel estimation and data detection. The Baum-Welch (BW) algorithm, which is able to estimate all the parameters of the model, is enriched by introducing some linear constraints emerging from a linear FIR hypothesis on the channel. Additionally, a version of the algorithm that is suitable for time-varying channels is also presented. Performance is analyzed in a GSM environment using standard test channels and is found to be close to that obtained with a nonblind receiver."

- IEEE Trans. Signal Processing, 1998. Cited by 9 (5 self).
"Abstract — A batch blind equalization scheme is developed based on maximum likelihood joint channel and data estimation. In this scheme, the joint maximum likelihood optimization is decomposed into a two-level optimization loop. A micro genetic algorithm is employed at the upper level to identify the unknown channel model, and the Viterbi algorithm is used at the lower level to provide the maximum likelihood sequence estimation of the transmitted data sequence. As is demonstrated in simulation, the proposed method is much more accurate compared with existing algorithms for joint channel and data estimation. Index Terms — Blind equalization, genetic algorithms, maximum likelihood estimation."

- 2000. Cited by 5 (4 self).
"This paper presents a new algorithm, based on an EM (Expectation-Maximization) formulation, for ML (maximum likelihood) sequence estimation over unknown ISI (inter-symbol interference) channels with random channel coefficients which have a Gauss-Markov fast time-varying distribution. By using the EM formulation to marginalize over the channel coefficient distribution, maximum-likelihood estimates of the transmitted sequence are obtained. This EM algorithm is shown to perform better, in terms of BER, than existing algorithms which perform jointly-optimal sequence and channel estimation, or which do not take into account fast time-varying channel effects. I. Introduction: Maximum Likelihood Sequence Estimation (MLSE) over an FIR channel with unknown coefficients can be formulated as either a channel estimation problem (followed by MLSE), a joint sequence/channel estimation problem, or a direct sequence estimation problem. Since MLSE is our primary concern, we consider only the latter two..."

- IEEE Trans. Inform. Theory, 2003. "..."
The problem of sequence detection in frequency-non-selective/time-selective fading channels, when channel state information (CSI) is not available at the transmitter and receiver, is considered in this paper. The traditional belief is that exact maximum likelihood sequence detection (MLSD) of an ..." Cited by 5 (1 self) Add to MetaCart The problem of sequence detection in frequency-non-selective/time-selective fading channels, when channel state information (CSI) is not available at the transmitter and receiver, is considered in this paper. The traditional belief is that exact maximum likelihood sequence detection (MLSD) of an uncoded sequence over this channel has exponential complexity in the channel coherence time. - In Proc. 29th Asilomar Conference on Signals, Systems and Computers, volume II , 1995 "... We consider the problem of blind equalization of linear communication channels. Some recent results indicate that the performance of Bussgang blind equalization algorithms can be improved by using diversity such as fractional spacing or antenna array reception. In this work we examine the performanc ..." Cited by 4 (1 self) Add to MetaCart We consider the problem of blind equalization of linear communication channels. Some recent results indicate that the performance of Bussgang blind equalization algorithms can be improved by using diversity such as fractional spacing or antenna array reception. In this work we examine the performance of such algorithms (especially of the popular CMA 2-2), when used in a decision-feedback setup. It turns out that such a simple structure may help avoiding the common problems of "zeros on the unit circle" (symbol-rate case) and of "zeros in common" (fractionally-spaced case). Theoretical analysis as well as computer simulations are provided in order to demonstrate this fact. 1 Introduction Blind equalization (BE) and channel identification is a field that has been receiving increased interest during the last years. 
Among several classes of methods, in this paper we are interested in the so-called Bussgang BE methods. These methods use a classical linear equalization scheme: the channel ... - in Proc. IEEE Statist. Signal Array Process. Workshop, Corfú , 2001 "... In this paper, the theory of hidden Markov models (HMM) is applied to the problem of blind (without training sequences) channel estimation and data detection. Within a HMM framework, the Baum--Welch (BW) identification algorithm is frequently used to find out maximum-likelihood (ML) estimates of the ..." Cited by 4 (1 self) Add to MetaCart In this paper, the theory of hidden Markov models (HMM) is applied to the problem of blind (without training sequences) channel estimation and data detection. Within a HMM framework, the Baum--Welch (BW) identification algorithm is frequently used to find out maximum-likelihood (ML) estimates of the corresponding model. However, such a procedure assumes the model (i.e., the channel response) to be static throughout the observation sequence. By means of introducing a parametric model for time-varying channel responses, a version of the algorithm, which is more appropriate for mobile channels [time-dependent Baum-Welch (TDBW)] is derived. Aiming to compare algorithm behavior, a set of computer simulations for a GSM scenario is provided. Results indicate that, in comparison to other Baum--Welch (BW) versions of the algorithm, the TDBW approach attains a remarkable enhancement in performance. For that purpose, only a moderate increase in computational complexity is needed. , 1999 "... This paper presents a new approach using EM (ExpectationMaximization) algorithms for ML (maximum likelihood) sequence estimation over unknown ISI (inter-symbol interference) channels with random channel coefficients. By using the EM formulation to marginalize over the channel coefficient distributio ..." 
Cited by 4 (2 self) Add to MetaCart This paper presents a new approach using EM (ExpectationMaximization) algorithms for ML (maximum likelihood) sequence estimation over unknown ISI (inter-symbol interference) channels with random channel coefficients. By using the EM formulation to marginalize over the channel coefficient distribution, maximum-likelihood estimates of the transmitted sequenceare obtained. The EM algorithms are shown to perform better, in terms of BER, than existing algorithms which perform jointly-optimal sequence and channel estimation. 1 Introduction In this paper we address the problem of estimation of a sequence of digital communication symbols transmitted over random ISI channels. For a known FIR channel, it is well known that the Viterbi algorithm [1] solves the ML sequence estimation (MLSE) problem. Although the computationally efficiency of this algorithm has lead to its broad use, the Viterbi algorithm requires knowledge of the channel (e.g. its impulse response) [2]. Here we are concerned wit...
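The joint channel/data decomposition described in the 1998 batch blind equalization abstract above (channel hypothesis at the upper level, sequence estimate at the lower level) can be sketched in miniature. This is an illustrative toy, not any paper's algorithm: the genetic algorithm is replaced by exhaustive search over short BPSK sequences, the Viterbi step by a closed-form least-squares channel fit per candidate, and all parameters (channel taps, noise level, block length) are invented for the example.

```python
import itertools
import numpy as np

rng = np.random.default_rng(1)

# Unknown 2-tap channel and unknown BPSK data, observed only through r.
h_true = np.array([1.0, 0.5])
a_true = rng.choice([-1.0, 1.0], size=8)
r = np.convolve(a_true, h_true)[:8] + 0.05 * rng.standard_normal(8)

# Upper level: search candidate data sequences (exhaustive here, GA in the paper).
# Lower level: for each candidate, the ML channel estimate is a least-squares fit.
best_cost, a_hat, h_hat = np.inf, None, None
for bits in itertools.product([-1.0, 1.0], repeat=8):
    a = np.array(bits)
    A = np.column_stack([a, np.concatenate(([0.0], a[:-1]))])  # convolution matrix
    h, *_ = np.linalg.lstsq(A, r, rcond=None)
    cost = np.sum((r - A @ h) ** 2)
    if cost < best_cost:
        best_cost, a_hat, h_hat = cost, a, h

# Blind ML is ambiguous up to a global sign: (a, h) and (-a, -h) fit equally well.
assert np.array_equal(a_hat, a_true) or np.array_equal(a_hat, -a_true)
```

The final assertion accepts either sign because a blind receiver cannot distinguish the pair (a, h) from (-a, -h); real systems resolve this with differential encoding or a short pilot.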
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=1839310","timestamp":"2014-04-19T12:37:56Z","content_type":null,"content_length":"39709","record_id":"<urn:uuid:bc7a5160-4911-4833-92cb-cf596825c490>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00590-ip-10-147-4-33.ec2.internal.warc.gz"}
On a multi-part question, when do you round to significant figures? hi silverpuma, Definitely leave the rounding until the end, as each round-off will introduce an error in the calculation. If you're asked for an answer at each stage, calculate it and round it for answer purposes, but use the most accurate value in the next calculation. UK exam boards expect you to round off sensibly. If, for example, you're asked for the diameter of a tree given its circumference, it would be silly to state the answer as 2.4567 m: (i) you couldn't measure the circumference that accurately in the first place, and (ii) tree trunks aren't perfectly circular anyway. Here's a simple example that shows why 'over-accurate' answers are not appropriate. eg. What is the area of a rectangle 3.6 m by 4.2 m? The calculator gives the answer as 15.12. But are we justified in giving that as the answer? A value of 3.6 suggests the measuring was only done to the nearest 0.1, so the true value could be anything from 3.55 to 3.65. Similarly, the 4.2 could be anything from 4.15 to 4.25. Look what you get if you take the two smallest values and then the two largest values: 3.55 x 4.15 = 14.7325 and 3.65 x 4.25 = 15.5125. So the 'true' answer could lie anywhere in the range 14.7325 to 15.5125. That would suggest that a sensible answer after rounding would be 15 m^2. Sometimes, at GCSE, they ask for all the calculator figures (to show you have used the calculator correctly) and then expect you to decide on a sensible rounded figure. I used to advise my students to write down the calculator answer in the working and then put the rounded answer in the answer space. That way they were 'hedging their bets' about what was wanted. Obviously, if you're asked for, say, 2 dp, then anything else would be wrong. On the exam board pages you can download mark schemes that show how markers are instructed to deal with answers. You cannot teach a man anything; you can only help him find it within himself..........Galileo Galilei
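The interval argument in the rectangle example can be reproduced in a few lines (illustrative only, using the numbers from the post):

```python
# The quoted sides, 3.6 m and 4.2 m, are only known to the nearest 0.1 m,
# so the true sides lie in [3.55, 3.65] and [4.15, 4.25].
low = 3.55 * 4.15    # smallest possible area
high = 3.65 * 4.25   # largest possible area
print(round(low, 4), round(high, 4))   # 14.7325 15.5125

# The calculator value 15.12 therefore carries false precision;
# rounding to the digits the data can support gives 15 m^2.
area = round(3.6 * 4.2)
assert area == 15
```

Since the true area could be anywhere between roughly 14.73 and 15.51 m^2, only the leading figure "15" is trustworthy, which is exactly the rounding the post recommends.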
{"url":"http://www.mathisfunforum.com/viewtopic.php?pid=299218","timestamp":"2014-04-19T17:03:30Z","content_type":null,"content_length":"17871","record_id":"<urn:uuid:c7dcda9d-7361-41de-b26b-8272a34dca5f>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00460-ip-10-147-4-33.ec2.internal.warc.gz"}
Web Resources

The Pythagorean Theorem TV
This website shows videos to help students and teachers understand the Pythagorean Theorem.

Pythagorean Theorem Rap Song
This video puts the Pythagorean theorem in real-world terms, helping students visualize examples in which the formula comes into play. The professionally produced music and video is highly engaging for students. For teachers, there are links to song lyrics, activities, and questions. Many Flocabulary videos are only accessible through a paid subscription. *This particular video is available for free and without the need for an account sign-up.
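As a quick companion to the video resources, the theorem itself reduces to a one-line computation; the check below is a generic illustration, independent of the linked materials:

```python
import math

# Check the 5-12-13 right triangle: a^2 + b^2 = c^2.
a, b = 5, 12
c = math.sqrt(a**2 + b**2)
assert c == 13.0

# math.hypot performs the same computation with better numerical behaviour.
assert math.hypot(a, b) == 13.0
```

The same formula gives the straight-line distance between two points, which is the most common place students meet the theorem outside of geometry class.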
{"url":"http://alex.state.al.us/weblinks_category.php?stdID=54274","timestamp":"2014-04-16T16:06:16Z","content_type":null,"content_length":"21741","record_id":"<urn:uuid:57433bca-1efd-4cce-a6cc-d033eeb431b6>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00074-ip-10-147-4-33.ec2.internal.warc.gz"}
19 search hits

Context Matters: The Illusive Simplicity of Macaque V1 Receptive Fields (2012)
Robert Haslinger, Gordon Pipa, Bruss Lima, Wolf Singer, Emery N. Brown, Sergio Neuenschwander
Even in V1, where neurons have well characterized classical receptive fields (CRFs), it has been difficult to deduce which features of natural scenes stimuli they actually respond to. Forward models based upon CRF stimuli have had limited success in predicting the response of V1 neurons to natural scenes. As natural scenes exhibit complex spatial and temporal correlations, this could be due to surround effects that modulate the sensitivity of the CRF. Here, instead of attempting a forward model, we quantify the importance of the natural scenes surround for awake macaque monkeys by modeling it non-parametrically. We also quantify the influence of two forms of trial to trial variability. The first is related to the neuron's own spike history. The second is related to ongoing mean field population activity reflected by the local field potential (LFP). We find that the surround produces strong temporal modulations in the firing rate that can be both suppressive and facilitative. Further, the LFP is found to induce a precise timing in spikes, which tend to be temporally localized on sharp LFP transients in the gamma frequency range. Using the pseudo R2 as a measure of model fit, we find that during natural scene viewing the CRF dominates, accounting for 60% of the fit, but that taken collectively the surround, spike history and LFP are almost as important, accounting for 40%. However, overall only a small proportion of V1 spiking statistics could be explained (R2~5%), even when the full stimulus, spike history and LFP were taken into account. This suggests that under natural scene conditions, the dominant influence on V1 neurons is not the stimulus, nor the mean field dynamics of the LFP, but the complex, incoherent dynamics of the network in which neurons are embedded.
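The pseudo-R2 used above as a measure of model fit can be illustrated with a deviance-based version for Poisson spike counts. This is a hedged sketch: the data below are synthetic stand-ins (not the paper's recordings), the "informed model" is simply the generating rate, and the 1-minus-deviance-ratio definition is one common variant, not necessarily the exact statistic of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def poisson_deviance(y, mu):
    # Deviance 2 * sum[ y*log(y/mu) - (y - mu) ], with y*log(y/mu) = 0 when y = 0.
    with np.errstate(divide="ignore", invalid="ignore"):
        term = np.where(y > 0, y * np.log(y / mu), 0.0)
    return 2.0 * np.sum(term - (y - mu))

# Synthetic firing rates and Poisson spike counts (stand-ins for real data).
rate = 1.0 + 19.0 * rng.random(500)
counts = rng.poisson(rate)

d_model = poisson_deviance(counts, rate)                          # informed model
d_null = poisson_deviance(counts, np.full(500, counts.mean()))    # constant-rate model
pseudo_r2 = 1.0 - d_model / d_null   # 1 = perfect fit, 0 = no better than a flat rate
assert 0.0 < pseudo_r2 < 1.0
```

A value near zero, like the ~5% reported in the abstract, means the model captures only a small fraction of the variability beyond a constant firing rate.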
Bivariate and Multivariate NeuroXidence: A Robust and Reliable Method to Detect Modulations of Spike-Spike Synchronization Across Experimental Conditions (2011)
Wu Wei, Diek W. Wheeler, Gordon Pipa
Synchronous neuronal firing has been proposed as a potential neuronal code. To determine whether synchronous firing is really involved in different forms of information processing, one needs to directly compare the amount of synchronous firing due to various factors, such as different experimental or behavioral conditions. In order to address this issue, we present an extended version of the previously published method, NeuroXidence. The improved method incorporates bi- and multivariate testing to determine whether different factors result in synchronous firing occurring above the chance level. We demonstrate through the use of simulated data sets that bi- and multivariate NeuroXidence reliably and robustly detects joint-spike events across different factors.

Extraction of network topology from multi-electrode recordings: is there a small-world effect? (2011)
Felipe Gerhard, Gordon Pipa, Bruss Lima, Sergio Neuenschwander, Wulfram Gerstner
The simultaneous recording of the activity of many neurons poses challenges for multivariate data analysis. Here, we propose a general scheme of reconstruction of the functional network from spike train recordings. Effective, causal interactions are estimated by fitting generalized linear models on the neural responses, incorporating effects of the neurons' self-history, of input from other neurons in the recorded network and of modulation by an external stimulus. The coupling terms arising from synaptic input can be transformed by thresholding into a binary connectivity matrix which is directed. Each link between two neurons represents a causal influence from one neuron to the other, given the observation of all other neurons from the population. The resulting graph is analyzed with respect to small-world and scale-free properties using quantitative measures for directed networks. Such graph-theoretic analyses have been performed on many complex dynamic networks, including the connectivity structure between different brain areas. Only few studies have attempted to look at the structure of cortical neural networks on the level of individual neurons. Here, using multi-electrode recordings from the visual system of the awake monkey, we find that cortical networks lack scale-free behavior, but show a small but significant small-world structure. Assuming a simple distance-dependent probabilistic wiring between neurons, we find that this connectivity structure can account for all of the networks' observed small-world-ness. Moreover, for multi-electrode recordings the sampling of neurons is not uniform across the population. We show that the small-world-ness obtained by such localized sub-sampling overestimates the strength of the true small-world structure of the network. This bias is likely to be present in all previous experiments based on multi-electrode recordings.

Effect of the topology and delayed interactions in neuronal networks synchronization (2011)
Toni Pérez, Guadalupe C. Garcia, Víctor M. Eguíluz, Raúl Vicente, Gordon Pipa, Claudio R. Mirasso
As important as the intrinsic properties of an individual nervous cell stands the network of neurons in which it is embedded and by virtue of which it acquires great part of its responsiveness and functionality. In this study we have explored how the topological properties and conduction delays of several classes of neural networks affect the capacity of their constituent cells to establish well-defined temporal relations among firing of their action potentials. This ability of a population of neurons to produce and maintain a millisecond-precise coordinated firing (either evoked by external stimuli or internally generated) is central to neural codes exploiting precise spike timing for the representation and communication of information. Our results, based on extensive simulations of conductance-based neurons in an oscillatory regime, indicate that only certain network topologies allow for coordinated firing at a local and long-range scale simultaneously. Besides network architecture, axonal conduction delays are also observed to be another important factor in the generation of coherent spiking. We report that such communication latencies not only set the phase difference between the oscillatory activity of remote neural populations but determine whether the interconnected cells can settle into any coherent firing at all. In this context, we have also investigated how the balance between the network's synchronizing effects and the dispersive drift caused by inhomogeneities in natural firing frequencies across neurons is resolved. Finally, we show that the observed roles of conduction delays and frequency dispersion are not particular to canonical networks: experimentally measured anatomical networks, such as the macaque cortical network, can display the same type of behavior.

Spike train auto-structure impacts post-synaptic firing and timing-based plasticity (2011)
Bertram Scheller, Marta Castellano, Raul Vicente, Gordon Pipa
Cortical neurons are typically driven by several thousand synapses. The precise spatiotemporal pattern formed by these inputs can modulate the response of a post-synaptic cell. In this work, we explore how the temporal structure of pre-synaptic inhibitory and excitatory inputs impacts the post-synaptic firing of a conductance-based integrate-and-fire neuron. Both the excitatory and inhibitory input was modeled by renewal gamma processes with varying shape factors for modeling regular and temporally random Poisson activity. We demonstrate that the temporal structure of mutually independent inputs affects the post-synaptic firing, while the strength of the effect depends on the firing rates of both the excitatory and inhibitory inputs. In a second step, we explore the effect of the temporal structure of mutually independent inputs on a simple version of Hebbian learning, i.e., hard-bound spike-timing-dependent plasticity. We explore both the equilibrium weight distribution and the speed of the transient weight dynamics for different mutually independent gamma processes. We find that both the equilibrium distribution of the synaptic weights and the speed of synaptic changes are modulated by the temporal structure of the input. Finally, we highlight that the sensitivity of both the post-synaptic firing and spike-timing-dependent plasticity to the auto-structure of a neuron's input could be used to modulate the learning rate of synaptic modification.

Goodness-of-fit tests for neural population models: the multivariate time-rescaling theorem (2010)
Felipe Gerhard, Robert Haslinger, Gordon Pipa
Poster presentation from the Nineteenth Annual Computational Neuroscience Meeting: CNS*2010, San Antonio, TX, USA, 24-30 July 2010. Statistical models of neural activity are at the core of the field of modern computational neuroscience. The activity of single neurons has been modeled to successfully explain dependencies of neural dynamics on its own spiking history, on external stimuli or on other covariates [1]. Recently, there has been a growing interest in modeling the spiking activity of a population of simultaneously recorded neurons to study the effects of correlations and functional connectivity on neural information processing (existing models include generalized linear models [2,3] or maximum-entropy approaches [4]). For point-process-based models of single neurons, the time-rescaling theorem has proven to be a useful toolbox to assess goodness-of-fit. In its univariate form, the time-rescaling theorem states that if the conditional intensity function of a point process is known, then its inter-spike intervals can be transformed or "rescaled" so that they are independent and exponentially distributed [5]. However, the theorem in its original form lacks sensitivity to detect even strong dependencies between neurons. Here, we present how the theorem can be extended to apply to neural population models and we provide a step-by-step procedure to perform the statistical tests. We then apply both the univariate and multivariate tests to simplified toy models, but also to more complicated many-neuron models and to neuronal populations recorded in V1 of awake monkey during natural scenes stimulation. We demonstrate that important features of the population activity can only be detected using the multivariate extension of the test. ...

Transfer entropy - a model-free measure of effective connectivity for the neurosciences (2010)
Raúl Vicente, Michael Wibral, Michael Lindner, Gordon Pipa
Understanding causal relationships, or effective connectivity, between parts of the brain is of utmost importance because a large part of the brain's activity is thought to be internally generated and, hence, quantifying stimulus-response relationships alone does not fully describe brain dynamics. Past efforts to determine effective connectivity mostly relied on model-based approaches such as Granger causality or dynamic causal modeling. Transfer entropy (TE) is an alternative measure of effective connectivity based on information theory. TE does not require a model of the interaction and is inherently non-linear. We investigated the applicability of TE as a metric in a test for effective connectivity to electrophysiological data, based on simulations and magnetoencephalography (MEG) recordings in a simple motor task. In particular, we demonstrate that TE improved the detectability of effective connectivity for non-linear interactions and for sensor-level MEG signals, where linear methods are hampered by signal cross-talk due to volume conduction.

Performance- and stimulus-dependent oscillations in monkey prefrontal cortex during short-term memory (2009)
Gordon Pipa, Ellen Städtler, Eugenio F. Rodriguez, James A. Waltz, Lars Muckli, Wolf Singer, Rainer Goebel, Matthias H. J. Munk
Short-term memory requires the coordination of sub-processes like encoding, retention, retrieval and comparison of stored material to subsequent input. Neuronal oscillations have an inherent time structure, can effectively coordinate synaptic integration of large neuron populations and could therefore organize and integrate distributed sub-processes in time and space. We observed field potential oscillations (14-95 Hz) in ventral prefrontal cortex of monkeys performing a visual memory task. Stimulus-selective and performance-dependent oscillations occurred simultaneously at 65-95 Hz and 14-50 Hz, the latter being phase-locked throughout memory maintenance. We propose that prefrontal oscillatory activity may be instrumental for the dynamical integration of local and global neuronal processes underlying short-term memory.

NeuroXidence: reliable and efficient analysis of an excess or deficiency of joint-spike events (2009)
Gordon Pipa, Diek W. Wheeler, Wolf Singer, Danko Nikolić
Poster presentation: We present a non-parametric and computationally efficient method named NeuroXidence (see http://www.NeuroXidence.com ) that detects coordinated firing within a group of two or more neurons and tests whether the observed level of coordinated firing is significantly different from that expected by chance. NeuroXidence [1] considers the full auto-structure of the data, including the changes in the rate responses and the history dependencies in the spiking activity. We demonstrate that NeuroXidence can identify epochs with significant spike synchronisation even if these coincide with strong and fast rate modulations. We also show that the method accounts for trial-by-trial variability in the rate responses and their latencies, and that it can be applied to short data windows lasting only tens of milliseconds. Based on simulated data, we compare the performance of NeuroXidence with the UE method [2,3] and cross-correlation analysis. An application of NeuroXidence to 42 single units (SU) recorded in area 17 of an anesthetized cat revealed significant coincident events of high complexities, involving firing of up to 8 SUs simultaneously (5 ms window). The results were highly consistent with those obtained by traditional pair-wise measures based on cross-correlation: neuronal synchrony was strongest in stimulation conditions in which the orientation of the sinusoidal grating matched the preferred orientation of most of the SUs included in the analysis, and was weakest when the neurons were stimulated least optimally. Interestingly, events of higher complexities showed stronger stimulus-specific modulation than pair-wise interactions. The results provide strong evidence for stimulus-specific synchronous firing and, therefore, support the temporal coding hypothesis in visual cortex. ...

Using transfer entropy to measure the patterns of information flow through cortex: application to MEG recordings from a visual Simon task (2009)
Michael Wibral, Raul Vicente, Jochen Triesch, Gordon Pipa
Poster presentation: Functional connectivity of the brain describes the network of correlated activities of different brain areas. However, correlation does not imply causality, and most synchronization measures do not distinguish causal from non-causal interactions among remote brain areas, i.e. determine the effective connectivity [1]. Identification of causal interactions in brain networks is fundamental to understanding the processing of information. Attempts at unveiling signs of functional or effective connectivity from non-invasive magneto-/electroencephalographic (M/EEG) recordings at the sensor level are hampered by volume conduction leading to correlated sensor signals without the presence of effective connectivity. Here, we make use of the transfer entropy (TE) concept to establish effective connectivity. The formalism of TE has been proposed as a rigorous quantification of the information flow among systems in interaction and is a natural generalization of mutual information [2]. In contrast to Granger causality, TE is a non-linear measure and not influenced by volume conduction. ...
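A minimal plug-in estimate of transfer entropy for binary (spike/no-spike) sequences with history length 1 can be sketched as follows. This is an illustrative toy, not the estimator used in the MEG work: the signals are simulated, and the function name and parameters are invented for the example.

```python
import numpy as np

def transfer_entropy(x, y):
    """Plug-in TE_{X->Y} in bits for binary sequences, history length 1:
    sum over p(y_{t+1}, y_t, x_t) * log2[ p(y_{t+1}|y_t,x_t) / p(y_{t+1}|y_t) ]."""
    counts = np.zeros((2, 2, 2))
    for yn, yp, xp in zip(y[1:], y[:-1], x[:-1]):
        counts[yn, yp, xp] += 1
    p = counts / counts.sum()
    te = 0.0
    for yn in (0, 1):
        for yp in (0, 1):
            for xp in (0, 1):
                pj = p[yn, yp, xp]
                if pj == 0.0:
                    continue
                cond_full = pj / p[:, yp, xp].sum()        # p(y_{t+1} | y_t, x_t)
                cond_hist = p[yn, yp, :].sum() / p[:, yp, :].sum()  # p(y_{t+1} | y_t)
                te += pj * np.log2(cond_full / cond_hist)
    return te

rng = np.random.default_rng(0)
x = rng.integers(0, 2, 5000)
# y copies x with a one-step lag, flipped 10% of the time.
flips = (rng.random(4999) < 0.1).astype(np.int64)
y = np.concatenate(([0], x[:-1] ^ flips))

te_xy = transfer_entropy(x, y)
te_yx = transfer_entropy(y, x)
assert te_xy > te_yx   # the driving direction carries more information
```

With X driving Y, the X-to-Y transfer entropy comes out clearly larger than in the reverse direction; this asymmetry, absent from symmetric measures like correlation or mutual information, is what makes TE usable as a directed (effective) connectivity measure.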
{"url":"http://publikationen.stub.uni-frankfurt.de/solrsearch/index/search/searchtype/authorsearch/author/%22Gordon+Pipa%22/start/0/rows/10/sortfield/year/sortorder/desc","timestamp":"2014-04-18T00:27:47Z","content_type":null,"content_length":"56058","record_id":"<urn:uuid:a1750eba-7e96-46fe-bd9b-b4fd4b2ec5c4>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00464-ip-10-147-4-33.ec2.internal.warc.gz"}