content
stringlengths
86
994k
meta
stringlengths
288
619
Inverse cosine: Representations through more general functions (subsection 26/02) Through Meijer G Classical cases for the direct function itself Classical cases involving algebraic functions in the arguments Classical cases involving unit step theta Classical cases for powers of cos^-1 Generalized cases for the direct function itself Generalized cases involving algebraic functions in the arguments Generalized cases involving unit step theta Generalized cases for powers of cos^-1
{"url":"http://functions.wolfram.com/ElementaryFunctions/ArcCos/26/02/ShowAll.html","timestamp":"2014-04-21T10:13:53Z","content_type":null,"content_length":"47445","record_id":"<urn:uuid:8ce7f532-6f0d-49ee-9561-b03b6bd84c76>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00029-ip-10-147-4-33.ec2.internal.warc.gz"}
Parabola word problem about a suspension bridge July 19th 2010, 11:38 AM Parabola word problem about a suspension bridge The two towers on either end will be 50 feet high and 300 feet apart. The two supporting cables are connected at the top of the towers and hang in a curve that is a parabola. There are vertical cables that connect the walkway to the supporting cables. These cables will be connected every 15 feet from the walkway up to the supporting cables. At the center of the bridge, the parabola will be 5 feet above the walkway. The cable material can be purchased from Company A for $52.75 per 10 feet with a shipping charge of $300.00 for the entire order or from Company B for $432.90 per 100 feet with a shipping charge of $350.00 for the entire order. You must purchase full 10-foot cables from Company A or 100-foot cables from Company B. The cables can be cut or welded together. 1.Write an equation for the parabola that represents each of the support cables. 2. Determine the number of vertical cables needed. 3. Determine the length of each of the vertical cables. 4. How much does it cost to purchase the needed materials from each company? July 19th 2010, 02:12 PM The two towers on either end will be 50 feet high and 300 feet apart. The two supporting cables are connected at the top of the towers and hang in a curve that is a parabola. There are vertical cables that connect the walkway to the supporting cables. These cables will be connected every 15 feet from the walkway up to the supporting cables. At the center of the bridge, the parabola will be 5 feet above the walkway. The cable material can be purchased from Company A for $52.75 per 10 feet with a shipping charge of $300.00 for the entire order or from Company B for $432.90 per 100 feet with a shipping charge of $350.00 for the entire order. You must purchase full 10-foot cables from Company A or 100-foot cables from Company B. The cables can be cut or welded together. 1.Write an equation for the parabola that represents each of the support cables. 2. Determine the number of vertical cables needed. 3. Determine the length of each of the vertical cables. 4. How much does it cost to purchase the needed materials from each company? here's a sketch of the function to get you started ... July 19th 2010, 04:35 PM added graph the parabola with an vertex at $(0,5)$ is $(x)^2 = 4a(y-5)$ with it opening upwards (with the parabola 5 ft from the base and this assuming the tower height is also from the base with a point of $(150,50)$ we can find the value of $a$ $(150)^2 = 4a(50-5) \Rightarrow a = 125$ so then $x^2 = 4\left(125\right)(y-5)$ or $x^2=500(y-5)$ to graph this we solve for $y$ in terms of $x$ $y = \frac{x^2}{500}+5$ in order to get the lenghts of the cable supports just plug the x values to get y for the height
{"url":"http://mathhelpforum.com/pre-calculus/151372-parabola-word-problem-about-suspension-bridge-print.html","timestamp":"2014-04-19T22:06:22Z","content_type":null,"content_length":"10063","record_id":"<urn:uuid:d9c15341-feab-4bf4-8142-bb36284f59d1>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00140-ip-10-147-4-33.ec2.internal.warc.gz"}
Numerically Equivalent Proof Nope, A ≈ B iff there exists a function θ that is 1-1 and onto. It's actually a lot easier than it seems. What you need to do is define the sets A and B without a loss of generality. So just make up names for them. a0, a1, a2, ..., a_n are all in A and b0, b1, b2, ..., b_n are all in B. You know they each go to n since they are finite and have the same number. Then let θ:A->B such that θ(a_k) = b_k. It should be really easy to show 1-1 and onto with this "In the real world, this would be a problem. But in mathematics, we can just define a place where this problem doesn't exist. So we'll go ahead and do that now..."
{"url":"http://www.mathisfunforum.com/viewtopic.php?pid=33498","timestamp":"2014-04-19T22:29:19Z","content_type":null,"content_length":"10062","record_id":"<urn:uuid:37951c0f-9f24-4a55-a21d-d83f0db16c78>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00254-ip-10-147-4-33.ec2.internal.warc.gz"}
• There are two distinct concepts of division, the idea of dividing into equal groups and the idea of repeated subtraction. Since all students visualize and understand things differently, be sure to allow your students to use both concepts to model division. • Look at these expressions: Many students incorrectly evaluate one or both expressions. Tell your students to check their answers using multiplication. Is 4 If it is, then 0 Since this is incorrect, then 4 does not equal 0. Is 0 If it is, then 0 Since this is correct, 0 • Have students check a partner's division by multiplying. This may seem less tedious to students because they are not repeating their own work. Students may also take this as a challenge to find another student's errors. • Base ten blocks can be an excellent demonstration tool and powerful manipulative to teach division. If commercial blocks are not available, paper kits can be made using construction paper. • Practice labeling division problems with dividend, divisor, and quotient before teaching students how to solve them. This helps students to learn which number represents each part of the problem.
{"url":"http://www.eduplace.com/math/mathsteps/4/d/4.division.tips.html","timestamp":"2014-04-19T09:36:34Z","content_type":null,"content_length":"7147","record_id":"<urn:uuid:af69cad3-7015-4ab2-86a1-341eb4406ef9>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00172-ip-10-147-4-33.ec2.internal.warc.gz"}
Optimal fetal growth for the Caucasian singleton and assessment of appropriateness of fetal growth: an analysis of a total population perinatal database • We are sorry, but NCBI web applications do not support your browser and may not function properly. More information BMC Pediatr. 2005; 5: 13. Optimal fetal growth for the Caucasian singleton and assessment of appropriateness of fetal growth: an analysis of a total population perinatal database The appropriateness of an individual's intra uterine growth is now considered an important determinant of both short and long term outcomes, yet currently used measures have several shortcomings. This study demonstrates a method of assessing appropriateness of intrauterine growth based on the estimation of each individual's optimal newborn dimensions from routinely available perinatal data. Appropriateness of growth can then be inferred from the ratio of the value of the observed dimension to that of the optimal dimension. Fractional polynomial regression models including terms for non-pathological determinants of fetal size (gestational duration, fetal gender and maternal height, age and parity) were used to predict birth weight, birth length and head circumference from a population without any major risk factors for sub-optimal intra-uterine growth. This population was selected from a total population of all singleton, Caucasian births in Western Australia 1998–2002. Births were excluded if the pregnancy was exposed to factors known to influence fetal growth pathologically. The values predicted by these models were treated as the optimal values, given infant gender, gestational age, maternal height, parity, and age. The selected sample (N = 62,746) comprised 60.5% of the total Caucasian singleton birth cohort. Equations are presented that predict optimal birth weight, birth length and head circumference given gestational duration, fetal gender, maternal height, age and parity. The best fitting models explained 40.5% of variance for birth weight, 32.2% for birth length, and 25.2% for head circumference at Proportion of optimal birth weight (length or head circumference) provides a method of assessing appropriateness of intrauterine growth that is less dependent on the health of the reference population or the quality of their morphometric data than is percentile position on a birth weight distribution. Being born small for one's gestational age is associated with adverse outcomes in both the short and long term [1-3]. However assessing whether a neonate is at risk of compromise on account of inappropriate intrauterine growth is complicated because not all fetuses should grow at the same rate [4]. Currently the appropriateness of fetal growth is usually inferred from the percentile position that the neonate's birth weight occupies on a gestation-specific birth weight distribution, that may also be specific for gender. This practice is unsuitable if the most appropriate birth weight for the neonate being assessed is not the same as that for all members of the population contributing to the distribution. Additional problems associated with percentile-based standards are (a) the implications of a given percentile position vary with the burden of growth restricting pathology in the source population, (b) the estimation of percentile position is imprecise at the extremes of a distribution where the information is of most clinical importance and (c) since percentile positions represent an ordinal rather than interval or ratio scale, the possibilities for valid statistical manipulation are limited [5]. This communication seeks to describe, demonstrate and justify the usefulness of an alternative method of assessing the appropriateness of fetal growth from information available in the neonatal This method is based on three underlying concepts: 1. Appropriateness of growth can be expressed as the ratio of the observed birth dimension to the optimal birth dimension for that an individual neonate. Considering the dimension of weight we refer to this ratio as the proportion of optimal birth weight (POBW): a concept similar to the birth weight ratio [6]. Assessing appropriateness of growth then requires values for the optimal birth dimensions for the neonate in question. 2. Optimal intrauterine growth is most likely to be achieved during pregnancies unaffected by any maternal or fetal pathology or exposures that can pathologically affect fetal growth, 3. The many determinants of fetal growth can be classified as having either a pathological or a non-pathological effect on growth. Factors with non-pathological effects on growth include gestational duration, gender [7-9], maternal size [10] and parity [7,11] and paternal size [12]. We define optimal birth weight as that achieved when no factors are present that can exert a pathological effect on growth. The central tendency of the distribution of birth weights in a population which experiences no factors that exert a pathological effect on intrauterine growth is taken as the optimal birth weight for neonates with the same combination of non-pathological determinants of fetal growth. Pathological growth determining factors, such as maternal vascular disease or those associated with congenital malformations, usually restrict fetal growth. More rarely fetal weight is pathologically increased, the most well known example being fetal macrosomia induced by maternal diabetes [13]. It is less useful to categorise as either pathological or non-pathological those determinants of growth that cannot easily be altered. Multiple pregnancy and maternal race are two examples of such This paper demonstrates our method of assessing the appropriateness of fetal growth by deriving equations for optimal birth weight, birth length and head circumference. Gestational duration, fetal gender and maternal height, age and parity are considered as potential independent variables representing the non pathological determinants of fetal growth. In order to select a population with optimal intrauterine growth, all births with evidence of having been exposed to pathological determinants of fetal growth are excluded. Appropriateness of fetal growth is then expressed as the ratio of the observed birth dimension to the estimated optimal birth dimension for a neonate with the same values for non-pathological determinants of fetal growth. The utility of this approach is Sample selection Records of the 126,393 births in Western Australia (WA) during the period of 1998 to 2002 were obtained from the Western Australian Maternal and Child Health Research Database (MCHRDB) [14]. This period was selected because data concerning whether the mother smoked during pregnancy, the most prevalent environmental exposure with a pathological effect on intrauterine growth, are available on this data base for births from 1998 onwards. The most recent available cohort at the time of writing was 2002. Of 1998–2002 WA births, 85% were to Caucasian women and 96.8% were singletons. This example therefore derives standards for singletons born to Caucasian women, see Discussion for generalisability. To achieve a cohort of Caucasian singletons anticipated to exhibit optimal fetal growth, any pregnancy with evidence to suggest that fetal growth may have been affected pathologically must be excluded. Stillbirths and deaths before 28 days were excluded as evidence of a suboptimal intrauterine course, which may be associated with abnormal growth, and, for stillbirths, because duration of intrauterine growth, as opposed to gestational duration to delivery, is not recorded. The selection of further exclusion criteria was guided, in part, by the extensive literature concerning growth restriction as reviewed by Resnik [15]. Suggested exclusion criteria for which data are available in the MCHRDB are listed in Table Table11 in order of their frequency observed in 1998–2002 WA births. Resnik [15] also suggests that maternal gestational use of anticonvulsants, cocaine, heroin or alcohol, maternal thrombophilic disorders and nutritional deprivation are risk factors for growth restriction. While these variables are not available on the MCHRDB, 0.5% of 1998–2002 WA mothers were recorded as having epilepsy and may have been on anticonvulsants. The use of cocaine and heroin are illegal in WA and very likely to be under-reported in medical records. Their use, along with that of excess alcohol, is associated with birth defects. Since both a birth defect and death before 28 days are exclusion criteria, it is anticipated that the majority of births significantly affected by these substances will be excluded. The incidence of thrombophilic disorders varies with ethnic background and no data are available concerning its frequency in WA pregnant women who are of mixed ethnic backgrounds. However neither thrombophilic disorders nor macro-nutrient deprivation are noted as problems in the WA pregnant population. Thus the factors listed in Table Table11 are anticipated to represent the most frequently occurring pathological determinants of fetal growth in our population and were excluded from the sample for the purposes of deriving measures of optimal fetal growth. Socio-demographic variables of the selected sample were compared with those of excluded Caucasian singletons. Observed frequency of factors known to be associated with pathological deviations in fetal growth: All Western Australian births 1998–2002. Gestational age data Since the primary determinant of birth dimensions is the duration of growth, reliable estimates of gestational duration (GA) are essential, yet exclusion criteria for poor quality gestational data are likely to exclude a biased sample. Details of the algorithm used to obtain the best estimate of GA from all available data are described and justified elsewhere [16]. Applying this method to the total 1998–2002 WA birth population resulted in no satisfactory gestational estimate being available for only 97 births (~0.1%) and being beyond the range of 23–42 weeks for a further 573 subjects. Birth weight and gestational duration data for remaining births were examined to exclude combinations so unlikely as to suggest error in the gestational datum. The cut off birth weights at each gestational age between 23 and 36 weeks, above which the observation was excluded, were selected with a view to excluding infants at least four gestational weeks older than reported, since break through bleeding at four-week intervals in early pregnancy is a source of gestational error in women claiming to be certain of the date of their last menstrual period. Due to the slower rate of weight accretion with respect to weight dispersion in infants born at term, this method of data cleaning is not applicable for births reported as being at greater than 36 weeks gestational duration [ For each of the three response variables (birth weight, length and head circumference) the Box-Cox transformation [18] was used to identify the optimal transformation to reduce non-normality and heteroscedasticity of errors. Fractional polynomial regression was then used to identify the best fit transformation of gestational age to account for any non-linearity in the relationship between gestational duration and each response variable [19]. Fractional polynomials are a means of identifying the curve of best fit in cases where non-linearity is possible but there is no scientific reason to specify the shape of the non-linear relationship. Royston and Altman claim that their set of power transformations have the flexibility to cover almost all likely shapes of non-linear relationship. The number of possible inflection points is determined by the order of the fractional polynomials fitted. In this case, where a sideways "S" shape is expected, 2nd order fractional polynomials, which allow for up to two inflection points, are sufficient. To aid computation, gestational duration was included in the fractional polynomial regressions as GA/100. Maternal height (cm) and maternal age (years) were included as linear predictor variables (see Discussion) centred on the population mean values of 162 cm and 25 years respectively. Infant sex and maternal parity were included as categorical variables. Parity was categorised with first birth as the reference, second and third birth as two separate categories, and fourth and subsequent births constituting the fourth category. Models were fitted using SAS (Version 8.2) (SAS Institute Inc., 2001). The fit of each model was tested by plotting residuals against GA and against the predicted dependent variable (weight, length or head circumference at birth). Additionally, POBWs for 3^rd, 10^th and 90^th percentile birth weight were estimated within each completed gestational week for sub-samples of parity and maternal height, to ensure that the model adjusted appropriately for these non-pathological determinants. Finally, to aid clinical interpretation of POBW values, POBW was estimated for the 3^rd, 10^th and 90^th percentile positions by taking the weighted mean of POBWs estimated for each parity/gender stratum of births between 38 and 41 gestational weeks inclusive, assuming a constant maternal height of 162 cm. Table Table22 lists the numbers of births sequentially excluded by each exclusion criterion, and shows that 62,746 singleton Caucasian births remained for analysis. Equations for optimal growth were derived from this total population sample of singleton, Caucasian births without recognised risk factors for growth abnormality, which comprised 49.7% of all Western Australian births and 60.5% of the Caucasian singleton births. Table Table33 compares the distributions of socio-demographic variables for included births with those of excluded Caucasian singleton births. As anticipated, given the sample size and the selection criteria, the difference in all distributions is statistically very significant, with the exception of gender, which is nonetheless significantly different at the p = .05 level. However the clinical differences tend to be small with the exception of the proportion of preterm and very preterm births. Sample selection: the number of births sequentially excluded by each exclusion criterion. Comparison of distributions of selected characteristics among Caucasian singleton births which were or were not included in the study. Birth weight The Box-Cox procedure suggested that square root was the optimal transformation to use for normalising birth weight. The optimal fractional polynomial gestational age terms were GA^3 and GA^3ln(GA). In multivariate analysis, maternal age was not a significant predictor of birth weight. Parameter estimates for the selected best fitting regression equations for the square root of birth weight are given in Table Table4.4. This model has an adjusted R^2 of 40.5%. The best fitting regression equation for estimating optimal birth weight (grams) can therefore be expressed as: Parameter estimates modelling the square root of birth weight (grams) and, of course, This equation suggests that under our standard conditions of birth at 40 weeks gestation to a 162 cm primiparous woman, female infants should weigh 3436.0 g and males 3576.4 g. Second births should weigh 123 g more, third births 158 g more and fourth or subsequent births 189 g more than the first birth. An example of the curves obtained from this equation is shown in Figure Figure1.1. The weighted mean POBWs (across the 8 parity/gender combinations) observed at the 3^rd, 10^th and 90^th percentile positions on the birth weight distribution are shown by gestational duration across the range 35–42 weeks in Figure Figure2.2. These ratios change little by gestational duration within this range. The weighted mean POBWs across the range 38–41 weeks for the 3^rd, 10^th and 90^th percentile birth weights are 81%, 87% and 115% respectively, Table Table77. Mean of male and female optimal birth weight by gestational age at delivery and parity, estimated for births to women of height 162 cm. Weighted mean POBW (across 8 parity/gender combinations) observed at the 3rd, 10th and 90th percentile positions on the birth weight distributions, by gestational age at delivery. Percentage of optimal birth dimension equivalences of percentile cut points from which appropriateness of growth has traditionally been inferred: as observed in this sample of optimally grown Birth length The Box-Cox procedure suggested that birth length raised to the power of 0.75 was the optimal transformation to use for normalising birth length. The optimal fractional polynomial gestational age terms were GA^2 and GA^3. In multivariate analysis, maternal age was not a significant predictor of birth length. Parameter estimates for the selected best fitting regression equations for the square root of birth weight are given in Table Table5.5. This model has an adjusted R^2 of 32.2%. The best fitting regression equation for estimating optimal birth weight (grams) can therefore be expressed Parameter estimates modelling birth crown heel length to the power of 0.75 (cm) and of course This equation suggests that under our standard conditions females should be 50.3 cm long at birth and males should be 0.83 cm longer. The weighted mean proportions of optimal crown heel length at 3^ rd, 10^th and 90^th percentile positions of crown heel length were found to be 93%, 95% and 105% respectively, see Table Table77. Birth head circumference The Box-Cox procedure suggested that it was neither necessary nor desirable to transform head circumference prior to modelling. The optimal fractional polynomial gestational age terms were GA and GAln(GA). All potential predictor variables significantly predicted head circumference, including maternal age. Parameter estimates for the selected best fitting regression equations for the square root of birth weight are given in Table Table6.6. This model has an adjusted R^2 of 25.2%. The best fitting regression equation for estimating optimal birth weight (grams) can therefore be expressed Parameter estimates modelling head circumference at birth (cm) This equation suggest that under our standard conditions and for mothers of 25 years, the optimal head circumference for females was 34.4 cm and for males, 0.61 cm larger. The weighted mean proportions of optimal head circumference at 3^rd, 10^th and 90^th percentile positions of head circumference were found to be 93%, 96% and 105% respectively, see Table Table77. These regression equations, derived from a population based sample of more than 62,000 singleton Caucasian pregnancies without the major risk factors for intrauterine growth anomaly, demonstrate the method used at our Institute to assess appropriateness of fetal growth. The method is applicable to all populations with suitable data available. Our results may be directly applicable to other populations besides Western Australian Caucasian and Aboriginal singletons, and we consider it likely to be applicable to all Caucasian populations, but applicability should be verified as suggested Advantages of the ratio method The use of ratios, such as POBW, in the measurement of intrauterine growth is not a novel idea, [6,20] but has not been universally adopted despite the many advantages of ratios over the more commonly used percentile positions. The following discussion applies to all ratios of optimal dimensions, but, for simplicity POBW will be used as the example throughout. These advantages are: a) Ratios, such as POBW, represent continuous interval measures. b) Estimations of POBW require only a single standard value, the predicted optimal birth weight, rather than values at several points on the birth weight distribution. The precision of estimating a percentile position varies inversely with observation density and, since the majority of distributions have fewer observations at the extremes, extreme observations will be the least precise, whatever the size of the sample generating the distribution. Extreme observations are also most subject to error. When verification of individual observations is not possible (as with de-identified data), it is a common practice to exclude extreme values on the assumption that they are in error, significantly altering the estimated value of extreme percentile positions. The positions of percentile extremes are therefore both imprecise and sensitive to actual and perceived data quality. The most precise percentile estimates are those at the highest observation densities, which, since many distributions are akin to Gaussian (particularly those of birth dimensions), is often the 50^th percentile or median[5]. c) Births affected by growth disturbing factors are over-represented in the extremes of the growth distribution. Hence the positions of extreme percentiles are sensitive to the incidence of growth affecting pathologies in the reference population and vary with the health of the reference population. For example, a newborn with a POBW of 85% might be at the 20^th percentile position of the birth weight distribution for a population with a high burden of growth restricting pathologies, but the 8^th percentile of a population with optimal fetal growth. For an extreme percentile position to be meaningful therefore, the health status of the population from which it is derived needs to be defined, whereas the predicted birth weight is less sensitive to disease burden. Though less sensitive, the proportion of the reference population with growth disturbing factors will also affect the predicted birth weight, except in the unlikely situation where pathological restriction is balanced by pathological acceleration. For this reason we sought to identify a population without growth disturbing factors. The ratio of observed birth weight to predicted birth weight is more generalisable than extreme percentile positions, and the ratio of observed to predicted optimal birth weight is even more generalisable[5]. d) POBW is a continuous scale that correlates with weight deficit, whereas percentile position is an ordinal scale that does not. For example, Table Table88 considers a population sample with a normal (Gaussian) birth weight distribution, mean birth weight of 3,400 g and standard deviation of 345 g. Being Gaussian, the predicted weight equals the mean (and 50^th percentile position) or 3,400 g. In Table Table8,8, changes in percentile position of 4 or 5 percentile points are shown to represent changes in weight of between 43 g and 151 g depending on the particular percentile positions, whereas, within a population, there is a linear correlation between differences in weight and change in POBW. Equivalent changes in percentile position do not represent equivalent changes in weight. Furthermore, in a total population the presence of growth restricting factors creates a negatively skewed (non-Gaussian) birth weight distribution, so the observed range of birth weights covered by extreme percentiles is broader than indicated in Table Table88 and is unpredictable. Comparison of changes in percentile position and in POBW for selected changes in birth weight, for a neonate with an estimated optimal birth weight of 3,400 g. Failure to utilise the advantages of a ratio may in part be due to clinical unfamiliarity. In contrast to percentile position, there is little literature describing the clinical associations of appropriateness of growth expressed as proportions of a desirable birth dimension [21]. For this reason we have included Table Table77 which gives the estimated mean, over gestational weeks 38 – 41 inclusive and each gender and parity group, proportion of optimal ratio values of the 3^rd, 10^th and 90^th percentile positions, of each distribution of weight, length and head circumference at birth. This table of equivalences enables an approximate translation of the literature using percentile positions to percentages of optimal dimensions. The populations from which percentiles are derived will seldom be confined to those without factors affecting growth pathologically. The proportion of optimal equivalences in Table Table77 will over-estimate the appropriateness of growth of percentile defined groups to an extent depending on the burden of growth restricting pathologies in their reference sample. When POBW becomes familiar, its numerical values will convey more precise and generalisable clinical meaning than the traditional percentile positions. Sample selection The sample was limited to singleton births to Caucasian mothers. It is not useful to classify all determinants of intrauterine growth according to whether or not they have a pathological effect on growth. For example, twin pregnancy slows fetal growth particularly in the third trimester; and gestation-specific perinatal outcomes for multiple births delivered at term are not as good as those for singletons [22-24]. However, it is seldom desirable to reduce twin pregnancies to singleton pregnancies and reasonable to ask whether a twin fetus is growing appropriately, given that it is a twin. Multiplicity-specific fetal growth standards would be required to answer this question. Maternal race is also a problematic factor. The observed variation in intra-uterine growth rates between ethnic groups [7,6] may reflect genetically determined differences in optimal rates and/or systematic differences in incidence of growth restricting pathologies and/or environmental exposures. That is, the association between growth and maternal race may arise as a result of either or both non-pathological and pathological determinants of fetal growth. If racial variation in intrauterine growth arises purely as a result of pathological determinants, maternal race is merely associated with growth rate, rather than being a determinant, and should not be controlled. The balance between non-pathological and pathological influences is likely to vary between ethnic groups and between locations. For example, in Western Australian (WA) Indigenous communities the tendency to slower fetal growth relative to Caucasians is believed to be primarily a result of a higher incidence of growth restricting pathologies and environmental exposures [11]. In south east Asian communities living in WA women also tend to have small babies but their perinatal outcomes are similar to those of WA Caucasians. It is reasonable to differentiate birth weight distributions by race only if race is itself a (non-pathological) determinant of fetal growth. Whether this is the case may be determined by comparing the estimations for optimal fetal growth, adjusted for non-pathological determinants, between populations of different races, after excluding pregnancies with evidence of exposure to pathological growth determining factors. If the estimations are significantly and systematically different, race specific standards are required. If they are not, the same standards for optimal fetal growth may be used even if the observed distributions in birth weight differ. The aim of many of our exclusion criteria was to select a sample of births that had not been exposed to factors that have a pathological effect on intrauterine growth. Although the selection criteria only consider causes of growth anomaly we anticipated that selected births would be more likely to be born at term, be larger and born to taller, older and less disadvantaged mothers for several reasons. For example, pathologically affected growth is more often restricted than accelerated and is also associated with preterm birth and maternal smoking, the most prevalent pathological determinant of intrauterine growth, is associated with maternal age and socio-economic circumstances. Table Table22 shows these anticipations to be realised. Selected births were also somewhat more likely to be female, supporting the general observation of the female advantage during gestation. We believe that the curves shown in Figure Figure22 demonstrate that the exclusion of pathologically affected intrauterine growth, and of erroneous gestational estimates, has been reasonably successful. Biological growth (of which fetal growth is an example) typically proceeds to produce a Gaussian distribution at any point in time, with the standard deviation being proportional to the mean. Charts of unselected birth weights against gestational duration usually demonstrate increasing dispersion with decreasing gestation of delivery for deliveries before 40 weeks. There are two reasons for this: (i) the proportion of erroneously reported gestational age values increases with decreasing gestation, simply because the number of births actually delivered at any gestation week decreases the further it is from the modal value. Typically, erroneous preterm gestational reporting underestimates true gestational duration, hence the birth weight associated with an erroneously reported preterm gestational age is typically higher than those of births actually delivered at the reported gestation. (ii) Birth much before term often has a pathological cause that also influences fetal growth. Thus the distributions of birth weights delivered at preterm gestations are no longer Gaussian, but typically, because pathological restriction occurs much more frequently than acceleration, are negatively skewed. Thus weights of neonates born at preterm gestations tend to be lower than fetuses of the same gestational age who go on to deliver at term. In this study we address (i) by using the best estimate of gestational age that can be derived from all available data[16] and to exclude the '4-week errors' arising from gestational break-through bleeding. If we have succeeded in excluding pregnancies exposed to factors known to pathologically affect fetal growth we have addressed (ii), though the cost is the exclusion of a disproportionate number of infants born preterm, particularly very preterm, as can be seen in Table Table2.2. The observation that the POBWs of the percentile positions are independent of gestational age, Figure Figure2,2, indicates that the dispersion is proportional to the mean across gestational age, and is compatible with both (i) and (ii) having been successfully addressed. Selection of independent variables Some may consider our selection of predictor variables incomplete as it does not include measures of paternal size, maternal weight or maternal weight gain. Paternal size While paternal size is known to influence fetal growth it was not included because the biological father cannot routinely be identified and therefore measures of paternal size are not available on our database. The proportion of variability accounted for by the regression equations would, no doubt, be increased by the inclusion of paternal height as an independent variable. Maternal weight It has been suggested that maternal size affects fetal growth because it correlates with the area of uterine endometrium available for placentation. Since this area is not directly measurable, it is logical to seek and adjust for the maternal dimension(s) that correlates most closely with it. Maternal height measures skeletal size in the vertical dimension only, while maternal weight is associated with skeletal size and soft tissue mass, including adipose tissue. Skeletal height tends to correlate with skeletal size, but the proportion of weight consisting of soft tissue, particularly adipose tissue, is very variable, weakening the correlation between maternal weight and skeletal size. Data from the 5 month Dutch famine suggest that maternal pre-natal weight for height, a measure of soft tissue mass, is not a strong determinant of birth weight [25]. We therefore suggest that maternal height is likely to correlate better with the uterine area available for placentation than is prenatal maternal weight or weight for height. Maternal weight gain Maternal weight gain is occasionally considered to be a determinant of birth weight [26]. However fetal weight can be expected to correlate with maternal weight gain, because fetal weight, and its correlate placental weight, are significant components of maternal weight gain. Thus rather than being a non-pathological determinant of fetal weight, maternal weight gain partially measures fetal growth, whether or not it is optimal and should therefore not be adjusted for when estimating appropriateness of growth. The non-pathological determinants of growth used in these models accounted for 40.5%, 32.2% and 25.2% of the variance in birth weight, length and head circumference respectively. The variation between these proportions may result from variation in the accuracy with which each birth dimension can be measured. Birth weight is routinely measured to within 5 g, representing about 0.15% of a median weight baby. Compared with birth weight birth, length is more difficult to measure reliably due to the tendency of the neonate to flex and the facility with which it may be stretched. Measured head circumference at delivery may be influenced by moulding of the head during passage through the birth canal. The effect of moulding on head circumference may be largely avoided by waiting until 2 days after birth before measurement. However with early discharge policies, such a wait risks failing to obtain any measurement of head circumference and in WA head circumference is routinely measured in the delivery room. The highly significant, though small, dependence of head circumference on maternal age, despite adjustment for parity, was unexpected and requires confirmation in independent samples Inclusion of maternal height and age as linear variables All three dependent variables were found to have a linear dependence on maternal height in the range 147–183 cm. Outside this range there was a tendency for regression to the mean value of the dimension. This may occur because we could not include a term for paternal height and there will be a tendency for women at the extremes of height to have partners with heights that are less extreme. However since only 269 (~0.4%) of our selected sample had a height outside this range, maternal height was included as a linear variable. Of the three dependent variables only head circumference had an association with maternal age, which was found to be linear up to age 45 years. Only 16 (0.03%) of our selected sample were older than 45 at the time of delivery, therefore maternal age was also included as a linear variable. Comparisons with previous methods of assessing intrauterine growth In 1963 Lubchenco and colleagues [27] presented the first percentile charts of gender- and gestation-specific birth weights for an unselected population of live births and thereby initiated the modern study of intrauterine growth. The next major innovation in the methods of assessing intrauterine growth was the development of customised computer generated charts for individual neonates [4] by adding to gender and gestational duration the following predictor variables: maternal height, weight, ethnic group, parity and the birth weight, gestational duration and gender of any previous siblings, with the option of further adding measures of growth taken during the index pregnancy. The charts were again presented as percentiles. These charts were designed to predict birth weight rather than assess appropriateness of growth, as not all the independent variables (eg. sibling growth and ethnic group) are necessarily non-pathological determinants of intrauterine growth. In 1993, Wilcox and colleagues [6] introduced the concept of the birth weight ratio, the ratio of the observed birth weight to the birth weight predicted given gestational duration, fetal gender, maternal height, weight, parity and ethnic group. Their study sample excluded multiple births, stillbirths and congenitally abnormal babies, and limited their analysis to term births, but did not attempt to exclude pregnancies affected by other pathological determinants of growth. Poor growth was defined on the basis of a percentile position of the birth weight ratio, thereby retaining the problems inherent in the use of percentile positions as standards. The method reported in this paper introduces two innovations, (i) using optimal, rather than expected, growth as the standard, and (ii) reporting the ratio of the observed to optimal birth dimension as the indicator of appropriateness of growth, rather than a percentile position. We sought a sample with optimal opportunities for fetal growth for the creation of standard both because this is the logical standard and also to avoid the problem of the varying incidence of growth restricting pathology and environmental exposures between populations. Our previously published birth weight standards for Western Australia [17] excluded only perinatal deaths from the reference sample because other relevant data were not available at the population level. In subsequent models, we explored the possibility of using ratios rather than percentiles [11,20], the effects of maternal height and parity were estimated in broad strata and those of maternal age were not considered. Although births affected by some factors suggesting supoptimal growth could be excluded, data concerning the most commonly occurring pathological growth restricting exposure, maternal smoking, were not then available at the population level. The creation of the standards for optimal fetal growth for Caucasian singletons presented here is possible in part due to additional methods of estimating gestational duration [16], the ability to exclude the large proportion of births to women who smoked or experienced factors known to affect fetal growth and more complete information concerning non-pathological determinants of growth. Computing and statistical methods have also been improved with the use of (a) the Box-Cox transformation to account for any non-normality in the distribution of the response variables (b) fractional polynomial regression which required no assumptions regarding the form of the relationship between gestational duration and each of the response variables and facilitated the use of continuous variables thereby allowing the effects of non-pathological determinants of growth to be estimated more precisely. We have presented a comprehensive guide to an alternative method of creating standards for newborn dimensions and assessing appropriateness of intrauterine growth. It is based on the estimation of the optimal value for the dimension which we define as the value obtained by regression techniques from a large sample of women without risk factors for intrauterine growth anomaly. In this method, appropriateness of intrauterine growth is expressed as the ratio of the observed birth dimension to the optimal birth dimension rather than as being above or below a specified percentile position of the population distribution of that dimension, avoiding the problems inherent in the use of percentile position. Since POBW is a measure of appropriateness of intrauterine growth it may be used as a continuous variable and subjected to parametric statistical analysis. The use of POBW in clinical and research settings will prove whether it is a more precise predictor of compromise within individuals than previously available indicators of intrauterine growth status. GA: best available estimate of gestational age at delivery. POBW: percentage of optimal birth weight. WA: Western Australia. Competing interests The author(s) declare that they have no competing interests. Authors' contributions EB conceived of and directed the study and drafted the manuscript. YL carried out initial statistical analyses. NHdeK gave statistical advice. DML carried out subsequent statistical analyses. All authors read and approved the final manuscript. Pre-publication history The pre-publication history for this paper can be accessed here: The authors are grateful to Vivien Gee and Western Australian midwives for collecting and providing the birth data, to Peter Cosgrove abstracting the required data and to R.S. Kirby for a very helpful review. The work reported in this paper has been supported financially by Program Grants #003209 and #353541 from the National Health and Medical Research Council of Australia. • Marlow N. Paediatric implications - neonatal complications. In: Kingdom J BP, editor. Intrauterine growth restriction: aetiology and management. London, Springer; 2000. pp. 337–349. • Blair E. Paediatric implications of IUGR with special reference to cerebral palsy. In: Kingdom J BP, editor. Intrauterine growth restriction: aetiology and management. London, Springer; 2000. pp. • Reynolds RM, Godfrey KM. Long term implications for adults health. In: Kingdom J BP, editor. Intrauterine growth restriction: aetiology and management. London, Springer; 2000. pp. 367–384. • Gardosi J, Chang A, Kalyan B, Sahota D, Symonds EM. Customised antenatal growth charts. Lancet. 1992;339:283–287. doi: 10.1016/0140-6736(92)91342-6. [PubMed] [Cross Ref] • Blair E. Uses and Misuses of the Percentile (Centile or Quantile) Position. Australasian Epidemiologist. 2003;10:26–28. • Wilcox MA, Maynard PV, Chilvers CED. The individualised birthweight ratio: a more logical outcome measure of pregnancy than birthweight alone. British journal of obstetrics and gynaecology. 1993; 100:342–347. [PubMed] • Zhang J, Bowes WA. Birth-weight-for -gestational age patterns by race, sex, and parity in the United States population. Obstetrics and Gynecology. 1995;1995:200–208. doi: 10.1016/0029-7844(95) 00142-E. [PubMed] [Cross Ref] • Roberts CL, Lancaster PAL. Australian national birthweight percentiles by gestational age. Medical Journal of Australia. 1999;170:114–118. [PubMed] • Skjerven R, Gjessing HK, Bakketeig LS. Birthweight by gestational age in Norway. Acta Obstetricia Gynecologica Scandinavica. 2000;79:440–449. doi: 10.1034/j.1600-0412.2000.079006440.x. [PubMed] [ Cross Ref] • Voorhorst FJ, Bouter LM, Besemer PD, Kurver PHJ. Maternal characteristics and expected birth weight. European Journal of Obstetrics & Gynecology and Reproductive Biology. 1993;50:115–122. doi: 10.1016/0028-2243(93)90175-C. [PubMed] [Cross Ref] • Blair E. Why do Aboriginal newborns weigh less? Determinants of birthweight for gestation. J Paediatrics and Child Health. 1996;32:498–503. [PubMed] • Wilcox MA, Newton CS, Johnson IR. Paternal influences on birthweight. Acta Obstetricia et Gynecologica Scandinavica. 1995;74:15–18. [PubMed] • Richard J. Identification of fetal growth abnormalities in diabetes mellitus. Seminars in Perinatology. 2002;26:190–195. [PubMed] • Stanley FJ, Croft ML, Gibbins J, Read AW. A population database for maternal and child health research in Western Australia using record linkage. Paediatric and Perinatal Epidemiology. 1994;8 :433–447. [PubMed] • Resnik R. Intrauterine growth restriction. Obstet Gynecol. 2002;99:490–496. doi: 10.1016/S0029-7844(01)01780-X. [PubMed] [Cross Ref] • Blair E, Liu Y, Cosgrove P. Choosing the best estimate of gestational age from routinely collected population based perinatal data. Paediatr Perinat Epidemiol. 2004;18:270–276. [PubMed] • Blair E, Stanley FJ. Intrauterine growth chart. Commonwealth Department of Health. 1985. • Box GEP, Cox DR. An analysis of transformations. Journal of the Royal Statistics Society. 1964;B-26:211–252. • Royston P, Altman DG. Regression using fractional polynomials for continuous covariates: parsimonious parametric modelling (with discussion) Applied Statistics. 1994;43:429–467. • Palmer L, Petterson B, Blair E, Burton P. Family patterns of gestational age at delivery and growth in utero in moderate and severe cerebral palsy. Developmental medicine and child neurology. 1994;36:1108–1119. [PubMed] • Palmer L, Blair E, Petterson B, Burton P. Antenatal antecedents of moderate and severe cerebral palsy. Paediatric and perinatal epidemiology. 1995;9:171–184. [PubMed] • Taylor GM, Owen P, Mires GJ. Foetal growth velocities in twin pregnancies. Twin Research. 1998;1:9–14. doi: 10.1375/136905298320566438. [PubMed] [Cross Ref] • Liu Y, Blair E. Predicted birthweight for singleton and twins. Twin Research. 2002;5:529–537. doi: 10.1375/136905202762341991. [PubMed] [Cross Ref] • Alexander GR, Kogan M, Martin J, Papiernik E. What are the fetal growth patterns of singletons, twins, and triplets in the United States? Clinical Obstetrics and Gynecology. 1998;41:115–125. doi: 10.1097/00003081-199803000-00017. [PubMed] [Cross Ref] • Morley R, Owens J, Blair E, Dwyer T. Is birthweight a good marker for gestational exposures that increase the risk of adult disease? Paediatric and Perinatal Epidemiology. 2002;16:194–199. doi: 10.1046/j.1365-3016.2002.00428.x. [PubMed] [Cross Ref] • Strauss RS, Dietz WH. Low maternal weight gain in the second or third trimester increases the risk for intrauterine growth retardation. Journal of Nutrition. 1999;129:988–993. [PubMed] • Lubchenco LO, Hansman C, Dressler M, Boyd E. Intrauterine growth as estimated from liveborn birth- weight data at 24 to 42 weeks of gestation. Pediatrics. 1963;32:793–800. [PubMed] • Bower C, Rudy E, Ryan A, Cosgrove P. Report of the birth defects registry of Western Australia, 1980-2003. Perth, Western Australia, King Edward Memorial Hospital, Women's and Children's Health Service.; 2004. • McLennan W. 1996 Census of population and housing: socio-economic indexes for areas. Canberra, Australia, Australian Bureau of Statistics; 1998. Articles from BMC Pediatrics are provided here courtesy of BioMed Central • MedGen Related information in MedGen • PubMed PubMed citations for these articles Your browsing activity is empty. Activity recording is turned off. See more...
{"url":"http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1174874/?tool=pubmed","timestamp":"2014-04-21T03:39:18Z","content_type":null,"content_length":"118973","record_id":"<urn:uuid:671dcaf8-d0e7-4a2b-945f-fbee13abebb9>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00300-ip-10-147-4-33.ec2.internal.warc.gz"}
Below are the first 10 and last 10 pages of uncorrected machine-read text (when available) of this chapter, followed by the top 30 algorithmically extracted key phrases from the chapter as a whole. Intended to provide our own search engines and external engines with highly rich, chapter-representative searchable text on the opening pages of each chapter. Because it is UNCORRECTED material, please consider the following text as a useful but insufficient proxy for the authoritative book pages. Do not use for reproduction, copying, pasting, or reading; exclusively for search engines. OCR for page 10 10 yield limit state that are calibrated with respect to safety factors that prevailed for the former allowable stress-based design (ASD). Table 7 is a summary of resistance factors for the yield limit state as presented in the current AASHTO specifications. The ASD employed safety factors of 1.8 (i.e., 1/0.55) or 2.1 (i.e., 1/0.48) relative to yield of strip-type reinforcements or grid- type reinforcements, respectively. The higher safety factor for grid reinforcing members corresponds to a lower resistance fac- tor and is intended to ensure that no individual wire is stressed to more than 0.55Fy. This compensates for interior longitudinal elements that carry higher load compared to exterior elements due to load transfer through the transverse members of the bar mat. The safety factor of 2.1, and corresponding resistance fac- tor of 0.65, is appropriate for bar mats with four or more longi- tudinal elements but should be higher for elements with only three longitudinal elements. However, this point is not Figure 3. Statistical model of limit state equation. addressed in the current AASHTO specifications. D'Appolonia (2007) assessed strength reduction factors for the yield limit state via reliability-based calibration, but did not Resistance Factors for Design consider metal loss from corrosion as a variable. This project of Earth Reinforcements extends these studies to consider variability of metal loss and Reliability-based calibration of the strength reduction fac- the impact that this has on computed levels of reliability using tor for LRFD modeling is focused on the design of MSE wall existing design methodologies and methods for computing systems, since the AASHTO LRFD specifications for MSE the load transferred to the reinforcements. Calibration of the walls include metal loss as an explicit part of the design. resistance factors uses load factors from the AASHTO LRFD Ground anchor systems described in the AASHTO specifica- specifications and calibration methodology recommended tions incorporate a Class I corrosion protection system, by Allen et al. (2005). The resistance factor is calibrated with therefore metal loss is not incorporated into the design calcu- respect to a target reliability index, T, (i.e., probability of lations. Current AASHTO specifications include resistance occurrence), which accounts for the redundancy of the system factors for the structural resistance of ground anchors that and load redistribution inherent to the yield limit state. consider variations inherent to steel manufacturing and fab- rication. The value of varies depending on steel type as 0.9 Probability of Occurrence (Exceeding Yield) for mild steel (ASTM A-615) and 0.8 for high-strength steel for Existing Construction tendons (ASTM A-722). The AASHTO specifications do not specifically address design calculations in support of rock-bolt Generally, MSE wall systems are prefabricated, resulting installations. To address this need, service life estimates and in distinct reinforcement and reinforcement spacing. Thus, example calibrations of resistance factors for rock bolts are reinforcement yield resistance is available in discrete incre- also included in this report. ments determined by the distinct size of the reinforcement The current AASHTO (2009) LRFD Bridge Design Specifica- and reinforcement spacing selected for the project. Reinforce- tions for design of MSE walls include resistance factors for the ment sizes and spacings are selected based on particular design locations, often near the base of the wall; and unless the wall is very tall, these dimensions are held constant throughout. Table 6. Relationship Therefore, yield resistance is not optimized with respect to the between and pf. yield limit state, and for many reinforcement locations, there is a large disparity between reinforcement loads and resistance. Reliability Index Probability of () occurrence D'Appolonia (2007) studied this case using data that included (pf) measurements of reinforcement load that could be compared 2.0 2.275 x 10-2 2.5 6.210 x 10-3 with the available yield resistance. Essentially, the results 3.0 1.350 x 10-3 reported by D'Appolonia describe the probability of occur- 3.5 2.326 x 10-4 4.0 3.167 x 10-5 rence for as-built conditions, rather than for a conceptual 4.5 3.398 x 10-6 design for which yield resistance is optimized with respect to 5.0 2.867 x 10-7 the limit state. OCR for page 10 11 Table 7. Resistance factors for yield resistance for MSE walls with metallic reinforcement and connectors from Table 11.5.6-1, AASHTO (2009). Reinforcement Type Loading Condition Resistance Factor Strip reinforcements1 Static loading 0.75 Combined static/earthquake loading 1.00 Grid reinforcements1,2 Static loading 0.65 Combined static/earthquake loading 0.85 1 Apply to gross cross section less sacrificial area. For sections with holes, reduce gross area in accordance with AASHTO (2009) Article 6.8.3 and apply to net section less sacrificial area. 2 Apply to grid reinforcements connected to rigid facing element, for example, a concrete panel or block. For grid reinforcements connected to a flexible facing mat or that are continuous with the facing mat, use the resistance factor for strip reinforcements. Results from Monte Carlo simulations of the limit state yield resistance. Furthermore, for tall walls there may be a function and comparison with closed form solutions as number of locations where yield resistance is selected to meet reported by D'Appolonia indicate that the probability of a given load. Thus, locally, the probability of occurrence may occurrence for as-built conditions is very low, corresponding be much higher than that predicted by D'Appolonia. to > 3.5 and pf < 0.0001. These results are insensitive to Alternatively, this report describes reliability-based cali- metal loss and do not depend on the choice of resistance fac- bration for resistance factors considering that the yield limit tor. This leads to the conclusion that reinforcement yield is state function is explicitly applied at every reinforcement very unlikely given the as-built conditions of MSE walls, and location. Thus, the potential for overdesign is not directly the yield limit state does not appear to have a significant included in the analysis; however, a target reliability index, impact on performance. T of 2.3 corresponding to pf = 0.01, is adopted considering The D'Appolonia model assumes that the difference between the large redundancy inherent to the system (Allen et al., yield resistance and reinforcement load is randomly distrib- 2005). Considering as-built conditions, the resistance factors uted. In reality this is not the case. For example, the difference computed by this technique are conservative, although they may be much smaller for reinforcements located near the base are in the range of those incorporated into AASHTO (2009) of the wall or other locations that may govern the required as shown in Table 7.
{"url":"http://www.nap.edu/openbook.php?record_id=14497&page=10","timestamp":"2014-04-19T07:19:38Z","content_type":null,"content_length":"49045","record_id":"<urn:uuid:19a2c833-211e-4809-910d-515c39f69016>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00154-ip-10-147-4-33.ec2.internal.warc.gz"}
Analytic problem December 16th 2011, 03:13 AM #1 Dec 2011 Hong Kong Analytic problem Let $f(z)$ be analytic in the disk $|z| <1$. If $f(z)$ has a zero of order 2 at the origin and $|f(z)| \le 1$ in that disk. Prove that $|f(z)|\le|z|^2$ in $|z|<1$ I have no idea where to start. Thank you. Re: Analytic problem Hint: In the series expansion $f(z)=\sum_{n=0}^{+\infty}a_nz^n$ we have $a_0=a_1=0$ , so $f(z)/z^2$ is analytic in $|z|<1$ . Re: Analytic problem Thank you for you reply, I could do what you said but whats the next step? Thanks Re: Analytic problem By hypothesis $|f(z)|\leq 1$ so, $|f(z)/z^2|\leq 1/r^2$ for $|z|=r$ . This equality is also valid for $|z|\leq r$ according to the Maximum Modulus Principle. If we fix $z$ in $|z|<1$ we have $|f (z)|\leq |z|^2/r^2$ for all $r\geq |z|$ and $<1$ . You can conclude. Last edited by FernandoRevilla; December 16th 2011 at 06:59 AM. Re: Analytic problem Sorry but I didn't get it. How can I deduce $|f(z)/z^2|\leq 1/r^2$for $|z|\leq r$ according to the Maximum Modulus Principle? Thanks!!! Re: Analytic problem Re: Analytic problem Suppose $f$ is analytic on $|z-z_0|<\epsilon$. If $|f(z)| \le |f(z_0)|$ for $z$ on this region then $f(z) \equiv f(z_0)$ on the region. but $|f(z)/z^2|\leq 1/r^2$. How can I make sure that $f(z_0)=1/r^2$? Or I am in the wrong direction? Thank you Re: Analytic problem Better use this version: Let $D \subset \mathbb{C}$ be a bounded domain, and let $f$ be a continuous function on the closed set $\overline{D}$ that is analytic on $D$. Then the maximum value of $ |f|$ on $\overline{D}$ (which always exists) occurs on the boundary $\partial D$. So, in our case, it is not possible $|f(z)/z^2|>1/r^2$ if $|z|<r$ . December 16th 2011, 03:35 AM #2 December 16th 2011, 05:07 AM #3 Dec 2011 Hong Kong December 16th 2011, 06:31 AM #4 December 16th 2011, 07:17 AM #5 Dec 2011 Hong Kong December 16th 2011, 07:25 AM #6 December 16th 2011, 07:48 AM #7 Dec 2011 Hong Kong December 16th 2011, 11:34 AM #8
{"url":"http://mathhelpforum.com/differential-geometry/194368-analytic-problem.html","timestamp":"2014-04-17T01:24:12Z","content_type":null,"content_length":"60393","record_id":"<urn:uuid:729ca4a9-52d2-465a-824c-a7f3525fe465>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00102-ip-10-147-4-33.ec2.internal.warc.gz"}
Simulating particle motion in real time Thanks for replying so fast. OK, I'll elaborate on my OP: I want to simulate the behavior of some atoms that effuse from an oven, but I assume that I can do it classically. So no QM for now. I want to do it such that I can get quantities like the flux, but you said that a "one-particle-at-a-time"-approach is not an option. I did not know that, thanks for that! The problem now is that I am not sure how to assign a velocity vector to an atom, after it leaves the oven. The component must depend on the oven aperture (I would intuitively expect that), but I am not sure what distribution describes the three components. Thanks for the help so far.
{"url":"http://www.physicsforums.com/showthread.php?p=4152943","timestamp":"2014-04-16T18:57:24Z","content_type":null,"content_length":"32029","record_id":"<urn:uuid:5a9d3fa8-43e5-4831-9694-84a49f144cc3>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00214-ip-10-147-4-33.ec2.internal.warc.gz"}
What is Thora Birch measurements? You asked: What is Thora Birch measurements? Thora Birch Thora Birch is 5 feet and 4 inches tall. Say hello to Evi Evi is our best selling mobile app that can answer questions about local knowledge, weather, books, music, films, people and places, recipe ideas, shopping and much more. Over the next few months we will be adding all of Evi's power to this site. Until then, to experience all of the power of Evi you can download Evi for free on iOS, Android and Kindle Fire.
{"url":"http://www.evi.com/q/what_is_thora_birch_measurements","timestamp":"2014-04-16T22:41:32Z","content_type":null,"content_length":"55004","record_id":"<urn:uuid:3746e57c-32aa-4e8a-b19e-dc6f62870742>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00159-ip-10-147-4-33.ec2.internal.warc.gz"}
Mathematica TM |A System for Doing Mathematics by Computer Results 1 - 10 of 12 - PAMI , 1996 "... This technical report examines the fundamental ambiguities and uncertainties inherent in recovering structure from motion. By examining the eigenvectors associated with null or small eigenvalues of the Hessian matrix, we can quantify the exact nature of these ambiguities and predict how they affect ..." Cited by 50 (4 self) Add to MetaCart This technical report examines the fundamental ambiguities and uncertainties inherent in recovering structure from motion. By examining the eigenvectors associated with null or small eigenvalues of the Hessian matrix, we can quantify the exact nature of these ambiguities and predict how they affect the accuracy of the reconstructed shape. Our results for orthographic cameras show that the bas-relief ambiguity is significant even with many images, unless a large amount of rotation is present. Similar results for perspective cameras suggest that three or more frames and a large amount of rotation are required for metrically accurate reconstruction. - Computer Vision and Image Understanding , 1997 "... The Cambridge laboratory became operational in 1988 and is located at One Kendall Square, near MIT. CRL engages in computing research to extend the state of the computing art in areas likely to be important to Digital and its customers in future years. CRL’s main focus is applications technology; th ..." Cited by 20 (0 self) Add to MetaCart The Cambridge laboratory became operational in 1988 and is located at One Kendall Square, near MIT. CRL engages in computing research to extend the state of the computing art in areas likely to be important to Digital and its customers in future years. CRL’s main focus is applications technology; that is, the creation of knowledge and tools useful for the preparation of important classes of applications. CRL Technical Reports can be ordered by electronic mail. To receive instructions, send a message to one of the following addresses, with the word help in the Subject line: , 1995 "... We describe our implementation of several PRAM graph algorithms on the massively parallel computer MasPar MP-1 with 16,384 processors. Our implementation incorporated virtual processing and we present extensive test data. In a previous project [13], we reported the implementation of a set of paralle ..." Cited by 14 (3 self) Add to MetaCart We describe our implementation of several PRAM graph algorithms on the massively parallel computer MasPar MP-1 with 16,384 processors. Our implementation incorporated virtual processing and we present extensive test data. In a previous project [13], we reported the implementation of a set of parallel graph algorithms with the constraint that the maximum input size was restricted to be no more than the physical number of processors on the MasPar. The MasPar language MPL that we used for our code does not support virtual processing. In this paper, we describe a method of simulating virtual processors on the MasPar. We re-coded and fine-tuned our earlier parallel graph algorithms to incorporate the usage of virtual processors. Under the current implementation scheme, there is no limit on the number of virtual processors that one can use in the program as long as there is enough main memory to store all the data required during the computation. We also give two general optimization techniq... , 1995 "... We present the perturbation theory of the Chern-Simons gauge field theory and prove that to second order it indeed gives knot invariants. We identify these invariants and show that in fact we get a previously unknown integral formula for the Arf invariant of a knot, in complete agreement with ear ..." Cited by 12 (1 self) Add to MetaCart We present the perturbation theory of the Chern-Simons gauge field theory and prove that to second order it indeed gives knot invariants. We identify these invariants and show that in fact we get a previously unknown integral formula for the Arf invariant of a knot, in complete agreement with earlier non-perturbative results of Witten. We outline our expectations for the behavior of the theory beyond two loops. , 1996 "... In this paper, we propose a new technique for the numerical treatment of external flow problems with oscillatory behavior of the solution in time. Specifically, we consider the case of unbounded compressible viscous plane flow past a finite body (airfoil). Oscillations of the flow in time may be cau ..." Cited by 7 (6 self) Add to MetaCart In this paper, we propose a new technique for the numerical treatment of external flow problems with oscillatory behavior of the solution in time. Specifically, we consider the case of unbounded compressible viscous plane flow past a finite body (airfoil). Oscillations of the flow in time may be caused by the time-periodic injection of fluid into the boundary layer, which in accordance with experimental data, may essentially increase the performance of the airfoil. To conduct the actual computations, we have to somehow restrict the original unbounded domain, that is, to introduce an artificial (external) boundary and to further consider only a finite computational domain. Consequently, we will need to formulate some artificial boundary conditions (ABC's) at the introduced external boundary. The ABC's we are aiming to obtain must meet a fundamental requirement. One should be able to uniquely complement the solution calculated inside the finite computational domain to its infinite exteri... , 1996 "... In this paper we describe the DCEL system: a geometric software package which implements a polyhedral programming environment. This package enables fast prototyping of geometric algorithms for polyhedra or for polyhedral surfaces. We provide an overview of the system's functionality and demonstrate ..." Cited by 1 (1 self) Add to MetaCart In this paper we describe the DCEL system: a geometric software package which implements a polyhedral programming environment. This package enables fast prototyping of geometric algorithms for polyhedra or for polyhedral surfaces. We provide an overview of the system's functionality and demonstrate its use in several applications. Keywords: geometric software, databases, programming environments, polyhedra. 1. Introduction Computational geometry has offered a large amount of algorithms during the last two decades. Software implementation of these algorithms makes them valuable not only for theoreticians but also for practitioners in academia and industry. This is in many cases the appropriate tool for choosing the best algorithm for a specific problem in a given context: hardware platform, operating system, programming language, typical inputs of the application, robustness considerations, etc. The importance of applied computational geometry is now being recognized. 10 Dedicated ... "... This article appears in the Association for Automated Reasoning Newsletter No. 37 (August 1997), ..." Add to MetaCart This article appears in the Association for Automated Reasoning Newsletter No. 37 (August 1997), , 1992 "... This paper presents a deterministic procedure for tailoring the continuum stiffness and strength of uniform space-filling truss structures through the appropriate selection of truss geometry and member sizes (i.e., flexural and axial stiffnesses and length). The trusses considered herein are generat ..." Add to MetaCart This paper presents a deterministic procedure for tailoring the continuum stiffness and strength of uniform space-filling truss structures through the appropriate selection of truss geometry and member sizes (i.e., flexural and axial stiffnesses and length). The trusses considered herein are generated by uniform replication of a characteristic truss cell. The repeating cells are categorized by one of a set of possible geometric symmetry groups derived using crystallographic techniques. The elastic symmetry associated with each geometric symmetry group is identified to help select an appropriate truss geometry for a given application. Stiffness and strength tailoring of a given truss geometry is enabled through explicit expressions relating the continuum stiffnesses and failure stresses of the truss to the stiffnesses and failure loads of its members. These expressions are derived using an existing equivalent continuum analysis technique and a newly developed analyt...
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=1616701","timestamp":"2014-04-16T11:07:32Z","content_type":null,"content_length":"33561","record_id":"<urn:uuid:2faefc46-0061-4f8c-9477-1e3cfa1b3650>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00639-ip-10-147-4-33.ec2.internal.warc.gz"}
Butterfly Chart – Excel Chart with Dual Converging Scales A Butterfly chart is a chart where two entities are compared side by side using scales meeting at the center. Due to its shape, the chart resembles a butterfly and hence the name. These charts are sometimes also known as Funnel or Tornado Charts though I find “butterfly” to be a better description as it allows for a greater variation in shape than a funnel or a tornado does ! So let’s jump straight into creating a beautiful looking butterfly chart. Getting the Data for the chart Although a simple looking butterfly chart is as easy to create as a bar chart, there is some value in adding labels, converging scales and the other embellishments. They make the chart look so much cleaner and professional and more importantly, help the user get a ‘feel’ of the data faster. For the puropose of this example, let us take the case of a firm called …..what else ….Butterfly Inc. This small has has two stores engaged in the sales of various products. We would like to compare the performance of each of these stores by placing them side by side (I mean the data) and then get a quick grasp of how each one performs compared to the other. The first three columns essentially contain all the data related to the business. The remaining columns merely help us organize them in the chart. As you might have guessed, the butterfly chart is a stacked bar chart where the various bar series are arranged in such a manner that they meet/align at the center. The padding A and padding B are two special series which simply help us align the actual data series better. What we do is to take a large value (say 100) and then if the actual value of a particular category is, say 45 then its corresponding padding becomes 55 (which is 100-45). We do the same both both the entities – Store A and B in this case. The “gap” is another dummy series that helps us separate the bar and provides a placeholder for the category name/labels. (Biologically speaking – that would be the Thorax of the butterfly !) The last two columns are for creating the ‘special’ axis – where the value of 0 lies at the center and the twin scales proceed Making the basic Chart Let’s create a basic chart with five series. By default Excel will plot the series in the order in which they appear in a range. So rather than selecting the entire range (consisting of the first five columns) at one go, we insert one series at a time. We begin with Padding A, followed by values for Store A, then the gap, followed by value for Store B and finally the padding for B. Adding the XY series for the dummy scales Excel does not provide the functionality to create an axis which begins at 0 and has two scales extending outwards – something that we do require for creating a butterfly chart. So we crate one on our own. Let’s begin by plotting an XY chart using the last two columns. The series marked as label acts as the Y-Axis and the the other one as the X-Axis. You may want to give this part a bit of focus as the placement of various XY points are determined by the values that you provide here. Once we’ve inserted the XY-Series the chart looks like this: Although it may need look much like a butterfly chart, the above pretty much has all the components and is just a few steps away from being one. All that needs to do now is to format the chart. Aligning the XY points to the X axis If you noticed, the points are not aligned to the X-Axis. In order to force them to align with the X-axis, you can change the vertical scale towards the right and make the minimum value 0 and provide an arbitrarily large value to the maximum. Let’s delete to default chart grid lines. We will inset our own custom gridlines by adding the the Y error bars to each of the points. Adding/Modifying the legend Let’s turn on the chart legend and place it at the bottom. In order to remove the extra values from the legend, you can select individual named by placing two slow single clicks on it. Once the individual label has been selected, you can use the delete button to remove the label. One by one, apply this step to all the labels that are not required. The last few steps and our Butterfly Chart is ready to fly The last few steps are: 1. Turn on the labels for the center bar (use Category name as label) 2. Turn on the labels for the XY Points (use Y-axis as label) 3. Add Title to the chart 4. Remove the marker for the XY points 5. Remove the fill from the first, middle and the last series of bars or fill them with white color. 6. Turn on the labels for the the values for the second and fourth (Store A and Store B) (Use values as labels) And here is our beautiful butterfly chart. You can download a sample worksheet with a example of Butterfly Chart here or click on the button below:
{"url":"http://www.databison.com/butterfly-chart-excel-chart-with-dual-converging-scales/","timestamp":"2014-04-16T21:52:10Z","content_type":null,"content_length":"56922","record_id":"<urn:uuid:7377a29d-4804-406c-b92e-35ee5127da82>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00080-ip-10-147-4-33.ec2.internal.warc.gz"}
Firestone Algebra Tutor Find a Firestone Algebra Tutor ...I have over 100 hours of training with this agency. The training taught me study skills for all types of subjects, tests, and learning styles. I taught the Biology section of the MCAT for a national test prep agency for over 1 year. 26 Subjects: including algebra 2, algebra 1, reading, geometry ...My name is Peter, and I am a CU graduate, going to graduate school to earn my PhD in chemistry in the Fall. I have TA'd organic, general, and introductory level courses, as well as taken classes in pedagogy to further improve my skills as a teacher. I have served as a private tutor for many stu... 6 Subjects: including algebra 2, precalculus, algebra 1, chemistry ...I begin with ascertaining what the problem is, whether it is comprehension and vocabulary to the student knowing how to pronounce letters, sounds, and words. Then, I try to quickly build confidence and teach good reading habits involving keeping a dictionary handy, sounding out words, and how th... 41 Subjects: including algebra 1, algebra 2, Spanish, reading ...In my time at University of Colorado I participated on the Women's Division 1 volleyball team, and after injury moved to a student assistant role. Through my volleyball career, I have gained the opportunity to work with kids of all ages. I look forward to sharing my passion for learning and hel... 22 Subjects: including algebra 2, algebra 1, chemistry, English ...My tutoring style is using hints, facts, and questions to guide student's thinking. Please do NOT expect me to provide answers directly as that only hinder's a learning process. Please only contact me if you interested in learning rather than ONLY needing help completing an assignment as I will not be doing any problems for you. 7 Subjects: including algebra 1, algebra 2, chemistry, ACT Math
{"url":"http://www.purplemath.com/Firestone_Algebra_tutors.php","timestamp":"2014-04-18T21:41:39Z","content_type":null,"content_length":"23778","record_id":"<urn:uuid:460be81e-3f0b-400d-b1fb-6b0ae6c95a79>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00518-ip-10-147-4-33.ec2.internal.warc.gz"}
Determination of price level, Macroeconomics P and Y are both endogenous variables and according to the quantity theory of money we need P.Y = constant. If we divide both sides by P we get Y = constant / P. Because Y = Y[D] in the classical model, we can write Y[D] = constant / P. This relationship is sometimes known as 'classical aggregate demand' as it relates real aggregate demand for services and goods Y[D] to the price level P. Figure: Determination of price level. Though it is significant to remember that it isn't price adjustments which make aggregate demand equal to aggregate supply in the chart above. Aggregate demand is always equal to aggregate supply by Say's Law. In the classical model, Y[D] isn't determined by P though rather the opposite; P is determined by Y[D] (that is equal to Y[S]) and money supply (which is included in the constant). Posted Date: 8/14/2013 2:17:21 AM | Location : United States Your posts are moderated equilibrium in money market and derivation of lm curve Treatment of Na2PdCI4 with 3-chloro-2-methyl-1-propene under an atmosphere of CO yields the dimer [(11 3 - C 4 H 7 )PdClb (A). The 1H NMR spectrum of A at 298 K shows 3 signals: a Price 10,9,8,7,6,5,4,3,2,1 QD 0,1,2,3,4,5,6,7,8,9,10 TR? Ed?. what is static and dynamic multiplier in keynesian theory? Problem 1 a. Define ERP. Explain the terminology related to ERP. b. How ERP evolved in a system? a. Definition. >>Description on point of sale, MRP-I, MRP-I What impact will high and variable rates of inflation have on the economy? How will they influence the risk accompanying long-term contracts and related business decisions? Let kids denote the number of children ever born to a woman, and let educ denote years of education for the woman. A simple model relating fertility to years of education is kids = What is Inherent Limitation? Oil price shocks lead to large adverse supply shocks in the macroeconomy, infer Dornbusch et al (2008) who define an adverse supply shock as; ‘one that shifts the aggregate supply Suppose that Mr. Chauncey Gardener consumes two goods, X 1 and X 2 .His preferences can be described by the following utility function: U = X 1 0.5 X 2 0.5 He
{"url":"http://www.expertsmind.com/questions/determination-of-price-level-30197719.aspx","timestamp":"2014-04-20T15:52:17Z","content_type":null,"content_length":"29537","record_id":"<urn:uuid:345aadca-227e-4776-b942-28c511b936ff>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00429-ip-10-147-4-33.ec2.internal.warc.gz"}
Radiation Treatment Planning Optimization Toy Example - File Exchange - MATLAB Central This program is a stand-alone toy example of radiation treatment planning(RTP) optimization for a brain tumor case. The program generates a toy patient head model using scaled and shifted ellipsoids and p-norm sublevel sets to represent all the volumes of interest (VOIs), including skin, eyes, optic nerves, brain stem, a tumor and artificial "shell" around the tumor. These VOI sublevel sets are then discretized into PointClouds, by retaining the points in a discrete 3-D grid of volume elements (voxels) that lie in the 1-sublevel set of the VOI's p-norm model. Then we compute candidate beam directions by firing beams at random points on the tumor surface from a set of about 150 nodes, located uniformly in a spherical region of radius 80cm from the patient's head. Then we compute the dose vectors associated with each beam in each of the VOIs, by evaluating a dose function in a cylindrical region around each beam direction. The dose function is a crude model of the roll-off with depth, and radial diffusion or scatter. For each candidate beam, this rolloff/scatter function is evaluated for each voxel in each VOI. Thus each beam results in a column of the patient dose matrix. Finally, we set the min and max dosages for each VOI. We then pass the dose matrices and min and max specs to one of two solvers (CPLEX or ADMM), which find the optimal set of beams and intensities (beam weights) for this patient case. This is the optimal "plan". The optimization formulation is based on a problem from the Stanford EE364b final exam from 2011, by Stephen Boyd and Eric Chu. Then we visualize the optimal plan by plotting the dose levels for all the voxels of each VOI, and the dose volume histograms. The program consists of this Main() function and several "classes": (1) Patient: cell array of VOIs, names, etc (2) VOI: p-norm model and associated point cloud and boundary, dose specs (3) Beams: beam heads,tails,dose vector computation + nodes + collimators (4) Model: p-norm models and gradients (5) Point Cloud & 3-D geometry functions (6) Optimizer functions: CPLEX wrapper and ADMM solver (ala EE364b) H. Hindi, "A Tutorial on Optimization Methods for Cancer Radiation Treatment Planning," Proc. American Control Conference, 2013. Written by Haitham Hindi, 2012/01/18 DISCLAIMER: This code is intended for algorithm research purposes only! It is not intended to be used to generate real treatment plans. Author makes no claims as to the realism, accuracy, or correctness of any part of this code, including (but not limited to): patient anatomy and dimensions; plan safety; beam weights,doses,physics; dose units, dose specs; plan quality, evaluation metrics. The author is an engineer with no medical training whatsoever. This code is supplied as-is, with no guarantee of correctness, and the author accepts no responsibility for any damage or false conclusions from the use of this code. Of course, we would appreciate hearing about any bugs you might find. Please login to add a comment or rating.
{"url":"http://www.mathworks.com/matlabcentral/fileexchange/42558-radiation-treatment-planning-optimization-toy-example","timestamp":"2014-04-21T12:46:08Z","content_type":null,"content_length":"27218","record_id":"<urn:uuid:b1ed2653-adf8-4117-ba7e-09c94ec90868>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00552-ip-10-147-4-33.ec2.internal.warc.gz"}
Physics Forums - View Single Post - Linear Differential Equation with a substitution. Wow I am so very bad at differential equations.. :( The problem Here is the exact problem I'm given: attempt at a solution I'm guessing that I need to differentiate y(x) that I am given and substitute that into the left hand side and then put the y function [sinx + 1/u(x)] as y in the right hand side (y^2). Then re-arrange for u(x) and differentiate to get du/dx and hopefully it will be the third given equation. After that I think I just have to solve that ODE and get it back in terms of y. Assuming that is what I have to do, sadly I cannot even get that far.. differentiating y(x) = sinx + 1/u(x) must this be done implicitly? I get confused as u(x) is a function and not a variable. I can see that the third equation must be solved linearly such that dy/dx + utan(x) = -1/2 sec(x) And I (think) that I solved the integrating factor to be 1/cos(x). But I am so bad at differential equations and the sources on the internet seem to be so confusing that I'm quite stuck from here but this is what I have: (1/cos(x))*du/dx + u*(1/cos(x))*tan(x) = -(1/2)*(1/cos(x))*sec(x)=d/dx(u/cos(x)) <- multiplying by integrating factor and copying form of examples from the net u/cos(x) = integral of -(1/2)*(1/cos(x))*sec(x) and I know I'm way off here so I think I might aswell just stop... Also it would be great if someone could point me in the direction of a site that shows how to type all those math symbols such as the integral sign. Thanks in advance. A lot.
{"url":"http://www.physicsforums.com/showpost.php?p=1894553&postcount=1","timestamp":"2014-04-17T03:53:01Z","content_type":null,"content_length":"10087","record_id":"<urn:uuid:95ea9ea6-b728-4e17-80e1-fd4b0c7b9155>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00660-ip-10-147-4-33.ec2.internal.warc.gz"}
Mathematics Programs - Overview and Learning Goals In addition to our B.Sc. degree in mathematics, the department has a number of formal programs that offer academic credit. These are either entirely housed within Mathematics or offered jointly with other departments. Here is a brief list: Where to find it The standard and official location for all Rose-Hulman programs is the Undergraduate Bulletin (current version). To make things a bit easier to find, and to add additional information, the department has created webpages specific to the mathematics department. Annual Integrated Math catalogues. These pages contain all information about the math program in the Institute bulletin. In particular they contain the following.(click on the link to go directly to the relevant sect of the current catalogue).. In addition, there are specific webpages and sites that give additional information Updates, Recent Versions, and Integrated Math Catalogues Our web-only catalogue, is updated and frozen annually during the summer, to be in force for students entering the following September. To help students keep track of the many updates the department has constructed annual "integrated math catalogues", consisting of all mathematics requirements and courses -- and their updates -- up to the freeze date. Mathematics majors may graduate under the catalogue in force in the year in which they entered or a later catalogue, (provided courses still exist) but may not mix requirements among years. Any matters of interpretation among the various versions of the catalogue will be resolved by the Head of the Mathematics Department. The various versions of recent math catalogues may be found at these links: In addition, scheduling and checklist templates for the major and double major are in the ANGEL Major Group for Mathematics. Student learning goals These goals are taken from our Mission, Vision, and Goals Statement. Goal for all students: To provide all undergraduate students at Rose-Hulman with an education in mathematics which will serve as part of a foundation for life-long learning of science, engineering and mathematics. Objectives for this goal: All students should • become competent users of mathematics, • appreciate mathematics as an intellectual endeavor in its own right, • become familiar with basic mathematical and statistical thinking and modeling, • understand the use of mathematics in other disciplines, and become competent at the application of mathematics to these disciplines, • become effective problem solvers, • become competent in using the computer as an aid to mathematical modeling and computation, and • develop communication skills appropriate in a mathematical context. Goal for Mathematics Majors: To graduate majors who have become liberally educated and are prepared for a mathematically based career. Objectives for this goal: Our majors should be able to • formulate and solve problems from a mathematical perspective, • understand the relationship of mathematics to other technical fields and develop competence at the application of mathematics in one or more of these areas, • use technology effectively in mathematics and the application of mathematics, • communicate effectively (reading, writing, speaking and listening) to both technical and non-technical audiences, and • work cooperatively with others.
{"url":"http://www.rose-hulman.edu/math/programs.php","timestamp":"2014-04-16T16:33:21Z","content_type":null,"content_length":"18261","record_id":"<urn:uuid:377eaf83-bb46-4f9a-9236-088427226f12>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00522-ip-10-147-4-33.ec2.internal.warc.gz"}
Quick Sort 08-20-2007 #1 Registered User Join Date Jan 2007 Quick Sort here is my sort algorithm: // ---------------------------------------------------------------- // Name: QuickSort // Description: quicksorts the array // Arguments: p_array: the array to sort // p_first: first index of the segment to sort // p_size: size of the segment // p_compare: comparison function // Return Value: None // ---------------------------------------------------------------- template<class DataType> void QuickSort( vector<DataType>& p_array, int p_first, int p_size, int (*p_compare)(DataType, DataType) ) DataType pivot; int last = p_first + p_size - 1; // index of the last cell int lower = p_first; // index of the lower cell int higher = last; // index of the upper cell int mid; // index of the median value // if the size of the array to sort is greater than 1, then sort it. if( p_size > 1 ) // find the index of the median value, and set that as the pivot. mid = FindMedianOfThree( p_array, p_first, p_size, p_compare ); pivot = p_array[mid]; // move the first value in the array into the place where the pivot was p_array[mid] = p_array[p_first]; // while the lower index is lower than the higher index while( lower < higher ) // iterate downwards until a value lower than the pivot is found while( p_compare( pivot, p_array[higher] ) < 0 && lower < higher ) // if the previous loop found a value lower than the pivot, // higher will not equal lower. if( higher != lower ) // so move the value of the higher index into the lower index // (which is empty), and move the lower index up. p_array[lower] = p_array[higher]; // now iterate upwards until a value greater than the pivot is found while( p_compare( pivot, p_array[lower] ) > 0 && lower < higher ) // if the previous loop found a value greater than the pivot, // higher will not equal lower if( higher != lower ) // move the value at the lower index into the higher index, // (which is empty), and move the higher index down. p_array[higher] = p_array[lower]; // at the end of the main loop, the lower index will be empty, so // put the pivot in there. p_array[lower] = pivot; // recursively quicksort the left half QuickSort( p_array, p_first, lower - p_first, p_compare ); // recursively quicksort the right half. QuickSort( p_array, lower + 1, last - lower, p_compare ); now i got this working before with one of my classes before the class i decared this function: //use quick sort to keep windows drawn in the proper order // compare the y values only. int CompareZ( cBaseWindow* l, cBaseWindow* r ) if (l->getZ() == r->getZ() ) if( l->getZ() < r->getZ() ) return 1; if( l->getZ() > r->getZ() ) return -1; if( l->getZ() < r->getZ() ) return -1; if( l->getZ() > r->getZ() ) return 1; return 0; then a member of my class uses the QuickSort (look at the bottom bit): void cGUIManager::GUIMouseDown(float x, float y) int tmpZ = WindowList.size()+1; for (unsigned int iloop=0;iloop<WindowList.size();iloop++) if (WindowList[iloop]->MouseTest(x,y)) if(WindowList[iloop]->getZ()<tmpZ && WindowList[iloop]->getVisible()==true && WindowList[iloop]->getEnabled()) { tmpZ=WindowList[iloop]->getZ() ; if (tmpZ > 0 && tmpZ<WindowList.size()+1) { //Brought a window to the front so reorder the others down one for (unsigned int iloop=0;iloop<WindowList.size();iloop++) //Resort the List for the New Order and this all worked fine, then i decided that i wanted to have the CompareZ function as a member of the class. so i did this: int cGUIManager::CompareZ( cBaseWindow* l, cBaseWindow* r ) if (l->getZ() == r->getZ() ) if( l->getZ() < r->getZ() ) return 1; if( l->getZ() > r->getZ() ) return -1; if( l->getZ() < r->getZ() ) return -1; if( l->getZ() > r->getZ() ) return 1; return 0; and this: void cGUIManager::GUIMouseDown(float x, float y) int tmpZ = WindowList.size()+1; for (unsigned int iloop=0;iloop<WindowList.size();iloop++) if (WindowList[iloop]->MouseTest(x,y)) if(WindowList[iloop]->getZ()<tmpZ && WindowList[iloop]->getVisible()==true && WindowList[iloop]->getEnabled()) { tmpZ=WindowList[iloop]->getZ() ; if (tmpZ > 0 && tmpZ<WindowList.size()+1) { //Brought a window to the front so reorder the others down one for (unsigned int iloop=0;iloop<WindowList.size();iloop++) //Resort the List for the New Order but it did not work, i got the following error: no matching function for call to `QuickSort(std::vector<cBaseWindow*, std::allocator<cBaseWindow*> >&, int, size_t, <unknown type>)' it seems the quick sort algorithm did not like accepting cGUIManager::CompareZ as a comparision function to use, but i am not sure why, or what i can to to make it accept the classes member any help will be greatly appriciated I thought one of the "nice" things with C++ and templates is that you can define an operator< or operator> that would compare two items and return a bool - and thus be able to replace: while( p_compare( pivot, p_array[lower] ) > 0 && lower < higher ) while( pivot > p_array[lower] && lower < higher ) Am I missing something here? Firstly, did you write your own quicksort, because you didn't think std::sort can't handle your class? If you are sure you want to do it this way you'll need to keep in mind that using the function pointer of a member function is different. For one thing they have the hidden this parameter (static members don't have it). It also seems that you have done a lot of extra work to provide the strict ordering (return values -1, 0, 1 from compare function). To quicksort stuff you'll only need weak ordering (bool - is left smaller than right). (This is what standard algorithms use.) Anyway, the best thing to do is to provide a C++ standard compatible compare function/functor for the objects you want to sort and use std::sort. I might be wrong. Thank you, anon. You sure know how to recognize different types of trees from quite a long way away. Quoted more than 1000 times (I hope). i'm not sure, i did not write that quick sort algorith, i am mearly try to use it, i am having trouble calling the quicksort function if my p_compare: comparison function is a member of a class i will have a look into std::sort however i would like to know why i get the error when i try to pass a member function, and how i can correctly pass the memeber function. Wrap the member function call in a free function. With std::sort, a function object could be used instead of a free function. C + C++ Compiler: MinGW port of GCC Version Control System: Bazaar Look up a C++ Reference and learn How To Ask Questions The Smart Way so it's not possible to pass it a memeber function directly? so it's not possible to pass it a memeber function directly? Yes, since it is designed to accept a free function (or for std::sort, a free function or function object). C + C++ Compiler: MinGW port of GCC Version Control System: Bazaar Look up a C++ Reference and learn How To Ask Questions The Smart Way so that's why i get the no matching function for call to `QuickSort(std::vector<cBaseWindow*, std::allocator<cBaseWindow*> >&, int, size_t, <unknown type>)' also would it be possible to rewrite that QuickSort function to accept a member function? Last edited by e66n06; 08-20-2007 at 01:52 PM. A member function doesn't apparently match the prototype of the function pointer (or however it is called). The idea of using a plain member function here is flawed anyway: you only work with two object pointers, and you don't use the this pointer that is passed to each non-static member function explicitly. If you made the compare function static and return a bool for weak ordering (is-smaller-than) then std::sort would probably work for you. I might be wrong. Thank you, anon. You sure know how to recognize different types of trees from quite a long way away. Quoted more than 1000 times (I hope). Yes, like brewbuck says. Essentially it boils down to "object::func(int a, float b)" can be rewritten as "func(object *this, int a, float b)" - this function is obviously not compatible with a prototype of "proto(int a, float b)", since it's got another argument added - albeit a hidden argument. That's a seriously overkill function there. You first check if l->getZ() and r->getZ() are equal, and if the ARE, you then ask if they aren't again - twice! In that case those will of course fail, and then you do the same test twice again outside of the if-statement, both of which will also fail. Talk about redundant code! The whole bracketed if-statement can be deleted entirely. You should read over your code after you write it so that you catch daft things like that. Just like you'd proofread a novel. Secondly, all you're missing to use this function is the 'static' keyword. Lastly, try not to mix tabs and spaces. My homepage Advice: Take only as directed - If symptoms persist, please see your debugger Linus Torvalds: "But it clearly is the only right way. The fact that everybody else does it some other way only means that they are wrong" bool cGUIManager::CompareZ( const cBaseWindow* l, const cBaseWindow* r ) return l->getZ() < r->getZ(); Here's an even simpler compare function that would get the job done. In the sort function you are only using is-less-than and is-larger-than and not is-equal-to, and you don't need a is-larger-than case because you can simply switch around the parameters. So the usage becomes: // while( p_compare( pivot, p_array[higher] ) < 0 && lower < higher ) while (p_compare(pivot, p_array[higher]) && lower < higher) // now iterate upwards until a value greater than the pivot is found //while( p_compare( pivot, p_array[lower] ) > 0 && lower < higher ) while (p_compare(p_array[lower], pivot) && lower < higher) It also seems to me that the sort algorithm may use too many lower-higher comparisons. For example in these loops the algorithm itself should guarantee that only the first condition is enough to make it stop at the right place (although I might be wrong here). Anyway, the above function (if you make it static or - I'd prefer - non-member, as it is probably using only a public getter) should be fine for std::sort. I might be wrong. Thank you, anon. You sure know how to recognize different types of trees from quite a long way away. Quoted more than 1000 times (I hope). 08-20-2007 #2 Kernel hacker Join Date Jul 2007 Farncombe, Surrey, England 08-20-2007 #3 The larch Join Date May 2006 08-20-2007 #4 Registered User Join Date Jan 2007 08-20-2007 #5 Registered User Join Date Jan 2007 08-20-2007 #6 08-20-2007 #7 Registered User Join Date Jan 2007 08-20-2007 #8 08-20-2007 #9 Registered User Join Date Jan 2007 08-20-2007 #10 08-20-2007 #11 The larch Join Date May 2006 08-20-2007 #12 Kernel hacker Join Date Jul 2007 Farncombe, Surrey, England 08-21-2007 #13 08-21-2007 #14 The larch Join Date May 2006
{"url":"http://cboard.cprogramming.com/cplusplus-programming/92834-quick-sort.html","timestamp":"2014-04-19T23:10:56Z","content_type":null,"content_length":"101718","record_id":"<urn:uuid:2b905d25-e753-411c-a10c-4b7d7ae53f14>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00238-ip-10-147-4-33.ec2.internal.warc.gz"}
limit sinx/x - Math Central Jackie, if x is in degrees, then let y = x(2π / 360). Then y is in radians. So limit sin (x degrees) / x as x tends to zero = limit sin (y / (2π / 360) / [y / (2π / 360) ] as y tends to zero. Hope this helps, Stephen La Rocque. Jackie wrote back dear Stephen La Rocque., thanks very much for your kind hint to my problem but this is very part I got into difficulty and I would very grateful if you can show me how to evaluate this expression limit sin (x degrees) / x as x tends to zero = limit sin (y / (2π / 360) / [y / (2π / 360) ] as y tends to zero. Hi Jackie, There are two sine functions in this problem. You know them both because they are both on your calculator. One of them returns the sine of an angle if you input the measure of the angle in degrees and the other returns the sine of an angle if you input the measure of the angle in radians. We usually refer to both of these functions by sin(t) because in a specific problem we know whether we are working in degrees or radians. In this problem you are dealing with both functions so I want to distinguish between them. • Let sin(t) be the function that returns the sine of an angle if t is the measure of the angle in degrees. • Let SIN(t) be the function that returns the sine of an angle if t is the measure of the angle in radians. What you know from your calculus class is that limit SIN(t)/t approaches 1 as t approaches zero. As Stephen pointed out, if x is the measure of an angle in degrees then y is the measure of the angle in radians if y = x(2π / 360). In this situation sin(x) = SIN(y), it's the same angle you have just used different units to measure it. Thus sin(x)/x = SIN (y)/[y / (2π / 360) ] = (SIN(y)/y) × π/180. Finally as x approaches zero so does y and hence sin(x)/x approaches 1 × π/180 = π/180. I hope this helps,
{"url":"http://mathcentral.uregina.ca/QQ/database/QQ.09.08/h/jackie1.html","timestamp":"2014-04-18T15:38:09Z","content_type":null,"content_length":"8627","record_id":"<urn:uuid:df18fe1e-8821-48f0-861f-4fad75842403>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00215-ip-10-147-4-33.ec2.internal.warc.gz"}
Integrable Functions Now we can define what it means for a general real-valued function (not just a simple function) to be integrable: a function $f$ is integrable if there is a mean Cauchy sequence of integrable simple functions $\{f_n\}$ which converges in measure to $f$. We then define the integral of $f$ to be the limit $\displaystyle\int f(x)\,d\mu(x)=\int f\,d\mu=\lim_{n\to\infty}\int f_n\,d\mu$ But how do we know that this doesn’t depend on the sequence $\{f_n\}$? We recall that we defined $\displaystyle N(f)=\{x\in X\vert f(x)eq0\}$ which must be measurable for any measurable function $f$. This is the only part of the space that matters when it comes to integrating $f$; clearly we can see that $\displaystyle\int f\,d\mu=\int\limits_{N(f)}f\,d\mu$ since $f$ is zero everywhere outside $N(f)$. Now, if both $\{f_n\}$ and $\{g_n\}$ converge in measure to $f$, then we can define $E$ to be the (countable) union of all the $N(f_n)$ and $N(g_n)$. Just as clearly, we can see that \displaystyle\begin{aligned}\int f_n\,d\mu&=\int\limits_Ef_n\,d\mu=u_n(E)\\\int g_n\,d\mu&=\int\limits_Eg_n\,d\mu=\lambda_n(E)\end{aligned} where $u_n$ is the indefinite integral of $f_n$, and $\lambda_n$ is the indefinite integral of $g_n$. Then if we use $\{f_n\}$ to define the integral of $f$ we get $\displaystyle\lim\limits_{n\to\infty}\int f_n\,d\mu=\lim\limits_{n\to\infty}u_n(E)=u(E)$ while if we use $\{g_n\}$ we get $\displaystyle\lim\limits_{n\to\infty}\int g_n\,d\mu=\lim\limits_{n\to\infty}\lambda_n(E)=\lambda(E)$ But we know that since $\{f_n\}$ and $\{g_n\}$ both converge in measure to the same function, the limiting set functions $u$ and $\lambda$ coincide, and thus $u(E)=\lambda(E)$. The value of the integral, then, doesn’t depend on the sequence of integrable simple functions! 5 Comments » 2. [...] Convergence Properties of Integrals Okay, we’ve got our general definition of integrable functions, and we’ve reestablished a bunch of our basic properties in this setting. Let’s [...] Pingback by Mean Convergence Properties of Integrals « The Unapologetic Mathematician | June 4, 2010 | Reply • Recent Posts • Blogroll • Art • Astronomy • Computer Science • Education • Mathematics • Me • Philosophy • Physics • Politics • Science • RSS Feeds • Feedback Got something to say? Anonymous questions, comments, and suggestions at • Subjects • Archives
{"url":"http://unapologetic.wordpress.com/2010/06/02/integrable-functions/","timestamp":"2014-04-19T17:18:39Z","content_type":null,"content_length":"80338","record_id":"<urn:uuid:13dcf20f-6efc-40e7-b452-a689fba77104>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00034-ip-10-147-4-33.ec2.internal.warc.gz"}
[FOM] Infinity and the "Noble Lie" Karlis Podnieks Karlis.Podnieks at mii.lu.lv Fri Jan 6 09:08:10 EST 2006 From: <joeshipman at aol.com> Sent: Thursday, January 05, 2006 9:37 AM Are you prepared to say that the question of the "truth" of an arithmetical statement proved using the axiom of infinity is also Couldn't be the invention of the axiom of infinity simply an act of fantasy? One is iterating the successor function, and, watching the process, asks the question: how long could this last without changes? Of course, if "applying successor function" would mean adding a U235 atom, then the conditions of the process will change sometime... Thus, in the physical world, it always depends on the implementation, how long the iteration process can last. And thus, an iteration process that lasts without changes and never stops, can be only an invention, an act of fantasy (some people call this act "idealization"). In a similar way, in 17th century, people invented the "uniform movement" that also did not exist in the physical world, but, nevertheless, could serve as a basis for much better modeling principles (Newton's Laws) than the obvious principle "any movement stops, if one does not apply force". The same kind of process leads to creatures populating Disneyland. Of course, all that is trivial, but do we need a more complicated ("nobler"?) philosophy here? Karlis.Podnieks at mii.lu.lv University of Latvia Institute of Mathematics and Computer Science More information about the FOM mailing list
{"url":"http://www.cs.nyu.edu/pipermail/fom/2006-January/009524.html","timestamp":"2014-04-16T05:59:58Z","content_type":null,"content_length":"3987","record_id":"<urn:uuid:f67d71c2-7762-474e-89fd-1f820a076a02>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00071-ip-10-147-4-33.ec2.internal.warc.gz"}
Baseball Prospectus | Fantasy Freestyle: Going Beyond 5x5 December 19, 2013 Fantasy Freestyle Going Beyond 5x5 Most fantasy players who ask me questions play in a “standard” 5x5 category league. This is why most of my pricing as well as subsequent discussions about what a player is worth tend to revolve around the idea of five hitting categories (typically home runs, runs batted in, stolen bases, runs, and batting average) and five pitching categories (wins, saves, ERA, WHIP, and strikeouts). However, there are more than a few fantasy gamers out there who play with more than 10 categories. While there are some fanatics who play in 9x9 or 10x10 leagues, most of the Rotisserie-style leagues that play with extra categories don’t dive that far into the pool and play 6x6, or maybe 7x7. But because these leagues aren’t “standard,” it is often the case that they get little if any attention. Without fail, I get at least one question a year asking how to value players for a 6x6 league. It isn’t necessarily “difficult” to do this, but there are two typical pitfalls that lead to incorrect assumptions or ideas surrounding 6x6 valuation: 1. Fantasy players assign too much value to the category being added 2. Fantasy players fail to redistribute the money from the existing categories to the new category and overspend on everyone The second mistake is easy enough to fix. However you decide to allocate your dollars, just make sure that the money adds up to $3,120 for a 12-team league with a $260 per team budget. What the hitters and pitchers are “worth”, though, depends a lot on which categories your league has added. Here is an example of two 6x6 leagues I have received questions about in the past. • 6x6 League 1: The sixth offensive category is doubles; the sixth pitching category is holds. • 6x6 League 2: The sixth offensive category is slugging percentage; the sixth pitching category is holds. And here is how the valuation “should” play out: • Standard 5x5 League: $175 per team for hitting, $85 per team for pitching, • 6x6 League 1: $172 per team for hitting, $88 per team for pitching. • 6x6 League 2: $158 per team for hitting, $102 per team for pitching. To understand why the values play out this way, it is important to take a step back and understand why pitchers are worth less than hitters (in theory, at least). Since hitters and pitchers each contribute to the same number of categories, it would seem logical to assume that hitters and pitchers should each get paid 50% of the budgetary pie (or $130 for hitting/$130 for pitching). However, where teams derive permanent benefit from the quantitative categories, the benefit they derive from the qualitative categories can be fleeting. A win is yours to keep forever, whereas an eight-inning, four-hit shutout can be undone by a four-inning, eight-earned-run nightmare the very next day. While generally speaking the best pitchers are nearly as reliable as the best hitters, the auction market doesn’t treat them this way, which is how the pricing discrepancy came to be in the first place. Using the same theoretical baseline, adding holds and doubles to the mix in League 1 adds more value to pitchers. Even though one quantitative category is being added on both sides of the game, pitchers see a higher increase in the percentage of quantitative categories - from three out of five (60%) to four out of six (67%)—than the hitters do (4/5, 80%; 5/6, 83%). The difference in the hitting/pitching split is slight—only three dollars per team—but there is a difference. League 2 sees the hitters lose ground in the quantitative categories. Instead of contributing in four out of five quantitative categories, now the hitters are “only” contributing in four out of six. This is why pitcher prices jump in this scenario, from $85 per team to $102 per team. Theoretically, the average team in a 6x6 league using slugging percentage and holds as the extra categories should spend far more on pitching than the average league. To see how this would look, I pulled data from the PFM for a standard 5x5 league, a 6x6 league with doubles and holds, and a 6x6 league with doubles and slugging percentage. Table 1: 5x5 vs. 6x6 Valuation Comparisons: Hitters │Player │5x5 Rank│5x5 $ │6x6 2B Rank│6x6 2B $│6x6 SLG Rank│6x6 SLG $│ │Cabrera, Miguel │ 1│$43.19│ 4│ $33.75│ 1│ $44.06│ │Trout, Mike │ 2│$41.13│ 2│ $37.48│ 3│ $38.68│ │Davis, Chris │ 3│$39.96│ 1│ $38.21│ 2│ $42.62│ │Goldschmidt, Paul │ 4│$38.32│ 3│ $34.35│ 4│ $36.80│ │Jones, Adam │ 5│$32.12│ 7│ $29.27│ 5│ $28.89│ │McCutchen, Andrew │ 6│$31.53│ 6│ $29.91│ 6│ $28.39│ │Pence, Hunter │ 7│$29.24│ 9│ $27.09│ 8│ $26.09│ │Ellsbury, Jacoby │ 8│$28.39│ 13│ $24.50│ 20│ $21.40│ │Rios, Alex │ 9│$28.25│ 11│ $25.42│ 16│ $22.30│ │Gomez, Carlos │ 10│$27.59│ 22│ $22.44│ 11│ $24.86│ │Carpenter, Matt │ 14│$25.11│ 5│ $32.18│ 14│ $22.69│ │Cano, Robinson │ 11│$26.61│ 8│ $27.68│ 7│ $26.62│ │Bruce, Jay │ 20│$23.20│ 10│ $25.98│ 18│ $21.70│ │Ortiz, David │ 18│$24.38│ 12│ $24.76│ 9│ $25.34│ │Encarnacion, Edwin │ 15│$24.99│ 24│ $21.58│ 10│ $25.09│ In order to provide this list with a little more flavor, I pulled the top 10 hitters from all three potential formats. For the most part, the lists stay pretty static but it is interesting to see how each category contributes. Adding slugging seems to lend to some fairly predictable results. Miguel Cabrera was valuable to begin with; adding slugging to the mix and he’s even more of a stud in the depressed offensive valuation context of 6x6 with slugging. On the other hand, adding a sixth quantitative category does seem to be the great equalizer. Not only is Cabrera not as good as he was in 5x5, but his “paltry” doubles total only ranks him as the fourth best hitter overall in 6x6 with doubles. Still, most of the takeaway from this chart is that A comes far closer to equaling A than you might expect. Yes, Matt Carpenter is an outlier, but his valuation spike in 6x6 with doubles takes a whopping 55 doubles to accomplish, and even this huge jump in doubles only gains Carpenter seven dollars in overall earnings. Table 2: 5x5 vs. 6x6 Valuation Comparisons: Pitchers │Player │5x5 Rank│5x5 $ │6x6 2B Rank│6x6 2B $│6x6 SLG Rank│6x6 SLG $│ │Kershaw, Clayton │ 1│$38.08│ 1│ $45.38│ 1│ $52.44│ │Scherzer, Max │ 2│$30.55│ 2│ $32.23│ 2│ $37.20│ │Wainwright, Adam │ 3│$26.30│ 5│ $27.25│ 5│ $31.43│ │Kimbrel, Craig │ 4│$25.29│ 3│ $31.27│ 3│ $36.09│ │Lee, Cliff │ 5│$24.48│ 8│ $26.32│ 8│ $30.35│ │Iwakuma, Hisashi │ 6│$23.31│ 9│ $25.17│ 9│ $29.01│ │Darvish, Yu │ 7│$23.05│ 10│ $24.42│ 10│ $28.15│ │Holland, Greg │ 8│$22.94│ 4│ $29.23│ 4│ $33.72│ │Nathan, Joe │ 9│$20.57│ 11│ $24.30│ 11│ $28.01│ │Harvey, Matt │ 10│$20.15│ 12│ $22.55│ 12│ $25.98│ │Uehara, Koji │ 16│$16.40│ 6│ $26.55│ 6│ $30.61│ │Jansen, Kenley │ 19│$15.38│ 7│ $26.47│ 7│ $30.52│ There is even less variability among the 10 best pitchers, both in terms of how many arms slip past the Top 10 but also in terms of how the rankings move (or, rather, don’t move). The biggest news in the top 10 is that Uehara and Jansen—both middle relievers at the beginning of the season—jump up a great deal because of their combination of saves and holds. Beyond the holds bump, the jump for most other pitchers has more to do with format. Pitchers are “worth” more in 6x6 that uses hitter doubles and “worth” even more in 6x6 leagues with hitter slugging percentage. An ace is worth is weight in gold in a 5x5 league, but is an even more significant impact player in 6x6. Or, at least, this is the instruction valuation theory offers. The problem with all of this is that if your league continues to spend $175 per team for hitters and $85 for pitchers, this is immaterial. In reality, if your league is anything like Tout Wars or LABR or some of the other expert leagues out there it probably spends $180 or slightly more per team for hitters. If your league does spend $85 per team for pitchers in a 6x6 holds league, the values would look very different: Table 3: 5x5 vs. 6x6 Valuation Comparisons: Pitchers (adjusted) │Player │5x5 Rank│5x5 $ │6x6 2B Rank│6x6 2B $│6x6 SLG Rank│6x6 SLG $│ │Kershaw, Clayton │ 1│$38.08│ 1│ $43.87│ 1│ $43.87│ │Scherzer, Max │ 2│$30.55│ 2│ $31.16│ 2│ $31.16│ │Wainwright, Adam │ 3│$26.30│ 5│ $26.36│ 5│ $26.36│ │Kimbrel, Craig │ 4│$25.29│ 3│ $30.24│ 3│ $30.24│ │Lee, Cliff │ 5│$24.48│ 8│ $25.46│ 8│ $25.46│ │Iwakuma, Hisashi │ 6│$23.31│ 9│ $24.34│ 9│ $24.34│ │Darvish, Yu │ 7│$23.05│ 10│ $23.62│ 10│ $23.62│ │Holland, Greg │ 8│$22.94│ 4│ $28.26│ 4│ $28.26│ │Nathan, Joe │ 9│$20.57│ 11│ $23.50│ 11│ $23.50│ │Harvey, Matt │ 10│$20.15│ 12│ $21.82│ 12│ $21.82│ │Uehara, Koji │ 16│$16.40│ 6│ $25.68│ 6│ $25.68│ │Jansen, Kenley │ 19│$15.38│ 7│ $25.60│ 7│ $25.60│ I would argue that 6x6 should flatten all of the pitching categories and that even a titan like Kershaw should be worth less in 6x6, not more. However, for most of the pitchers here the differences between 5x5 and 6x6 are negligible. This is fine. Since 6x6 isn’t a standard format, I suspect most 6x6 owners are spending the same amounts on hitters and pitchers across the board that their 5x5 counterparts are. If I were putting together a list of practical, no nonsense bid limits for a 6x6 league, I would mostly leave my standard list of 5x5 bid limits intact. The more categories there are, the more the values across the board should get flattened out. The elite hitters are worth a little less; the boring guys who get 600 plate appearances and do a little bit of everything are worth a little bit more. You want hitters who will produce something in every game, and if you can put together a team with 14 of these hitters, all the better. On the pitching side, bump set up men up a little bit. Instead of paying $1 for a set up, push these guys up to somewhere between $3-5, even in an only league. I wouldn’t go too crazy. While holds aren’t quite ubiquitous, there were 91 major-league relievers in 2013 with 10 holds or more. Only 37 relievers saved 10 games or more. Even more than with saves, you can find holds in the free agent pool during the season. Mike Gianella is an author of Baseball Prospectus. Click here to see Mike's other articles. You can contact Mike by clicking here
{"url":"http://www.baseballprospectus.com/article.php?articleid=22446","timestamp":"2014-04-16T10:29:21Z","content_type":null,"content_length":"75497","record_id":"<urn:uuid:de4d75e0-e2a8-4eb7-b5cc-288b672dbb3e>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00513-ip-10-147-4-33.ec2.internal.warc.gz"}
A question about inner product Thanks for your answers, everyone! First off, the definition of a vector space also includes two binary operations: addition and multiplication with a scalar. So you can add vectors. Yes, of course, thanks for the correction. You still get a vector, tho, that's what I was trying to say, poorly. That is something you simply have to define. I have to say it never occured me it could be like that. Here I was trying to figure out how could I produce a number(scalar) out of these abstract vectors, whilst I could just define <apple,pear> = 1. It's a very interesting thought! If you have a basis for the vector space then constucting an inner product is easy. For instance you can declare the basis vectors fo be ortonormal. Ah, yes. But I was kinda trying to avoid that. After all, maybe I don't want the basis of my choice to be orthogonal/orthonormal. Actually, I was trying to understand how to construct an inner product without involving any kind of a basis. After all, in the definition of an inner product, you don't see the concept of a basis as a requirement anywhere. @Erland Thanks, that is a good example of an inner product without a mention of any basis. Here is the thing, though. Functions, vectors, matrices, polynomials etc., those are all "constructions", to put it that way, that involve numbers in them at their very core. So extracting a number out of them is not an impossible task to imagine. What I was trying to explain to myself, is how would you do that for some general, abstract vector space, which, by itself, has no numbers in them whatsoever, aside from those numbers being an underlying scalar field. Every vector space, V, of finite dimension, n, is isomorphic to Rn and, given a basis for V, an inner product on it can written in terms of the "standard" inner product on Rn. (But different bases will give different inner products.) Yeah, we don't even have to use standard product on Rn, but that only complicates things, I guess. However, must we do it like this? This way, we depend on Rn, if you get what I'm trying to say. In fact, the moment we define inner product this way, we only work with Rn and our former vector space doesn't really matter anymore, as far as computations go. Don't get me wrong, I have no problem with this. I was just wondering if we could properly define an inner product without invoking Rn. It seems to me that, if we cannot do this, then the idea of an inner product is not really universal for all vector spaces, only for Rn. It gives us a scalar. All vector spaces have an underlying scalar field by definition. The function maps an ordered pair of vectors to some scalar. There's no need to start talking about "numbers". Yes, of course, I've been reckless, thank you. So, we have two dinstict objects, vectors and scalars. The question is how is it possible to map two vectors to a scalar. I like Serena 's suggestion is to simply define it. I think that sounds reasonable. So the orthonormal trigonometric functions described above would not be a basis since in general infinitely many of them are required to span a function. I think that, in an infinite dimensional case, you a basis by saying that a closure of its span , well, spans the space. That way you deal with all those functions that represent the limits of your finite combinations. Of course, it has to be possible that every function can be expressed as a limit of your finite combinations, that's what makes basis, well, a basis. That is how I understand it, anyway. On the real numbers define the usual inner product - just multiply the two numbers together. Now consider the real numbers to be a vector space over the rational numbers. The inner product is not defined in teems of a basis for this vector space. In fact, I don't think it is possible to describe a basis for this vector space. This is a nice thought experiment! I tried by choosing a 5 to be my basis vector. Then, in this setting, number 1 is a represenation of a number 5, number 3 is a representation of a number 15, number 0.4 is a representation of a number 2 etc. Then, I can define inner product on this "underlying" R space as a standard multiplication. So, for example, inner product [itex]<2,4>[/itex] would be [itex]0.4 \cdot 0.8 = 0.32[/itex]. But, it's clearly possible for a result of this inner product to be a real number. And that can't be, since the scalar field here are the rational numbers only. Now I, as well, am not sure how would I construct it! The way you did it at the beginning of your post On the real numbers define the usual inner product - just multiply the two numbers together. was without using any kind of basis. I guess all I maunder here is to see if this is possible in general, when your abstract vectors can't just "multiply together" in order to produce back a number.
{"url":"http://www.physicsforums.com/showthread.php?p=4164277","timestamp":"2014-04-20T08:31:40Z","content_type":null,"content_length":"76989","record_id":"<urn:uuid:3156f352-e669-4f53-bc01-b523bddac317>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00607-ip-10-147-4-33.ec2.internal.warc.gz"}
The derivative is the first of the two main tools of calculus (the second being the integral). The derivative is the instantaneous rate of change of a function at a point in its domain. This is the same thing as the slope of the tangent line to the graph of the function at that point. In order to give a rigorous definition for the derivative, we need the concept of limit introduced in the preceding section. Given a function f , we can define a derivative function f' to take on the value of the derivative of f at each point in the domain. For example, if Otis drives in a straight line from his home to Grand Rapids, Michigan, and the function f (t) gives his distance from home at time t , then the function f'(t) gives his "instantaneous rate of change", or his velocity, at time t . Once we have taken the derivative of a function f once, we can take the derivative again. This is called the second derivative of the original function f , and equals the "instantaneous rate of change of the instantaneous rate of change" of f . In the example above, this corresponds to how quickly Otis is speeding up or slowing down, that is, his acceleration. We can continue in this manner as long as we like, taking successive derivatives. In this SparkNote, we define derivatives and seek to develop an intuitive understanding of their meaning. In the following chapters, we will see how to compute derivatives and will explore some of their many applications.
{"url":"http://www.sparknotes.com/math/calcbc1/thederivative/summary.html","timestamp":"2014-04-21T09:54:00Z","content_type":null,"content_length":"51735","record_id":"<urn:uuid:99115d04-a28d-41fc-a7fd-2c8018957c95>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00517-ip-10-147-4-33.ec2.internal.warc.gz"}
Research Projects Professor L. Ramdas Ram-Mohan Worcester Polytechnic Institute, Worcester, MA 01609, USA The recent projects with Professor Ram-Mohan at Worcester Polytechnic Institute involve fundamental theoretical issues in opto-electronics and the modeling and simulation of physical systems. The initial 14 years of his career in Relativistic Field Theory and Many-Body Theory has provided the backdrop to the methods he continues to employ in the analysis of physical problems. Over the past 25 years he has worked in Condensed Matter Physics with emphasis on quantum semiconductor heterostructures and their linear and nonlinear optical properties. The present on-going research projects he is involved in are listed below. All the projects involve theory and also numerical modeling. Besides a strong mathematical physics background, knowledge of C-programming for scientific applications would be useful for graduate work on these projects. Graduate Research Assistantships are available for suitable candidates. Interested applicants should email the Physics Department at WPI and contact either Professor G. Iannacchione (gsiannac@wpi.edu), the head of the physics department, or Professor Ram-Mohan (lrram@wpi.edu). Undergraduate students interested in pursuing Major Qualifying Projects on the following topics are encouraged to contact Professor Ram-Mohan directly. Sophomore/Junior level students in Physics, ECE and Applied Mathematics would find their backgrounds more suitable for such MQP topics than others. 1. "Investigation of Physical Mechanisms in Multi-band Tunneling in Layered Semiconductor Structures." This project was supported by the NSF.(Principal Investigator: L. R. Ram-Mohan, WPI) Aim of the project: The aim of this project is to model layered semiconductor structures that can be grown using molecular-beam-epitaxy. If the individual layers are compound semiconductors such as GaAs/AlGaAs or CdTe/ZnSe we model the heterostructure using the energy band structure of the individual layers. The mathematical issue is the development of solutions for energy eigenvalues, wavefunctions, and tunneling currents through these structures, by solving the multi-component Schrödinger equations for the carriers in the energy bands in solids. The ability to model heterostructures with accuracy has led to the paradigm of wavefunction engineering, an approach through which he has successfully designed mid-IR (2-6 µm). These lasers are in the GaSb/InAs/AlSb system which has narrdow effective energy band gaps because they are Type-II heterostructures due to their energy band alignments. In the present project, the tunneling current and the I-V characteristics are being modeled in a multi-band framework. This is a theoretically challenging and computationally demanding project. Outcomes: A proper appreciation of the mechanisms playing a role in multiband tunneling will help us design better quantum heterostructure devices. Most devices work under external bias with carriers moving from one region of the device to another. 2. "Spintronics and carrier induced ferromagnetism in Antimonide Quantum Heterostructures: Simulation and Device Modeling." This project was funded by DARPA. (P.I.: L. R. Ram-Mohan, WPI) Aim of the project: The aim of this project is to understand the carrier-induced ferromagnetism in semiconducting materials that are doped with magnetic ions. When Manganese (Mn) ions are inserted into the crystal matrix of III-V compound semiconductors, the Mn go in substitutionally replacing the Group III cations. They also enter the crystal as acceptors, releasing holes in the valence band. The carriers interact with the Mn ions via the magnetic exchange interaction and align their spins, or magnetic dipole moments. We can then generate ferromagnetic behavior in ordinary semiconductors and control their ferromagnetic behavior by applying external bias to control the carrier densities in the heterostructures. Outcomes: Spin oriented carrier injection would allow us to design spin-LEDs (light-emitting diodes) and when the spin-polarized lifetime of the carrier is large it could be used for quantum 3. "Sensors: A New Class of Devices Based on Interfacial Effects in Metal-Semiconductor hybrid Structures." Funded by NSF. (P.I.: S. Solin, Washington University at St. Louis; Co-PI: L. R. Ram-Mohan, WPI) Aim of the project: Work at NEC Research Institute by S. Solin led to the discovery of the phenomenon of Extraordinary Magnetoresistance (EMR) in semiconductors with metallic inclusions in them. The resistance changes in the presence of a magnetic field and typical structures have responses of 100% to 700,000%. This level of sensitivity to magnetic fields and the display of the property down to nanoscopic sizes imply that one could use them to make magnetic sensors. What is remarkable about this is that the semiconductor-metal structures are free of magnetic metals that are typically used to make read-heads for computer hard-drives. Also, the property is dependent on the geometry of the distribution of the metal in the semiconductor. The theory and modeling is done at WPI, and Professor Solin who is now at Washington University at St. Louis will do the experiments. Outcomes: The design and modeling of these read-heads together with their experimental performance suggests that storage capacity of hard-drives can be extended from the present 15Gbit/sq.in. upto 1Terabit/sq.in. We have now proposed the development of a new class of sensors for strain, electric field, and optical detectors that exploit the geometry dependent properties. The design and simulation of new devices is done using novel applications of the finite element method. 4. "Wavefunction Engineering of Spintronic Devices in GaN/AlN and ZnO/MgO Quantum Structures doped with Transition Metal Ions." Funded by the AFOSR. (P.I.: L. R. Ram-Mohan, WPI) Aim of the project: The optoelectronic properties of layered heterostructures of the semiconductors with Wurtzite crystallographic structure are still being investigated. The material systems GaN /AlN and also ZnO/MgO have energy band gaps ranging from 3.5 eV to 6.25 eV, a range for laser operation from the blue region of the spectrum to the ultraviolet (UV). This project is for exploring theory and simulations for the carrier-induced ferromagnetic behavior of layered quantum semiconductor structures of III-V and II-VI materials of Wurtzite structure doped with transition metal The two material systems behave differently. In GaN/AlN the Mn enter the lattice substituting the cation while also being an acceptor. On the other hand, Mn enters the lattice as a spin-dopant. The interaction of carriers with Mn spin-sites in both systems has a large exchange constant that bodes well for carrier induced ferromagnetic behavior. In ZnO/MgO additional p-doping has been shown to induce ferromagnetic behavior. With 3~5% Mn concentrations the number of available carriers can be as large as ~1020 cm-3 so that we have to account for (i) very large band bending by solving for Schrödinger-Poisson selfconsistency, (ii) selfconsistency with the carrier induced ferromagnetism due to the exchange interaction, and (iii) internal spontaneous and piezoelectric polarization fields. Outcomes: With the calculations developed in this work, designs for spin devices such as spin-LEDs, tunable polarized lasers, spin-injection structures will be simulated easily. At present there are no guidelines for the optimized design of such structures. 5. "A Systematic Study for the development of Zero-Flux Planes, Diffusion Paths, Diffusion Structures in Multicomponent, Multiphase Systems." Funded at Purdue University by the NSF. (Collaboration with Professor M. A. Dayananda). Aim of the project: The phenomenon of interdiffusion in multicomponent, multiphase alloys is encountered in a wide variety of materials systems, ranging from high temperature alloys, coatings, claddings, nuclear fuels, nuclear wastes, thin films, composites, among others. The redistribution of the components, the development of the phase layers, and the evolution of the interdiffusion microstructure within the diffusion zone are governed not only by the relative diffusion behavior of the individual components in the different phases but also by the thermodynamic, kinetic and stereological characteristics of phase boundaries and interfaces within the diffusion structure. There had been no good way to extract the diffusion coefficients from the experimental concentration curves. The experimental work of Dayananda together with his approach of obtaining averaged diffusion coefficients has led to a breakthrough in the problem. With known diffusion coefficients, the aim is to predict the diffusion path and the concentration curves by theoretical and numerical methods to be developed in this collaboration. Outcomes: The developments of the diffusion zone and diffusion structures in multiphase assemblies have both theoretical and practical implications. When our predictive approach is demonstrated to work, it will have a profound impact on the development of metallic alloys and their uses in industry. 6. "Development of a Phonon-Mediated Quantum-Cascade Terahertz Laser." Funded at Univ. of Massachusetts at Lowell by DARPA. (Collaboration with Professors W. Goodhue and J. Waldman at UML.) Aim of the project: The THz region of the electromagnetic spectrum is receiving a significant amount of research attention for numerous applications. THz radiation in this effort is defined as emission in the range of 1-6 THz (300 - 50 µm, or 4.13 - 24.8 meV, respectively). The modeling is being done at WPI while the experimental work is done at UMass-Lowell. The modeling uses finite element methods for the design of a quantum cascade structure in which the carrier depletion is done extremely efficiently using interface-phonons. Outcomes: THz pulses have demonstrated the ability to nondestructively penetrate certain membranes, such as ceramic, paper, and cardboard, and provide image information behind these media. Proposed uses for this radiation have included airport and shipping security, and the medical and dental communities are already utilizing such systems for imaging through bones and other structures in a manner less harmful than X-ray imaging. Furthermore, THz signals may prove extremely effective in tracking chemical and biological signatures for advanced threat detection. Water molecules and other commonly abundant gas species significantly attenuate THz signals. Nevertheless, THz radiation has been suggested for usage in exo-atmospheric (above the stratosphere) radar applications. This radiation can potentially provide much higher resolution than traditional RF through much smaller apertures. 7. "A Finite-Element Approach to the Modeling of the Nematic Phase and the Nematic-Isotropic Phase Transition of Liquid Crystals." Proposal is being readied for submission to funding agencies. (PI: L. R. Ram-Mohan; Co-PI: Professor Germano Iannacchione, Physics Department, WPI.) Aim of the project: A finite-element modeling approach is proposed for simulating liquid crystal phase structures and transitions in a self-consistent manner, so that parameters for the theory are determined by thermodynamic experiments on liquid crystals of finite volume with surface interaction energies accounted for. The project will begin by using as a model system, the nematic phase and the nematic to isotropic phase transition, in order to establish a full complement of tools necessary to describe phase structure and transitions at the mean-field level. The benefits of this project include the advancement on several fundamental fronts concerning phase transitions and phase structure modeling across a spectrum of other condensed matter systems besides the modeling of liquid crystals that is used as the starting point in this proposal. In addition, several applications of liquid crystals would directly benefit, including electro-optical optimization for a variety of important photonic and "smart-material" applications. Outcomes: A unique and potentially dramatic new application is the use of liquid crystalline materials as sensors that will be active very near the nematic to isotropic phase transition. The idea would be to exploit the highly nonlinear electro-optical nematic responses in the transition region. 8. "Other diverse topics: Novel applications of the Finite Element Method to Nonlinear Systems; issues in Sparse Matrix Computation; Use of Variational Principles in Computations." Frequently, in research one has "targets of opportunity" that open up and are resolved with some modest effort. A number of such topics are available for exploration, development, and eventual publication as journal articles.
{"url":"http://users.wpi.edu/~lrram/MQP_and_PHD_Projects.html","timestamp":"2014-04-18T20:45:52Z","content_type":null,"content_length":"23556","record_id":"<urn:uuid:151c3903-c848-4d8b-bb31-d73b67b6190e>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00522-ip-10-147-4-33.ec2.internal.warc.gz"}
Visibility-based pursuit-evasion in a polygonal environment Results 1 - 10 of 76 - ACM TRANSACTIONS ON EMBEDDED COMPUTING SYSTEMS , 2003 "... ..." - In Proceedings of the 11th ACM-SIAM Symposium on Discrete Algorithms (SODA , 2000 "... Abstract We consider the problem of locating a continuously-moving target using a group of guardsmoving inside a simple polygon. Our guards always form a simple polygonal chain within the polygon such that consecutive guards along the chain are mutually visible. We developalgorithms that sweep such ..." Cited by 36 (2 self) Add to MetaCart Abstract We consider the problem of locating a continuously-moving target using a group of guardsmoving inside a simple polygon. Our guards always form a simple polygonal chain within the polygon such that consecutive guards along the chain are mutually visible. We developalgorithms that sweep such a chain of guards through a polygon to locate the target. Our two main results are the following: 1. an algorithm to compute the minimum number r * of guards needed to sweep an n-vertexpolygon that runs in O(n3) time and uses O(n2) working space, and 2. a faster algorithm, using O(n log n) time and O(n) space, to compute an integer r suchthat max( r- 16, 2) < = r * < = r and P can be swept with a chain of r guards. We develop two other techniques to approximate r*. Using O(n2) time and space, we show howto sweep the polygon using at most r * + 2 guards. We also show that any polygon can be sweptby a number of guards equal to two more than the link radius of the polygon. As a key component of our exact algorithm, we introduce the notion of the link diagramof a polygon, which encodes the link distance between all pairs of points on the boundary of the polygon. We prove that the link diagram has size \Theta (n3) and can be constructed in \Theta (n3)time. We also show link diagram provides a data structure for optimal two-point link-distance queries, matching an earlier result of Arkin et al.As a key component of our O(n log n)-time approximation algorithm, we introduce the notionof the &quot;link width &quot; of a polygon, which may have independent interest, as it captures important - International Journal of Robotics Research , 2004 "... We address an on-line version of the visibility-based pursuit-evasion problem. We take a minimalist approach in modeling the capabilities of a pursuer robot. A point pursuer moves in an unknown, simplyconnected, piecewise-smooth planar environment, and is given the task of locating any unpredictable ..." Cited by 35 (8 self) Add to MetaCart We address an on-line version of the visibility-based pursuit-evasion problem. We take a minimalist approach in modeling the capabilities of a pursuer robot. A point pursuer moves in an unknown, simplyconnected, piecewise-smooth planar environment, and is given the task of locating any unpredictable, moving evaders that have unbounded speed. The evaders are assumed to be points that move continuously. To solve the problem, the pursuer must for each target have an unobstructed view of it at some time during execution. The pursuer is equipped with a range sensor that measures the direction of depth discontinuities, but cannot provide precise depth measurements. All pursuer control is specified either in terms of this sensor or wall-following movements. The pursuer does not have localization capability or perfect control. We present a complete algorithm that enables the limited pursuer to clear the same environments that a pursuer with a complete map, perfect localization, and perfect control can clear (under certain general position assumptions). Theoretical guarantees that the evaders will be found are provided. The resulting algorithm to compute this strategy has been implemented in simulation. Results are shown for several examples. The approach is efficient and simple enough to be useful towards the development of real robot systems that perform visual searching. 1 "... This paper considers the computation of motion strategies to efficiently build polygonal layouts of indoor environments using a mobile robot equipped with a range sensor. This problem requires repeatedly answering the following question while the model is being built: Where should the robot go to pe ..." Cited by 33 (3 self) Add to MetaCart This paper considers the computation of motion strategies to efficiently build polygonal layouts of indoor environments using a mobile robot equipped with a range sensor. This problem requires repeatedly answering the following question while the model is being built: Where should the robot go to perform the next sensing operation? A next-best-view planner is proposed which selects the robot's next position that maximizes the expected amount of new space that will be visible to the sensor. The planner also takes into account matching requirements for reliable self-localization of the robot, as well as physical limitations of the sensor (range, incidence). The paper argues that polygonal layouts are a convenient intermediate model to perform other visual tasks. 1. Introduction Automatic model construction is a core problem in mobile robotics [1, 2, 3]. After being introduced into an unknown environment, a robot, or a team of robots, must perform sensing operations at multiple locations... - In Proc. Workshop on the Algorithmic Foundations of Robotics , 2004 "... Abstract. In this paper we present our advances in a data structure, the Gap Navigation Tree (GNT), useful for solving different visibility-based robotic tasks in unknown planar environments. We present its use for optimal robot navigation in simply-connected environments, locally optimal navigation ..." Cited by 22 (10 self) Add to MetaCart Abstract. In this paper we present our advances in a data structure, the Gap Navigation Tree (GNT), useful for solving different visibility-based robotic tasks in unknown planar environments. We present its use for optimal robot navigation in simply-connected environments, locally optimal navigation in multiply-connected environments, pursuit-evasion, and robot localization. The guiding philosophy of this work is to avoid traditional problems such as complete map building and exact localization by constructing a minimal representation based entirely on critical events in online sensor measurements made by the robot. The data structure is introduced from an information space perspective, in which the information used among the different visibility-based tasks is essentially the same, and it is up to the robot strategy to use it accordingly for the completion of the particular task. This is done through a simple sensor abstraction that reports the discontinuities in depth information of the environment from the robot’s perspective (gaps), and without any kind of geometric measurements. The GNT framework was successfully implemented on a real robot platform. 1 - IN PROCEEDINGS OF THE GRACE HOPPER CONFERENCE ON CELEBRATION OF WOMEN IN COMPUTING , 2002 "... Spatial localization or the ability to locate nodes is an important building block for next generation pervasive computing systems, but a formidable challenge, particularly, for very small hardware and energy constrained devices, for noisy, unpredictable environments and for very large ad hoc deploy ..." Cited by 22 (0 self) Add to MetaCart Spatial localization or the ability to locate nodes is an important building block for next generation pervasive computing systems, but a formidable challenge, particularly, for very small hardware and energy constrained devices, for noisy, unpredictable environments and for very large ad hoc deployed and networked systems. In this paper, we describe, validate and evaluate in real environments a very simple self localization methodology for RF-based devices based only on RF-connectivity constraints to a set of beacons (known nodes), applicable outdoors. Beacon placement has a significant impact on the localization quality in these systems. To self-configure and adapt the localization in noisy environments with unpredictable radio propagation vagaries, we introduce the novel concept of adaptive beacon placement. We propose several novel and density adaptive algorithms for beacon placement and demonstrate their effectiveness through evaluations. We also outline an approach in which beacons leverage a software controllable variable transmit power capability to further improve localization granularity. These combined features allow a localization system that is scalable and ad hoc deployable, long-lived and robust to noisy environments. The unique aspect of our localization approach is our emphasis on adaptive self-configuration. - International Journal of Computational Geometry and Applications , 2000 "... We present an algorithm for a single pursuer with one ashlight that searches for an unpredictable, moving target with unbounded speed in a polygonal environment. The algorithm decides whether a simple polygon with n edges and m concave regions (m is typically much less than n, and always bounded ..." Cited by 22 (4 self) Add to MetaCart We present an algorithm for a single pursuer with one ashlight that searches for an unpredictable, moving target with unbounded speed in a polygonal environment. The algorithm decides whether a simple polygon with n edges and m concave regions (m is typically much less than n, and always bounded by n) can be cleared by the pursuer, and if so, constructs a search schedule in time O(m 2 + m log n + n). The key ideas in this algorithm include a representation called the \visibility obstruction diagram" and its \skeleton," which is a combinatorial decomposition based on a number of critical visibility events. An implementation is presented along with a computed example. 1 Introduction Consider the following scenario: in a dark polygonal region there are two moving points. The rst one, called the pursuer, has the task to nd the second one, called the evader. The evader can move arbitrarily fast, and his movements are unpredictable by the pursuer. The pursuer is equipped with a... "... This paper examines the problem of locating a mobile, non-adversarial target in an indoor environment using multiple robotic searchers. One way to formulate this problem is to assume a known environment and choose searcher paths most likely to intersect with the path taken by the target. We refer to ..." Cited by 21 (12 self) Add to MetaCart This paper examines the problem of locating a mobile, non-adversarial target in an indoor environment using multiple robotic searchers. One way to formulate this problem is to assume a known environment and choose searcher paths most likely to intersect with the path taken by the target. We refer to this as the Multi-robot Efficient Search Path Planning (MESPP) problem. Such path planning problems are NP-hard, and optimal solutions typically scale exponentially in the number of searchers. We present an approximation algorithm that utilizes finite-horizon planning and implicit coordination to achieve linear scalability in the number of searchers. We prove that solving the MESPP problem requires maximizing a nondecreasing, submodular objective function, which leads to theoretical bounds on the performance of our approximation algorithm. We extend our analysis by considering the scenario where searchers are given noisy non-line-of-sight ranging measurements to the target. For this scenario, we derive and integrate online Bayesian measurement updating into our framework. We demonstrate the performance of our framework in two large-scale simulated environments, and we further validate our results using data from a novel ultra-wideband ranging sensor. Finally, we provide an analysis that demonstrates the relationship between MESPP and the intuitive average capture time metric. Results show that our proposed linearly scalable approximation algorithm generates searcher paths competitive with those generated by exponential algorithms. 1 - In Proc. IEEE Int’l Conf. on Robotics and Automation , 1954 "... We consider the problem of searching for an unpredictable moving target, using a robot that lacks a map of the environment, lacks the ability to construct a map, and has imperfect navigation ability. We present a complete algorithm, which yields a motion strategy for the robot that guarantees the el ..." Cited by 20 (13 self) Add to MetaCart We consider the problem of searching for an unpredictable moving target, using a robot that lacks a map of the environment, lacks the ability to construct a map, and has imperfect navigation ability. We present a complete algorithm, which yields a motion strategy for the robot that guarantees the elusive target will be detected, if such a strategy exists. It is assumed that the robot has an omnidirectional sensing device that is used to detect moving targets and also discontinuities in depth data in a 2D environment. We also show that the robot has the same problem-solving power as a robot that has a complete map and perfect navigation abilities. The algorithm has been implemented in simulation, and some examples are shown. 1 - PROCEEDINGS OF THE EIGHTH INTERNATIONAL SYMPOSIUM OF ROBOTICS RESEARCH , 1998 "... Autonomous Observers are mobile robots that cooperatively perform vision tasks. Their design raises new issues in motion planning, where visibility constraints and motion obstructions must be simultaneously taken into account. This paper presents the concept of an Autonomous Observer and its applica ..." Cited by 19 (5 self) Add to MetaCart Autonomous Observers are mobile robots that cooperatively perform vision tasks. Their design raises new issues in motion planning, where visibility constraints and motion obstructions must be simultaneously taken into account. This paper presents the concept of an Autonomous Observer and its applications. It discusses three problems in motion planning with visibility constraints: model building, target finding, and target tracking.
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=193262","timestamp":"2014-04-16T21:28:13Z","content_type":null,"content_length":"41955","record_id":"<urn:uuid:ea16a71e-9c9a-4d42-a628-89f7479e7a52>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00082-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Forum Discussions Math Forum Ask Dr. Math Internet Newsletter Teacher Exchange Search All of the Math Forum: Views expressed in these public forums are not endorsed by Drexel University or The Math Forum. Topic: Is time is One dimensional? if yes, How ? Replies: 279 Last Post: Aug 8, 2009 2:57 AM Messages: [ Previous | Next ] Cobra Re: Is time is One dimensional? if yes, How ? Posted: Jun 5, 2009 12:36 AM Posts: 414 Registered: 10/30/08 On Jun 4, 12:51 pm, "Do any of you remember the Republican Party?" <goofin...@gmail.com> wrote: > On Jun 3, 9:39 am, Nimo <azeez...@gmail.com> wrote: [snip the above] > > ________ > > politics,philosophy religion & society > > are quite good to HIM rather than > > physics; > > and this quote is best suited to him at this point.., > > ?Every street urchin in our mathematical Göttingen > > knows more about four-dimensional geometry than > > Einstein. Nevertheless, it was Einstein who did the work, > > not the great mathematicians. > > David Hilbert > > (January 23, 1862 ? February 14, 1943) > > the quote's author had shown his sarcasm very cleverly :-) > The tragic thing about Hilbert is how his greatest enterprise was > dashed by one young Mathematician philosopher, Kurt Godel. Wir müssen wissen. Wir werden wissen. As translated into English the inscriptions read: We must know. We will know. Ironically, the day before Hilbert pronounced this phrase at the 1930 annual meeting of the Society of German Scientists and Physicians, Kurt Gödel?in a roundtable discussion during the Conference on Epistemology held jointly with the Society meetings?tentatively announced the first expression of his (now-famous) incompleteness theorem,the news of which would make Hilbert "somewhat angry". In 1920 he proposed explicitly a research project (in metamathematics, as it was then termed) that became known as Hilbert's program. He wanted mathematics to be formulated on a solid and complete logical He believed that in principle this could be done, by showing that: 1. all of mathematics follows from a correctly-chosen finite system of axioms; and 2. that some such axiom system is provably consistent through some means such as the epsilon calculus. He seems to have had both technical and philosophical reasons for formulating this proposal. It affirmed his dislike of what had become known as the ignorabimus, still an active issue in his time in German thought, and traced back in that formulation to Emil du Bois-Reymond. This program is still recognizable in the most popular philosophy of where it is usually called formalism. For example, the Bourbaki group a watered-down and selective version of it as adequate to the requirements of their twin projects of (a) writing encyclopedic foundational works, and (b) supporting the axiomatic method as a research tool. This approach has been successful and influential in relation with Hilbert's work in algebra and functional analysis, but has failed to engage in the same way with his interests in physics and logic. Gödel's work Hilbert and the talented mathematicians who worked with him in his enterprise were committed to the project. His attempt to support axiomatized mathematics with definitive principles, which could theoretical uncertainties, was however to end in failure. Gödel demonstrated that any non-contradictory formal system, which was comprehensive enough to include at least arithmetic, cannot demonstrate its completeness by way of its own axioms. In 1931 his incompleteness theorem showed that Hilbert's grand plan was impossible as stated. The second point cannot in any reasonable be combined with the first point, as long as the axiom system is genuinely finitary. Nevertheless, the subsequent achievements of proof theory at the very clarified consistency as it relates to theories of central concern to Hilbert's work had started logic on this course of clarification; the need to understand Gödel's work then led to the development of theory and then mathematical logic as an autonomous discipline in the The basis for later theoretical computer science, in Alonzo Church Alan Turing also grew directly out of this 'debate'. Date Subject Author 6/3/09 Is time is One dimensional? if yes, How ? Cobra 6/3/09 Re: Is time is One dimensional? if yes, How ? John Sefton 6/3/09 Re: Is time is One dimensional? if yes, How ? Witziges Rätsel 6/3/09 Re: Is time is One dimensional? if yes, How ? jesko 6/3/09 Re: Is time is One dimensional? if yes, How ? The Sorcerer 6/3/09 Re: Is time is One dimensional? if yes, How ? dlzc@aol.com \(formerly\) 6/3/09 Re: Is time is One dimensional? if yes, How ? Ahmed Ouahi, Architect 6/3/09 Re: Is time is One dimensional? if yes, How ? Wisey 6/3/09 Re: Is time is One dimensional? if yes, How ? Tom Roberts 6/3/09 Re: Is time is One dimensional? if yes, How ? ZZBunker 6/3/09 Re: Is time is One dimensional? if yes, How ? Wisey 6/3/09 Re: Is time is One dimensional? if yes, How ? Grendel 6/4/09 Re: Is time is One dimensional? if yes, How ? Tom Potter 6/3/09 Re: Is time is One dimensional? if yes, How ? mahipal7638@gmail.com 6/3/09 Re: Is time is One dimensional? if yes, How ? suzysewnshow 6/3/09 Re: Is time is One dimensional? if yes, How ? Grendel 6/3/09 Re: Is time is One dimensional? if yes, How ? Ace0f_5pades 6/3/09 Re: Is time is One dimensional? if yes, How ? Ace0f_5pades 6/3/09 Re: Is time is One dimensional? if yes, How ? schoenfeld.forever@gmail.com 6/3/09 Re: Is time is One dimensional? if yes, How ? Sam Wormley 6/4/09 Re: Is time is One dimensional? if yes, How ? Ahmed Ouahi, Architect 6/4/09 Re: Is time is One dimensional? if yes, How ? Sam Wormley 6/4/09 Re: Is time is One dimensional? if yes, How ? Ahmed Ouahi, Architect 7/3/09 Re: Is time is One dimensional? if yes, How ? Autymn D. C. 6/17/09 Re: Is time is One dimensional? if yes, How ? MeAmI.org 7/3/09 Re: Is time is One dimensional? if yes, How ? Autymn D. C. 7/3/09 Re: Is time is One dimensional? if yes, How ? John Doe 6/4/09 Re: Is time is One dimensional? if yes, How ? SolomonW 6/4/09 Re: Is time is One dimensional? if yes, How ? Tom Roberts 6/4/09 Re: Is time is One dimensional? if yes, How ? hanson 6/6/09 Re: Is time is One dimensional? if yes, How ? schoenfeld.forever@gmail.com 7/3/09 Re: Is time is One dimensional? if yes, How ? Autymn D. C. 6/6/09 Re: Is time is One dimensional? if yes, How ? schoenfeld.forever@gmail.com 6/6/09 Re: Is time is One dimensional? if yes, How ? spiritual matter 6/6/09 Re: Is time is One dimensional? if yes, How ? spiritual matter 6/9/09 Re: Is time is One dimensional? if yes, How ? mikegordge@xtra.co.nz 6/9/09 Re: Is time is One dimensional? if yes, How ? mikegordge@xtra.co.nz 6/9/09 Re: Is time is One dimensional? if yes, How ? mikegordge@xtra.co.nz 6/9/09 Re: Is time is One dimensional? if yes, How ? mikegordge@xtra.co.nz 6/9/09 Re: Is time is One dimensional? if yes, How ? mikegordge@xtra.co.nz 6/9/09 Re: Is time is One dimensional? if yes, How ? mikegordge@xtra.co.nz 6/9/09 Re: Is time is One dimensional? if yes, How ? mikegordge@xtra.co.nz 6/9/09 Re: Is time is One dimensional? if yes, How ? mikegordge@xtra.co.nz 6/9/09 Re: Is time is One dimensional? if yes, How ? mikegordge@xtra.co.nz 6/9/09 Re: Is time is One dimensional? if yes, How ? mikegordge@xtra.co.nz 6/9/09 Re: Is time is One dimensional? if yes, How ? Errol 6/9/09 Re: Is time is One dimensional? if yes, How ? mikegordge@xtra.co.nz 6/9/09 Re: Is time is One dimensional? if yes, How ? mikegordge@xtra.co.nz 6/9/09 Re: Is time is One dimensional? if yes, How ? Jack Markan 6/9/09 Re: Is time is One dimensional? if yes, How ? mikegordge@xtra.co.nz 6/9/09 Re: Is time is One dimensional? if yes, How ? Jack Markan 6/9/09 Re: Is time is One dimensional? if yes, How ? Jack Markan 6/14/09 Re: Is time is One dimensional? if yes, How ? Michael Press 6/10/09 Re: Is time is One dimensional? if yes, How ? mikegordge@xtra.co.nz 6/10/09 Re: Is time is One dimensional? if yes, How ? Tim Golden http://bandtech.com 6/10/09 Re: Is time is One dimensional? if yes, How ? mikegordge@xtra.co.nz 6/10/09 Re: Is time is One dimensional? if yes, How ? Jack Markan 6/10/09 Re: Is time is One dimensional? if yes, How ? mikegordge@xtra.co.nz 6/10/09 Re: Is time is One dimensional? if yes, How ? Jack Markan 6/10/09 Re: Is time is One dimensional? if yes, How ? mikegordge@xtra.co.nz 6/10/09 Re: Is time is One dimensional? if yes, How ? Jack Markan 6/10/09 Re: Is time is One dimensional? if yes, How ? mikegordge@xtra.co.nz 6/14/09 Re: Is time is One dimensional? if yes, How ? Michael Press 6/11/09 Re: Is time is One dimensional? if yes, How ? Errol 6/11/09 Re: Is time is One dimensional? if yes, How ? mikegordge@xtra.co.nz 6/15/09 Re: Is time is One dimensional? if yes, How ? Jack Markan 6/16/09 Re: Is time is One dimensional? if yes, How ? mikegordge@xtra.co.nz 6/16/09 Re: Is time is One dimensional? if yes, How ? Jack Markan 6/16/09 Re: Is time is One dimensional? if yes, How ? Jack Markan 6/16/09 Re: Is time is One dimensional? if yes, How ? mikegordge@xtra.co.nz 6/16/09 Re: Is time is One dimensional? if yes, How ? mikegordge@xtra.co.nz 6/16/09 Re: Is time is One dimensional? if yes, How ? Jack Markan 6/18/09 Re: Is time is One dimensional? if yes, How ? Aatu Koskensilta 6/18/09 Re: Is time is One dimensional? if yes, How ? Han de Bruijn 6/18/09 Re: Is time is One dimensional? if yes, How ? Chris Menzel 6/16/09 Re: Is time is One dimensional? if yes, How ? Jack Markan 6/16/09 Re: Is time is One dimensional? if yes, How ? Jack Markan 6/16/09 Re: Is time is One dimensional? if yes, How ? mikegordge@xtra.co.nz 6/16/09 Re: Is time is One dimensional? if yes, How ? mikegordge@xtra.co.nz 6/16/09 Re: Is time is One dimensional? if yes, How ? Jack Markan 6/16/09 Re: Is time is One dimensional? if yes, How ? Jack Markan 6/16/09 Re: Is time is One dimensional? if yes, How ? Nick 6/16/09 Re: Is time is One dimensional? if yes, How ? mikegordge@xtra.co.nz 6/16/09 Re: Is time is One dimensional? if yes, How ? mikegordge@xtra.co.nz 6/17/09 Re: Is time is One dimensional? if yes, How ? mikegordge@xtra.co.nz 6/17/09 Re: Is time is One dimensional? if yes, How ? mikegordge@xtra.co.nz 6/17/09 Re: Is time is One dimensional? if yes, How ? Jack Markan 6/17/09 Re: Is time is One dimensional? if yes, How ? Jack Markan 6/17/09 Re: Is time is One dimensional? if yes, How ? Jack Markan 6/17/09 Re: Is time is One dimensional? if yes, How ? Jack Markan 6/17/09 Re: Is time is One dimensional? if yes, How ? MeAmI.org 6/17/09 Re: Is time is One dimensional? if yes, How ? mikegordge@xtra.co.nz 6/17/09 Re: Is time is One dimensional? if yes, How ? mikegordge@xtra.co.nz 6/17/09 Re: Is time is One dimensional? if yes, How ? mikegordge@xtra.co.nz 6/17/09 Re: Is time is One dimensional? if yes, How ? Jack Markan 6/17/09 Re: Is time is One dimensional? if yes, How ? Jack Markan 6/18/09 Re: Is time is One dimensional? if yes, How ? Jack Markan 6/18/09 Re: Is time is One dimensional? if yes, How ? Jack Markan 6/19/09 Re: Is time is One dimensional? if yes, How ? herb z 7/3/09 Re: Is time is One dimensional? if yes, How ? Autymn D. C. 6/21/09 Re: Is time is One dimensional? if yes, How ? Ace0f_5pades 6/22/09 Re: Is time is One dimensional? if yes, How ? mikegordge@xtra.co.nz 6/22/09 Re: Is time is One dimensional? if yes, How ? mikegordge@xtra.co.nz 6/22/09 Re: Is time is One dimensional? if yes, How ? mikegordge@xtra.co.nz 6/22/09 Re: Is time is One dimensional? if yes, How ? Guest 6/22/09 Re: Is time is One dimensional? if yes, How ? mikegordge@xtra.co.nz 6/22/09 Re: Is time is One dimensional? if yes, How ? Zurikela 6/22/09 Re: Is time is One dimensional? if yes, How ? mikegordge@xtra.co.nz 6/22/09 Re: Is time is One dimensional? if yes, How ? mikegordge@xtra.co.nz 6/22/09 Re: Is time is One dimensional? if yes, How ? Ace0f_5pades 6/22/09 Re: Is time is One dimensional? if yes, How ? Ace0f_5pades 6/23/09 Re: Is time is One dimensional? if yes, How ? mikegordge@xtra.co.nz 7/3/09 Re: Is time is One dimensional? if yes, How ? Autymn D. C. 7/3/09 Re: Is time is One dimensional? if yes, How ? Autymn D. C. 7/3/09 Re: Is time is One dimensional? if yes, How ? mikegordge@xtra.co.nz 6/10/09 Re: Is time is One dimensional? if yes, How ? Nick 7/3/09 Re: Is time is One dimensional? if yes, How ? Autymn D. C. 6/3/09 Re: Is time is One dimensional? if yes, How ? mahipal7638@gmail.com 6/3/09 Re: Is time is One dimensional? if yes, How ? Guest 6/3/09 Re: Is time is One dimensional? if yes, How ? spiritual matter 6/4/09 Re: Is time is One dimensional? if yes, How ? Guest 6/4/09 Re: Is time is One dimensional? if yes, How ? spiritual matter 6/4/09 Re: Is time is One dimensional? if yes, How ? mathew_orman@onet.eu 6/4/09 Re: Is time is One dimensional? if yes, How ? Tim Golden http://bandtech.com 6/4/09 Re: Is time is One dimensional? if yes, How ? Y.Porat 6/4/09 Re: Is time is One dimensional? if yes, How ? The tragedy of balding men with long hair 6/4/09 Re: Is time is One dimensional? if yes, How ? Jericho 6/4/09 Re: Is time is One dimensional? if yes, How ? Ace0f_5pades 6/4/09 Re: Is time is One dimensional? if yes, How ? don 6/4/09 Re: Is time is One dimensional? if yes, How ? Yousuf Khan 6/5/09 Re: Is time is One dimensional? if yes, How ? The Sorcerer 6/5/09 Re: Is time is One dimensional? if yes, How ? Cobra 6/10/09 Re: Is time is One dimensional? if yes, How ? Jack Markan 6/11/09 Re: Is time is One dimensional? if yes, How ? mikegordge@xtra.co.nz 6/11/09 Re: Is time is One dimensional? if yes, How ? Jack Markan 6/11/09 Re: Is time is One dimensional? if yes, How ? Jack Markan 6/11/09 Re: Is time is One dimensional? if yes, How ? Jack Markan 6/14/09 Re: Is time is One dimensional? if yes, How ? Guest 6/15/09 Re: Is time is One dimensional? if yes, How ? Jack Markan 6/16/09 Re: Is time is One dimensional? if yes, How ? Aatu Koskensilta 6/16/09 Re: Is time is One dimensional? if yes, How ? Jack Markan 6/11/09 Re: Is time is One dimensional? if yes, How ? Jack Markan 6/11/09 Re: Is time is One dimensional? if yes, How ? mikegordge@xtra.co.nz 6/11/09 Re: Is time is One dimensional? if yes, How ? mikegordge@xtra.co.nz 6/11/09 Re: Is time is One dimensional? if yes, How ? Jack Markan 6/11/09 Re: Is time is One dimensional? if yes, How ? Jack Markan 6/11/09 Re: Is time is One dimensional? if yes, How ? mikegordge@xtra.co.nz 6/11/09 Re: Is time is One dimensional? if yes, How ? Jack Markan 6/11/09 Re: Is time is One dimensional? if yes, How ? mikegordge@xtra.co.nz 6/11/09 Re: Is time is One dimensional? if yes, How ? Jack Markan 6/11/09 Re: Is time is One dimensional? if yes, How ? mikegordge@xtra.co.nz 6/11/09 Re: Is time is One dimensional? if yes, How ? Jack Markan 6/11/09 Re: Is time is One dimensional? if yes, How ? Marshall 6/14/09 Re: Is time is One dimensional? if yes, How ? Michael Press 6/12/09 Re: Is time is One dimensional? if yes, How ? Zurikela 6/12/09 Re: Is time is One dimensional? if yes, How ? Marshall 6/12/09 Re: Is time is One dimensional? if yes, How ? mikegordge@xtra.co.nz 6/12/09 Re: Is time is One dimensional? if yes, How ? Tim Little 6/12/09 Re: Is time is One dimensional? if yes, How ? Marshall 6/13/09 Re: Is time is One dimensional? if yes, How ? herb z 6/12/09 Re: Is time is One dimensional? if yes, How ? mikegordge@xtra.co.nz 6/12/09 Re: Is time is One dimensional? if yes, How ? mikegordge@xtra.co.nz 6/14/09 Re: Is time is One dimensional? if yes, How ? mikegordge@xtra.co.nz 6/14/09 Re: Is time is One dimensional? if yes, How ? Y.Porat 6/14/09 Re: Is time is One dimensional? if yes, How ? Marshall 6/14/09 Re: Is time is One dimensional? if yes, How ? Michael Press 6/17/09 Re: Is time is One dimensional? if yes, How ? Jack Markan 6/18/09 Re: Is time is One dimensional? if yes, How ? Aatu Koskensilta 6/17/09 Re: Is time is One dimensional? if yes, How ? mikegordge@xtra.co.nz 6/17/09 Re: Is time is One dimensional? if yes, How ? mikegordge@xtra.co.nz 6/18/09 Re: Is time is One dimensional? if yes, How ? Aatu Koskensilta 6/17/09 Re: Is time is One dimensional? if yes, How ? Jack Markan 6/17/09 Re: Is time is One dimensional? if yes, How ? Jack Markan 6/18/09 Re: Is time is One dimensional? if yes, How ? mikegordge@xtra.co.nz 6/18/09 Re: Is time is One dimensional? if yes, How ? Han de Bruijn 6/19/09 Re: Is time is One dimensional? if yes, How ? herb z 6/18/09 Re: Is time is One dimensional? if yes, How ? Zurikela 6/19/09 Re: Is time is One dimensional? if yes, How ? mikegordge@xtra.co.nz 7/3/09 Re: Is time is One dimensional? if yes, How ? Autymn D. C. 7/3/09 Re: Is time is One dimensional? if yes, How ? Autymn D. C. 6/18/09 Re: Is time is One dimensional? if yes, How ? mikegordge@xtra.co.nz 6/19/09 Re: Is time is One dimensional? if yes, How ? herb z 6/18/09 Re: Is time is One dimensional? if yes, How ? Jack Markan 6/18/09 Re: Is time is One dimensional? if yes, How ? mikegordge@xtra.co.nz 6/18/09 Re: Is time is One dimensional? if yes, How ? Jack Markan 6/18/09 Re: Is time is One dimensional? if yes, How ? Nick 6/19/09 Re: Is time is One dimensional? if yes, How ? mikegordge@xtra.co.nz 6/19/09 Re: Is time is One dimensional? if yes, How ? mikegordge@xtra.co.nz 6/19/09 Re: Is time is One dimensional? if yes, How ? Michael Stemper 6/19/09 Re: Is time is One dimensional? if yes, How ? Jack Markan 6/19/09 Re: Is time is One dimensional? if yes, How ? Jack Markan 6/19/09 Re: Is time is One dimensional? if yes, How ? mikegordge@xtra.co.nz 6/19/09 Re: Is time is One dimensional? if yes, How ? herb z 6/19/09 Re: Is time is One dimensional? if yes, How ? mikegordge@xtra.co.nz 6/19/09 Re: Is time is One dimensional? if yes, How ? Jack Markan 6/19/09 Re: Is time is One dimensional? if yes, How ? MeAmI.org 6/20/09 Re: Is time is One dimensional? if yes, How ? mikegordge@xtra.co.nz 6/22/09 Re: Is time is One dimensional? if yes, How ? Jack Markan 6/22/09 Re: Is time is One dimensional? if yes, How ? mikegordge@xtra.co.nz 7/3/09 Re: Is time is One dimensional? if yes, How ? Autymn D. C. 6/19/09 Re: Is time is One dimensional? if yes, How ? Jack Markan 6/20/09 Re: Is time is One dimensional? if yes, How ? mikegordge@xtra.co.nz 6/22/09 Re: Is time is One dimensional? if yes, How ? Michael Stemper 6/20/09 Re: Is time is One dimensional? if yes, How ? Guest 6/20/09 Re: Is time is One dimensional? if yes, How ? mikegordge@xtra.co.nz 6/20/09 Re: Is time is One dimensional? if yes, How ? Guest 6/22/09 Re: Is time is One dimensional? if yes, How ? Zurikela 6/22/09 Re: Is time is One dimensional? if yes, How ? Jack Markan 6/22/09 Re: Is time is One dimensional? if yes, How ? mikegordge@xtra.co.nz 6/22/09 Re: Is time is One dimensional? if yes, How ? mikegordge@xtra.co.nz 6/22/09 Re: Is time is One dimensional? if yes, How ? mikegordge@xtra.co.nz 6/23/09 Re: Is time is One dimensional? if yes, How ? Alan Smaill 6/23/09 Re: Is time is One dimensional? if yes, How ? mikegordge@xtra.co.nz 6/23/09 Re: Is time is One dimensional? if yes, How ? Alan Smaill 6/23/09 Re: Is time is One dimensional? if yes, How ? mikegordge@xtra.co.nz 6/23/09 Re: Is time is One dimensional? if yes, How ? Alan Smaill 6/23/09 Re: Is time is One dimensional? if yes, How ? mikegordge@xtra.co.nz 6/23/09 Re: Is time is One dimensional? if yes, How ? Alan Smaill 6/24/09 Re: Is time is One dimensional? if yes, How ? Jack Markan 6/24/09 Re: Is time is One dimensional? if yes, How ? Guest 6/25/09 Re: Is time is One dimensional? if yes, How ? mikegordge@xtra.co.nz 6/25/09 Re: Is time is One dimensional? if yes, How ? Jack Markan 6/25/09 Re: Is time is One dimensional? if yes, How ? mikegordge@xtra.co.nz 6/25/09 Re: Is time is One dimensional? if yes, How ? Jack Markan 6/23/09 Re: Is time is One dimensional? if yes, How ? herb z 6/22/09 Re: Is time is One dimensional? if yes, How ? Jack Markan 6/22/09 Re: Is time is One dimensional? if yes, How ? mikegordge@xtra.co.nz 6/22/09 Re: Is time is One dimensional? if yes, How ? Jack Markan 6/22/09 Re: Is time is One dimensional? if yes, How ? mikegordge@xtra.co.nz 6/22/09 Re: Is time is One dimensional? if yes, How ? Jack Markan 6/22/09 Re: Is time is One dimensional? if yes, How ? Jack Markan 6/22/09 Re: Is time is One dimensional? if yes, How ? mikegordge@xtra.co.nz 6/22/09 Re: Is time is One dimensional? if yes, How ? Jack Markan 6/22/09 Re: Is time is One dimensional? if yes, How ? mikegordge@xtra.co.nz 6/22/09 Re: Is time is One dimensional? if yes, How ? Jack Markan 6/24/09 Re: Is time is One dimensional? if yes, How ? Michael Stemper 6/25/09 Re: Is time is One dimensional? if yes, How ? Aatu Koskensilta 6/22/09 Re: Is time is One dimensional? if yes, How ? mikegordge@xtra.co.nz 6/22/09 Re: Is time is One dimensional? if yes, How ? Jack Markan 6/22/09 Re: Is time is One dimensional? if yes, How ? mikegordge@xtra.co.nz 6/22/09 Re: Is time is One dimensional? if yes, How ? Marshall 6/25/09 Re: Is time is One dimensional? if yes, How ? Aatu Koskensilta 6/23/09 Re: Is time is One dimensional? if yes, How ? mikegordge@xtra.co.nz 6/24/09 Re: Is time is One dimensional? if yes, How ? Ace0f_5pades 6/24/09 Re: Is time is One dimensional? if yes, How ? Guest 6/25/09 Re: Is time is One dimensional? if yes, How ? Jack Markan 6/25/09 Re: Is time is One dimensional? if yes, How ? mikegordge@xtra.co.nz 6/25/09 Re: Is time is One dimensional? if yes, How ? Jack Markan 6/25/09 Re: Is time is One dimensional? if yes, How ? mikegordge@xtra.co.nz 6/25/09 Re: Is time is One dimensional? if yes, How ? Jack Markan 6/25/09 Re: Is time is One dimensional? if yes, How ? mikegordge@xtra.co.nz 6/25/09 Re: Is time is One dimensional? if yes, How ? Ace0f_5pades 6/25/09 Re: Is time is One dimensional? if yes, How ? Ace0f_5pades 7/3/09 Re: Is time is One dimensional? if yes, How ? Autymn D. C. 7/3/09 Re: Is time is One dimensional? if yes, How ? Ahmed Ouahi, Architect 7/30/09 Re: Is time is One dimensional? if yes, How ? Scotius 7/3/09 Re: Is time is One dimensional? if yes, How ? Autymn D. C. 7/3/09 Re: Is time is One dimensional? if yes, How ? Autymn D. C. 7/3/09 Re: Is time is One dimensional? if yes, How ? Tim Golden http://bandtech.com 7/11/09 Re: Is time is One dimensional? if yes, How ? Autymn D. C. 8/4/09 Re: Is time is One dimensional? if yes, How ? Scotius 8/4/09 Re: Is time is One dimensional? if yes, How ? FACE 8/4/09 Re: Is time is One dimensional? if yes, How ? FACE 8/5/09 Re: Is time is One dimensional? if yes, How ? Ahmed Ouahi, Architect 8/5/09 Re: Is time is One dimensional? if yes, How ? FACE 8/5/09 Re: Is time is One dimensional? if yes, How ? Marshall 8/5/09 Re: Is time is One dimensional? if yes, How ? FACE 8/5/09 Re: Is time is One dimensional? if yes, How ? Ahmed Ouahi, Architect 8/5/09 Re: Is time is One dimensional? if yes, How ? Scotius 8/5/09 Re: Is time is One dimensional? if yes, How ? Ahmed Ouahi, Architect 8/5/09 Re: Is time is One dimensional? if yes, How ? FACE 8/5/09 Re: Is time is One dimensional? if yes, How ? Mathal 8/5/09 Re: Is time is One dimensional? if yes, How ? Ahmed Ouahi, Architect 8/5/09 Re: Is time is One dimensional? if yes, How ? Scotius 8/5/09 Re: Is time is One dimensional? if yes, How ? Ahmed Ouahi, Architect 8/8/09 Re: Is time is One dimensional? if yes, How ? Scotius 8/4/09 Re: Is time is One dimensional? if yes, How ? jason 8/5/09 Re: Is time is One dimensional? if yes, How ? penislord
{"url":"http://mathforum.org/kb/thread.jspa?threadID=1948101&messageID=6739643","timestamp":"2014-04-20T16:11:13Z","content_type":null,"content_length":"353687","record_id":"<urn:uuid:19a2bf28-e0ca-4ba1-b386-f5d4cf80cadb>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00539-ip-10-147-4-33.ec2.internal.warc.gz"}
Got Homework? Connect with other students for help. It's a free community. • across MIT Grad Student Online now • laura* Helped 1,000 students Online now • Hero College Math Guru Online now Here's the question you clicked on: Andy is parasailing and is at point A of the towline AB. The point B of the towline is connected to a boat, as shown. What is the length, in feet, of the towline AB? Best Response You've already chosen the best response. Best Response You've already chosen the best response. soh cah toa, basic trig. cos 43 = 500/AB AB = 500/cos 43 approx = 683.6637 feet Best Response You've already chosen the best response. |dw:1333067352819:dw| B) 500 cosec 43 degrees C) 500 sec 43 degrees Best Response You've already chosen the best response. You can do this yourself... Best Response You've already chosen the best response. so A ? Best Response You've already chosen the best response. Do you know what cot, sec, and csc mean? Best Response You've already chosen the best response. Best Response You've already chosen the best response. What about cotangent, secant, and cosecant. Best Response You've already chosen the best response. i've heard of them but i don't really know what the meaning.. Best Response You've already chosen the best response. But you know what sine, cosine, and tangent (sin, cos, and tan) mean, right? Best Response You've already chosen the best response. no not really . Best Response You've already chosen the best response. What math are you taking? Best Response You've already chosen the best response. it's somewhere in my text book but i don't understand it . Best Response You've already chosen the best response. i'm taking 10th grade geometry . Best Response You've already chosen the best response. So you don't know how to use trig functions (sin, cos, tan, etc...)? I can link you to some good videos if you want. Best Response You've already chosen the best response. Please do :) I need all the help I can get !! Best Response You've already chosen the best response. My finals are coming up at the end of next month ! Best Response You've already chosen the best response. http://www.khanacademy.org/math/trigonometry/v/basic-trigonometry Start there and click "next video" for as long as you need to. This is the very beginning of the trigonometry playlist, and this guy explains things really well. You can go to the main site to see a bunch of other vids on a ton of other topics (there's geometry in there too). I think after watching the first one you'll remember what your teacher said in class, so: cosecant x = csc x = 1/sin x secant x = sec x = 1/cos x cotangent x = cot x = 1/tan x I dunno when he introduces those ideas. Very simple though when you know what sine, cosine, and tangent are though. Good luck on your final! Best Response You've already chosen the best response. thank you so much :) Best Response You've already chosen the best response. No problem. Best Response You've already chosen the best response. cos 43 = 500/AB AB = 500/ [cos 43] -----> which is equivalent to C) 500 sec 43 degrees --> Answer AB = 683.664 Your question is ready. Sign up for free to start getting answers. is replying to Can someone tell me what button the professor is hitting... • Teamwork 19 Teammate • Problem Solving 19 Hero • Engagement 19 Mad Hatter • You have blocked this person. • ✔ You're a fan Checking fan status... Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy. This is the testimonial you wrote. You haven't written a testimonial for Owlfred.
{"url":"http://openstudy.com/updates/4f74fe39e4b0f07ddab15ced","timestamp":"2014-04-18T10:49:55Z","content_type":null,"content_length":"112276","record_id":"<urn:uuid:af30684c-745a-402e-9327-1031299c432d>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00436-ip-10-147-4-33.ec2.internal.warc.gz"}
Lowest Common Ancestor of a Binary Tree up vote 13 down vote favorite This is a popular interview question and the only article I can find on the topic is one from TopCoder. Unfortunately for me, it looks overly complicated from an interview answer's perspective. Isn't there a simpler way of doing this other than plotting the path to both nodes and deducing the ancestor? (This is a popular answer, but there's a variation of the interview question asking for a constant space answer). java binary-tree add comment 6 Answers active oldest votes Constant space answer: (although not necessarily efficient). Have a function findItemInPath(int index, int searchId, Node root) up vote 1 down then iterate from 0 .. depth of tree, finding the 0-th item, 1-th item etc. in both search paths. vote accepted When you find i such that the function returns the same result for both, but not for i+1, then the i-th item in the path is the lowest common ancestor. thank you, I think that's a great idea. We might also need to add another parameter in the function that says what the parent node at each step is, so that we can print it as soon as we find different results in the (i+1)th call. Sounds great - thanks again! – user183037 Apr 5 '11 at 5:02 I think this method is only useful as an intellectual exercise, because the time is O(N) unless sorted and balanced, in which case O(log(n)) time. But the natural algorithm for this uses O(log(N)) space and would be a simple modification of the code below to search the tree based on the value of key (choose left subtree or right subtree) rather than just prefix order (search left then right). – Larry Watanabe Apr 5 '11 at 14:27 add comment A simplistic (but much less involved version) could simply be (.NET guy here Java a bit rusty, so please excuse the syntax, but I think you won't have to adjust too much). This is what I threw together. class Program static void Main(string[] args) Node node1 = new Node { Number = 1 }; Node node2 = new Node { Number = 2, Parent = node1 }; Node node3 = new Node { Number = 3, Parent = node1 }; Node node4 = new Node { Number = 4, Parent = node1 }; Node node5 = new Node { Number = 5, Parent = node3 }; Node node6 = new Node { Number = 6, Parent = node3 }; Node node7 = new Node { Number = 7, Parent = node3 }; Node node8 = new Node { Number = 8, Parent = node6 }; Node node9 = new Node { Number = 9, Parent = node6 }; Node node10 = new Node { Number = 10, Parent = node7 }; Node node11 = new Node { Number = 11, Parent = node7 }; Node node12 = new Node { Number = 12, Parent = node10 }; Node node13 = new Node { Number = 13, Parent = node10 }; Node commonAncestor = FindLowestCommonAncestor(node9, node12); public class Node public int Number { get; set; } public Node Parent { get; set; } public int CalculateNodeHeight() return CalculateNodeHeight(this); private int CalculateNodeHeight(Node node) if (node.Parent == null) return 1; up vote 11 down } return CalculateNodeHeight(node.Parent) + 1; public static Node FindLowestCommonAncestor(Node node1, Node node2) int nodeLevel1 = node1.CalculateNodeHeight(); int nodeLevel2 = node2.CalculateNodeHeight(); while (nodeLevel1 > 0 && nodeLevel2 > 0) if (nodeLevel1 > nodeLevel2) node1 = node1.Parent; else if (nodeLevel2 > nodeLevel1) node2 = node2.Parent; if (node1 == node2) return node1; node1 = node1.Parent; node2 = node2.Parent; return null; Thanks Mirko, but having a parent pointer makes the question trivial. My bad though - I forgot to mention that in the question. Thanks a ton for the solution though :) – user183037 Apr 5 '11 at 5:07 very nice Solution....though making use of parent pointer.....don't understand why it is not accepted !! – Amol Sharma Mar 23 '12 at 11:40 add comment The main reason why the article's solutions are more complicated is that it is dealing with a two-stage problem- preprocessing and then queries- while from your question it sounds like you're only doing one query so preprocessing doesn't make sense. It's also dealing with arbitrary trees rather than binary trees. The best answer will certainly depend on details about the tree. For many kinds of trees, the time complexity is going to be O(h) where h is the tree's height. If you've got pointers to parent nodes, then the easy "constant-space" answer is, as in Mirko's solution, to find both nodes' height and compare ancestors of the same height. Note that this works for any tree with parent links, binary or no. We can improve on Mirko's solution by making the height function iterative and by separating the "get to the same depth" loops from the main loop: int height(Node n){ int h=-1; return h; Node LCA(Node n1, Node n2){ int discrepancy=height(n1)-height(n2); while(discrepancy>0) {n1=n1.parent;discrepancy--;} while(discrepancy<0) {n2=n2.parent;discrepancy++;} return n1; up vote 2 } down vote The quotation marks around "constant-space" are because in general we need O(log(h)) space to store the heights and the difference between them (say, 3 BigIntegers). But if you're dealing with trees with heights too large to stuff in a long, you likely have other problems to worry about that are more pressing than storing a couple nodes' heights. If you have a BST, then you can easily take a common ancestor (usu. starting with root) and check its children to see whether either of them is a common ancestor: Node LCA(Node n1, Node n2, Node CA){ if(n1.val<CA.val & n2.val<CA.val) CA=CA.left; else if (n1.val>CA.val & n2.val>CA.val) CA=CA.right; else return CA; As Philip JF mentioned, this same idea can be used in any tree for a constant-space algorithm, but for a general tree doing it this way will be really slow since figuring out repeatedly whether CA.left or CA.right is a common ancestor will repeat a lot of work, so you'd normally prefer to use more space to save some time. The main way to make that tradeoff would be basically the algorithm you've mentioned (storing the path from root). Thank you - yes, I wasn't looking to do pre-processing at all for this particular scenario. – user183037 Apr 5 '11 at 5:05 add comment It matters what kind of tree you are using. You can always tell if a node is the ancestor of another node in constant space, and the top node is always a common ancestor, so getting the Lowest Common Ancestor in constant space just requires iterating your way down. On a binary search tree this is pretty easy to do fast, but it will work on any tree. Many different trade offs are relevant for this problem, and the type of tree matters. The problem tends is much easier if you have pointers to parent nodes, and not just to children up vote 1 (Mirko's code uses this) down vote See also: http://en.wikipedia.org/wiki/Lowest_common_ancestor add comment The obvious solution, that uses log(n) space, (n is the number of nodes) is the algorithm you mentioned. Here's an implementation. In the worst case it takes O(n) time (imagine that one of the node you are searching common ancestor for includes the last node). using System; using System.Collections.Generic; using System.Linq; using System.Text; namespace ConsoleApplication2 class Node private static int counter = 0; private Node left = null; private Node right = null; public int id = counter++; static Node constructTreeAux(int depth) if (depth == 0) return null; Node newNode = new Node(); newNode.left = constructTree(depth - 1); newNode.right = constructTree(depth - 1); return newNode; public static Node constructTree(int depth) if (depth == 0) return null; Node root = new Node(); root.left = constructTreeAux(depth - 1); root.right = constructTreeAux(depth - 1); return root; private List<Node> findPathAux(List<Node> pathSoFar, int searchId) if (this.id == searchId) if (pathSoFar == null) pathSoFar = new List<Node>(); return pathSoFar; if (left != null) List<Node> result = left.findPathAux(null, searchId); if (result != null) return result; if (right != null) List<Node> result = right.findPathAux(null, searchId); if (result != null) return result; return null; public static void printPath(List<Node> path) if (path == null) Console.Out.WriteLine(" empty path "); for (int i = 0; i < path.Count; i++) Console.Out.Write(path[i] + " "); up vote 1 } down vote public override string ToString() return id.ToString(); /// <summary> /// Returns null if no common ancestor, the lowest common ancestor otherwise. /// </summary> public Node findCommonAncestor(int id1, int id2) List<Node> path1 = findPathAux(null, id1); if (path1 == null) return null; path1 = path1.Reverse<Node>().ToList<Node>(); List<Node> path2 = findPathAux(null, id2); if (path2 == null) return null; path2 = path2.Reverse<Node>().ToList<Node>(); Node commonAncestor = this; int n = path1.Count < path2.Count? path1.Count : path2.Count; for (int i = 0; i < n; i++) if (path1[i].id == path2[i].id) commonAncestor = path1[i]; return commonAncestor; return commonAncestor; private void printTreeAux(int depth) for (int i = 0; i < depth; i++) Console.Write(" "); if (left != null) left.printTreeAux(depth + 1); if (right != null) right.printTreeAux(depth + 1); public void printTree() public static void testAux(out Node root, out Node commonAncestor, out int id1, out int id2) Random gen = new Random(); int startid = counter; root = constructTree(5); int endid = counter; int offset = gen.Next(endid - startid); id1 = startid + offset; offset = gen.Next(endid - startid); id2 = startid + offset; commonAncestor = root.findCommonAncestor(id1, id2); public static void test1() Node root = null, commonAncestor = null; int id1 = 0, id2 = 0; testAux(out root, out commonAncestor, out id1, out id2); commonAncestor = root.findCommonAncestor(id1, id2); if (commonAncestor == null) Console.WriteLine("Couldn't find common ancestor for " + id1 + " and " + id2); Console.WriteLine("Common ancestor for " + id1 + " and " + id2 + " is " + commonAncestor.id); Thanks, I picked your other solution since it uses constant space. – user183037 Apr 5 '11 at 5:06 it may use constant space, but I think it will be hard to efficiently implement in constant space. For example, how do you find the first element on the path to an id? The answer would be to search either the left or right subtree, then chooose the apprpriate one. But searching the subtree takes O(N) time, so this is going to be about O(N*2) time. – Larry Watanabe Apr 5 '11 at 14:31 I think the question about a constant space answer is sort of a trick question. You should say "Yes, there is a constant space answer (describe my answer above) but it would be impractical because it would take O(N*2) time, so it's better to use the solution above which is O(N) and which can easily be improved to O(log(N)) by balancing and sorting the tree". – Larry Watanabe Apr 5 '11 at 14:32 add comment The bottom up approach described here is an O(n) time, O(1) space approach: Node *LCA(Node *root, Node *p, Node *q) { if (!root) return NULL; if (root == p || root == q) return root; up vote 0 down vote Node *L = LCA(root->left, p, q); Node *R = LCA(root->right, p, q); if (L && R) return root; // if p and q are on both sides return L ? L : R; // either one of p,q is on one side OR p,q is not in L&R subtrees add comment Not the answer you're looking for? Browse other questions tagged java binary-tree or ask your own question.
{"url":"http://stackoverflow.com/questions/5534440/lowest-common-ancestor-of-a-binary-tree","timestamp":"2014-04-24T21:24:02Z","content_type":null,"content_length":"100189","record_id":"<urn:uuid:0f3f9427-6960-48c0-81ea-af66e2396315>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00550-ip-10-147-4-33.ec2.internal.warc.gz"}
Posts about mastermind on Lucky's Notes Solving the AB Game by Brute Force November 7, 2011 The AB game, a variant of Mastermind, is played with pencil and paper and with two people. One person chooses a four digit number in which no digit is repeated, say X, and the other person tries to guess it in as few moves as possible. After the guesser guesses some number, say Y, the other person gives information on how closely Y matched with X: • If some digit in Y coincides with a digit in X (in the same position), then the guesser scores an A. • If some digit in Y exists in X but is in the wrong place, then the guesser scores a B. For instance, if X is 1234 and we guess 2674, we get an A and a B, because the 4 is in the right place, and the 2 is one of the right numbers but isn’t in the right place. This proceeds until the guesser gets the exact number. When humans (or at least beginners) play the AB game, they usually do some form of uncoordinated trial and error, which gets the right answer after some large number of moves. This takes anywhere from about 8 to possibly 30 guesses. When I played this game with a friend, I didn’t have a very systematic strategy, but I wondered if a computer program could solve the game, always entering the optimal guess. A Bruteforce Solver My first approach happened to work fairly well. Simply, the computer keeps track of a list of all possible numbers that the answer can be. At random, the computer guesses one of the numbers, and upon receiving feedback, eliminates every number in its list that doesn’t match that feedback. Quickly it eliminates whole swaths of combinations and arrives at the answer. A typical session might go like this: Guess 1: 6297. Score: 5040 Guess 2: 8512. Score: 1440 Guess 3: 2315. Score: 83 Guess 4: 5842. Score: 29 Guess 5: 9581. Score: 13 Guess 6: 8021. Score: 1 It took only 5 guesses for the computer to narrow down the choices to the only possible answer (8021). A variant that is usually much harder for humans to solve is to allow the number chosen to have repeats. Although significantly trickier for humans to guess by trial and error, brute force doesn’t seem to be affected too much by it: Guess 1: 1796. Score: 10000 Guess 2: 5881. Score: 3048 Guess 3: 0131. Score: 531 Guess 4: 3311. Score: 15 Guess 5: 3201. Score: 9 Guess 6: 2011. Score: 1 The computer’s average of five or six is much better than a human can normally do! (although I haven’t researched possible human algorithms). Optimal or Not? At this point you may begin to wonder if this strategy is the optimal one. Unfortunately, it is not — and I only need one counterexample to demonstrate that. Suppose that instead of four numbers, you were allowed to choose 4 letters from A to Z. You choose the combination ABCW. Now suppose that the computer ‘knows’ that the first three letters are ABC — that is, it has eliminated all other combinations except for ABCD, ABCE, …, ABCZ. By the random guessing algorithm, the computer is forced to guess ABCJ, ABCP, etc, until it eventually hits on ABCW at random. This may take a very high number of guesses. A smarter strategy would be to guess combinations of four unknown letters, say DEFG, then HIJK, etc. Instead of eliminating one letter, you eliminate four letters at a time. Although some guesses have no chance of being correct, the number of guesses required is fewer in the long run. Source Code I’ll put my somewhat unwieldy java code here for all to see: import java.util.*; public class ABSolver{ // Does cc1 and cc2 together match the pattern? // If repeats are allowed, cc2 is matched against cc1. static boolean fits(char[] cc1, char[] cc2, int A, int B){ int a = 0; int b = 0; for(int i=0; i<4; i++){ if(cc1[i] == cc2[i]) a++; for(int j=0; j<4; j++){ if(i != j && cc1[j] == cc2[i] && cc1[j]!=cc2[j]){ return a==A && b==B; public static void main(String[] args){ Random rand = new Random(); Scanner scan = new Scanner(System.in); // Combinations that haven't been eliminated yet List<char[]> ok = new ArrayList<char[]>(10000); // Generate all possible combinations for(int i = 0; i <= 9999; i++){ String i_s = Integer.toString(i); while(i_s.length() != 4) i_s = "0" + i_s; // Check for sameness char a = i_s.charAt(0); char b = i_s.charAt(1); char c = i_s.charAt(2); char d = i_s.charAt(3); boolean same = a==b || a==c || a==d || b==c || b==d || c==d; // Comment this line out if we're allowing repeats if(same) continue; char[] i_ia = i_s.toCharArray(); // Pick the first guess randomly char[] firstg = ok.get(rand.nextInt(ok.size())); char[] guess = null; int nguesses = 1; // Question answer cycle until we get it right // Ask for user response if(nguesses > 1){ String ans = scan.nextLine(); int A = 0; int B = 0; for(char cc : ans.toCharArray()){ if(cc == 'a') A++; if(cc == 'b') B++; if(A==4) return; // For each one check to see if it still fits List<char[]> ok_ = new ArrayList<char[]>(ok.size()); for(char[] zhe : ok){ if(fits(zhe, guess, A, B)) ok = ok_; char[] nextguess = null; if(nguesses == 1) nextguess = firstg; else nextguess = ok.get(rand.nextInt(ok.size())); System.out.println("Guess " + nguesses + ": " + new String(nextguess) + ". Score: " + ok.size()); // we win! if(ok.size() == 1) guess = nextguess;
{"url":"http://luckytoilet.wordpress.com/tag/mastermind/","timestamp":"2014-04-19T09:24:57Z","content_type":null,"content_length":"34864","record_id":"<urn:uuid:1d69109b-79cb-4924-ae31-27dfaf3808b9>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00657-ip-10-147-4-33.ec2.internal.warc.gz"}
Help with Matrices and Eigenvalues March 18th 2012, 10:51 AM #1 Mar 2012 United Kingdom Help with Matrices and Eigenvalues Firstly sorry if this is in the wrong section, I wasn't sure where to put it, please feel free to move it. I was just looking to get some help on a question I'm stuck on. Vector r = Rotation R = cos(θ) 0 sin(θ) -sin(θ) 0 cos(θ) r' = Rr It asks me to prove that r'.r' = r.r I found r' to be xcos(θ) + xsinθ -zsin(θ) + xcos(θ) To get r'.r' am I right to just multiply two of the above together, as in (xcos(θ) + xsin(θ))(xcos(θ) + xsin(θ)) (-zsin(θ) + xcos(θ))(-zsin(θ) + xcos(θ)) Because this is the way I did it and it doesn't lead to the same answer. Obviously these should all have big brackets around them, but I am unsure of how to represent them on here, if someone would advise me I would gladly fix that. Second part of the question is about eigenvalues, it asks me to find the three eigenvalues of R. I used the formula det(m - λI) Where m = R, λ = the eigenvalues and I is the appropriate identity matrix I end up with λ = cos() or 1, which is clearly wrong as there isn't enough answers. Somebody in class mentioned imaginary numbers, but I'm unsure as to how to proceed. Any advice with either part would be much appreciated, I can provide working out of what I have done so far if wanted/needed. Thank you. Re: Help with Matrices and Eigenvalues You're not doing it well, revise in your textbook how to multiply two matrices: $r'=\begin{bmatrix} \cos(\theta) & 0 & \sin(\theta) \\ 0 & 1 & 0 \\ -\sin(\theta) & 0 & \cos(\theta) \end{bmatrix} \begin{bmatrix}x \\ y \\ z \end{bmatrix} = \begin{bmatrix} x\cos(\theta)+z\sin(\ theta) \\ y \\ -x\sin(\theta)+z\cos(\theta) \end{bmatrix}$ To multiply two vectors you're supposed to use the definition of dot product: $\begin{bmatrix}a_1 \\ b_1 \\ c_1 \end{bmatrix} \begin{bmatrix}a_2 \\ b_2 \\ c_2 \end{bmatrix}=a_1a_2+b_1b_2+c_1c_2$, with this information try to check $r'\cdot r' =r\cdot r$. For the second part a possible way is to use cofactors along the second row, so that you get $\det\left(\begin{bmatrix} \cos(\theta)-\lambda & 0 & \sin(\theta) \\ 0 & 1-\lambda & 0 \\ -\sin(\theta) & 0 & \cos(\theta)-\lambda \end{bmatrix}\right)=(1-\lambda)(\lambda^2-2\lambda \cos(\ and this is 0 if $\lambda=1$ or $\lambda^2-2\lambda \cos(\theta)+1=0$ (use the quadratic formula). March 19th 2012, 12:10 PM #2 Jul 2011
{"url":"http://mathhelpforum.com/number-theory/196102-help-matrices-eigenvalues.html","timestamp":"2014-04-16T05:57:00Z","content_type":null,"content_length":"36449","record_id":"<urn:uuid:c074a6b4-3db3-48df-aa34-18f0738ba64e>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00343-ip-10-147-4-33.ec2.internal.warc.gz"}
Euler's theorem for Tetration, Pentation, etc. (Superexponentiation) up vote 0 down vote favorite Knuth's up-arrow notation extends the concept of exponentiation in a recursive manner. Euler's theorem can be restated in the arrow notation as: $a \uparrow \phi(m) \equiv 1 \mod m$, whenever $(a, m) = 1$. I was wondering what function $f_k(m)$ would ensure that $a \uparrow^k f_k(m) \ equiv 1 \mod m$ and under what conditions can we ensure that such a $f_k(m)$ exists. It is elementary to note that, when $\phi(m) | a$, for all values $n, k >= 2$, we have: $a \uparrow^k n \equiv 1 \ mod m$. I could neither find such a $f_k(m)$ nor prove that no such number exists when $\phi(m) \not | a$ As Euler's $\phi(n)$ doesn't always equal the order of $n$ in $m$, I'm only looking for a function $f_k(m)$ that ensures that $a \uparrow^k f_k(m)$ is congruent to $1$ modulo $m$ (under appropriate conditions); not necessarily the smallest such number. It looks like $f_k(m) = 0$ is an answer to your question, but perhaps an unsatisfying one. Since this question is accumulating votes to close (but without much of an explanation), it might find a more welcome home at math.stackexchange.com – S. Carnahan♦ Apr 3 '11 at 16:47 It might be of interest to know how I came across this. I was trying to find the last d decimal digits of $a \uparrow \uparrow b$ and found a simple recurrence for $r_{i,j} \equiv a \uparrow \ uparrow i \mod \phi_i (10^d)$ and we require $r_{b, 0}$, which solves the question when $(a, 10) = 1$. For finding the last digits of $a \uparrow^k b$ when $k > 2$, we require some sort of extension of Euler's theorem to reduce the large exponents to unity. – quantumelixir Apr 3 '11 at 19:08 However, I later found that the last $d$ decimal digits $a \uparrow^k b$ attain a fixed point and do not change further as the height of the tower increases. For instance, the last 500 digits of [Graham's number](en.wikipedia.org/wiki/Graham's_number) is computed noting this fact. This implies that the original motivating question on last digits of large numbers is rather un-interesting, unless of course $d$ is very large. – quantumelixir Apr 3 '11 at 19:17 1 Observation 1: if you iterate $(1+x)$,$(1+x)^{(1+x)}$, $(1+x)^{(1+x)^{(1+x)}}$ and so on you will observe, that the leading entries of the according taylor-series stabilize and the power series does not change in the leading terms. Observation 2: it is different, whether you iterate (mod some number) or whether you take the complete iterate (mod some number). Since the basic definition of tetration is still derived from the iteration-paradigm, it should explicitely noted, which procedure one takes here. – Gottfried Helms Apr 3 '11 at 21:58 Very nice. Your first observation might explain why the digits of a power tower eventually stabilize and do not change with further increase in the height. The procedure I'm referring to here is that of taking the complete iterate modulo a power of 10. There is an interesting phenomenon wherein you are able to use the other method you just referred to (iterating modulo the same power of 10) till you attain a fixed point. This fixed point is the same (except for small tower heights) as that obtained by the more difficult procedure of computing the complete iterate modulo the power of 10. – quantumelixir Apr 3 '11 at 23:05 show 1 more comment 1 Answer active oldest votes Suppose that for a positive integer $m$, there exists a positive integer $a>1$ such that $(a,m)=1$ and $\mathrm{rad}(\mathrm{ord}_m(a))\not|a$. Then $f_2(m)>0$ does not exists. (Here $\mathrm{ord}_m(a)$ is the multiplicative order of $a$ modulo $m$, and $\mathrm{rad}(n)$ is the radical of an integer $n$.) Indeed, for any $n>0$, $a\uparrow^2 n = a\uparrow (a\uparrow^2 (n-1))$ and $\mathrm{ord}_m(a)\not|(a\uparrow^2 (n-1))$, implying that $a\uparrow^2 n\not\equiv 1\pmod{m}$. up vote 1 down vote Similarly, under the same conditions, $f_k(m)>0$ does not exist for all $k>2$. Below $10^6$ only for $m=2$ and $m=3$, the anticipated $a$ does not exists. I believe such $a$ actually exists for all $m>3$. UPD. Above I implicitly assumed that $a < m$. If we drop this restriction, the problem becomes completely trivial. add comment Not the answer you're looking for? Browse other questions tagged nt.number-theory or ask your own question.
{"url":"http://mathoverflow.net/questions/60429/eulers-theorem-for-tetration-pentation-etc-superexponentiation?sort=votes","timestamp":"2014-04-18T03:12:51Z","content_type":null,"content_length":"58970","record_id":"<urn:uuid:362c482c-543b-41f6-9dd4-6615346932b0>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00233-ip-10-147-4-33.ec2.internal.warc.gz"}
Size of stationary sets MathOverflow is a question and answer site for professional mathematicians. It's 100% free, no registration required. What can we say about the size of stationary subsets of $P_{\kappa}(\lambda)$ for infinite cardinals $\kappa, \lambda,$ especially when $\kappa=\aleph_1.$ Please give me some references, if there are any. up vote 7 down vote favorite 4 set-theory forcing infinite-combinatorics add comment What can we say about the size of stationary subsets of $P_{\kappa}(\lambda)$ for infinite cardinals $\kappa, \lambda,$ especially when $\kappa=\aleph_1.$ There exists a stationary subset of $P_{\omega_1} (\omega_2)$ of size $\aleph_2$. This is a result of Baumgartner and you can find a proof for this here: Why is this set stationary? I don't dare to answer the general case as I don't know much about it. I think it is a quite complicated issue depending on several things as cardinal arithmetic and even large cardinals. up vote 8 However you can generalize the proof of Solovays Splitting theorem which says that every stationary subset of a regular cardinal $\kappa$ can be split into $\kappa$-many pairwise disjoint down vote stationary sets, to obtain that every stationary subset of $P_{\kappa} (\lambda)$ can be split into $\kappa$ many pairwise disjoint stationary sets, which gives you a lower bound for the size of stationary sets. (An obvious upper bound is of course $\lambda^{< \kappa}$) add comment There exists a stationary subset of $P_{\omega_1} (\omega_2)$ of size $\aleph_2$. This is a result of Baumgartner and you can find a proof for this here: Why is this set stationary? I don't dare to answer the general case as I don't know much about it. I think it is a quite complicated issue depending on several things as cardinal arithmetic and even large cardinals. However you can generalize the proof of Solovays Splitting theorem which says that every stationary subset of a regular cardinal $\kappa$ can be split into $\kappa$-many pairwise disjoint stationary sets, to obtain that every stationary subset of $P_{\kappa} (\lambda)$ can be split into $\kappa$ many pairwise disjoint stationary sets, which gives you a lower bound for the size of stationary sets. (An obvious upper bound is of course $\lambda^{< \kappa}$) Shelah proved using his pcf theory that the least cardinality of a stationary subset of $P_\kappa(\lambda)$ is equal to the least cardinality of a cofinal subset of $P_\kappa(\lambda)$. up vote 6 See here: M. Shioya: A proof of Shelah's strong covering theorem for $P_\kappa(\lambda)$, Asian J. Math, 12(2008), 83-98. down vote add comment Shelah proved using his pcf theory that the least cardinality of a stationary subset of $P_\kappa(\lambda)$ is equal to the least cardinality of a cofinal subset of $P_\kappa(\lambda)$. See here: M. Shioya: A proof of Shelah's strong covering theorem for $P_\kappa(\lambda)$, Asian J. Math, 12(2008), 83-98.
{"url":"http://mathoverflow.net/questions/63600/size-of-stationary-sets","timestamp":"2014-04-18T11:06:06Z","content_type":null,"content_length":"54526","record_id":"<urn:uuid:5291266e-9349-4bae-b9c8-be0fd7885413>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00163-ip-10-147-4-33.ec2.internal.warc.gz"}
Laurys Station Algebra 2 Tutor ...I played on Lower Merion High School's Varsity Tennis team for four years and Muhlenberg College's varsity tennis team for one year. I also individually taught group tennis clinics through Lower Merion Township to children ranging from 7-17 years old. Ever since I was little I have been an avid tennis player and have taken lessons to improve my skills. 26 Subjects: including algebra 2, reading, calculus, statistics ...In addition to the standard Algebra 2 and Precalculus curriculums, I tutor advanced topics, such as problem solving strategies for success on the AMC and AIME competitions. As Mark Twain said, "The difference between the right word and the almost-right word is the difference between the lightnin... 34 Subjects: including algebra 2, English, writing, physics ...I have been a practicing engineer for many years and thus I am familiar with many practical applications of math concepts to real world examples. My teaching philosophy is to maintain a student-focused and student-engaged learning environment to ensure student comprehension and student success. ... 12 Subjects: including algebra 2, calculus, statistics, geometry ...I enjoy teaching math and science, and am trying to make some money on the side since I am only a part-time student this semester. My favorite subjects are chemistry, physics, and any math subject, but would be willing to step outside my comfort zone as I am exposed to many math/science subjects... 10 Subjects: including algebra 2, chemistry, calculus, physics ...Algebra 2 is an extension of Algebra I. I've spent many years using it in my engineering career and I have also taught this at the high school Level. As an engineer, Excel has been a very effective tool in documenting and testing ideas. 11 Subjects: including algebra 2, physics, probability, ACT Math
{"url":"http://www.purplemath.com/laurys_station_algebra_2_tutors.php","timestamp":"2014-04-16T22:19:58Z","content_type":null,"content_length":"24266","record_id":"<urn:uuid:3538cd65-5e78-43c4-b3d4-99bb42221af5>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00337-ip-10-147-4-33.ec2.internal.warc.gz"}
Expressive power of first-order logic for embedded finite models Belegradek, O (Istanbul Bilgi) Thursday 27 January 2005, 11:00-12.30 Seminar Room 1, Newton Institute I will discuss some questions posed by J.T.Baldwin and M.Benedikt [Trans. AMS 352 (2000)] concerning a relation between expressive power of first-order logic over finite models embedded in a model M and stability-theoretic properties of M.
{"url":"http://www.newton.ac.uk/programmes/MAA/seminars/2005012711001.html","timestamp":"2014-04-21T03:19:14Z","content_type":null,"content_length":"3926","record_id":"<urn:uuid:7ad3df8a-05a5-430f-a1d8-287ef98d95a7>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00628-ip-10-147-4-33.ec2.internal.warc.gz"}
c Problems - AP level Worksheet - Body-Centered Cubic Problems - AP level Go to some face-centered cubic problems Go to some general unit cell problems Return to the Liquids & Solids menu Here are the problems: Problem #1: The edge length of the unit cell of Ta, is 330.6 pm; the unit cell is body-centered cubic. Tantalum has a density of 16.69 g/cm^3 (a) calculate the mass of a tantalum atom. (b) Calculate the atomic weight of tantalum in g/mol. Problem #2: Chromium crystallizes in a body-centered cubic structure. The unit cell volume is 2.583 x 10¯^23 cm^3. Determine the atomic radius of Cr in pm. Problem #3: Barium has a radius of 224 pm and crystallizes in a body-centered cubic structure. What is the edge length of the unit cell? Problem #4: Metallic potassium has a body-centered cubic structure. If the edge length of unit cell is 533 pm, calculate the radius of potassium atom. Problem #5: Sodium has a density of 0.971 g/cm^3 and crystallizes with a body-centered cubic unit cell. (a) What is the radius of a sodium atom? (b) What is the edge length of the cell? Give answers in picometers. Problem #6: At a certain temperature and pressure an element has a simple body-centred cubic unit cell. The corresponding density is 4.253 g/cm^3 and the atomic radius is 1.780 Å. Calculate the atomic mass (in amu) for this element. Problem #7: Mo crystallizes in a body-centered cubic arrangement. Calculate the radius of one atom, given the density of Mo is 10.28 g /cm^3. Problem #8: see problem at end of file. Problem #1: The edge length of the unit cell of Ta, is 330.6 pm; the unit cell is body-centered cubic. Tantalum has a density of 16.69 g/cm^3. (a) calculate the mass of a tantalum atom. (b) Calculate the atomic weight of tantalum in g/mol. 1) Convert pm to cm: 330.6 pm x 1 cm/10^10 pm = 330.6 x 10¯^10 cm = 3.306 x 10¯^8 cm 2) Calculate the volume of the unit cell: (3.306 x 10¯^8 cm)^3 = 3.6133 x 10¯^23 cm^3 3) Calculate mass of the 2 tantalum atoms in the body-centered cubic unit cell: 16.69 g/cm^3 times 3.6133 x 10¯^23 cm^3 = 6.0307 x 10¯^22 g 4) The mass of one atom of Ta: 6.0307 x 10¯^22 g / 2 = 3.015 x 10¯^22 g 5) The atomic weight of Ta in g/mol: 3.015 x 10¯^22 g times 6.022 x 10^23 mol¯^1 = 181.6 g/mol Problem #2: Chromium crystallizes in a body-centered cubic structure. The unit cell volume is 2.583 x 10¯^23 cm^3. Determine the atomic radius of Cr in pm. 1) Determine the edge length of the unit cell: [cube root of] 2.583 x 10¯^23 cm^3 = 2.956 x 10¯^8 cm 2) Examine the following diagram: The triangle we will use runs differently than the triangle used in fcc calculations. d is the edge of the unit cell, however d√2 is NOT an edge of the unit cell. It is a diagonal of a face of the unit cell. 4r is a body diagonal. Since it is a right triangle, the Pythagorean Theorem works just fine. We wish to determine the value of 4r, from which we will obtain r, the radius of the Cr atom. Using the Pythagorean Theorem, we find: d^2 + (d√2)^2 = (4r)^2 3d^2 = (4r)^2 3(2.956 x 10¯^8 cm)^2 = 16r^2 r = 1.28 x 10¯^8 cm 3) The conversion from cm to pm is left to the student. Problem #3: Barium has a radius of 224 pm and crystallizes in a body-centered cubic structure. What is the edge length of the unit cell? (This is the reverse of problem #4.) 1) Calculate the value for 4r (refer to the above diagram): radius for barium = 224 pm 4r = 896 pm 2) Apply the Pythagorean Theorem: d^2 + (d√2)^2 = (896)^2 3d^2 = 802816 d^2 = 267605.3333. . . d = 517 pm Problem #4: Metallic potassium has a body-centered cubic structure. If the edge length of unit cell is 533 pm, calculate the radius of potassium atom. (This is the reverse of problem #3.) 1) Solve the Pythagorean Theorem for r (with d = the edge length): d^2 + (d √2)^2 = (4r)^2 d^2 + 2d^2 = 16r^2 3d^2 = 16r^2 r^2 = 3d^2 / 16 r = √3 (d / 4) 2) Solve the problem: √3 (533 / 4) r = 231 pm Problem #5: Sodium has a density of 0.971 g/cm^3 and crystallizes with a body-centered cubic unit cell. (a) What is the radius of a sodium atom? (b) What is the edge length of the cell? Give answers in picometers. 1) Determine mass of two atoms in a bcc cell: 22.99 g/mol divided by 6.022 x 10^23 mol^-1 = 3.81767 x 10^-23 g (this is the average mass of one atom of Na) 3.81767 x 10^-23 g times 2 = 7.63534 x 10^-23 g 2) Determine the volume of the unit cell: 7.63534 x 10^-23 g divided by 0.971 g/cm^3 = 7.863378 x 10^-23 cm^3 3) Determine the edge length (the answer to (b)): [cube root of]7.863378 x 10^-23 cm^3 = 4.2842 x 10^-8 cm 4) Use the Pythagorean Theorem (refer to above diagram): d^2 + (d√2)^2 = (4r)^2 3d^2 = 16r^2 r^2 = 3(4.2842 x 10^-8)^2 / 16 r = 1.855 x 10^-8 cm The radius of the sodium atom is 185.5 pm. The edge length is 428.4 pm. The manner of these conversions are left to the reader. Problem #6: At a certain temperature and pressure an element has a simple body-centred cubic unit cell. The corresponding density is 4.253 g/cm^3 and the atomic radius is 1.780 Å. Calculate the atomic mass (in amu) for this element. 1) Convert 1.780 Å to cm: 1.780 Å = 1.780 x 10^-8 cm 2) Use the Pythagorean Theorem to calculate d, the edge length of the unit cell: d^2 + (d√2)^2 = (4r)^2 3d^2 = 16r^2 d^2 = (16/3) (1.780 x 10^-8 cm)^2 d = 4.11 x 10^-8 cm 3) Calcuate the volume of the unit cell: (4.11 x 10^-8 cm)^3 = 6.95 x 10^-23 cm^3 4) Calcuate the mass inside the unit cell: 6.95 x 10^-23 cm^3 times 4.253 g/cm^3 = 2.95 x 10^-22 g Use a ratio and proportion to calculate the atomic mass: 2.95 x 10^-22 g is to two atoms as 'x' is to 6.022 x 10^23 mol^-1 x = 88.95 g/mol (or 88.95 amu) Problem #7: Mo crystallizes in a body-centered cubic arrangement. Calculate the radius of one atom, given the density of Mo is 10.28 g /cm^3. 1) Determine mass of two atoms in a bcc cell: 95.96 g/mol divided by 6.022 x 10^23 mol^-1 = 1.59349 x 10^-22 g (this is the average mass of one atom of Mo) 1.59349 x 10^-22 g times 2 = 3.18698 x 10^-22 g 2) Determine the volume of the unit cell: 3.18698 x 10^-22 g divided by 10.28 g/cm^3 = 3.100175 x 10^-23 cm^3 3) Determine the edge length: [cube root of]3.100175 x 10^-23 cm^3 = 3.14144 x 10^-8 cm 4) Use the Pythagorean Theorem (refer to above diagram): d^2 + (d√2)^2 = (4r)^2 3d^2 = 16r^2 r^2 = 3(3.14144 x 10^-8)^2 / 16 r = 1.3603 x 10^-8 cm (or 136.0 pm, to four sig figs) Problem #8: In modeling solid-state structures, atoms and ions are most often modeled as spheres. A structure built using spheres will have some empty space in it. A measure of the empty (also called void) space in a particular structure is the packing efficiency, defined as the volume occupied by the spheres divided by the total volume of the structure. Given that a solid crystallizes in a body-centered cubic structure that is 3.05 Å on each side, please answer the following questions. (The ChemTeam formatted this question while in transit through the Panama Canal, Nov. 7, 2010.) a. How many atoms are there in each unit cell? b. What is the volume of one unit cell in Å^3? (3.05 Å)^3 = 28.372625 Å^3 c. Assuming that the atoms are spheres and the radius of each sphere is 1.32 Å, what is the volume of one atom in Å^3? (4/3) (3.141592654) (1.32)^3 = 9.63343408 Å^3 I used the key for π on my calculator, so there were some internal digits in addition to that last 4 (which is actually rounded up from the internal digits). d. Therefore, what volume of atoms are in one unit cell? (9.63343408 Å^3 times 2) = 19.26816686 Å^3 e. Based on your results from parts b and d, what is the packing efficiency of the solid expressed as a percentage? 19.26816686 Å^3 / 28.372625 Å^3 = 0.679 Go to some face-centered cubic problems Go to some general unit cell problems
{"url":"http://www.chemteam.info/Liquids&Solids/WS-bcc-AP.html","timestamp":"2014-04-19T06:51:38Z","content_type":null,"content_length":"11384","record_id":"<urn:uuid:4a5a6f05-4d55-400f-99f4-3adb3f91a8a5>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00345-ip-10-147-4-33.ec2.internal.warc.gz"}
Complete The Square solve by completing the square I need help on my homework. College algebra. The problem is a squared-2a-8=0 I do not know how to do this because I was absent. My teacher told me the answer, which is (4,-2), but I do not get how to solve it! Please help Complete the square complete the square idk how to complete the square any help z^2+22z+c completing a square x^2+5x=4 v^2-6v=91 is what I need the anwser for you sovle it by completing the square find all real or complex zeros by completing the squares complete the square
{"url":"http://www.wyzant.com/resources/answers/complete_the_square?f=votes","timestamp":"2014-04-19T20:09:46Z","content_type":null,"content_length":"46842","record_id":"<urn:uuid:935772b6-a0e2-497e-a391-5dd8c0c0ecd8>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00027-ip-10-147-4-33.ec2.internal.warc.gz"}
East Amwell Township, NJ Geometry Tutor Find an East Amwell Township, NJ Geometry Tutor ...I have a BS in Mathematics from Ohio University, and am currently working on my masters in Education. I can give you the attention and dedication you need to become successful in the math areas you want to become better in. We can work with your school curriculum and on other topics. 14 Subjects: including geometry, calculus, ASVAB, algebra 1 ...This is not to say that I did not have a positive impact because I did have a hand in creating some amazing machines that were used for medical services, civilian transport, and fire departments. However, I really found myself seeking a higher pursuit and many people close to me encouraged me to teach. I thought back on my first teaching experience back when I was in college. 16 Subjects: including geometry, Spanish, calculus, physics ...Solve problems involving decimals, percents, and ratios. 4. Solve problems involving exponents. 5. Solve problems involving radicals. 6. 27 Subjects: including geometry, calculus, statistics, algebra 1 ...In high school, the class I enjoyed most and excelled in more than any other class was history; whether it was American History or World History. I also spend much of my free time watching historical documentaries and reading books about historical events and influential world leaders. I have a... 10 Subjects: including geometry, algebra 1, Arabic, elementary (k-6th) ...If you need some help to learn Java programming, I will be delighted to share my knowledge with you and point you in the right direction with examples from real world application building experiences. And, last but not least, I liked tutoring flexible working hours and my family can use some fin... 9 Subjects: including geometry, algebra 1, algebra 2, SAT math Related East Amwell Township, NJ Tutors East Amwell Township, NJ Accounting Tutors East Amwell Township, NJ ACT Tutors East Amwell Township, NJ Algebra Tutors East Amwell Township, NJ Algebra 2 Tutors East Amwell Township, NJ Calculus Tutors East Amwell Township, NJ Geometry Tutors East Amwell Township, NJ Math Tutors East Amwell Township, NJ Prealgebra Tutors East Amwell Township, NJ Precalculus Tutors East Amwell Township, NJ SAT Tutors East Amwell Township, NJ SAT Math Tutors East Amwell Township, NJ Science Tutors East Amwell Township, NJ Statistics Tutors East Amwell Township, NJ Trigonometry Tutors
{"url":"http://www.purplemath.com/East_Amwell_Township_NJ_Geometry_tutors.php","timestamp":"2014-04-20T19:34:18Z","content_type":null,"content_length":"24662","record_id":"<urn:uuid:75547c91-dfc6-4b3f-a398-b9fc33473a19>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00039-ip-10-147-4-33.ec2.internal.warc.gz"}
┃ Volume 1 ┃ ┃ ┃ ┃ Expanded Contents ┃ ┃ ┃ ┃ Chapters ┃ ┃ ┃ ┃ 1: Introduction [and Summary] ┃ ┃ 2: Physical Field Theories ┃ ┃ 3: Psychological Field Theories ┃ ┃ 4: Social Field Theories ┃ ┃ 5: The Field of Power ┃ ┃ 6: Field Theories in Summary ┃ ┃ 7: Perception and Reality ┃ ┃ 8: Actuality versus Potentiality ┃ ┃ 9: Manifests versus Latents ┃ ┃ 11: Perception, Space, and Field ┃ ┃ 12: Cognitive Dissonance ┃ ┃ 13:Behavior, Personality, Situation, and Expectations ┃ ┃ 14: The Behavioral Equation: Behavior, Situation, and Expectations ┃ ┃ 15: Situation, Expectations, and Triggers ┃ ┃ 16: Person-Perception and Distance ┃ ┃ 17: The Behavioral Occasion ┃ ┃ 18: Social Behavior ┃ ┃ 19: Motivational Explanation ┃ ┃ 20: Energy and Attitudes in the Psychological Field ┃ ┃ 21: Motivation and the Superordinate Goal ┃ ┃ 22: What About Other Motivations ? ┃ ┃ 23: The Dynamic Field and Social Behavior ┃ ┃ 24: The Sociocultural Spaces ┃ ┃ 25: The Biophysical Spaces ┃ ┃ 26: Intentions and The Intentional Field ┃ ┃ 27: A Point of View ┃ ┃ 28: The Self As a Power ┃ ┃ 29: The Will As a Power ┃ ┃ 30: Determinism and Free Will ┃ ┃ 31: Alternative Perspectives on Freedom of the Will ┃ ┃ 32: A Humanism Between Materialism and Idealism ┃ ┃ 33: Atomism-Mechanism versus Organicism ┃ ┃ 34: Between Absolutism and Relationism ┃ ┃ 35: Humanity and Nature ┃ ┃ ┃ ┃ Other Volumes ┃ ┃ ┃ ┃ Vol. 2: The Conflict Helix ┃ ┃ Vol. 3: Conflict In Perspective ┃ ┃ Vol. 4: War, Power, Peace ┃ ┃ Vol. 5: The Just Peace ┃ ┃ ┃ ┃ Other Related Work ┃ ┃ ┃ ┃ The Conflict Helix: Principles and Practices... ┃ UNDERSTANDING CONFLICT AND WAR: VOL. 1: THE DYNAMIC PSYCHOLOGICAL FIELD Chapter 10 Latent Functions^* By R.J. Rummel ┃ ... in the political sciences, we cannot, except rarely, deal with functions but are compelled to operate with functionals, that is functions of functions, and actually of many functions, ┃ ┃ including time. ┃ ┃ ----Robert Strausz-Hupé and Stefan T. Possony, International Relation: In the Age of the Conflict Between Democracy and Dictatorship, 191-92 ┃ 10.1 COMMON LATENTS To understand further the nature of latents as they play a role in this book, The Dynamic Psychological Field, a more explicit discussion is necessary. Consider any discrete configuration of interdependent or meaningfully coherent manifestations as a system A particular can, stick, planet, ant, woman, nation, or alliance is such a system. Let any particular system be denoted by i, and any of its manifestations by in. Manifestation of a particular woman might be height, weight, hair color, girth, and I.Q. Then let X[im] be a datum, a particular manifestation m, for system i. Thus, if i is a woman and m is height, then X[im] is her specific height. In these terms, we can denote any manifestation as some X[im]. There are, of course, similar systems (the class of women, or of nations, or of planets) that have similar manifestations (similar heights, economic development, masses), although each may be perceived as unique. The mind intuitively or rationally, the culture pragmatically, or science quantitatively imposes order, pattern, regularity, intelligibility, and understanding on these manifestations in terms of the latents common to each similar system. Each manifestation, m, is then seen as a function of these common latents. These common latents are within the perspective of the percipient and comprise part of the transformation of external potentialities, dispositions, and their powers. Each system, such as a woman, is a field of potentialities and a configuration of dispositions, determinables, and powers. Some of these, in reality, are common to different but similar systems, as all we all share the disposition of hunger, or the power of will. The common latents are our perspective on these commonalities as perceived through their manifestations. Now, the potentialities, dispositions, determinables, and powers composing systems are mutually interrelated and entangled, combining in complex, multifold fashion to generate manifestations. As this potentiality and actuality become patterned into common latents, the latents themselves are enmeshed in these complicated relationships. For example, several manifestations of similar systems may be a function of some latents L[1], L[2], and L[3] such that any one manifestation X[im] = f(L[1], L[2], L[3]) = L[1] + 2L[2]^2 + L[3]. Some other manifestation n of the same system may result from a different combination of common latents, such that any one X[in] = f[2](L[1], L[2], L[4]) = L[1]L[2] - 3L[4]. Yet other manifestations for similar systems may be generated by both f[1](L[1], L[2], L [3]) and f[2](L[1], L[2], L[4]). Thus, for a set of manifestations 1, 2,...,m,n,...,p, we may find that the manifestations for system i depend on common latents as follows. Equation 10.1: X[i1] = [11]f[1](L[1], L[2], L[3]) + U[i1], X[i2] = [21]f[1](L[1], L[2], L[3]) + [22]f[2](L[1], L[2], L[4]) + U[i2], X[i3] = [32]f[2](L[1], L[2], L[4]) + U[i3], X[im] = [m1]f[1](L[1], L[2], L[3]) + U[im], X[in] = [i2]f[2](L[1], L[2], L[4]) + U[in], X[ip] = [p1]f[1](L[1], L[2], L[3]) + [p2]f[2](L[1], L[2], L[4]) + U[ip], where the alpha ( Because of the complexity of the relations between common latents underlying the manifestations, we may only intuit or cognitize the functions and not the latents and their complex relations involved in the functions; that is, as we confront reality with our perspective, we drive to transform its multitudinous actuality and potentialities into simpler, more orderly, and comprehensible relationships. Accordingly, we often apprehend the functions themselves as the latents. We then simply perceive the latents underlying the manifestations of Equation 10.1 as f[1]( ) and f[2]( ), such Equation 10.2: X[i1] = [11]f[1] ( ) + U[i1], X[i2] = [21]f[1] ( ) + [22]f[2]( ) + U[i2], X[i3] = [32]f[2]( ) + U[i3], X[im] = [m1]f[1]( ) + U[im], X[in] = [i2]f[2]( ) + U[in], X[ip] = [p1]f[1]( ) + [p2]f[2]( ) + U[ip]. The parentheses of the functions are left blank to indicate that we do not perceive the latents contributing to each function. Each of these functions is perceived, however, as a unity generating the manifestations; they comprise state functions, defining for us the state of system i in terms of its manifestations. However, consistent with the previous discussion, I will call these latent functions. They are generally the latencies our understanding comprehends in imposing order on nature's welter of ephemeral manifestations. These latencies--latent functions--are the invariant potentialities, dispositions, determinables, and powers we perceive or cognitize in moving through life. Some will note that the manifestations are linearly dependent on the common latent functions. This is not the place to consider the passionate, almost theological, linear-nonlinear controversy. Clearly, equations 10.1 and 10.2 are linear in the functions of the functions, although the functions themselves may be nonlinear, and are completely general. They could represent our quantitative scientific knowledge in science (the underlying latents can be defined by differential or integral operators--the basic equations in quantum physics, for example, are similar to equations 10.1-2)^1 or our qualitative distinctions in other areas (which would not be the case if differential equations, say, were employed).^2 Indeed, any perceptual-cognitive distinction that can be made about a system is reducible to equations like 10.2.^3 The equations are thus a universal perspective for perceiving and understanding manifestations of systems.^4 10.2 EXAMPLES AND SUMMARY The time is overdue for some examples. First, consider the manifest locations of all physical objects (determinables) within the territory of the United States. Now we know that the particular location of a manifestation (say a specific house) is a function of a variety of interwoven latent potentials, dispositions, and powers. However, we can perceive this manifest location as a function of two common latent functions: a north-south and an east-west function. Let the location of my house relative to that of another American be X[ij], where system i refers to my house, j to the other home. Then, the manifestation can be represented by the equation X[ij] = [1]f[1]( ) + [2]f[2]( ), where f[1]( ) is the number of miles north or south and f[2]( ) is the number of miles east or west, and [1] = [2] = 1.0. In a similar way, the location of any phenomenon in local Euclidean space is a function of the three dimensions (x, y, z) of physical space. This example should clarify somewhat my cryptic reference to the cultural matrix within which we perceive reality as containing, in part, Kant-like a prioris. The above common latents positioning things in space (and a comparable example could be given for space-time) are part of our cultural schema enabling us to make sense of perceptibles. As a second example, note the many quantitative determinables of a human body, such-as weight, cranial size, finger length, height, toenail width, ad nauseum. Besides qualitative properties, such as color, posture, build, these quantitative aspects are transformed into the physical whole we perceive as a person. Underlying these physical manifestations are, in essence, two common latent functions: height and girth.^5 Let X[ij] refer to person i's neck length j. Then, this manifestation can be represented by the equation X[ij] = [1]f[1]( ) + [2]f[2]( ) + U[ij], where f[1]( ) is i's girth, f[2]( ) is our height, and U[ij] defines the unique sources of this manifestation (such as heredity). Of course, height and girth in turn have many interdependent bio-environmental dispositions and powers underlying their values. However, the mind generally ignores or cannot encompass them, and perceives instead height and girth in making the physical manifestations intelligible. Need I mention our cultural emphasis on fat versus thin and tall versus short, and how we employ these common latent functions as an everyday perspective for perceiving, thinking, or talking about others? Not to slight my own field and to show the function of latents in social phenomena, two final examples will be taken from politics and government. One fast illustration concerns voting for candidates in elections. Clearly, through the perspective of political science, we perceive voting (manifestations) as dependent on a number of common latent functions, such as religion, age, socioeconomic status, region, and party membership. A second, less known example deserving more detail has to do with national political systems. In their manifestations, such systems vary considerably, a variety which reflects the causal and interactive relationships among a number of underlying dispositions and powers associated with identifying issues, articulating and channeling interests, mobilizing support, deciding issues, and so on. There is a simplifying perspective through which these interrelated dispositions and powers may be transformed^6 into essentially three common latent functions, three aspects that in combination give us the variety of common political manifestations. These are Western pluralistic democracy, communism, and monarchy.^7 A manifestation, say censorship, of Uganda's political system, then, can be perceived as mainly a function of the degree to which (1) its political system is pluralistic in a Western sense, (2) it is of communist character, and (3) it is monarchical. Before pushing on further, a brief summary may help. Underlying the manifestations we perceive are latents. These transform the haze of reality to invariant patterns and, within a particular perspective, make reality orderly and predictable. Latents may be considered properties, essences, or forms of things, but, in any case, they reflect the complex interrelationship between potentialities, dispositions, determinables, and powers. Moreover, and most important, latents may also comprise the cultural perspective, the schema, and meanings-values that are added to the perceptibles reality imposes on us. Percepts, themselves, are a seamless mixture of such latents and their manifestations. Finally, although manifestations and latents are interrelated in complex ways, these relationships reduce to those between manifestations and common latent functions (or state functions): manifestations are a function of these latent functions. Specifically, an intuitive awareness or knowledge of these latent functions enables the probabilistic (it is likely that .... it seems that . . . , it is probable that ... ) content of manifests to be perceived or known. To be sure that I am understood up to this point, let me use another language more appropriate for some contemporary philosophers of science or system theorists, but in form the same as what I am saying above. The haze of reality is transformed through our perceptual perspective into interdependent configurations, into systems. These systems link a variety of constructs connected to phenomena by rules of correspondence and which carry empirical properties (observables), some of which are latent observables (like the x, y, z, t coordinate axes of our space-time systems). These latent observables are state functions defining the state of a particular system and investing observables of that system with probable content. 10.3 SPACE AND COMPONENTS As suggested by one example (that of locating objects geographically), the common latent functions can be considered as coordinate axes of sorts. Actually, in general these functions are coordinate axes delimiting a space within which the manifestations can be given point (or vector) location. The substantive nature of this space depends on the manifestations and systems to which the common latent functions refer. For political systems, they define a political space; for voting, a voting space; for body measurements, a physiological space, and so forth. These spaces are no different mathematically or geometrically from the common three-dimensional space of physical objects that, until recent times, was uniquely associated with Euclidean geometry. For our purposes here, the point is that functions of latent functions presuppose a space delimited by the common latent functions. These functions will be called the components of the space. Henceforth, when the term components is used, it will mean common latent functions.^7a To see what is meant by components, consider Figure 10.1 which displays the political system space. In the figure the components define a three-dimensional space, such as a corner of a room. The three components are at mutual right angles,^8 with the vertical one defining, say, the height of an object in a room, and the other two lines fixing objects parallel to each wall. These three lines enable any object in the room to be located uniquely, or in general terms, the three components enable any point in the space to be fixed uniquely. Because the three components, Communism, Pluralism, and Monarchy, define the space of political systems, any manifest, such as X[1], X[2], X[3], and X[4], would have a specific location in the space. In other words, the figure shows what is meant by a common latent function underlying manifestations. Many readers may see the above spatial representation as an attempt to belabor the obvious. Or others may wonder why I do not say it in plain English without the "physicalism." It is essential to grasp the nature of what is being done here. Many social scientists (for example, Lewin, Coutu, Heider, Bentley, Dodd, Sorokin, Parsons) have fallen down precisely where they did not appreciate the extent to which their models, philosophies, or theories presupposed a particular spatial perspective. Accordingly, they could not exploit the considerable analytic power that would otherwise have been at their disposal. At any rate, as far as the story here is concerned, I can now uncork the genie in the bottle: the dynamic field. * Scanned from Chapter in R.J. Rummel, The Dynamic Psychological Field, 1975. For full reference to the book and the list of its contents in hypertext, click book. Typographical errors have been corrected, clarifications added, and style updated. 1. See Henry Margenau, The Nature of Physical Reality (New York: McGraw-Hill, 1950), which mainly concerns the pervasive and central role of latent or state functions in scientific perception. 2. This is a fundamental mistake of some general systems theorists. Bertalanffy, for example, argues that differential equations are the proper expression of system dependencies (Ludwig von Bertalanffy, General Systems Theory, New York: George Braziller, 1968). By doing so he limits "systems" mainly to those in physics and engineering, and to those quantitative manifestations comprising interval or ratio-scaled data. In contrast, if he had incorporated such differential equations as functions and treated manifestations as functions of such functions, as in equations 10.1 and 10.2, then he would have had a completely general expression of system relationships applicable not only to certain kinds of systems, but to social, cultural, aesthetic, linguistic, and in short, an systems. 3. I am well aware that most social scientists will see this as passing beyond the permissible bounds of an author's expected exuberance for his own views, but consider. Any distinction can be treated a dichotomy and then as binary numbers. For example, whether a person is a Catholic or not is a dichotomy which can be denoted as 1 = Catholic, 0 = non-Catholic. A collection of such dichotomous distinctions--manifestations--is then always reducible or transformable to equation 10.2. In more technical terms, every finite dimensional matrix of binary numbers has a basis, consisting of a set of linearly independent dimensions (functions). On measurement of qualitative manifestations, see my Applied Factor Analysis (Evanston: Northwestern University Press, 1970), sec. 9.1.3. On the relation between such measurements and latent functions, see ibid., part 2. 4. As the quote heading this Chapter shows, some students of international relations have begun to think in terms of the manifest-latent distinctions (although in different terms) and function of functions as discussed here (although not so specific). In Ernest Haas (Beyond the Nation State, Stanford: Stanford University Press, 1964) for example, we find: "Throughout this discussion, the kind of 'system' to which we shall address ourselves is the network or relationships among relationships; not merely the relations among nations, but the relations among the abstractions that can be used to summarize the relations among nations . . ." (p. 53). 5. See Harry Harman's Modern Factor Analysis (Chicago: University of Chicago Press, 1960) index for "eight physical variables." 6. All latent functions are given their particular importance through a specific perspective transformation. For example, if the latent functions are the dimensions of a space of potentialities, dispositions, and so forth, then the dimensions are subject to rotation and when differently rotated, they still generate the same manifestations. And different perspectives produce different rotations. In other words, there are many different but interdependent latent functions that can be perceived as the source of manifestations, and those particular latent functions actualized by our perspective (as for the point of view--the station--of the observer in Einstein's relativity) are the ones we perceive. 7. See my Applied Factor Analysis (op. cit.) index for "political characteristics." 7a. Throughout Volumes 1-4 of Understanding Conflict and War, I have adhered to this definition that a component equals a common latent function. However, because of a need to occasionally distinguish the component of component analysis from that of common factor analysis (see "Understanding Factor Analysis"), I sometimes have referred to the latter components as "common components." Moreover, if in context it was useful to stress that a component was a common underlying latent function, I have described it as a common component. In all cases, nothing new is added to the definition here. 8. This is done only for illustrative reasons. There is no necessity that components be so. Go to top of document
{"url":"http://hawaii.edu/powerkills/DPF.CHAP10.HTM","timestamp":"2014-04-21T02:18:29Z","content_type":null,"content_length":"30634","record_id":"<urn:uuid:dd7b39c1-0228-420a-a71c-d0e4056469b3>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00118-ip-10-147-4-33.ec2.internal.warc.gz"}
Generalized coordinates From Wikipedia, the free encyclopedia In analytical mechanics, specifically the study of the rigid body dynamics of multibody systems, the term generalized coordinates refers to the parameters that describe the configuration of the system relative to some reference configuration. These parameters must uniquely define the configuration of the system relative to the reference configuration.^1 The generalized velocities are the time derivatives of the generalized coordinates of the system. An example of a generalized coordinate is the angle that locates a point moving on a circle. The adjective "generalized" distinguishes these parameters from the traditional use of the term coordinate to refer to Cartesian coordinates: for example, describing the location of the point on the circle using x and y coordinates. Although there may be many choices for generalized coordinates for a physical system, parameters are usually selected which are convenient for the specification of the configuration of the system and which make the solution of its equations of motion easier. If these parameters are independent of one another, then number of independent generalized coordinates is defined by the number of degrees of freedom of the system.^2 ^3 Constraint equations Generalized coordinates are usually selected to provide the minimum number of independent coordinates that define the configuration of a system, which simplifies the formulation of Lagrange's equations of motion. However, it can also occur that a useful set of generalized coordinates may be dependent, which means that they are related by one or more constraint equations. Holonomic constraints If the constraints introduce relations between the generalized coordinates q[i], i=1,..., n and time, of the form, $f_j(q_1,..., q_n, t) = 0, j=1,..., k,$ they are called holonomic.^1 These constraint equations define a manifold in the space of generalized coordinates q[i], i=1,...,n, known as the configuration manifold of the system. The degree of freedom of the system is d=n-k, which is the number of generalized coordinates minus the number of constraints.^4^:260 It can be advantageous to choose independent generalized coordinates, as is done in Lagrangian mechanics, because this eliminates the need for constraint equations. However, in some situations, it is not possible to identify an unconstrained set. For example, when dealing with nonholonomic constraints or when trying to find the force due to any constraint, holonomic or not, dependent generalized coordinates must be employed. Sometimes independent generalized coordinates are called internal coordinates because they are mutually independent, otherwise unconstrained, and together give the position of the system. Non-holonomic constraints A mechanical system can involve constraints on both the generalized coordinates and their derivatives. Constraints of this type are known as non-holonomic. First-order non-holonomic constraints have the form $g_j(q_1,... , q_n, \dot{q}_1,... , \dot{q}_n, t) = 0, j=1,.... , k.$ An example of such a constraint is a rolling wheel or knife-edge that constrains the direction of the velocity vector. Non-holonomic constraints can also involve next-order derivatives such as generalized accelerations. Example: Simple pendulum The relationship between the use of generalized coordinates and Cartesian coordinates to characterize the movement of a mechanical system can be illustrated by considering the constrained dynamics of a simple pendulum.^5^6 A simple pendulum consists of a mass M hanging from a pivot point so that it is constrained to move on a circle of radius L. The position of the mass is defined by the coordinate vector r=(x, y) measured in the plane of the circle such that y is in the vertical direction. The coordinates x and y are related by the equation of the circle $f(x, y) = x^2+y^2 - L^2=0,$ that constrains the movement of M. This equation also provides a constraint on the velocity components, $\dot{f}(x, y)=2x\dot{x} + 2y\dot{y} = 0.$ Now introduce the parameter θ, that defines the angular position of M from the vertical direction. It can be used to define the coordinates x and y, such that $\mathbf{r}=(x, y) = (L\sin\theta, -L\cos\theta).$ The use of θ to define the configuration of this system avoids the constraint provided by the equation of the circle. Virtual work Notice that the force of gravity acting on the mass m is formulated in the usual Cartesian coordinates, where g is the acceleration of gravity. The virtual work of gravity on the mass m as it follows the trajectory r is given by $\delta W = \mathbf{F}\cdot\delta \mathbf{r}.$ The variation δr can be computed in terms of the coordinates x and y, or in terms of the parameter θ, $\delta \mathbf{r} =(\delta x, \delta y) = (L\cos\theta, L\sin\theta)\delta\theta.$ Thus, the virtual work is given by $\delta W = -mg\delta y = -mgL\sin\theta\delta\theta.$ Notice that the coefficient of δy is the y-component of the applied force. In the same way, the coefficient of δθ is known as the generalized force along generalized coordinate θ, given by $F_{\theta} = -mgL\sin\theta.$ Kinetic energy To complete the analysis consider the kinetic energy T of the mass, using the velocity, $\mathbf{v}=(\dot{x}, \dot{y}) = (L\cos\theta, L\sin\theta)\dot{\theta},$ $T= \frac{1}{2} m\mathbf{v}\cdot\mathbf{v} = \frac{1}{2} m (\dot{x}^2+\dot{y}^2) = \frac{1}{2} m L^2\dot{\theta}^2.$ Lagrange's equations Lagrange's equations for the pendulum in terms of the coordinates x and y are given by, $\frac{d}{dt}\frac{\partial T}{\partial \dot{x}} - \frac{\partial T}{\partial x} = F_{x} + \lambda \frac{\partial f}{\partial x},\quad \frac{d}{dt}\frac{\partial T}{\partial \dot{y}} - \frac{\ partial T}{\partial y} = F_{y} + \lambda \frac{\partial f}{\partial y}.$ This yields the three equations $m\ddot{x} = \lambda(2x),\quad m\ddot{y} = -mg + \lambda(2y),\quad x^2+y^2 - L^2=0,$ in the three unknowns, x, y and λ. Using the parameter θ, Lagrange's equations take the form $\frac{d}{dt}\frac{\partial T}{\partial \dot{\theta}} - \frac{\partial T}{\partial \theta} = F_{\theta},$ which becomes, $mL^2\ddot{\theta} = -mgL\sin\theta,$ $\ddot{\theta} + \frac{g}{L}\sin\theta=0.$ This formulation yields one equation because there is a single parameter and no constraint equation. This shows that the parameter θ is a generalized coordinate that can be used in the same way as the Cartesian coordinates x and y to analyze the pendulum. Example: Double pendulum The benefits of generalized coordinates become apparent with the analysis of a double pendulum. For the two masses m[i], i=1, 2, let r[i]=(x[i], y[i]), i=1, 2 define their two trajectories. These vectors satisfy the two constraint equations, $f_1 (x_1, y_1, x_2, y_2) = \mathbf{r}_1\cdot \mathbf{r}_1 - L_1^2 = 0, \quad f_2 (x_1, y_1, x_2, y_2) = (\mathbf{r}_2-\mathbf{r}_1) \cdot (\mathbf{r}_2-\mathbf{r}_1) - L_2^2 = 0.$ The formulation of Lagrange's equations for this system yields six equations in the four Cartesian coordinates x[i], y[i] i=1, 2 and the two Lagrange multipliers λ[i], i=1, 2 that arise from the two constraint equations. Now introduce the generalized coordinates θ[i] i=1,2 that define the angular position of each mass of the double pendulum from the vertical direction. In this case, we have $\mathbf{r}_1 = (L_1\sin\theta_1, -L_1\cos\theta_1), \quad \mathbf{r}_2 = (L_1\sin\theta_1, -L_1\cos\theta_1) + (L_2\sin\theta_2, -L_2\cos\theta_2).$ The force of gravity acting on the masses is given by, $\mathbf{F}_1=(0,-m_1 g),\quad \mathbf{F}_2=(0,-m_2 g)$ where g is the acceleration of gravity. Therefore, the virtual work of gravity on the two masses as they follow the trajectories r[i], i=1,2 is given by $\delta W = \mathbf{F}_1\cdot\delta \mathbf{r}_1 + \mathbf{F}_2\cdot\delta \mathbf{r}_2.$ The variations δr[i] i=1, 2 can be computed to be $\delta \mathbf{r}_1 = (L_1\cos\theta_1, L_1\sin\theta_1)\delta\theta_1, \quad \delta \mathbf{r}_2 = (L_1\cos\theta_1, L_1\sin\theta_1)\delta\theta_1 +(L_2\cos\theta_2, L_2\sin\theta_2)\delta\ Virtual work Thus, the virtual work is given by $\delta W = -(m_1+m_2)gL_1\sin\theta_1\delta\theta_1 - m_2gL_2\sin\theta_2\delta\theta_2,$ and the generalized forces are $F_{\theta_1} = -(m_1+m_2)gL_1\sin\theta_1,\quad F_{\theta_2} = -m_2gL_2\sin\theta_2.$ Kinetic energy Compute the kinetic energy of this system to be $T= \frac{1}{2}m_1 \mathbf{v}_1\cdot\mathbf{v}_1 + \frac{1}{2}m_2 \mathbf{v}_2\cdot\mathbf{v}_2 = \frac{1}{2}(m_1+m_2)L_1^2\dot{\theta}_1^2 + \frac{1}{2}m_2L_2^2\dot{\theta}_2^2 + m_2L_1L_2 \cos Lagrange's equations Lagrange's equations yield two equations in the unknown generalized coordinates θ[i] i=1, 2, given by^7 $(m_1+m_2)L_1\ddot{\theta}_1+m_2L_1L_2\ddot{\theta}_2\cos(\theta_2-\theta_1) + m_2L_1L_2\sin(\theta_2-\theta_1) = -(m_1+m_2)gL_1\sin\theta_1,$ $m_2L_2\ddot{\theta}^2+m_2L_1L_2\ddot{\theta}_1\cos(\theta_2-\theta_1) - m_2L_1L_2\sin(\theta_2-\theta_1)=-m_2gL_2\sin\theta_2.$ The use of the generalized coordinates θ[i] i=1, 2 provides an alternative to the Cartesian formulation of the dynamics of the double pendulum. Generalized coordinates and virtual work The principle of virtual work states that if a system is in static equilibrium, the virtual work of the applied forces is zero for all virtual movements of the system from this state, that is, δW=0 for any variation δr.^4 When formulated in terms of generalized coordinates, this is equivalent to the requirement that the generalized forces for any virtual displacement are zero, that is F[i]=0. Let the forces on the system be F[j], j=1, ..., m be applied to points with Cartesian coordinates r[j], j=1,..., m, then the virtual work generated by a virtual displacement from the equilibrium position is given by $\delta W = \sum_{j=1}^m \mathbf{F}_j\cdot \delta\mathbf{r}_j.$ where δr[j], j=1, ..., m denote the virtual displacements of each point in the body. Now assume that each δr[j] depends on the generalized coordinates q[i], i=1, ..., n, then $\delta \mathbf{r}_j = \frac{\partial \mathbf{r}_j}{\partial q_1} \delta{q}_1 + \ldots + \frac{\partial \mathbf{r}_j}{\partial q_n} \delta{q}_n,$ $\delta W = \left(\sum_{j=1}^m \mathbf{F}_j\cdot \frac{\partial \mathbf{r}_j}{\partial q_1}\right) \delta{q}_1 + \ldots + \left(\sum_{j=1}^m \mathbf{F}_j\cdot \frac{\partial \mathbf{r}_j}{\ partial q_n}\right) \delta{q}_n.$ The n terms $F_i = \sum_{j=1}^m \mathbf{F}_j\cdot \frac{\partial \mathbf{r}_j}{\partial q_i},\quad i=1,\ldots, n,$ are the generalized forces acting on the system. Kane^8 shows that these generalized forces can also be formulated in terms of the ratio of time derivatives, $F_i = \sum_{j=1}^m \mathbf{F}_j\cdot \frac{\partial \mathbf{v}_j}{\partial \dot{q}_i},\quad i=1,\ldots, n,$ where v[j] is the velocity of the point of application of the force F[j]. In order for the virtual work to be zero for an arbitrary virtual displacement, each of the generalized forces must be zero, that is $\delta W = 0 \quad \Rightarrow \quad F_i =0, i=1,\ldots, n.$ See also 1. ^ ^a ^b Jerry H. Ginsberg (2008). "§7.2.1 Selection of generalized coordinates". Engineering dynamics, Volume 10 (3rd ed.). Cambridge University Press. p. 397. ISBN 0-521-88303-2. 2. ^ Farid M. L. Amirouche (2006). "§2.4: Generalized coordinates". Fundamentals of multibody dynamics: theory and applications. Springer. p. 46. ISBN 0-8176-4236-6. 3. ^ Florian Scheck (2010). "§5.1 Manifolds of generalized coordinates". Mechanics: From Newton's Laws to Deterministic Chaos (5th ed.). Springer. p. 286. ISBN 3-642-05369-6. 4. ^ ^a ^b Torby, Bruce (1984). "Energy Methods". Advanced Dynamics for Engineers. HRW Series in Mechanical Engineering. United States of America: CBS College Publishing. ISBN 0-03-063366-4. 5. ^ Greenwood, Donald T. (1987). Principles of Dynamics (2nd edition ed.). Prentice Hall. ISBN 0-13-709981-9. 6. ^ Richard Fitzpatrick, Newtonian Dynamics, http://farside.ph.utexas.edu/teaching/336k/Newton/Newtonhtml.html. 7. ^ Eric W. Weisstein, Double Pendulum, scienceworld.wolfram.com. 2007 8. ^ T. R. Kane and D. A. Levinson, Dynamics: theory and applications, McGraw-Hill, New York, 1985
{"url":"http://www.territorioscuola.com/wikipedia/en.wikipedia.php?title=Generalized_coordinates","timestamp":"2014-04-20T01:41:56Z","content_type":null,"content_length":"112912","record_id":"<urn:uuid:70e0802d-f870-41bb-b514-e4eff07807e0>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00260-ip-10-147-4-33.ec2.internal.warc.gz"}
Fact Families Fact Families Activities to help your students remember their fact families. fact families Posted by:teach2 #67410 1. Write out several fact families - each number sentence on a different index card or sentence strip. Give each child a number sentence and then tell the class that their job is to find their family. The four number sentences that belong together should find each other and then sit down, raise their hands, or do some other signal to let you know that they are done. 2. I bought the triangle fact family cards at a teacher store. The sum is at the top and the addends are at the bottom corners. We play around the world. We start with two people. The first person to finish a number sentence gets the chance to finish the rest of the fact family. If they are successful, they move on to face another challenger. If they can not name all 4 number sentences, then the other person gets a... View Item (201 words) | fact families Posted by:c #63788 Some quick ideas: 1. Make sets of fact families on index cards. (2+3=5, 3+2=5, 5-3=2, 5-2=3)Hand them out to the students and have them find the other children that should be in their fact family. 2. Have them draw pictures to go along with fact families 3. Give the kids cubes or other counters and have them show how to move them around to show the different number sentences used to make a fact family. 4. Make number and operation (+ , -, =) cards and pass them out to the kids. Call one set of children up to make one number sentence in the fact family and then guide the rest to make the remaining number sentences. View Item | Math Fact Families Posted by:Marilyn #85815 I have my second grade students do the following activity for fact families. A Fact Family Booklet Materials: for one page of the booklet I cut ahead of time circles (any color) with a diameter of 3 inches -enough that each student has three circles, a white triangle 6 ins. X 6 ins. X 9 ins.-one per student and half of a regular piece of construction paper (any color). You will also need two dice per student. You will need to multiply the materials depending on how many pages you do in the booklet. Have the student roll the dice. Put one of the numbers on one circle, put the other number on a second circle, and the sum on a third circle. These circles are then glued to the points of the triangle. On the inside of the triangle the students write the... View Item (219 words) | Fact Families Posted by:Kimberly #20638 I introduce fact families using a house. The numbers in the fact family such as 3,4, and 7 go in the roof part of the house. The facts are the supporting beams of the house. I use a triangle for the roof and 4 long, thin rectangles for the supporting beams. The kids love it. I actually give each kid a different set of numbers that I write on the triangle and they create the fact family to go with it. We hang these up around the room. Everyday after that I give 3 numbers as a warm-up and ask them to create a fact family for me as a class. The student of the day actually makes the house and hangs it up for us. When I get too many, I put them in a center. View Item | Fact Families Posted by:msamyb #546 Maybe you could use this fact family worksheet - someone here on Proteacher shared it over the summer. I love using it with my second graders. :) FactFamilyHouse2.doc (77.824 KB) View Post Doubles House Posted by:msamyb #263 Fact Families Posted by:Traci #68525 My first graders just learned the concept of fact families using stuffed animals. To introduce the concept I taped nametags on 3 different animals. On each animal's nametag I wrote a number. ex. (8,4,12)We then moved the animals around the plus, minus and equal sign to create 4 different equations. The kids were then given 3 blank cutouts of kids with a different number on each one. Given story-paper (blank at the top with lines on the bottom) the kids glued the cut-outs to the top of the paper and wrote sentences below as follows: Hi my name is 8. Hi my name is 4 and my name is 12. We are a fact family. We can do 4 things together. 8+4=12, 4+8=12, 12-4=8, 12-8=4. This integrates reading, writing and math. The students definitly got the concept. View Item | Fact Family Game Posted by:Hollie #69070 I have a game that I got from someone over the internet, so I'll share it. This game is played with a partner. Put a bowl of several dominoes, at least 10, between the two partners. On "Go," they each take a domino and write the 4 facts that go with that domino. When they have done that, they get another domino and do the same thing, and keep doing this until they have completed 5 dominoes. The first one done is the winner, providing they're all correct. This can get competitive. You could have the winners play the winners, etc. I have found this is a good game, but if a child doesn't quite get fact families, they sometimes will let the other person win. It would be good practice even if it wasn't a competition. View Item | round the world and number sentences Posted by:Judith #33810 You probably know this one, but just in case. The students sit in a circle. One person stands behind a seated student. The teacher shows them a flashcard with a number sentence. Who ever answers correctly first gets to move to stand behind the next seated person. He can keep standing until a seated person answers first. Then that person does the standing. Everyone morning during AM business, I put the current date on the board, ie. 6. The kids come up with math sentences using that number. We started with just using numbers to = 6. Now they do fact families also. I write all their answers on the board. They love it. Now they're giving me sentences like 108-102 = 6. Of course they are still using those darn fingers, but at least some of them are memorizing the facts. View Item | No title Posted by:PacNWTchr #136613 Or you could cut out shapes (like hearts or three leaf clovers for March) and write the three numbers on the "corners". Have students cover up one corner at a time and the students have to figure out what number is under there. Then they can write the FFs and check it on the back. View Item View Post View Thread fact families Posted by:Jennypie #136614 I play a game with my students called "find your family." We discuss before playing the game, that there must be four members in your family (unless it's a double fact), and that each member has to share the same three numbers. Each student is given an index card with a fact on it. (no one else sees it). I play music, and when the music stops, I say "find your family." Once students have found all the members of their family, they sit down. The children love playing this game, and it helps them to remember what needs to be included in each fact family. :) View Item View Post View Thread we made Posted by:calumetteach #136615 candy corn fact families. This would still tie in with Thanksgiving. You make a big orange pattern (about 10 inches tall)for the candy corn. Lightly write in the middle of it: Then I gave them 3 numbers for their fact family. (6, 7, 13 etc.-- I tried to give everyone a different fact family) They wrote them on the lines I provided. Then they added yellow paper on the top and bottom and put the two smaller number on the top two corners and the big number in the "point" of the candy corn. View Item View Post View Thread fact families Posted by:KAS112 #136616 I bought the fact family pocket chart from Lakeshore and we use it during calendar, we do one family a week when we start and then as they get the hang of it we'll pull out old families and review them. I also have some fact family house work sheets that I fill in with the three numbers and they can fill in. I put some in page protectors for work time that they can practice on. I also explain it to the parents and put it in my weekly newsletter so that the parents can give extra help if needed. View Item View Post View Thread No title Posted by:lovebug422 #136617 I explain it this way - there are three members in each family. The biggest number is the Daddy, the middle number is the Mommy and the smaller number is the baby. In a fact family there are always four number sentences; two addition and two subtraction. When doing the adding number sentences you NEVER start with Daddy (he's always the answer). If you decided to start with Mommy first, then the next adding number sentence has to be the flip flopper. When doing the subtraction number sentences, you ALWAYS start with Daddy and can subtract Mommy first or Baby. View Item View Post View Thread Posted by:Bob #136618 I always highly recommend manipulatives. I find that kids need to do the math, not just do the right answer. I like base ten blocks. For a fact family of 7, get out 7 blocks. Break the blocks into fact families and write them. Fact family greeting cards are a lot of fun. On the front, write the sum number. Inside, paste beans to show a fact from the family. Put one addend on one side of the card and the other addend on the other side. The kids can write all kinds of fun messages to the family. Breaking kids into fact families while they act out a story is helpful. They get up and moving, and I like to keep kids active at some point in a lesson to get blood into the brain if for no other reason. 10 kids stand up. They are building a fort. One kid goes home to get a hammer. When he... View Item (274 words) View Post View Thread story boards Posted by:LittleHeather #136619 Last year I made these little story boards out of construction paper and die cuts for teaching fact families. For example, one of the boards was a pond with a frog family. I cut out blue paper for the pond, drew small waves, some cattails, trees, and rocks around it. Each frog had a number on its belly (the mom had #2, the dad had #5, and the baby had # 7). The kids had fun making up a story about the frogs jumping in the pond. Then we wrote number sentences to go along with the frog family. Other story boards that I made were: snowmen on a hill, sailboats on the lake, fish in a fish bowl, people in a house, and so on. The kids enjoy it because it is hands on, they get to create their own story, and the numbers are on pictures/characters rather than on a... View Item (153 words) View Post View Thread No title Posted by:istoleahalo #136620 Lets see if I can get this out right..lol... In a house...no "strangers" (other numbers) are allowed in the house. "Big Daddy" = big number (5) Mommy = middle number (3) baby = small number (2) Big Daddy is always first or last, because he is protecting his family. Sometimes Daddy goes to work leaving Mommy and baby home together. 2+3=5 Baby is the most important person in Mommy life so she always puts him first, except on Mommy's Birthday 3+2=5 Sometimes Mommy goes to get her nails done and leaves baby with Daddy. 5-2=3 Sometimes Mommy and Daddy go to see a movie and leave baby with a sitter 5-3=2 A teacher at my school did this and I thought it was really cute. She drew a house and all and had the... View Item View Post View Thread The ProTeacher Collection - All rights reserved For individual use only. Do not copy, reproduce or transmit. Copyright © 1998-2014 ProTeacher® Brought to you by the ProTeacher Community Please share! Links to this page welcome!
{"url":"http://www.proteacher.org/c/462_fact_families.html","timestamp":"2014-04-20T03:26:28Z","content_type":null,"content_length":"64148","record_id":"<urn:uuid:e9bb0370-6606-44d7-9005-9beb2660f707>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00355-ip-10-147-4-33.ec2.internal.warc.gz"}
What is that inane jibberish in the title? I may hear you cry, why, it is the exponential function! I reply. Yesterday I got a book from my electronics tutor called “The physics of music” by Alexander Wood, revisions by J M Bowsher. This book both greatly infuriated and assisted me in developing my own musical scale over the last day. I think I spent about 10 hours reading it on and off and doing sums. The damn thing kept going on and on about history (next section to be read in posh accent) <posh> how such and such a musician in 1864 used 436hertz for A while france was still using 437 hertz, oh what a situation that would cause should they play an orchestra together hahahahaha! </posh> All I know is that it got to 20 past midnight and I was still doing sums and reading things. Like Sheldon in the Big Bang Theory I had become victim to my own determination, sleep deprivation deprived me of my wits and thus an answer. I slept and this morning I figured it out with help from the exponential function and this equation: That’s the equation for figuring out the notes in the standard 12 note musical scale, gleaned from amongst the history and factoids in “the physics of music” last night. At the time I was either too sleep deprived or the book was too vague for me too understand it fully. I just got my head around it this morning with the help of a graph and an example which the book was sadly missing, I think it was a little bit too sparse on this area for me to fully comprehend. Anyway the point is, I do now and A 400Hz = 0 = n, the next B up would be n = 1, G# or Ab would be n = -1. A 880 an octave up is n = 12. If only the book had portrayed it that simply! No matter, in my sleep deprived stated I scrawled next to it: 2^n-1? The chior of understanding sounded and it is now understood as a variant of 2^(n-1) or graphulacily y=2^(x-1) That’ll give you the traditional exponential curve. My goal was to make a musical scale which started at a base frequency of 1Hz. Then as it increases in octaves you will get nice even numbers; 2, 4, 8, 16, 32, 64, 128, etc. Do they look familiar? ;) It’s the ol’ binary values! I noticed that using a base frequency of 1, f=1*(2^n/(notes in octave)) gives the same values as y=2^(x-1) if you divide the integer x by the number of notes in an octave; e.g. if you have 8 notes per octave: 0, 1/8, 2/8, 3/8 … 7/8, 1. Add 1 for each additional octave or multiply octave 0 by 2, then that by 2 and so on to get the values for additional octaves. The two equations are very It strikes me that someone somewhere has probably done all this and I could have just read it somewhere else, but I didn’t, I number crunched intermittently for somewhere between 5 and 10 hours while reading this unintelligable, but ultimately helpful book, and I came up with good results at the end! Hell yes!
{"url":"http://gda-labs.tumblr.com/post/25218939377/y-2-x-1","timestamp":"2014-04-17T09:34:42Z","content_type":null,"content_length":"71149","record_id":"<urn:uuid:8d590284-4903-444a-9a12-7388f02c7340>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00219-ip-10-147-4-33.ec2.internal.warc.gz"}
Mysterious quantum forces unraveled Discovered in 1948, Casimir forces are complicated quantum forces that affect only objects that are very, very close together. They’re so subtle that for most of the 60-odd years since their discovery, engineers have safely ignored them. But in the age of tiny electromechanical devices like the accelerometers in the iPhone or the micromirrors in digital projectors, Casimir forces have emerged as troublemakers, since they can cause micromachines’ tiny moving parts to stick together. MIT researchers have developed a powerful new tool for calculating the effects of Casimir forces, with ramifications for both basic physics and the design of microelectromechanical systems (MEMS). One of the researchers’ most recent discoveries using the new tool was a way to arrange tiny objects so that the ordinarily attractive Casimir forces become repulsive. If engineers can design MEMS so that the Casimir forces actually prevent their moving parts from sticking together — rather than causing them to stick — it could cut down substantially on the failure rate of existing MEMS. It could also help enable new, affordable MEMS devices, like tiny medical or scientific sensors, or microfluidics devices that enable hundreds of chemical or biological experiments to be performed in Ghostly presence Quantum mechanics has bequeathed a very weird picture of the universe to modern physicists. One of its features is a cadre of new subatomic particles that are constantly flashing in and out of existence in an almost undetectably short span of time. (The Higgs boson, a theoretically predicted particle that the Large Hadron Collider in Switzerland is trying to detect for the first time, is expected to appear for only a few sextillionths of a second.) There are so many of these transient particles in space — even in a vacuum — moving in so many different directions that the forces they exert generally balance each other out. For most purposes, the particles can be ignored. But when objects get very close together, there’s little room for particles to flash into existence between them. Consequently, there are fewer transient particles in between the objects to offset the forces exerted by the transient particles around them, and the difference in pressure ends up pushing the objects toward each other. In the 1960s, physicists developed a mathematical formula that, in principle, describes the effects of Casimir forces on any number of tiny objects, with any shape. But in the vast majority of cases, that formula remained impossibly hard to solve. “People think that if you have a formula, then you can evaluate it. That’s not true at all,” says Steven Johnson, an associate professor of applied mathematics, who helped develop the new tools. “There was a formula that was written down by Einstein that describes gravity. They still don’t know what all the consequences of this formula are.” For decades, the formula for Casimir forces was in the same boat. Physicists could solve it for only a small number of cases, such as that of two parallel plates. In recent years, researchers around the world attacked the problem of finding Casimir forces between more general shapes and materials. For instance, in 2006, MIT physics professors Robert Jaffe and Mehran Kardar — with whom Johnson continues to collaborate — and Thorsten Emig of the University of Köln in Germany showed how to calculate the forces acting between a plate and a cylinder; the next year, they demonstrated solutions for multiple spheres. Meanwhile, Johnson and his collaborators explored various numerical methods that can be applied to a wide variety of geometries. However, the full power of existing tools for classical electromagnetic calculations had not yet been brought to bear on the Casimir problem. The power of analogy In a paper appearing this week in Proceedings of the National Academy of Sciences, Johnson, physics PhD students Alexander McCauley and Alejandro Rodriguez (the paper’s lead author), and John Joannopoulos, the Francis Wright Davis Professor of Physics, describe a way to solve Casimir-force equations for any number of objects, with any conceivable shape. The researchers’ insight is that the effects of Casimir forces on objects 100 nanometers apart can be precisely modeled using objects 100,000 times as big, 100,000 times as far apart, immersed in a fluid that conducts electricity. Instead of calculating the forces exerted by tiny particles flashing into existence around the tiny objects, the researchers calculate the strength of an electromagnetic field at various points around the much larger ones. In their paper, they prove that these computations are mathematically equivalent. For objects with odd shapes, calculating electromagnetic-field strength in a conducting fluid is still fairly complicated. But it’s eminently feasible using off-the-shelf engineering software. “Analytically,” says Diego Dalvit, a specialist in Casimir forces at the Los Alamos National Laboratory, “it’s almost impossible to do exact calculations of the Casimir force, unless you have some very special geometries.” With the MIT researchers’ technique, however, “in principle, you can tackle any geometry. And this is useful. Very useful.” Since Casimir forces can cause the moving parts of MEMS to stick together, Dalvit says, “One of the holy grails in Casimir physics is to find geometries where you can get repulsion” rather than attraction. And that’s exactly what the new techniques allowed the MIT researchers to do. In a separate paper published in March, physicist Michael Levin of Harvard University’s Society of Fellows, together with the MIT researchers, described the first arrangement of materials that enable Casimir forces to cause repulsion in a vacuum. Dalvit points out, however, that physicists using the new technique must still rely on intuition when devising systems of tiny objects with useful properties. “Once you have an intuition of what geometries will cause repulsion, then the [technique] can tell you whether there is repulsion or not,” Dalvit says. But by themselves, the tools cannot identify geometries that cause repulsion. February 4, 2011 i just want to say wow! all this can really make a person think. is time travel possible is this energy a void in are atmosphere. i know nothing about this just happen to come across this and just made me think. it is nice to know we have people in this world that can really make you think. i am sick of people that will not think out of the box and just believe it is a waste of time. well that what happens in my world. Perhaps the attractive effect arises when the G force between plates is greater than the G force of the external field. The repulsive effect arises when the gap is so narrow the gravitons are resisting the compression that is trying to force a single layer of gravitons, out of the gap. Why not fabricate MEMS parts that move from magnetized materials with different poles, they'll never stick together? Even more interesting is the use of the lateral Casimir force as a propulsive means. Casimir technology is still very new but exciting and involves extraction of zero-point energy to do useful work. The Casimir force is a direct effect of zero-point energy. Indeed, several researchers have shown that the Dark Energy, causing accelerated expansion of the universe, is a subset of zero-point
{"url":"http://newsoffice.mit.edu/2010/casimir-0511","timestamp":"2014-04-16T08:05:22Z","content_type":null,"content_length":"93726","record_id":"<urn:uuid:72aa5b8c-2605-4739-af34-983589396717>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00245-ip-10-147-4-33.ec2.internal.warc.gz"}
First Principles of the Differential and Integral Calculus, or the Doctrine of Fluxions (1824) This calculus book was adapted from parts of a French math book, Principes ce Calcul qui servent d’Introduction aux Sciences Physico-Mathématiques of Bézout. It was translated at Cambridge University as a resource for the students there. After an introduction, the book is divided basically into two sections: “elements of the differential calculus” and “elements of the integral calculus.” It consists almost entirely out of text, though referencing by number illustrations in the back that fold out. There is also an errata after the table of contents, containing corrections to the following text. The sections go from principle to principle, with few examples and no practice problems. This particular book was most likely someone’s personal copy, as throughout the book there are pencil writings in the margin. Some of the pages have not been fully separated yet.
{"url":"http://www.millersville.edu/math/Projects/Rousseau%20Collection/Shope/B492x/B492x-AdditionalInfo.html","timestamp":"2014-04-16T19:13:53Z","content_type":null,"content_length":"2185","record_id":"<urn:uuid:59175d9a-eef6-4317-9e33-800788727fc2>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00454-ip-10-147-4-33.ec2.internal.warc.gz"}
Young Writers Society I am a bit confused about the exponent rules... I just need to simplify these problems. Thank you so much for your help!!! Mamillius: Merry or sad shall’t be? Hermione: As merry as you will. Mamillius: A sad tale’s best for winter. I have one Of sprites and goblins. The Winter's Tale When multiplying exponents, just add them. When dividing them, subtract. When your multiplying the group [the (ab)4 problem] it's just a case of figuring out how many a's need to be multiplied. So, in that problem, it's four. Get it? You know you're a writer when you're not alarmed at hearing voices in your head, you can't read a book without analyzing it for plot & characters and you consider something you nearly killed yourself to write the most rewarding. Guilty as charged. Thank you so much for your help! I'm still a little confused about multiplying the four a's. Would the answer just be ab⁴? Thanks again for replying! Mamillius: Merry or sad shall’t be? Hermione: As merry as you will. Mamillius: A sad tale’s best for winter. I have one Of sprites and goblins. The Winter's Tale Close. It'd be a4b4 (Pretend those are subscripts, my computer won't let me do it) Since your multiplying both the a and the b. You know you're a writer when you're not alarmed at hearing voices in your head, you can't read a book without analyzing it for plot & characters and you consider something you nearly killed yourself to write the most rewarding. Guilty as charged. Oh ok! I get it now. Thank you sooooo much for the help! Mamillius: Merry or sad shall’t be? Hermione: As merry as you will. Mamillius: A sad tale’s best for winter. I have one Of sprites and goblins. The Winter's Tale Your welcome! This stuff took me forever to learn too. You know you're a writer when you're not alarmed at hearing voices in your head, you can't read a book without analyzing it for plot & characters and you consider something you nearly killed yourself to write the most rewarding. Guilty as charged. Yes, I definitely don't have a math mind. Mamillius: Merry or sad shall’t be? Hermione: As merry as you will. Mamillius: A sad tale’s best for winter. I have one Of sprites and goblins. The Winter's Tale Incandescence says... I'll go ahead and carry the first problem out in full, if you don't mind. By the associative property (i.e., abc=(ab)c=a(bc)), we can rewrite this as: Since associative multiplication is commutative (i.e., ab=ba), we can write this as: and by associativity we can say: Since the rule of exponent multiplication is addition, we get: The second problem is identical to the first problem: just make sure you understand that everything inside the parentheses is put to the exponent. That is, if I had (ab)³, I would say, (ab)³=(ab)(ab) (ab), by definition of exponentiation. From there, I would follow the procedure I gave you above, changing the parentheses around and use commutativity to get my result. "If I have not seen as far as others, it is because giants were standing on my shoulders." -Hal Abelson Thank you for writing it out! That makes a lot of sense. So you just write it out as a longer problem so that you can add up the like bases. That's what you were saying, right? At least that's the way that makes the most sense to me. Anyway, thanks again! Mamillius: Merry or sad shall’t be? Hermione: As merry as you will. Mamillius: A sad tale’s best for winter. I have one Of sprites and goblins. The Winter's Tale thunder_dude7 says... It seems that you've been helped, but I'm bored. Start by looking at the variables individually. Let's start with "a". a^3 multiplied by a, written out, is... a x a x a x a That equals a^4. The same with the Bs means that they're equal to b^3. Well, this can be written as "(ab) x (ab)". This is a double distributive property thing. First, distribute the a. That makes the second part a^2 x ab. Next, distribute the b. That makes it a^2b x ab Well, I have to go. Can't do the third one. Sorry. I reject your reality and substitute my own.
{"url":"http://youngwriterssociety.com/viewtopic.php?f=63&t=39952","timestamp":"2014-04-16T21:59:42Z","content_type":null,"content_length":"37562","record_id":"<urn:uuid:20f65db1-1fca-45f1-9aae-16d1356e65d5>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00653-ip-10-147-4-33.ec2.internal.warc.gz"}
Velocity Reviews - Optimize power function for fixed point numbers On Mar 12, 9:16 am, suppamax <max.giacome...@gmail.com> wrote: > Hi everybody! > I'm writing a C program for a PIC18F microcontroller. > I need to calculate a power function, in which both base and exponent > are fixed point numbers (ex: 3.15^1.13). > Using pow() function is too expensive... > Is there another way to do that? It doesn't seem obvious to me. I guess you would want a break down two_pow_fromIM ( y * two_log_toIM ( x ) ); The idea would be that two_log_toIM and two_pow_fromIM could be implemented as a scaling (normalize to the range 1 <= x < 2) then either a post or pre-shift along with a table look up if the resolution was small enough (and possibly perform interpolations). To _fromIM and _toIM reflect the fact you might like to convert it to a temporarily higher resolution intermediate value, or range corrected for the particular input values. I am not aware of any really good approximations to log() or 2exp() except for taylor series or rational function approximations, which will end up doing no better than using pow() directly. This table based stuff would obviously compromise accuracy/resolution. Paul Hsieh http://www.pobox.com/~qed/ http://bstring.sf.net/
{"url":"http://www.velocityreviews.com/forums/printthread.php?t=598158","timestamp":"2014-04-18T19:57:45Z","content_type":null,"content_length":"18585","record_id":"<urn:uuid:bf8ceec7-1b54-475b-91f3-28b9a67fff23>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00214-ip-10-147-4-33.ec2.internal.warc.gz"}
Size of union of a set of subsets and its permutations up vote 5 down vote favorite For $[n] := \{1,...,n\}$, let $G$ be the set of all $\lceil n/2\rceil$-subsets of $[n]$. For a permutation $\rho \in S_{n}$, and some $F \subset G$, define $\rho(F)$ in the natural way: apply $\rho$ to each element in every set in $F$ and let $\rho(F)$ be the set of these new subsets. For example, if $F = \{ \{1,2\}, \{3,4\} \}$, and $\rho = 3241$ (in one-line notation), then $\rho(F) = \{\{2,3 \},\{1,4\}\}$. Obviously $|\rho(F)| = |F|$. Fixing some integer $k$, is there anything we can say about $K(n,k) := \min_{F \subset G, |F| = k} \max_{\rho \in S_n} |F \cup \rho(F)|$? By symmetry considerations, for a fixed $F$, every $\lceil n/2\rceil$-subsets of $[n]$ is contained in the same number of $\rho(F)$'s, so for an "average" $\rho$ we have $\frac{|F \cap \rho(F)|}{|F|} = \frac{|F|}{|G|}$. That is, we can always find a $\rho$ such that $|F \cap \rho(F)| \leq \frac{|F|^2}{|G|}$. Then, $K(n,k) \geq 2k - \frac{k^2}{n \choose \lceil n/2 \rceil}$. The question is, can we always (ever?) do much better than this average? I think K(5,5) = 9. I can come back in a few hours to explain that. You certainly can't do better (assuming you take the least integer larger than the RHS) when $k^2 < {n \choose \lceil n/2 \rceil} $, or when $F=G$, and possibly some other large cases like that. – Zack Wolske Jul 10 '12 at 21:01 Yes, we have the trivial upper bound that $K(n,k) \leq \min \{2k, {n \choose \lceil n/2 \rceil} \}$, which, as you point out, takes care of the cases when $k$ is large or small. In the context that this problem came up, $k \approx \frac{1}{2}{n \choose \lceil n/2 \rceil}$. Taking $k$ to be some fraction of the total number of $\lceil n/2 \rceil$-subsets is the interesting case. – Sam Hopkins Jul 10 '12 at 21:15 The details of the proof didn't work out as nicely as I'd hoped, but it is true that K(5,5) = 9. The method is ad-hoc, and you might get more insight doing it yourself, but I'll post it if you'd like. It essentially breaks down the different sets of 5 subsets of size 2 (these are the same as subsets of size 3 by taking complements) into generic cases by considering how you can write 10 as a sum of 5 integers, each counting the number of times a specific digit appears in the set. Then you pick a permutation for each of the five cases. – Zack Wolske Jul 11 '12 at 3:52 Perhaps working that case out would be instructive. But I'm more interested in the limiting behavior than exact values. For instance, by the above lower bound and the trivial upper bound, we have: $\frac{3}{4} {n \choose \lceil n/2 \rceil} \leq K(n,\frac{1}{2}{n \choose \lceil n/2 \rceil}) \leq {n \choose \lceil n/2 \rceil}$ Which side is right asymptotically? What is $\lim_{n \to \infty} K (n,\frac{1}{2}{n \choose \lceil n/2 \rceil})$? Does this limit even exist? – Sam Hopkins Jul 11 '12 at 23:00 add comment Know someone who can answer? Share a link to this question via email, Google+, Twitter, or Facebook. Browse other questions tagged co.combinatorics or ask your own question.
{"url":"http://mathoverflow.net/questions/101886/size-of-union-of-a-set-of-subsets-and-its-permutations","timestamp":"2014-04-20T08:22:19Z","content_type":null,"content_length":"52257","record_id":"<urn:uuid:9681f010-42af-4c73-9621-df6e2a8936a3>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00641-ip-10-147-4-33.ec2.internal.warc.gz"}
Powder Springs, GA Trigonometry Tutor Find a Powder Springs, GA Trigonometry Tutor ...I have used Microsoft Windows daily since the release of version 3.0 in 1990. Since then I have worked with Windows 95, Windows 98, Windows for Workgroups 3.11 and NT 3.1 and more recently Windows 7 and 8. I have installed and worked extensively nearly every major application available on Windo... 126 Subjects: including trigonometry, chemistry, English, calculus ...I can help any student strengthen their basic skills, setting a solid foundation for success in their current middle or high school math curriculum, as well as college work - now or down the road. I presently teach Precalculus at Chattahoochee Tech. I have also taught Calculus at the college level, so I am able to help prepare students for the highest levels of math. 13 Subjects: including trigonometry, calculus, algebra 1, SAT math I am currently working on a PhD in physics at Georgia Tech and have already completed my master's degree. As an undergraduate, I double majored in math and physics. I am low key and enjoy helping students gain a better understanding of math and physics without applying excessive amounts of pressure to the student. 11 Subjects: including trigonometry, physics, calculus, geometry ...I have taught students of diverse ages and backgrounds, including underprivileged and learning-disabled students. Before WyzAnt I worked with some of the highest quality personal educational service companies available. My areas of expertise include the following: standardized test preparation ... 31 Subjects: including trigonometry, English, reading, chemistry ...I knew more about George Lamsa's Aramaic Bible than he did, and he was taking graduate course in it. My Asian religion course covered Hinduism, Buddhism, Daoism (or Taoism), Shintoism and Confucianism. I used to attend a synagogue and learned a little Hebrew. 14 Subjects: including trigonometry, calculus, statistics, algebra 2 Related Powder Springs, GA Tutors Powder Springs, GA Accounting Tutors Powder Springs, GA ACT Tutors Powder Springs, GA Algebra Tutors Powder Springs, GA Algebra 2 Tutors Powder Springs, GA Calculus Tutors Powder Springs, GA Geometry Tutors Powder Springs, GA Math Tutors Powder Springs, GA Prealgebra Tutors Powder Springs, GA Precalculus Tutors Powder Springs, GA SAT Tutors Powder Springs, GA SAT Math Tutors Powder Springs, GA Science Tutors Powder Springs, GA Statistics Tutors Powder Springs, GA Trigonometry Tutors Nearby Cities With trigonometry Tutor Austell trigonometry Tutors Chamblee, GA trigonometry Tutors Clarkdale, GA trigonometry Tutors Cumming, GA trigonometry Tutors Dallas, GA trigonometry Tutors Douglasville trigonometry Tutors Hiram, GA trigonometry Tutors Holly Springs, GA trigonometry Tutors Lilburn trigonometry Tutors Lithia Springs trigonometry Tutors Mableton trigonometry Tutors Marietta, GA trigonometry Tutors Tyrone, GA trigonometry Tutors Villa Rica, PR trigonometry Tutors Winston, GA trigonometry Tutors
{"url":"http://www.purplemath.com/Powder_Springs_GA_Trigonometry_tutors.php","timestamp":"2014-04-19T15:05:20Z","content_type":null,"content_length":"24580","record_id":"<urn:uuid:22f9bc7c-015f-4e8b-9b73-df2a3d6d42e6>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00057-ip-10-147-4-33.ec2.internal.warc.gz"}
How do I simplify this trigonometric integral even further? February 27th 2010, 07:35 PM #1 Super Member Nov 2008 How do I simplify this trigonometric integral even further? I get the correct final answer as Wolfram Alpha confirms: http://www.wolframalpha.com/input/?i=is+(sqrt(x^2+%2B+9))^3+%2F3+-+9*sqrt(x^2+%2B+9)+%3D+1%2F3+*+(x^2+-+18)+*+sqrt(x^2+%2B+9)%3F My work is attached and I would just like to know how to get from where I am to the answer the book provides in the back (I wrote the answer the book provides in red on the attached file). (The problem is #39 of the attached pdf) Any help would be greatly appreciated! Thanks in advance! I get the correct final answer as Wolfram Alpha confirms: http://www.wolframalpha.com/input/?i=is+(sqrt(x^2+%2B+9))^3+%2F3+-+9*sqrt(x^2+%2B+9)+%3D+1%2F3+*+(x^2+-+18)+*+sqrt(x^2+%2B+9)%3F My work is attached and I would just like to know how to get from where I am to the answer the book provides in the back (I wrote the answer the book provides in red on the attached file). (The problem is #39 of the attached pdf) Any help would be greatly appreciated! Thanks in advance! Dear s3a, Your final expression is, Talking $\sqrt{x^2+9}$ out will give you, Hope this will help you. quoting the wrong reply unintentionally Dear s3a, Your final expression is, Talking $\sqrt{x^2+9}$ out will give you, Hope this will help you. after getting the expression in tan and sec convert it to sin and cos and then solve converting the expression having an odd pwer to the differentiation of the expression having even power. u should get integration sin^3(theta)/cos^4(theta).d(theta) make this integration sin^2(theta).sin(theta)/cos^4(theta) then (1-cos^(theta)).sin(theta)/cos^4(theta). hope u are able to folow. its easy! my mistake. i should have quoted s3a post but i quoted sudharkas unintentionally. Last edited by Pulock2009; February 27th 2010 at 10:25 PM. Reason: quoting the wrong reply February 27th 2010, 07:47 PM #2 Super Member Dec 2009 February 27th 2010, 10:22 PM #3 Nov 2009
{"url":"http://mathhelpforum.com/calculus/131110-how-do-i-simplify-trigonometric-integral-even-further.html","timestamp":"2014-04-18T14:08:27Z","content_type":null,"content_length":"39877","record_id":"<urn:uuid:6db0aca3-bd2e-4f68-ae55-4388fdf486c6>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00423-ip-10-147-4-33.ec2.internal.warc.gz"}
SPOJ Problem Set Making Labels Problem code MKLABELS Trees SPOJ Problem Set 1870. Making Labels Problem code: MKLABELS Trees comes in many varieties other than the popular binary tree. In general, a tree is a connected acyclic graph. That is, it consists of some number of vertices N (which we’ll assume is at least one in this problem), and N - 1 edges, each of which connects a pair of vertices. A "labeled tree" is a tree in which each vertex has been given a "label." For simplicity, let us assume these labels are the integers 1 through N. In how many different ways may a tree with N vertices be labeled? By "different" we mean that no rearrangement of two trees with the same number of vertices with different labeling will be identical. (Note that although we commonly associate data with each vertex, and identify one vertex as the root of the tree, that’s not significant in this problem.) Let’s consider some examples. The figure below shows all possible arrangements of trees with N = 1, 2, 3, 4, or 5 vertices. The number shown below each tree is the number of different ways in which the vertices in each tree can be labeled. Clearly a tree with only one vertex can be labeled in only one way - by assigning the label "1" to the single vertex. A tree with two vertices can also be labeled in only one way. For example, although the two trees shown on the left below appear to be different, the first can be easily transformed into the second. (Imagine the edges are strings, so the vertices can be easily repositioned without losing their There are, however, three possible ways to label the vertices in a 3-vertex tree, as shown on the right above. No matter how you rearrange the labeled vertices in any of the three trees, you cannot produce any of the other labeled trees. In a similar manner, the various arrangements of four vertices in a tree yield a total of 16 possible labelings - 12 for the four vertices "in a row," and 4 for the other configuration. There are three possible arrangements of the vertices in a tree with N = 5, with a total of 125 possible There will be multiple cases to consider. The input for each case is an integer N specifying the number of vertices in a tree, which will always be between 1 and 10. The last case will be followed by a zero. For each input case, display the case number (1, 2, ...), the input value of N, and the number of different ways in which a tree with N vertices may be labeled. Use the format shown in the examples Case 1, N = 2, # of different labelings = 1 Case 2, N = 3, # of different labelings = 3 Case 3, N = 4, # of different labelings = 16 Case 4, N = 5, # of different labelings = 125 Added by: Camilo Andrés Varela León Date: 2007-10-07 Time limit: 1s Source limit:50000B Languages: All Resource: North Central North America Regional Programming Contest - 2003
{"url":"http://www.docstoc.com/docs/4406143/SPOJ-Problem-Set-Making-Labels-Problem-code-MKLABELS-Trees","timestamp":"2014-04-19T21:20:55Z","content_type":null,"content_length":"57440","record_id":"<urn:uuid:af3bd015-23eb-4f16-8cea-7e6b8aad26fb>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00551-ip-10-147-4-33.ec2.internal.warc.gz"}
Results 21 - 30 of 35 "... Abstract. It is shown how the theory of commutative monads provides an axiomatic framework for several aspects of distribution theory in a broad sense, including probability distributions, physical extensive quantities, and Schwartz distributions of compact support. Among the particular aspects cons ..." Cited by 1 (0 self) Add to MetaCart Abstract. It is shown how the theory of commutative monads provides an axiomatic framework for several aspects of distribution theory in a broad sense, including probability distributions, physical extensive quantities, and Schwartz distributions of compact support. Among the particular aspects considered here are the notions of convolution, density, expectation, and conditional probability. "... A category with biproducts is enriched over (commutative) additive monoids. A category with tensor products is enriched over scalar multiplication actions. A symmetric monoidal category with biproducts is enriched over semimodules. We show that these extensions of enrichment (e.g. from hom-sets to h ..." Add to MetaCart A category with biproducts is enriched over (commutative) additive monoids. A category with tensor products is enriched over scalar multiplication actions. A symmetric monoidal category with biproducts is enriched over semimodules. We show that these extensions of enrichment (e.g. from hom-sets to homsemimodules) are functorial, and use them to make precise the intuition that “compact objects are finitedimensional” in standard cases. Keywords: Semimodules, enriched categories, biproducts, scalar multiplication, compact objects. "... We investigate impure, call-by-value programming languages. Our first language only has variables and let-binding. Its equational theory is a variant of Lambek’s theory of multicategories that omits the commutativity axiom. We demonstrate that type constructions for impure languages — products, sums ..." Add to MetaCart We investigate impure, call-by-value programming languages. Our first language only has variables and let-binding. Its equational theory is a variant of Lambek’s theory of multicategories that omits the commutativity axiom. We demonstrate that type constructions for impure languages — products, sums and functions — can be characterized by universal properties in the setting of ‘premulticategories’, multicategories where the commutativity law may fail. This leads us to new, universal characterizations of two earlier equational theories of impure programming languages: the premonoidal categories of Power and Robinson, and the monad-based models of Moggi. Our analysis thus puts these earlier abstract ideas on a canonical foundation, bringing them to a new, syntactic level. F.3.2 [Semantics of Pro- , 907 "... The goal of this paper is to prove coherence results with respect to relational graphs for monoidal monads and comonads, i.e. monads and comonads in a monoidal category such that the endofunctor of the monad or comonad is a monoidal functor (this means that it preserves the monoidal structure up to ..." Add to MetaCart The goal of this paper is to prove coherence results with respect to relational graphs for monoidal monads and comonads, i.e. monads and comonads in a monoidal category such that the endofunctor of the monad or comonad is a monoidal functor (this means that it preserves the monoidal structure up to a natural transformation that need not be an isomorphism). These results are proved first in the absence of symmetry in the monoidal structure, and then with this symmetry. The monoidal structure is also allowed to be given with finite products or finite coproducts. Monoidal comonads with finite products axiomatize a plausible notion of identity of deductions in a fragment of the modal logic S4. , 907 "... The goal of this paper is to prove coherence results with respect to relational graphs for monoidal endofunctors, i.e. endofunctors of a monoidal category that preserve the monoidal structure up to a natural transformation that need not be an isomorphism. These results are proved first in the absenc ..." Add to MetaCart The goal of this paper is to prove coherence results with respect to relational graphs for monoidal endofunctors, i.e. endofunctors of a monoidal category that preserve the monoidal structure up to a natural transformation that need not be an isomorphism. These results are proved first in the absence of symmetry in the monoidal structure, and then with this symmetry. In the later parts of the paper the coherence results are extended to monoidal endofunctors in monoidal categories that have diagonal or codiagonal natural transformations, or where the monoidal structure is given by finite products or coproducts. Monoidal endofunctors are interesting because they stand behind monoidal monads and comonads, for which coherence will be proved in a sequel to this paper. "... We exhibit sufficient conditions for a monoidal monad T on a monoidal ..." , 2009 "... How to cite this article: ..." "... In [4] we proved that a commutative monad on a symmetric monoidal closed category carries the structure of a symmetric monoidal monad ([4], Theorem 3.2). We here prove the converse, so that, taken together, we have: there is a 1-1 correspondence between commutative monads and symmetric monoidal mona ..." Add to MetaCart In [4] we proved that a commutative monad on a symmetric monoidal closed category carries the structure of a symmetric monoidal monad ([4], Theorem 3.2). We here prove the converse, so that, taken together, we have: there is a 1-1 correspondence between commutative monads and symmetric monoidal monads (Theorem 2.3 below). The main computational work needed consists in constructing an equivalence between possible strengths 8tA,B: A c ~ B-+ A T ~ B T on a functor, and possible "tensorial stren~hs " on T t"X,B: X ( ~ BT--> (X ( ~ B) T; T is assumed to be a functor between categories tensored over a monoidal closed category 3~'. The equivalence is stated in Theorem 1.3. (There is a similar theorem for the notion of eotensorial strength Ax,B: (Xt ~ B) T--+ Xr B T, which we do not include in this note.) As an application of the theory here, we construct strength on certain functors related to the power set monad. If ~r is a 3~-category, we use t ~ to denote the hom-functor ~r x ~r as well as to denote the hom-functor of 3r ~ itself. 1. Making a functor strong. Let ~r and ~ be categories tensored over the symmetric monoidal closed ~r [3]. Let T: ~0--> ~0 be a functor between the underlying categories. To a family of maps (1.1) 8tA,A,: Ac~A'--> A Tc~A ' T we associate a family of maps (1.2) t"X,A: X (D A T-> ( X @ A) T by commutativity of (1.3) ua|
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=258747&sort=cite&start=20","timestamp":"2014-04-20T14:05:07Z","content_type":null,"content_length":"30815","record_id":"<urn:uuid:699f7786-59ad-4159-a5d7-1748f92fa316>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00331-ip-10-147-4-33.ec2.internal.warc.gz"}
Age Word Problems (with worked solutions & videos) Algebra: Age Word Problems Age problems are algebra word problems that deal with the ages of people currently, in the past or in the future. If the problem involves a single person, then it is similar to an Integer Problem. Read the problem carefully to determine the relationship between the numbers. This is shown in the example involving a single person. If the age problem involves the ages of two or more people then using a table would be a good idea. A table will help you to organize the information and to write the equations. This is shown in the examples involving more than one person. Related Topics: More Algebra Word Problems Age Problems Involving A Single Person Example 1: Five years ago, John’s age was half of the age he will be in 8 years. How old is he now? Step 1: Let x be John’s age now. Look at the question and put the relevant expressions above it. Step 2: Write out the equation. Isolate variable x Answer: John is now 18 years old. Example 2: John is twice as old as his friend Peter. Peter is 5 years older than Alice. In 5 years, John will be three times as old as Alice. How old is Peter now? Step 1: Set up a table. │ │age now │age in 5 yrs │ │John │ │ │ │Peter│ │ │ │Alice│ │ │ Step 2: Fill in the table with information given in the question. John is twice as old as his friend Peter. Peter is 5 years older than Alice. In 5 years, John will be three times as old as Alice. How old is Peter now? Let x be Peter’s age now. Add 5 to get the ages in 5 yrs. │ │age now │age in 5 yrs │ │John │ 2x │ 2x + 5 │ │Peter│ x │ x + 5 │ │Alice│ x – 5 │ x – 5 + 5 │ Write the new relationship in an equation using the ages in 5 yrs. In 5 years, John will be three times as old as Alice. 2x + 5 = 3(x – 5 + 5) 2x + 5 = 3x Isolate variable x x = 5 Answer: Peter is now 5 years old. John’s father is 5 times older than John and John is twice as old as his sister Alice. In two years time, the sum of their ages will be 58. How old is John now? Step 1: Set up a table. │ │age now│age in 2 yrs │ │John’s father │ │ │ │ John │ │ │ │ Alice │ │ │ Step 2: Fill in the table with information given in the question. John’s father is 5 times older than John and John is twice as old as his sister Alice. In two years time, the sum of their ages will be 58. How old is John now? Let x be John’s age now. Add 2 to get the ages in 2 yrs. │ │age now│age in 2 yrs │ │John’s father │ 5x │ 5x + 2 │ │ John │ x │ x + 2 │ │ Alice │ │ │ Write the new relationship in an equation using the ages in 2 yrs. In two years time, the sum of their ages will be 58. Answer: John is now 8 years old. Ten years from now, Orlando will be three times older than he is today. What is his current age? In 20 years, Kayleen will be four times older than she is today. What is her current age? Ben is eight years older than Sarah, 10 years ago. Ben is twice as old as Sarah. Currently, how old is Ban and Sarah? Mary is three times as old as her son. In 12 years, Mary's age will be one year less than twice her son's age. How old is each now? Arun is 4 times as old as Anusha is today. Sixty years ago, Arun was 6 times as old as Anusha. How old are they today? We welcome your feedback, comments and questions about this site or page. Please submit your feedback or enquiries via our Feedback page.
{"url":"http://www.onlinemathlearning.com/age-problems.html","timestamp":"2014-04-16T10:54:58Z","content_type":null,"content_length":"49485","record_id":"<urn:uuid:c2f64362-a536-4596-982d-071dbcd0f84e>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00476-ip-10-147-4-33.ec2.internal.warc.gz"}
A New Generation of Atomic Clocks This was just posted today, and looks interesting A New Generation of Atomic Clocks: Accuracy and Stability at the 10 B. J. Bloom et al The exquisite control exhibited over quantum states of individual particles has revolutionized the field of precision measurement, as exemplified by the most accurate atomic clock realized in single trapped ions. Whereas many-atom lattice clocks have shown advantages in measurement precision over trapped-ion clocks, their accuracy has remained 20 times worse. Here we demonstrate, for the first time, that a many-atom system achieves accuracy (6x10 better than a single ion-based clock, with vastly reduced averaging times (3000 s). This is the first time a single clock has achieved the best performance in all three key ingredients necessary for consideration as a primary standard - stability, reproducibility, and accuracy. This work paves the way for future experiments to integrate many-body quantum state engineering into the frontiers of quantum metrology, creating exciting opportunities to advance precision beyond the standard quantum limit. Improved frequency standards will have impact to a wide range of fields from the realization of the SI units, the development of quantum sensors, to precision tests of the fundamental laws of nature. --- National Institute of Standards and Technology and University of Colorado, Boulder, CO
{"url":"http://www.physicsforums.com/showthread.php?s=138bc244c9ce111d1e3ed57b0b16ae34&p=4492524","timestamp":"2014-04-18T00:24:28Z","content_type":null,"content_length":"32951","record_id":"<urn:uuid:112e21c3-3896-4fb4-81e1-47fed6deb2c9>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00372-ip-10-147-4-33.ec2.internal.warc.gz"}
Citizendium - building a quality free general knowledge encyclopedia. Click here to join and contribute—free Many thanks December donors; special to Darren Duncan. January donations open; need minimum total $100. Let's exceed that. Donate here. By donating you gift yourself and CZ. From Citizendium, the Citizens' Compendium The concept of number is one of the most elementary, or fundamental notions of mathematics. Such elementary concepts cannot be defined in terms of other concepts (trivially, if an elementary concept could be defined in terms of other concepts, then it would not in fact be fundamental). Rather, a fundamental concept such as number can only be explained by demonstration. Such an approach relies for its efficacy on the intuitive properties of the human mind and its ability to abstract and generalize. There are philosophical problems bound up with the concept of number. First, there is the ontological problem of the various types of numbers — do they exist, or are they "mental concepts". Then there is the epistemological problem which is concerned with how we know anything about numbers. In mathematics, a number is formally a member of a given set (possibly an ordered set). It conveys the ideas of : • counting (e.g., there are 26 simple latin letters), • ordering (e.g., e is smaller than pi in the real number set), and • measurement (e.g., the weight of 50 lbs in the Imperial system is approximately equal to 22.7 kg in the metric system). However, due to the expressiveness of positional number systems, the usefulness of geometric objects, and the advances in different scientific fields, it can convey more properties. A word written only with digits is called a numeral, and may represent a number. Numerals are often used for labeling (like telephone numbers), for ordering (like serial numbers), and for encoding (like ISBNs). The writing of a number depends on the numeral system in use. For instance, the number 12 is written "1100" in base 2, "C" in base 16, and "XII" as a roman numeral. We can geometrically represent a number with unitless vectors in a cartesian system or by drawing simple shapes (e.g., squares and circles). There are other means to express a number. Abstract algebra studies abstract number systems such as groups, rings and fields. Number sets This section presents different number sets, but this list is not exhaustive. 1. The natural numbers ($\scriptstyle \mathbb{N}$) are used to count things (e.g., there are 52 weeks in a Julian year). This set contains many remarkable subsets : prime numbers, Fibonacci numbers, perfect numbers, catalan numbers, etc. 2. The integers ($\scriptstyle \mathbb{Z}$) also include negative numbers, that can be used to represent debits and credits, etc. (e.g., a company owes 60 millions US dollars to a bank). This set includes the natural numbers. 3. The rational numbers ($\scriptstyle \mathbb{Q}$) are any number that can be represented as a fraction (e.g., someone received half of her pay yesterday). This set includes the integers. 4. The irrational numbers ($\scriptstyle \mathbb{J}$) find application in many abstract mathematical fields, such as algebra and number theory. An irrational number can not be written as a fraction, and can indeed not be written out fully at all. The numbers $\pi$ and $\sqrt{2}$ are both irrational. This set does not share any member with the rational number set. 5. The real numbers ($\scriptstyle \mathbb{R}$) find applications in measurements and advanced mathematics. They are usually best written as decimal numbers (e.g., the value of e is approximately equal to 2.718281828). This set includes the rational numbers and the irrational numbers. 6. The complex numbers ($\scriptstyle \mathbb{C}$) have two parts, where one is real and the other is some number multiplied by the imaginary number $i\!$, which is defined as $i = \sqrt{-1}$. The complex numbers were discovered while searching solutions to some polynomials (e.g., the polynomial $\scriptstyle x^2 + 1 = 0$ has two solutions, one being $\scriptstyle \sqrt{-1} = (0, 1) = i$). Because the complex number set is algebraically closed, it finds applications in many scientific fields, such as engineering and applied mathematics. This set includes the real numbers. 7. A complex number that is solution to a polynomial in integer coefficients is an algebraic number. This set includes all rational numbers and a subset of the irrational numbers. Any other complex number is a transcendental number. In order to meet their needs, scientists created other number sets. To ease the study of quadratic forms, Carl Friedrich Gauss introduced from 1829 to 1831 what is known today as the gaussian integers. While studying 3D mechanics, William Rowan Hamilton introduced the quaternions in 1843 (today, they are largely superseded by vectors). Octonions were discovered in 1843. Georg Cantor, through its naive set theory, formally defined the notion of infinity in 1895. Kurt Hensel first described the p-adic numbers in 1897, looking for a way to bring the ideas and the techniques of power series within number theory. We can consider unitless vectors and unitless matrices as number sets, since they mathematically abstract phenomenas in a unique way and we can apply operations upon them. The notation plays a central role in the perception of what a number is and what we can do with it. A good notation saves lots of work when operating on numbers (and more generally on any mathematical abstract objects). For instance, it is possible to add numbers written in roman numerals (e.g., MCMXCVIII plus CCXVII). However, it is faster to add numbers written in base 10 (e.g., 1998 plus 217). The gain is higher when multiplying numbers. In the Western world, the positional number system in base ten is the most used number notation. In this system, a numeral is constructed by putting digits side by side, each position in the numeral having a different numerical weight (a power of 10). In some knowledge fields, other numeral systems allow better handling of information. For instance, electronic engineers use binary numbers when dealing with electronic circuits. To convey more information and to ease reading, different symbols are added to the digits : • Integer numerals are prepended with the minus or the plus symbol ("-" and "+"). This applies to any numeral, as long as it does not represent a natural number. • Numerals may come with a radix point, the decimal separator in base 10 (the period "." in some systems, the comma "," in others). • In long numerals, digits are grouped and may contain a thousand separator (e.g., the speed of light in vacuum is written as 1,079,252,849 km/h in some systems, while it is written as 1 079 252 849 km/h in some others). • Percentages ("%") allow to write a numeral as a fraction with the denominator 100 (e.g., 14.5% = $\scriptstyle \frac{14.5}{100}$). • Per mills ("‰") allow to write a numeral as a fraction with the denominator 1,000 (e.g., 22.3‰ = $\scriptstyle \frac{22.3}{1000}$). • Per cent mille (pcm) allow to write a numeral as a fraction with the denominator 100,000 (e.g., 78.7 pcm = $\scriptstyle \frac{78.7}{100000}$). • Parts per million (ppm), parts per billion (ppb) and parts per trillion (ppt) are others way to write a numeral as a fraction with the denominators 1 million, 1 billion, and 1 trillion. • Very small and very large numbers are usually expressed in scientific notation. Their numeral uses the product symbol × or E (e.g., the speed of light in vacuum is approximately $\scriptstyle 3.0 \times 10^8 m/s = 3.0 E+8 \, m/s$). There are other ways to represent a number. • Fractions contain a slash or a vinculum (e.g., $\scriptstyle a / b = \frac{a}{b}$). Ratios use the colon (e.g., 1.5 : 5). • To shorten some numerals (or to show some properties), numbers are represented using an exponentation (e.g., 3^4), a radical symbol (e.g., $\scriptstyle \sqrt 2$), a repeated pattern (e.g., $\ scriptstyle \frac{1}{3} = 0.\overline{3}$). Almost any expression having only functions and numerals may represent a number (e.g., $\scriptstyle \sin ( \frac{\pi}{3} )$, $\scriptstyle |-3.4|$, and $\scriptstyle \zeta (3)$). • Complex numbers are represented either by $\scriptstyle (a, b)$ or by $\scriptstyle a + b i$, where $\scriptstyle a, b \in \R$. • In physics, vectors are usually represented as the sum of unit vectors : $\vec{ \imath }, \vec{ \jmath }, \ldots$. In mathematics, we may encounter the "hat notation" : $\hat{ \imath }, \hat{ \ jmath }, \ldots$ • Named constants are another way to represent numbers : $\pi$, e, $\gamma$, etc. In geometry, a number can be represented in different ways. For instance, the length between two points in a cartesian coordinate system may represent a number. Fractions are sometimes represented by a rectangular grid. We could represent $\scriptstyle \frac{7}{12}$ by a grid. In statistics, numbers are represented by areas in histogram or by height in bar charts. In pie charts, values are proportional to the central angles. There are many other ways to represent numbers in statistics. Many other scientific fields have their own notations. There are cases where it is difficult to say if a symbol represents a number. Take for instance the units in International System of Units. When we write 30 cm, it means 30 ÷ 100 × meter. Officially, we should see "cm" as a centimeter, a unit of measure. However, 30 cm is the same as 0.3 m. For this reason, "30 c" represents a number: 0.3.
{"url":"http://en.citizendium.org/wiki/Number","timestamp":"2014-04-18T00:28:03Z","content_type":null,"content_length":"43842","record_id":"<urn:uuid:88e4d333-7517-49b7-9342-3f7303b42176>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00392-ip-10-147-4-33.ec2.internal.warc.gz"}
A. Analytical relation for proportionality constant of pickup coils B. Application to ac loss measurement on tapes A. Tapes under perpendicular fields B. Tapes under parallel fields A. Perpendicular fields B. Parallel fields
{"url":"http://scitation.aip.org/content/aip/journal/jap/96/4/10.1063/1.1766100","timestamp":"2014-04-17T14:16:15Z","content_type":null,"content_length":"107456","record_id":"<urn:uuid:9e78480a-56c6-4990-a5fb-80a4e26174ff>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00454-ip-10-147-4-33.ec2.internal.warc.gz"}
The attempt to load metrics for this article has failed. The attempt to plot a graph for these metrics has failed. (Color) Self-assembled morphologies for symmetric diblock copolymers with confined in pores (a) the morphologies as functions of the ratio and . For the S helices or imperfect S helices, morphologies of only the A blocks (or B blocks) are also given, and for some concentric lamellae, a cross section view is also given. The boundary of the concentric lamellae (or the perpendicular lamellae) is given with two identical structures for each value when . [(b)–(d)] Morphologies formed with : (b) , ; (c) , , and (d) , . (Color online) The order parameter for concentric lamellae at . (a) The value of the order parameter at the pore center as a function of and (b) at different values. Schematics of concentric lamellae with . The outermost solid circle represents the pore surfaces, the inside solid circles represent interfaces, and the dashed circles represent the assumed or interfaces in the strong segregations limit. (Color online) Comparison of the dimensionless thicknesses of the alternating -rich and -rich layers in concentric lamellae for strongly preferential surfaces obtained from simulations (symbols) with those predicted from the equations (lines). In the figure, three parts according to from small to larger correspond to , 2, and 3. In each part, the squares and the lower lines represent the thicknesses of the outermost -rich layer . The up lines represent the thicknesses of the inner - or -rich layers. The up triangles, down triangles, and right triangles represent the thickness of , , , respectively. (Color online) The mean-square end-to-end distance, , as a function of for symmetric diblock copolymers. (Color online) The mean-square end-to-end distances of and chains and respectively, as a function of for symmetric diblock copolymers with (a) strongly preferential surfaces, (b) weakly preferential surfaces, and (c) neutral surfaces. (Color online) The components of the mean-square end-to-end distances of and chains, respectively, as a function of for symmetric diblock copolymers with strongly preferential surfaces. (a) and (b) (Color online) The components, and , of the mean-square end-to-end distance as a function of for symmetric diblock copolymers with neutral surfaces. (Color online) The mean-square end-to-end distances as a function of . (a) and [(b) and (c)] and , where (b) and (c) . (Color online) (a) The local concentration profile of monomers near the surface, (b) the average contact numbers for an monomer with monomers, and (c) the free energy per chain as a function of . (Color) Self-assembled morphologies for asymmetric diblock copolymers with as a function of . Only the blocks are shown. The outermost red circle in each top view indicates the surface of the cylindrical pore. (a) and (b) . (Color online) Comparison of the radial order parameter profiles for different structures: (a) The degenerated structures of concentric perforated lamellae and S helices at and (b) the two-ring structure at . (Color online) The mean-square end-to-end distances as a function of for asymmetric diblock copolymers, where the values for the degenerated structures are all shown. (a) , (b) , and (c) . (Color online) The average contact numbers for an monomer with monomers as a function of , where the values for the degenerated structures are all shown.
{"url":"http://scitation.aip.org/content/aip/journal/jcp/127/11/10.1063/1.2768920","timestamp":"2014-04-18T22:02:10Z","content_type":null,"content_length":"110781","record_id":"<urn:uuid:a5374fe3-af29-4ad1-975a-de55bf0da00b>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00588-ip-10-147-4-33.ec2.internal.warc.gz"}
convert 166 cm to inches You asked: convert 166 cm to inches Say hello to Evi Evi is our best selling mobile app that can answer questions about local knowledge, weather, books, music, films, people and places, recipe ideas, shopping and much more. Over the next few months we will be adding all of Evi's power to this site. Until then, to experience all of the power of Evi you can download Evi for free on iOS, Android and Kindle Fire.
{"url":"http://www.evi.com/q/convert_166_cm_to_inches","timestamp":"2014-04-16T11:12:57Z","content_type":null,"content_length":"57553","record_id":"<urn:uuid:8dd15ab9-5a95-47c1-91b0-b6872407c4db>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00595-ip-10-147-4-33.ec2.internal.warc.gz"}
How high is the fort? - WyzAnt Answers A cannon ball leaves a fort with an initial horizontal speed of 1.8*10^2 m/s and strikes a ship in the sea below 7.2 s later. How high is the fort above sea level? Tutors, please sign in to answer this question. 3 Answers Initial height : Δy=½gt² =½(9.8)(7.2)² = 250 m. Note that the answer is independent of the initial horizontal velocity. Hey Sun -- constant g ... use your delta V's of 10 @ sec ... Vdn at sea is 72m/s ... Vave is 36m/s ... freefall 36m/s for 7.2s ==> 210 +42 +7 ~260m ... Regards :) This takes a little longer to explain, instead of looking up a strange formula on a formula sheet or even worst trying to memorize the formulas - but I think it is easier. You can find the answer using the same smarts you use to figure out how much money you need to buy 10 candy bars a day for the next 7 days if candy bars are $10.00 (or $9.81). You can use the formulas or think of it this way. 1 - One important thing to remember with falling object or X & Y problems is that the X and Y are independent. Always. So for this we Only need to solve a Y or falling problem. 2- What is the starting velocity? Zero - 0 3- After 7.2 seconds falling (when it hit the water) how fast was it going? 7.2 sec. X 10 meter per sec. increase every sec. = 7.2 s X 10 m/s^2 = 72 m/s (if you want to make the problem harder and be a minuscule 2.5% more accurate you can use 9.81 m/s^2 instead of 10 m/s^2) 4- Okay now you know the starting speed and the ending speed; what was the average speed? (0 + 72) ÷ 2 = 36 so an average of 36 m/s 5- Good! Now, if something goes an average of 36 m/s for 7.2 s, how far does it go? 36 m/s for 7.2 s = 36 m/s X 7.2 s = 260 m 6) So it fell 260 m therefor the fort was 260 m above sea level. (Using 9.81 instead of 10 would give 254 - we were 2.4 % high) There is nothing strange about the appropriate kinematic equation for projectile motion problems such as this one. The formula I used is well-known and was derived by combining your steps 2 through 5 into one, for any falling time. Using this equation instead of going through your four steps for every falling time is merely a matter of efficiency. The formulas are strange to a 1st year high school physics student, and to most people except physics and other science and math people. It is a a good part of thereason 90% of the people shudder when I get to the word physics in telling them what I do. I know the formula combines many steps and is the scientifically correct way to express the solution. And the formula provides the answer in one step - but in a single step that is much more complicated. I always want my students to understand what is going on, not use a formula sheet. In this part of the country most schools are going to Physics First where All 9th graders in a school take physics. And a significant number of students I have had were freshmen. Physics needs to be much more conceptual for them. And it need to be more conceptual for the older students too who are not going to go the math or science route in higher education. Not everyone needs to do mathematical physics but everyone should understand physics to be good citizens. And I have always found even with my Juniors and Seniors, they understand it better if it is 1st explained in simpler terms conceptually and they are given formulas after. Besides, what's wrong solving a problem by intuition because you understand it rather than looking up formulas that you know work but are not sure why? I absolutely agree with you as far as understanding physics on a conceptual level first, even though intuition can also easily limit or fail you. (In this case, even the slight generalization of the problem to non-zero angles will require trigonometry). I wish my students would understand the derivations of formulas and the deeper concepts behind them, but due to time limitations that is usually not the case. I also wish students in this country would take some physics every year starting in 7th grade, as is the case where I grew up. I believe Sun is taking physics at the college level now for the first time and may at some point be confronted with a standardized testing situation (GRE, MCAT etc). Unfortunately, students are expected to get to the final answer as quickly as possible, so they often do end up memorizing a set of formulas. I would hope that Sun has seen the kinematic equations by this point. The tricky moment in your consideration, Robert, is the application of average velocity. This works only for constant acceleration. If you do not specify this, students may blindly apply the same trick to the variable acceleration. Second, when you said the starting velocity is zero, it bugged even me for a second. Of course, I quickly realized what you meant. But will the student? You implicitly talked about direction of velocity here. Third, (not directly related to this problem, but important to kinematic problems) the problem involving decelerated motion is the most difficult, like dropping something from a steadily rising helicopter or rocket travelling up and having its engines fail. Students have to assign appropriate signs to what we would call projections of velocities and accelerations. But they lack math and most do not know vectors well, if at all. And what about general case of projectile motion? This is even more difficult. So I do not see how we could get students to know physics other way than telling them to either memorize formulas or be ready to shed lots of "blood, sweat and tears" in order to learn and understand concepts. I did not know we could look up students status, so he may very well be a university student. I was also not considering this being a question of only one student, and trying to give an alternate explanation for others. If here is just for one student to ask a question; then if there is one correct solution a second is not needed, and I should look from now on who the student is. Kirill, You are absolutely right about the 0 starting velocity. In this problem we are solving only the vertical so I just considered only that. For a student that needs my explanation to 'get it' they would automatically see what I did, I need to remember to point that out, thanks. About constant acceleration, that is true; but even the formulas would fail for an acceleration that was not constant. Even in a more mathematical physics the derivations of the formulas are for constant acceleration. You need to get fairly high in university physics before a non-zero jerk is considered. You talk about physics in 7th grade (they do have physical science sometimes). You are definitely right. We do not teach science properly here in the US. Imagine if in in 9th grade everyone learned all Spanish and took a test at the end of the year. Then in 10th all French and had a test; German in 11th, and Latin in 12th! Everyone knows you can't learn a language in a year; but expects kids to learn a whole branch of science.
{"url":"http://www.wyzant.com/resources/answers/14783/how_high_is_the_fort","timestamp":"2014-04-20T08:45:03Z","content_type":null,"content_length":"52161","record_id":"<urn:uuid:612d16b6-2225-400b-9b09-5d414f12ac4e>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00465-ip-10-147-4-33.ec2.internal.warc.gz"}
Cg Programming/Vertex Transformations One of the most important tasks of the vertex shader and the following stages in a programmable graphics pipeline is the transformation of vertices of primitives (e.g. triangles) from the original coordinates (e.g. those specified in a 3D modeling tool) to screen coordinates. While programmable vertex shaders allow for many ways of transforming vertices, some transformations are usually performed in the fixed-function stages after the vertex shader. When programming a vertex shader, it is therefore particularly important to understand which transformations have to be performed in the vertex shader. These transformations are usually specified as uniform parameters and applied to the input vertex positions and normal vectors by means of matrix-vector multiplications. While this is straightforward for points and directions, it is less straightforward for normal vectors as discussed in Section “Applying Matrix Transformations”. Here, we will first present an overview of the coordinate systems and the transformations between them and then discuss individual transformations. Overview: The Camera AnalogyEdit It is useful to think of the whole process of transforming vertices in terms of a camera analogy as illustrated to the right. The steps and the corresponding vertex transformations are: 1. positioning the model — modeling transformation 2. positioning the camera — viewing transformation 3. adjusting the zoom — projection transformation 4. cropping the image — viewport transformation The first three transformations are applied in the vertex shader. Then the perspective division (which might be considered part of the projection transformation) is automatically applied in the fixed-function stage after the vertex shader. The viewport transformation is also applied automatically in this fixed-function stage. While the transformations in the fixed-function stages cannot be modified, the other transformations can be replaced by other kinds of transformations than described here. It is, however, useful to know the conventional transformations since they allow to make best use of clipping and perspectively correct interpolation of varying variables. The following overview shows the sequence of vertex transformations between various coordinate systems and includes the matrices that represent the transformations: object/model coordinates vertex input parameters with semantics (in particular the semantic POSITION) ↓ modeling transformation: model matrix $\mathrm{M}_{\text{object}\to \text{world}}$ world coordinates ↓ viewing transformation: view matrix $\mathrm{M}_{\text{world}\to \text{view}}$ view/eye coordinates ↓ projection transformation: projection matrix $\mathrm{M}_\text{projection}$ clip coordinates vertex output parameter with semantic POSITION ↓ perspective division (by the w coordinate) normalized device coordinates ↓ viewport transformation screen/window coordinates Note that the modeling, viewing and projection transformation are applied in the vertex shader. The perspective division and the viewport transformation is applied in the fixed-function stage after the vertex shader. The next sections discuss all these transformations in detail. Modeling TransformationEdit The modeling transformation specifies the transformation from object coordinates (also called model coordinates or local coordinates) to a common world coordinate system. Object coordinates are usually specific to each object or model and are often specified in 3D modeling tools. On the other hand, world coordinates are a common coordinate system for all objects of a scene, including light sources, 3D audio sources, etc. Since different objects have different object coordinate systems, the modeling transformations are also different; i.e., a different modeling transformation has to be applied to each object. Structure of the Model MatrixEdit The modeling transformation can be represented by a 4×4 matrix, which we denote as the model matrix $\mathrm{M}_{\text{object}\to \text{world}}$. Its structure is: $\mathrm{M}_{\text{object}\to \text{world}} = \left[ \begin{matrix} a_{1,1} & a_{1,2} & a_{1,3} & t_1 \\ a_{2,1} & a_{2,2} & a_{2,3} & t_2 \\ a_{3,1} & a_{3,2} & a_{3,3} & t_3 \\ 0 & 0 & 0 & 1 \end {matrix} \right]$$\text{ with } \mathrm{A} = \left[ \begin{matrix} a_{1,1} & a_{1,2} & a_{1,3} \\ a_{2,1} & a_{2,2} & a_{2,3} \\ a_{3,1} & a_{3,2} & a_{3,3} \end{matrix} \right]$$\text{ and } \mathbf {t} = \left[ \begin{matrix} t_1\\ t_2\\ t_3 \end{matrix} \right]$ $\mathrm{A}$ is a 3×3 matrix, which represents a linear transformation in 3D space. This includes any combination of rotations, scalings, and other less common linear transformations. t is a 3D vector, which represents a translation (i.e. displacement) in 3D space. $\mathrm{M}_{\text{object}\to \text{world}}$ combines $\mathrm{A}$ and t in one handy 4×4 matrix. Mathematically spoken, the model matrix represents an affine transformation: a linear transformation together with a translation. In order to make this work, all three-dimensional points are represented by four-dimensional vectors with the fourth coordinate equal to 1: $P = \left[ \begin{matrix} p_1\\ p_2\\ p_3\\ 1 \end{matrix} \right]$ When we multiply the matrix to such a point $P$, the combination of the three-dimensional linear transformation and the translation shows up in the result: $\mathrm{M}_{\text{object}\to \text{world}}\;P = \left[ \begin{matrix} a_{1,1} & a_{1,2} & a_{1,3} & t_1 \\ a_{2,1} & a_{2,2} & a_{2,3} & t_2 \\ a_{3,1} & a_{3,2} & a_{3,3} & t_3 \\ 0 & 0 & 0 & 1 \ end{matrix} \right] \left[ \begin{matrix} p_1 \\ p_2 \\ p_3 \\ 1 \end{matrix} \right]$$= \left[ \begin{matrix} a_{1,1} p_1 + a_{1,2} p_2 + a_{1,3} p_3 + t_1 \\ a_{2,1} p_1 + a_{2,2} p_2 + a_{2,3} p_3 + t_2 \\ a_{3,1} p_1 + a_{3,2} p_2 + a_{3,3} p_3 + t_3 \\ 1 \end{matrix} \right]$ Apart from the fourth coordinate (which is 1 as it should be for a point), the result is equal to $\mathrm{A} \left[ \begin{matrix} p_1\\ p_2\\ p_3 \end{matrix} \right] + \left[ \begin{matrix} t_1\\ t_2\\ t_3 \end{matrix} \right]$ Accessing the Model Matrix in a Vertex ShaderEdit The model matrix $\mathrm{M}_{\text{object}\to \text{world}}$ can be defined as a uniform parameter such that it is available in a vertex shader. However, it is usually combined with the matrix of the viewing transformation to form the modelview matrix, which is then set as a uniform parameter. In some APIs, the matrix is available as a built-in uniform parameter. (See also Section “Applying Matrix Transformations”.) Computing the Model MatrixEdit Strictly speaking, Cg programmers don't have to worry about the computation of the model matrix since it is provided to the vertex shader in the form of a uniform parameter. In fact, render engines, scene graphs, and game engines will usually provide the model matrix; thus, the programmer of a vertex shader doesn't have to worry about computing the model matrix. In some cases, however, the model matrix has to be computed when developing graphics application. The model matrix is usually computed by combining 4×4 matrices of elementary transformations of objects, in particular translations, rotations, and scalings. Specifically, in the case of a hierarchical scene graph, the transformations of all parent groups (parent, grandparent etc.) of an object are combined to form the model matrix. Let's look at the most important elementary transformations and their matrices. The 4×4 matrix representing the translation by a vector t $= (t_1, t_2, t_3)$ is: $\mathrm{M}_{\text{translation}} = \left[ \begin{matrix} 1 & 0 & 0 & t_1 \\ 0 & 1 & 0 & t_2 \\ 0 & 0 & 1 & t_3 \\ 0 & 0 & 0 & 1 \end{matrix} \right]$ The 4×4 matrix representing the scaling by a factor $s_x$ along the $x$ axis, $s_y$ along the $y$ axis, and $s_z$ along the $z$ axis is: $\mathrm{M}_{\text{scaling}} = \left[ \begin{matrix} s_x & 0 & 0 & 0 \\ 0 & s_y & 0 & 0 \\ 0 & 0 & s_z & 0 \\ 0 & 0 & 0 & 1 \end{matrix} \right]$ The 4×4 matrix representing the rotation by an angle $\alpha$ about a normalized axis $(x, y, z)$ is: $\mathrm{M}_{\text{rotation}} = \left[ \begin{matrix} (1-\cos\alpha) x\,x + \cos\alpha & (1-\cos\alpha) x\,y - z \sin\alpha & (1-\cos\alpha) z\,x + y \sin\alpha & 0 \\ (1-\cos\alpha) x\,y + z \sin\ alpha & (1-\cos\alpha) y\,y + \cos\alpha & (1-\cos\alpha) y\,z - x \sin\alpha & 0 \\ (1-\cos\alpha) z\,x - y \sin\alpha & (1-\cos\alpha) y\,z + x \sin\alpha & (1-\cos\alpha) z\,z + \cos\alpha & 0 \\ 0 & 0 & 0 & 1 \end{matrix} \right]$ Special cases for rotations about particular axes can be easily derived. These are necessary, for example, to implement rotations for Euler angles. There are, however, multiple conventions for Euler angles, which won't be discussed here. A normalized quaternion $(x_q, y_q, z_q, w_q)$ corresponds to a rotation by the angle $2 \arccos(w_q)$. The direction of the rotation axis can be determined by normalizing the 3D vector $(x_q, y_q, Further elementary transformations exist, but are of less interest for the computation of the model matrix. The 4×4 matrices of these or other transformations are combined by matrix products. Suppose the matrices $\mathrm{M}_1$, $\mathrm{M}_2$, and $\mathrm{M}_3$ are applied to an object in this particular order. ($\mathrm{M}_1$ might represent the transformation from object coordinates to the coordinate system of the parent group; $\mathrm{M}_2$ the transformation from the parent group to the grandparent group; and $\mathrm{M}_3$ the transformation from the grandparent group to world coordinates.) Then the combined matrix product is: $\mathrm{M}_\text{combined} = \mathrm{M}_3 \mathrm{M}_2 \mathrm{M}_1\,\!$ Note that the order of the matrix factors is important. Also note that this matrix product should be read from the right (where vectors are multiplied) to the left, i.e. $\mathrm{M}_1$ is applied first while $\mathrm{M}_3$ is applied last. Viewing TransformationEdit The viewing transformation corresponds to placing and orienting the camera (or the eye of an observer). However, the best way to think of the viewing transformation is that it transforms the world coordinates into the view coordinate system (also: eye coordinate system) of a camera that is placed at the origin of the coordinate system, points (by convention) to the negative $z$ axis in OpenGL and to the positive $z$ axis in Direct3D, and is put on the $x z$ plane, i.e. the up-direction is given by the positive $y$ axis. Accessing the View Matrix in a Vertex ShaderEdit Similarly to the modeling transformation, the viewing transformation is represented by a 4×4 matrix, which is called view matrix $\mathrm{M}_{\text{world}\to \text{view}}$. It can be defined as a uniform parameter for the vertex shader; however, it is usually combined with the model matrix $\mathrm{M}_{\text{object}\to \text{world}}$ to form the modelview matrix $\mathrm{M}_{\text{object}\to \text{view}}$. Since the model matrix is applied first, the correct combination is: $\mathrm{M}_{\text{object}\to \text{view}} = \mathrm{M}_{\text{world}\to \text{view}} \mathrm{M}_{\text{object}\to \text{world}}\,\!$ (See also Section “Applying Matrix Transformations”.) Computing the View MatrixEdit Analogously to the model matrix, Cg programmers don't have to worry about the computation of the view matrix since it is provided to the vertex shader in the form of a uniform parameter. However, when developing graphics applications, it is sometimes necessary to compute the view matrix. Here, we briefly summarize how the view matrix $\mathrm{M}_{\text{world}\to \text{view}}$ can be computed from the position t of the camera, the view direction d, and a world-up vector k (all in world coordinates). Here we limit us to the case of the right-handed coordinate system of OpenGL where the camera points to the negative $z$ axis. (There are some sign changes for Direct3D.) The steps are straightforward: 1. Compute (in world coordinates) the direction z of the $z$ axis of the view coordinate system as the negative normalized d vector: $\mathbf{z} = -\frac{\mathbf{d}}{|\mathbf{d}|}$ 2. Compute (again in world coordinates) the direction x of the $x$ axis of the view coordinate system by: $\mathbf{x} = \frac{\mathbf{d} \times \mathbf{k}}{|\mathbf{d} \times \mathbf{k}|}$ 3. Compute (still in world coordinates) the direction y of the $y$ axis of the view coordinate system: $\mathbf{y} = \mathbf{z} \times \mathbf{x}$ Using x, y, z, and t, the inverse view matrix $\mathrm{M}_{\text{view}\to \text{world}}$ can be easily determined because this matrix maps the origin (0,0,0) to t and the unit vectors (1,0,0), (0,1,0) and (0,0,1) to x, y,, z. Thus, the latter vectors have to be in the columns of the matrix $\mathrm{M}_{\text{view}\to \text{world}}$: $\mathrm{M}_{\text{view}\to \text{world}} = \left[ \begin{matrix} x_1 & y_1 & z_1 & t_1 \\ x_2 & y_2 & z_2 & t_2 \\ x_3 & y_3 & z_3 & t_3 \\ 0 & 0 & 0 & 1 \end{matrix} \right]$ However, we require the matrix $\mathrm{M}_{\text{world}\to \text{view}}$; thus, we have to compute the inverse of the matrix $\mathrm{M}_{\text{view}\to \text{world}}$. Note that the matrix $\mathrm {M}_\text{view→world}$ has the form $\mathrm{M}_{\text{view}\to \text{world}} =\left[ \begin{matrix} \mathrm{R} & \mathbf{t} \\ \mathbf{0}^T & 1 \end{matrix} \right]$ with a 3×3 matrix $\mathrm{R}$ and a 3D vector t. The inverse of such a matrix is: $\mathrm{M}_{\text{view}\to \text{world}}^{-1} = \mathrm{M}_{\text{world}\to \text{view}} = \left[ \begin{matrix} \mathrm{R}^{-1} & -\mathrm{R}^{-1}\mathbf{t} \\ \mathbf{0}^T & 1 \end{matrix} \right] Since in this particular case the matrix $\mathrm{R}$ is orthogonal (because its column vectors are normalized and orthogonal to each other), the inverse of $\mathrm{R}$ is just the transpose, i.e. the fourth step is to compute: $\mathrm{M}_{\text{world}\to \text{view}} = \left[ \begin{matrix} \mathrm{R}^T & -\mathrm{R}^T\mathbf{t} \\ \mathbf{0}^T & 1 \end{matrix} \right]$$\text{with }\mathrm{R} = \left[ \begin{matrix} x_1 & y_1 & z_1 \\ x_2 & y_2 & z_2 \\ x_3 & y_3 & z_3 \end{matrix} \right]$ While the derivation of this result required some knowledge of linear algebra, the resulting computation only requires basic vector and matrix operations and can be easily programmed in any common programming language. Projection Transformation and Perspective DivisionEdit First of all, the projection transformations determine the kind of projection, e.g. perspective or orthographic. Perspective projection corresponds to linear perspective with foreshortening, while orthographic projection is an orthogonal projection without foreshortening. The foreshortening is actually accomplished by the perspective division; however, all the parameters controlling the perspective projection are set in the projection transformation. Technically spoken, the projection transformation transforms view coordinates to clip coordinates. (All parts of primitives that are outside the visible part of the scene are clipped away in clip coordinates.) It should be the last transformation that is applied to a vertex in a vertex shader before the vertex is returned in the output parameter with the semantic POSITION. These clip coordinates are then transformed to normalized device coordinates by the perspective division, which is just a division of all coordinates by the fourth coordinate. (Normalized device coordinates are called this way because their values are between -1 and +1 for all points in the visible part of the scene.) Accessing the Projection Matrix in a Vertex ShaderEdit Similarly to the modeling transformation and the viewing transformation, the projection transformation is represented by a 4×4 matrix, which is called projection matrix $\mathrm{M}_\text{projection}$ . It is usually defined as a uniform parameter for the vertex shader. Computing the Projection MatrixEdit Analogously to the modelview matrix, Cg programmers don't have to worry about the computation of the projection matrix. However, when developing applications, it is sometimes necessary to compute the projection matrix. Here, we present the projection matrices for three cases (all for the OpenGL convention with a camera pointing to the negative $z$ axis in view coordinates): • standard perspective projection (corresponds to the OpenGL 2.x function gluPerspective) • oblique perspective projection (corresponds to the OpenGL 2.x function glFrustum) • orthographic projection (corresponds to the OpenGL 2.x function glOrtho) The standard perspective projection is characterized by • an angle $\theta_\text{fovy}$ that specifies the field of view in $y$ direction as illustrated in the figure to the right, • the distance $n$ to the near clipping plane and the distance $f$ to the far clipping plane as illustrated in the next figure, • the aspect ratio $a$ of the width to the height of a centered rectangle on the near clipping plane. Together with the view point and the clipping planes, this centered rectangle defines the view frustum, i.e. the region of the 3D space that is visible for the specific projection transformation. All primitives and all parts of primitives that are outside of the view frustum are clipped away. The near and front clipping planes are necessary because depth values are stored with a finite precision; thus, it is not possible to cover an infinitely large view frustum. With the parameters $\theta_\text{fovy}$, $a$, $n$, and $f$, the projection matrix $\mathrm{M}_\text{projection}$ for the perspective projection is: $\mathrm{M}_{\text{projection}} = \left[ \begin{matrix} \frac{d}{a} & 0 & 0 & 0 \\ 0 & d & 0 & 0 \\ 0 & 0 & \frac{n+f}{n-f} & \frac{2 n f}{n-f} \\ 0 & 0 & -1 & 0 \end{matrix} \right]$$\text{ with } d = \frac{1}{\tan(\theta_{\text{fovy}}/2)}$ The oblique perspective projection is characterized by • the same distances $n$ and $f$ to the clipping planes as in the case of the standard perspective projection, • coordinates $r$ (right), $l$ (left), $t$ (top), and $b$ (bottom) as illustrated in the corresponding figure. These coordinates determine the position of the front rectangle of the view frustum; thus, more view frustums (e.g. off-center) can be specified than with the aspect ratio $a$ and the field-of-view angle $\theta_\text{fovy}$. Given the parameters $n$, $f$, $r$, $l$, $t$, and $b$, the projection matrix $\mathrm{M}_\text{projection}$ for the oblique perspective projection is: $\mathrm{M}_{\text{projection}} = \left[ \begin{matrix} \frac{2 n}{r-l} & 0 & \frac{r+l}{r-l} & 0 \\ 0 & \frac{2 n}{t-b} & \frac{t+b}{t-b} & 0 \\ 0 & 0 & \frac{n+f}{n-f} & \frac{2 n f}{n-f} \\ 0 & 0 & -1 & 0 \end{matrix} \right]$ An orthographic projection without foreshortening is illustrated in the figure to the right. The parameters are the same as in the case of the oblique perspective projection; however, the view frustum (more precisely, the view volume) is now simply a box instead of a truncated pyramid. With the parameters $n$, $f$, $r$, $l$, $t$, and $b$, the projection matrix $\mathrm{M}_\text{projection}$ for the orthographic projection is: $\mathrm{M}_{\text{projection}} = \left[ \begin{matrix} \frac{2 }{r-l} & 0 & 0 & -\frac{r+l}{r-l} \\ 0 & \frac{2 }{t-b} & 0 & -\frac{t+b}{t-b} \\ 0 & 0 & \frac{-2}{f-n} & -\frac{f+n}{f-n} \\ 0 & 0 & 0 & 1 \end{matrix} \right]$ Viewport TransformationEdit The projection transformation maps view coordinates to clip coordinates, which are then mapped to normalized device coordinates by the perspective division by the fourth component of the clip coordinates. In normalized device coordinates (ndc), the view volume is always a box centered around the origin with the coordinates inside the box between -1 and +1. This box is then mapped to screen coordinates (also called window coordinates) by the viewport transformation as illustrated in the corresponding figure. The parameters for this mapping are the coordinates $s_x$ and $s_y$ of the lower, left corner of the viewport (the rectangle of the screen that is rendered) and its width $w_s$ and height $h_s$, as well as the depths $n_s$ and $f_s$ of the front and near clipping planes. (These depths are between 0 and 1). In OpenGL and OpenGL ES, these parameters are set with two functions: glViewport(GLint $s_x$, GLint $s_y$, GLsizei $w_s$, GLsizei $h_s$); glDepthRangef(GLclampf $n_s$, GLclampf $f_s$); The matrix of the viewport transformation isn't very important since it is applied automatically in a fixed-function stage. However, here it is for the sake of completeness: $\left[ \begin{matrix} \frac{w_s}{2} & 0 & 0 & s_x + \frac{w_s}{2} \\ 0 & \frac{h_s}{2} & 0 & s_y + \frac{h_s}{2} \\ 0 & 0 & \frac{f_s - n_s}{2} & \frac{n_s+f_s}{2} \\ 0 & 0 & 0 & 0 \end{matrix} \ Further ReadingEdit The conventional vertex transformations are also described in less detail in Chapter 4 of Nvidia's Cg Tutorial. The conventional OpenGL transformations are described in full detail in Section 2.12 of the “OpenGL 4.1 Compatibility Profile Specification” available at the Khronos OpenGL web site. A more accessible description of the vertex transformations is given in Chapter 3 (on viewing) of the book “OpenGL Programming Guide” by Dave Shreiner published by Addison-Wesley. (An older edition is available online). Unless stated otherwise, all example source code on this page is granted to the public domain. Last modified on 20 October 2012, at 22:33
{"url":"https://en.m.wikibooks.org/wiki/Cg_Programming/Vertex_Transformations","timestamp":"2014-04-18T23:48:19Z","content_type":null,"content_length":"61707","record_id":"<urn:uuid:0caf0d88-e2c0-49d8-bbec-5f8c7d207124>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00494-ip-10-147-4-33.ec2.internal.warc.gz"}
understanding macluarin polynomials. A peak under the hood My calculus book explained how to form macluarin polynomials but said nothing on why they work. Over the past couple days I've been turning it over in my head trying to figure it out. I haven't found a proof but its begining to make sense. I thought it might make an intresting discussion for anyone who has never seen the proof or is not advanced enough to understand it yet. Say the nth derivative of a function evaluated at zero is 5. If it were a polynomial with a limited number of terms, the nth derivative could be 5 itself. Lets integrate to find the (n - 1)th derivative 5x + c. If the (n - 1)th derivative evaluated at zero were 7, then c would be 7. OOH THIS IS FUN! Lets do it again! Integrate to find the (n - 2)th derivative, we get 5/2 x^2 + 7x + C. If at zero the (n - 2)th derivative is 14, then C would be 14. get So we've found the (n - 2)th Now lets assume the n we spoke of had a value of 3, then the (n - 3)th derivative would be the 0th derivative, or the function itself. So lets integrate the (n - 2)th derivative, to get 5/6 x^3 + 7/2 x^2 + 14x + C, if at zero the function had a value of 17 then C would be 17. So the function would look like 5/2! x^3 + 7/2 x^2 + 14x + 17 if it were in fact a polynomial with n terms. Ok, now lets look back at what happened. Instead of saying (n - 1), (n - 2) or (n - 3) for each scenario lets just say u for whatever derivative we're working with at the time. The u'th derivative evaluated at zero was always some consant "c" (where u is some number between n and 0) this term is integrated u times and appears in the final function. Anyone familiar with calculus can see that the this term would end up being ( c x^u )/ u!. So the function of x would be Summation of the u'th derivative at (0) multiplied by (x^u) / u! from u = infinity (ideally) to u = 0. We could reverse the order of the sum (which won't effect the answer) and sum from u = 0 to u = infinity, we would end up with: f(0)/0! + f'(0)x/1! + f''(0)/2! + f'''(0)/3! ...... Well what do you know? The macluarin polynomial! Like I said this is not a proof. We made a bunch of assumptions, assuming the function was a polynomial when it may not be. We just used the clues to "mold" a polynomial with similar characteristics, (or exact when evaluated at 0) but I suppose the numerous stipulations tie the curve down to various places, and kind of leave little room for the function to stray far from the other on the inbetween points. Also we assumed the nth derivatve was a constant when it may not be at all. The nth derivative could have been 5 cos(x) However, if the n is very very large then this mistake should have little effect on the final function. Why? Because the limit of a x^n/n! as n approaches infinity is in fact zero. (I forget how you prove this but I remember doing it. Hmm... gotta review) Anyway, that doesn't prove it converges but it would prove it diverges if if the limit was not zero. So its at least consistant. Anyways, this doesn't fully explain or proove why they work, but I think it gives you a pretty good idea of HOW they work. Like peeking under the hood of a car. Doesn't reveal everything but helps you understand it at least on a basic level. Whoever this Mr. Maclaurin was, he was a complete and utter genius who could probably move things with his mind. Last edited by mikau (2006-07-01 09:40:48) A logarithm is just a misspelled algorithm.
{"url":"http://www.mathisfunforum.com/viewtopic.php?pid=37359","timestamp":"2014-04-20T06:18:05Z","content_type":null,"content_length":"27213","record_id":"<urn:uuid:39ee1fba-447d-4b55-8834-8343d8efdcee>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00142-ip-10-147-4-33.ec2.internal.warc.gz"}
Jacobi elliptic function dn: Representations through equivalent functions Representations through equivalent functions With inverse function With related functions Involving am Involving one other Jacobi elliptic function Involving cd Involving cn Involving cs Involving dc Involving ds Involving nc Involving nd Involving ns Involving sc Involving sd Involving sn Involving two other Jacobi elliptic functions Involving cd and ncnc Involving cd and nc Involving cs and nd Involving dc and cn Involving dc and nc Involving dc and nd Involving ds and ns Involving ds and sn Involving nc and nd Involving nd and ns Involving nd and sc Involving ns and sd Involving sd and sn Involving three other Jacobi elliptic functions Involving four other Jacobi elliptic functions Involving five other Jacobi elliptic functions Involving Weierstrass functions Involving theta functions
{"url":"http://functions.wolfram.com/EllipticFunctions/JacobiDN/27/ShowAll.html","timestamp":"2014-04-19T14:48:47Z","content_type":null,"content_length":"146384","record_id":"<urn:uuid:663978c7-8c30-447d-8caa-a1254c08377f>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00486-ip-10-147-4-33.ec2.internal.warc.gz"}
MONSTR: term graph rewriting for parallel machines - Proc. 4th International Workshop on Graph Grammars , 1991 "... This paper gives some examples of how computation in a number of languages may be described as graph rewriting, giving the Dactl notation for the examples shown. It goes on to present the Dactl model more formally before giving a formal definition of the syntax and semantics of the language. 2 Examp ..." Cited by 34 (7 self) Add to MetaCart This paper gives some examples of how computation in a number of languages may be described as graph rewriting, giving the Dactl notation for the examples shown. It goes on to present the Dactl model more formally before giving a formal definition of the syntax and semantics of the language. 2 Examples of Computation by Graph Rewriting - J.UCS , 1995 "... Abstract: A translation of the π-calculus into the MONSTR graph rewriting language is described and proved correct. The translation illustrates the heavy cost in practice of faithfully implementing the communication primitive of the π-calculus and similar process calculi. It also illustrates the con ..." Cited by 8 (8 self) Add to MetaCart Abstract: A translation of the π-calculus into the MONSTR graph rewriting language is described and proved correct. The translation illustrates the heavy cost in practice of faithfully implementing the communication primitive of the π-calculus and similar process calculi. It also illustrates the convenience of representing an evolving network of communicating agents directly within a graph manipulation formalism, both because the necessity to use delicate notions of bound variables and of scopes is avoided, and also because the standard model of graphs in set theory automatically yields a useful semantics for the process calculus. The correctness proof illustrates many features typically encountered in reasoning about graph rewriting systems, and particularly how serialisation techniques can be used to reorder an arbitrary execution into one having stated desirable properties. - Journal of Universal Computer Science , 1996 "... Abstract: This is the first in a series of papers dealing with the implementation of an extended term graph rewriting model of computation (described by the DACTL language) on a distributed store architecture. In this paper we set out the high level model, and under some simple packet store model is ..." Cited by 6 (5 self) Add to MetaCart Abstract: This is the first in a series of papers dealing with the implementation of an extended term graph rewriting model of computation (described by the DACTL language) on a distributed store architecture. In this paper we set out the high level model, and under some simple packet store model is compared to a more realistic and finegrained packet store model, more closely related to the properties of a genuine distributed store architecture, and the differences are used to inspire the definition of the MONSTR sublanguage of DACTL, intended for direct execution on the machine. Various alternative operational semantics for MONSTR are proposed to reflect more closely the finegrained packet store model, and the prospects for establishing correctness are discussed. The detailed treatment of the alternative models, in the context of suitable sublanguages of MONSTR where appropriate, are subjects for subsequent papers. , 1992 "... A methodology for polymorphic type inference for general term graph rewriting systems is presented. This requires modified notions of type and of type inference due to the absence of structural induction over graphs. Induction over terms is replaced by dataflow analysis. 1 Introduction Term graphs ..." Cited by 5 (1 self) Add to MetaCart A methodology for polymorphic type inference for general term graph rewriting systems is presented. This requires modified notions of type and of type inference due to the absence of structural induction over graphs. Induction over terms is replaced by dataflow analysis. 1 Introduction Term graphs are objects that locally look like terms, but globally have a general directed graph structure. Since their introduction in Barendregt et al. (1987), they have served the purpose of defining a rigorous framework for graph reduction implementations of functional languages (Peyton-Jones (1987)). This was the original intention. However the rewriting of term graphs defined in the operational semantics of the model, makes term graph rewriting systems (TGRSs) interesting models of computation in their own right. One can thus study all sorts of issues in the specific TGRS context. Typically one might be interested in how close TGRSs are to TRSs and this problem is examined in Barendregt et al. (19... - Journal of Programming Languages , 1997 "... Two superficially similar graph rewriting formalisms, Interaction Nets and MONSTR, are studied. Interaction Nets come from multiplicative Linear Logic and feature undirected graph edges, while MONSTR arose from the desire to implement generalized term graph rewriting efficiently on a distributed arc ..." Cited by 3 (3 self) Add to MetaCart Two superficially similar graph rewriting formalisms, Interaction Nets and MONSTR, are studied. Interaction Nets come from multiplicative Linear Logic and feature undirected graph edges, while MONSTR arose from the desire to implement generalized term graph rewriting efficiently on a distributed architecture and utilizes directed graph arcs. Both formalisms feature rules with small left-hand sides consisting of two main graph nodes. A translation of Interaction Nets into MONSTR is described for both typed and untyped nets, while the impossibility of the opposite translation rests on the fact that net rewriting is always Church–Rosser while MONSTR rewriting is not. Some extensions to the net formalism suggested by the relationship with MONSTR are discussed, as well as some related implementation issues. - Journal of Programming Languages , 1997 "... this paper we try to bridge the gap between the two formalisms by showing how concurrent logic languages can be implemented using graph rewriting. In particular, we develop techniques for mapping a wide class of CLLs including Parlog, GHC, Strand, Janus and a restricted subset of the Concurrent Prol ..." Cited by 1 (1 self) Add to MetaCart this paper we try to bridge the gap between the two formalisms by showing how concurrent logic languages can be implemented using graph rewriting. In particular, we develop techniques for mapping a wide class of CLLs including Parlog, GHC, Strand, Janus and a restricted subset of the Concurrent Prolog family onto Dactl, a compiler target language based on graph rewriting. We discuss the problems found in the process and the adopted solutions. The paper contributes to related research by: # examining the potential of graph reduction as a suitable model for implementing CLLs in terms of expressiveness and efficiency , 1996 "... The extended term graph rewriting formalism of MONSTR is described, together with some of its more important rigorously established properties, particularly regarding serialisability and acyclicity. This basis is used for giving a convenient description of the global runtime structure of a concurren ..." Cited by 1 (1 self) Add to MetaCart The extended term graph rewriting formalism of MONSTR is described, together with some of its more important rigorously established properties, particularly regarding serialisability and acyclicity. This basis is used for giving a convenient description of the global runtime structure of a concurrent object oriented language. The formalism proves especially convenient for describing very precisely a variety of intended synchronisation properties of objects in a concurrent OOL, and this flexibility is illustrated by considering a variety of possible operational semantics for a simple counter object. A lower bound object example illustrates that even more extreme synchronisation properties for objects may be contemplated without stretching the capabilities of the MONSTR formalism. The presentation is independent of any specific high level OOL. Key Words: Object Oriented Languages, Object Synchronisation, Term Graph Rewriting, MONSTR, Distributed Processing, Serialisability. 1 - Journal of Universal Computer Science , 1995 "... : A translation of the p-calculus into the MONSTR graph rewriting language is described and proved correct. The translation illustrates the heavy cost in practice of faithfully implementing the communication primitive of the p-calculus and similar process calculi. It also illustrates the convenience ..." Add to MetaCart : A translation of the p-calculus into the MONSTR graph rewriting language is described and proved correct. The translation illustrates the heavy cost in practice of faithfully implementing the communication primitive of the p-calculus and similar process calculi. It also illustrates the convenience of representing an evolving network of communicating agents directly within a graph manipulation formalism, both because the necessity to use delicate notions of bound variables and of scopes is avoided, and also because the standard model of graphs in set theory automatically yields a useful semantics for the process calculus. The correctness proof illustrates many features typically encountered in reasoning about graph rewriting systems, and particularly how serialisation techniques can be used to reorder an arbitrary execution into one having stated desirable properties. Key Words: Concurrency, Pi-Calculus, Term Graph Rewriting, MONSTR, Process Networks, Simulation, Serialisability. Ca... , 1996 "... MONSTR Rule Systems R. Banach UMCS-96-7-3 Computer Science University of Manchester Technical Report Series University of Manchester Department of Computer Science ISSN 1361 - 6161 2 A Fibration Semantics for Pi-Calculus Modules via Abstract MONSTR Rule Systems* R. Banach Department of ..." Add to MetaCart MONSTR Rule Systems R. Banach UMCS-96-7-3 Computer Science University of Manchester Technical Report Series University of Manchester Department of Computer Science ISSN 1361 - 6161 2 A Fibration Semantics for Pi-Calculus Modules via Abstract MONSTR Rule Systems* R. Banach Department of Computer Science University of Manchester Oxford Road, Manchester, U.K. banach@cs.man.ac.uk 31 July 1996 Copyright 1996. All rights reserved. Reproduction of all or part of this work is permitted for educational or research purposes on condition that: (1) this copyright notice is included, (2) proper attribution to the author or authors is made, and (3) no commercial gain is involved. Recent technical reports issued by the Department of Computer Science, Manchester University, are available by anonymous ftp from ftp.cs.man.ac.uk in the directory pub/TR. The files are stored as PostScript, in compressed form, with the report number as filename. They can also be obtained on WWW via ...
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=543586","timestamp":"2014-04-18T17:14:33Z","content_type":null,"content_length":"35394","record_id":"<urn:uuid:605b0c97-b960-4e9b-8701-6219f20f0640>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00070-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Forum Discussions Math Forum Ask Dr. Math Internet Newsletter Teacher Exchange Search All of the Math Forum: Views expressed in these public forums are not endorsed by Drexel University or The Math Forum. Topic: INFINITE SETS IN PROLOG Replies: 4 Last Post: Feb 25, 2013 7:18 PM Messages: [ Previous | Next ] BruceS Re: INFINITE SETS IN PROLOG Posted: Feb 25, 2013 6:54 PM Posts: 153 Registered: 8/23/05 On 02/25/2013 02:23 PM, Graham Cooper wrote: > On Feb 26, 6:40 am, BruceS <bruce...@hotmail.com> wrote: >> <snip> >> Never mind infinite sets in Prolog, what about your finite series of >> incomplete excuses, tied to your failure to pay Brad the $1000 you owe >> him? Why should anyone spend time on you if you can't even honor the >> agreements you make? > For someone in the Triple 9 Society you couldn't even get the homework > question right! You have again forgotten, or misconstrued, what I've said. Don't you get tired of being wrong all the time? Really though, we can set all that aside, if you would just pay your $1,000 debt to Brad. Date Subject Author 2/23/13 Graham Cooper 2/25/13 Re: INFINITE SETS IN PROLOG BruceS 2/25/13 Re: INFINITE SETS IN PROLOG Graham Cooper 2/25/13 Re: INFINITE SETS IN PROLOG BruceS 2/25/13 Re: INFINITE SETS IN PROLOG Graham Cooper
{"url":"http://mathforum.org/kb/message.jspa?messageID=8414972","timestamp":"2014-04-20T19:13:47Z","content_type":null,"content_length":"21326","record_id":"<urn:uuid:d0dd00e0-4d9e-4f53-abe5-c4cf3ad7b966>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00507-ip-10-147-4-33.ec2.internal.warc.gz"}
brain sizes: Einstein's and women's John Knight johnknight at usa.com Wed Jul 10 02:24:10 EST 2002 "Mark D. Morin" <mdmpsyd at NOSPAMgwi.net> wrote in message news:3D2BA223.4020605 at NOSPAMgwi.net... > >>>Most of the corrlations I've gotten are in the range of 0.6, so it > >>>really be appreciated if you could provide a reference to the above. > >> > >>0.6 is not a low correlation--it explains over one third of the > >>variation in scores. In any other field, an R of this size would be > >>considered robust. > >> > >>my resources are at the office, I'll dig them out today. > >> > > > > > > Yes, 0.6 really is good correlation, but when compared to the 0.8795 > > correlation between brain size and GRE Quantitative, you have to wonder > > what's missing from "IQ tests". > and there still isn't a reliable source for this statistic. For which statistic? Are you questioning Philippe Rushton's measurements of brain size, GRE Quantitative Scores, or the method for calculating Run the data at http://christianparty.net/grebrainsize.htm yourself. Or use the following figures and see what you get for r-squared. The first column of numbers is brain size in cubic centimeters, and the second column is GRE Quantitative Scores Asian men 1,472 638 White men 1,416 586 Asian women 1,358 572 White women 1,308 514 African men 1,319 446 African women 1,217 404 If you manage to get something much different than 0.87 to 0.88, please let me know how you did it. Just comparing the highest score to the lowest score you could argue that each 1 cc increase in brain size is equivalent to a 1 point increase in GRE Scores, which is not insignificant. If you remove the furthest outlier, which is Black men's brains [no pun intended], then r-squared increases to 0.9583. Such a small variation could easily be due to errors in either measurement rather than some other unknown factor. For example, Indian men scored 14 points higher in 1998 than they did in 1997, whereas Puerto Rican women scored 6 points lower, for a 20 point swing relative to each other. Usually the variation from year to year is only about 4 points, but because of this variation, brain size may be a more accurate measurement of someone's quantitative ability than the GRE Quantitative score itself ); John Knight More information about the Neur-sci mailing list
{"url":"http://www.bio.net/bionet/mm/neur-sci/2002-July/048877.html","timestamp":"2014-04-17T21:49:46Z","content_type":null,"content_length":"4957","record_id":"<urn:uuid:171f4e81-e020-49a1-8000-310760262ce5>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00601-ip-10-147-4-33.ec2.internal.warc.gz"}
Auburn, MA Algebra 2 Tutor Find an Auburn, MA Algebra 2 Tutor ...I have over 8 years experience tutoring math at colleges and on a private basis. Not only am I a nationally certified peer tutor through the College Reading & Learning Association, but I am also licensed to teach math in the state of Massachusetts at the high school level. Also, I've worked professionally as a corporate trainer. 15 Subjects: including algebra 2, calculus, geometry, statistics ...I have also passed the following MTELs: Communication and Literacy; Middle School Math; History; and Foundations of Reading. I am currently working on a second M.Ed. in Moderate Disabilities (5-12) at Westfield State University. I have an M.Ed. in History from Westfield State University. 34 Subjects: including algebra 2, English, reading, writing ...I've tutored nearly all the students I've worked with for many years, and I've also frequently tutored their brothers and sisters - also for many years. I enjoy helping my students to understand and realize that they can not only do the work - they can do it well and they can understand what they're doing. My references will gladly provide details about their own experiences. 11 Subjects: including algebra 2, geometry, algebra 1, precalculus ...People are surprised at how quickly they can learn these subjects once they are given a clear explanation. I have over 20 years of experience tutoring accounting, finance, economics and statistics. I have a master's degree in accounting, and I currently teach statistics, accounting, and finance at local colleges, where students have given me great evaluations. 14 Subjects: including algebra 2, statistics, accounting, GRE I am a licensed mathematics teacher, high school athletic coach and small business owner. I tutor for MCAS and special needs students at two area high schools. I teach remedial math courses at a state college for students who have struggled with math courses during their high school years. 21 Subjects: including algebra 2, statistics, GRE, geometry Related Auburn, MA Tutors Auburn, MA Accounting Tutors Auburn, MA ACT Tutors Auburn, MA Algebra Tutors Auburn, MA Algebra 2 Tutors Auburn, MA Calculus Tutors Auburn, MA Geometry Tutors Auburn, MA Math Tutors Auburn, MA Prealgebra Tutors Auburn, MA Precalculus Tutors Auburn, MA SAT Tutors Auburn, MA SAT Math Tutors Auburn, MA Science Tutors Auburn, MA Statistics Tutors Auburn, MA Trigonometry Tutors
{"url":"http://www.purplemath.com/Auburn_MA_Algebra_2_tutors.php","timestamp":"2014-04-20T11:32:06Z","content_type":null,"content_length":"24174","record_id":"<urn:uuid:b8152356-7f7b-4e57-b100-5c4fe8025cc8>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00514-ip-10-147-4-33.ec2.internal.warc.gz"}
A First Course in Probability(Ross, Question from CH6) December 1st 2008, 04:21 PM #1 Junior Member Nov 2008 p.313, #7 Consider a sequence of independent Bernoulli trials, each with success probability p. Let X1 be the number of failures preceding the first success, and let X2 be the number of failures between the first two successes. Find the joint density of <X1, X2>. (Note: X1 and X2 are called "waiting times") P.315, #21 Let f(x,y) = 24xy on for (x,y) in the set S = {(x,y)| 0<=x<=1, 0<=y<=1, 0<= x+y<=1}, and f(x,y)=0 otherwise. 1) Find the marginal density fy(y). I was thinking integral(f(x,y),dx,0,1-y)) bt doesn't seem right. (12y^3-24y^2+12y) The answer should satisfy that integral of f from -inf to inf should be 1. Right? 2) Find E[Y] integral(f(x,y),dx,0,1-y), then you should get 12y^3-24y^2+12y. and take integral(12y^3-24y^2+12y, dy, 0, 1), you will get 1. There is a double integral here to satisfy your statement "The answer should satisfy that integral of f from -inf to inf should be 1". I hope that helps. Question re written wrong and help please p.313, #7 Consider a sequence of independent Bernoulli trials, each with success probability p. Let X1 be the number of failures preceding the first success, and let X2 be the number of failures between the first two successes. Find the joint density of <X1, X2>. (Note: X1 and X2 are called "waiting times") i am using 5th edition: hi, i was looking at the same question, but you said find the joint density function, it actually says joint mass function (in my version) implying that it is discrete cases, not continuous (unless updated in your version). so as far as i know, we need to use geometric distribution with parameter P for both X and Y. but i am stuck from there onwards. help please December 1st 2008, 05:14 PM #2 Dec 2008 April 30th 2009, 12:42 AM #3 Apr 2009
{"url":"http://mathhelpforum.com/advanced-statistics/62718-first-course-probability-ross-question-ch6.html","timestamp":"2014-04-21T16:08:48Z","content_type":null,"content_length":"35817","record_id":"<urn:uuid:ddc5b5d2-f2b5-433f-8297-d0abfe528ced>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00115-ip-10-147-4-33.ec2.internal.warc.gz"}
Reasoning about functional programs in Nuprl - In: Proceedings of the 12 th IEEE International Conference on Automated Software Engineering, IEEE Computer Society , 1997 "... Proofs in the Nuprl system, an implementation of a constructive type theory, yield “correct-by-construction ” programs. In this paper a new methodology is presented for extracting efficient and readable programs from inductive proofs. The resulting extracted programs are in a form suitable for use i ..." Cited by 18 (5 self) Add to MetaCart Proofs in the Nuprl system, an implementation of a constructive type theory, yield “correct-by-construction ” programs. In this paper a new methodology is presented for extracting efficient and readable programs from inductive proofs. The resulting extracted programs are in a form suitable for use in hierarchical verifications in that they are amenable to clean partial evaluation via extensions to the Nuprl rewrite system. The method is based on two elements: specifications written with careful use of the Nuprl set-type to restrict the extracts to strictly computational content; and on proofs that use induction tactics that generate extracts using familiar fixed-point combinators of the untyped lambda calculus. In this paper the methodology is described and its application is illustrated by example. 1. , 1997 "... on the World Wide Web (\the Web") (www.cs.cornell.edu/Info/NuPrl/nuprl.html) ..." , 2000 "... In this paper, we take an abstract view of search by describing search procedures via particular kinds of proofs in type theory. We rely on the proofs-as-programs interpretation to extract programs from our proofs. Using these techniques we explore, in depth, a large family of search problems by par ..." Cited by 8 (2 self) Add to MetaCart In this paper, we take an abstract view of search by describing search procedures via particular kinds of proofs in type theory. We rely on the proofs-as-programs interpretation to extract programs from our proofs. Using these techniques we explore, in depth, a large family of search problems by parameterizing the speci cation of the problem. A constructive proof is presented which has as its computational content a correct search procedure for these problems. We show how a classical extension to an otherwise constructive system can be used to describe a typical use of the nonlocal control operator call/cc. Using the classical typing of nonlocal control we extend our purely constructive proof to incorporate a sophisticated backtracking technique known as ‘con ict-directed backjumping’ (CBJ). A variant of this proof is formalized in Nuprl yielding a correct-by-construction implementation of CBJ. The extracted program has been translated into Scheme and serves as the basis for an implementation of a new solution to the Hamiltonian circuit problem. This paper demonstrates a nontrivial application of the proofs-as-programs paradigm by applying the technique to the derivation of a sophisticated search algorithm; also, it shows the generality of the resulting implementation by demonstrating its application in a new problem , 1997 "... . This paper deals with a particular approach to the verification of functional programs. A specification of a program can be represented by a logical formula [Con86, NPS90]. In a constructive framework, developing a program then corresponds to proving this formula. Given a specification and a progr ..." Cited by 6 (0 self) Add to MetaCart . This paper deals with a particular approach to the verification of functional programs. A specification of a program can be represented by a logical formula [Con86, NPS90]. In a constructive framework, developing a program then corresponds to proving this formula. Given a specification and a program, we focus on reconstructing a proof of the specification whose algorithmic contents corresponds to the given program. The best we can hope is to generate proof obligations on atomic parts of the program corresponding to logical properties to be verified. First, this paper studies a weak extraction of a program from a proof that keeps track of intermediate specifications. From such a program, we prove the determinism of retrieving proof obligations. Then, heuristic methods are proposed for retrieving the proof from a natural program containing only partial annotations. Finally, the implementation of this method as a tactic of the Coq proof assistant is presented. 1. Introduction A large p... , 1998 "... The topic of this thesis is the extraction of efficient and readable programs from formal constructive proofs of decidability. The proof methods employed to generate the efficient code are new and result in clean and readable Nuprl extracts for two non-trivial programs. They are based on the use of ..." Cited by 3 (0 self) Add to MetaCart The topic of this thesis is the extraction of efficient and readable programs from formal constructive proofs of decidability. The proof methods employed to generate the efficient code are new and result in clean and readable Nuprl extracts for two non-trivial programs. They are based on the use of Nuprl's set type and techniques for extracting efficient programs from induction principles. The constructive formal theories required to express the decidability theorems are of independent interest. They formally circumscribe the mathematical knowledge needed to understand the derived algorithms. The formal theories express concepts that are taught at the senior college level. The decidability proofs themselves, depending on this material, are of interest and are presented in some detail. The proof of decidability of classical propositional logic is relative to a semantics based on Kleene's strong three-valued logic. The constructive proof of intuitionistic decidability presented here is the first machine formalization of this proof. The exposition reveals aspects of the Nuprl tactic collection relevant to the creation of readable proofs; clear extracts and efficient code are illustrated in the discussion of the proofs. "... We present the base realizations of the imperative program synthesis system bases on logic. We give rules of simplications which dene a relation between realizations named the computational equivalence. Using this relation allows to transform realizations into simpler equivalent realizations using ..." Add to MetaCart We present the base realizations of the imperative program synthesis system bases on logic. We give rules of simplications which dene a relation between realizations named the computational equivalence. Using this relation allows to transform realizations into simpler equivalent realizations using less ressources for their executions. 1 Introduction The system is an imperative program synthesis system based on a non-classical linear logic of actions and causality, the logic. The formulae of logic can describe situations, actions and eternal truths. A situation formula is understood as the action to do things in such a way that the situation holds. Thus, everything become actions. The synthesized programs are C ++ object-oriented programs. The main tool of the system is the realized formulae. A realized formulae is an expression R : A meaning that the realization R realizes the formula A. A realization R is an object in the sense of object-oriented programming which has a met... "... The formalization of divisibility theory over cancellation monoids in Nuprl is described. The main theorems presented concern the existence and uniqueness of factorisations. Issues addressed include how to make formalized mathematics readable and the use of automated inference. The constructive nat ..." Add to MetaCart The formalization of divisibility theory over cancellation monoids in Nuprl is described. The main theorems presented concern the existence and uniqueness of factorisations. Issues addressed include how to make formalized mathematics readable and the use of automated inference. The constructive nature of mathematics in Nuprl is also discussed. , 2009 "... Trees carrying information stored in their nodes are a fundamental abstract data type. Approaching trees in a formal constructive environment allows us to realize properties of trees, inherent in their structure. Specifically we will look at the evidence provided by the predicates which operate on t ..." Add to MetaCart Trees carrying information stored in their nodes are a fundamental abstract data type. Approaching trees in a formal constructive environment allows us to realize properties of trees, inherent in their structure. Specifically we will look at the evidence provided by the predicates which operate on these trees. This evidence, expressed in terms of logical and programming languages, is realizable only in a constructive context. In the constructive setting, membership predicates over recursive types are inhabited by terms indexing the elements that satisfy the criteria for membership. In this paper, we motivate and explore this idea in the concrete setting of lists and trees. We first provide a background in constructive type theory and show relavent properties of trees. We present and define the concept of inhabitants of a generic shape type that corresponds naturally and exactly to the inhabitants of a membership predicate. In this context, (λx.T rue) ∈ S is the set of all indexes into S, but we show that not all subsets of indexes are expressible by strictly local predicates. Accordingly, we extend our membership predicates to predicates that compute and hold the state “from above” as well as allow “looking below”. The modified predicates of this form are complete in the sense that they can express every subset of indexes in S. These ideas are motivated by experience programming in Nuprl’s constructive type theory and the theorems for lists and trees have been formalized and mechanically checked. 1
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=1834048","timestamp":"2014-04-25T05:59:06Z","content_type":null,"content_length":"33380","record_id":"<urn:uuid:9653a219-18db-40a2-bea9-4492a84c2b64>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00245-ip-10-147-4-33.ec2.internal.warc.gz"}
Bode Plot of an Open-loop Transfer Function 1. 75885 Bode Plot of an Open-loop Transfer Function Consider the following system in the attached file. Draw a Bode plot of the open-loop transfer function. Please see the attached file for the fully formatted problem. A Bode Plot of an Open-loop Transfer Function is provided. The solution is detailed and well presented. The response was given a rating of "5/5" by the student who originally posted the question.
{"url":"https://brainmass.com/math/graphs-and-functions/75885","timestamp":"2014-04-19T09:33:29Z","content_type":null,"content_length":"26045","record_id":"<urn:uuid:294d6333-2084-4813-aa5b-dedf8b406725>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00274-ip-10-147-4-33.ec2.internal.warc.gz"}
Windows Phone 7 Game Development : The World of 3D Graphics - Rendering 3D Objects Moving objects around our 3D game world is great, but we need to be able to create 3D objects too; so far, we've worked just with flat rectangles. This section discusses how solid objects can be The objects that we have been drawing up to this point have defined four vertices, all with a z value of zero, and used a triangle strip to combine them into the rendered shape. When we move into three-dimensional objects, we probably can't use triangle strips. Every triangle of a triangle strip shares an edge with the previous triangle, and with 3D objects we will very quickly find that we can't draw objects in this way. Instead, we will use a list of individual triangles, which gives us the flexibility to draw whatever triangle we need wherever we need it. 1. Defining a 3D Object To start with we will define our 3D object by manually providing all its vertex coordinates. This is fairly straightforward for simple shapes, but does quickly become impractical once we want to move on to more complicated objects. A cube consists of six square faces and eight vertices. As each square needs to be rendered as two triangles, we end up with a total of 12 triangles to draw, as shown in Figure 1. Figure 1. The triangles required to build a 3D cube Because we will draw individual triangles rather than use a triangle strip, we need to specify each triangle coordinate individually. This means that when two triangles share a single coordinate, we actually need to specify the coordinate twice, once for each of the triangles. As a result, we have to provide a total of 36 vertices, three for each triangle. Because there are only eight distinct vertices forming the cube, this respecification of vertices is quite wasteful and requires XNA to perform the same calculations over and over again. To build the vertices of the cube, we simply declare an array of vertices and add to it sets of three values, representing the vertices of each of the triangles. The coordinates for the front face of a unit-size cube can be seen in Listing 1. Note that the z coordinate in each coordinate is 0.5, meaning that it extends half a unit toward the viewpoint. Example 1. Defining the front face of a cube │ // Create and initialize the vertices │ │ _vertices = new VertexPositionColor[6]; │ │ │ │ // Set the vertex positions for a unit size cube. │ │ int i = 0; │ │ // Front face... │ │ _vertices[i++].Position = new Vector3(-0.5f, −0.5f, 0.5f); │ │ _vertices[i++].Position = new Vector3(-0.5f, 0.5f, 0.5f); │ │ _vertices[i++].Position = new Vector3(0.5f, −0.5f, 0.5f); │ │ _vertices[i++].Position = new Vector3(0.5f, −0.5f, 0.5f); │ │ _vertices[i++].Position = new Vector3(-0.5f, 0.5f, 0.5f); │ │ _vertices[i++].Position = new Vector3(0.5f, 0.5f, 0.5f); │ │ │ │ │ Plotting out these coordinates shows that we have indeed formed a square that will form the front face of the cube, as shown in Figure 2. Figure 2. The vertices forming the front face of the cube The array is extended to cover all the faces of the cube, extending into the 3D space by using positive and negative values for the z positions. The full array is not included here because it is fairly large and not particularly interesting, but it can be seen in full inside the CubeObject.BuildVertices function in the ColoredCubes example project. The code in this function also sets the vertices for each face to be a different color to make the cube look nicer. The CubeObject class declares its array of vertices as static, so only a single instance of the array exists and is shared by all instances of the CubeObject class. Because the contents of this array are identical for every class instance, declaring the array in this way means that .NET allocates memory for the vertices only once for the whole application instead of once per cube object, saving some precious memory. With all the vertices defined, the object can be rendered using exactly the same code used for flat objects. The result is shown in Figure 3. Figure 3. The cube resulting from the set of 3D vertices Fundamentally, that is all there is to it! If you run the ColoredCubes example project, you will see how this basic object can be easily reused within the game engine to create a much more visually exciting scene, as shown in Figure 4. This example creates 100 cubes, gives each a random angle and position, and then rotates them around the y axis, resulting in a swirling tornado of colored Figure 4. The ColoredCubes example project
{"url":"http://mscerts.programming4.us/windows_phone/Windows%20Phone%207%20Game%20Development%20%20%20The%20World%20of%203D%20Graphics%20-%20Rendering%203D%20Objects.aspx","timestamp":"2014-04-19T22:08:14Z","content_type":null,"content_length":"40308","record_id":"<urn:uuid:d9d86374-305f-46cb-9f3d-31c11c9a9121>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00345-ip-10-147-4-33.ec2.internal.warc.gz"}
7 years of NKS—and its first killer app May 14, 2009 — Stephen Wolfram May 14, 2009 marks the 7^th anniversary of the publication of A New Kind of Science, and it has been my tradition on these anniversaries to write a short report on the progress of NKS. It has been fascinating over the past few years to watch the progressive absorption of NKS methods and the NKS paradigm into countless different fields. Sometimes there’s visible mention of NKS, though often there is not. There has been an inexorable growth in the use of the types of models pioneered in NKS. There has been steadily increasing use of the kinds of computational experiments and investigations introduced in NKS. And the NKS way of thinking about computation and in terms of computation has become steadily more widespread. Many of the specific investigations made in the NKS book have now been extended and enhanced. And even the results on fundamental physics in the NKS book are now coming closer to the mainstream. The trickle of academic work aimed directly at pure NKS—the basic investigation of simple programs and the computational universe—has turned into a stream, though tremendous opportunity for growth And I continue to find it remarkable how many thought leaders that I run across in incredibly diverse areas turn out to have read the NKS book, often in great detail. In June we’ll be holding our 7^th NKS Summer School (this year in Italy—the first time outside the United States). Every year we receive a progressively larger number of highly qualified applications, and this year will be our largest Summer School to date. But for me the biggest thing that’s happened this year is the emergence of Wolfram|Alpha. When I was writing the NKS book I kept on wondering what the first “killer app” (to use a phrase from the software industry) for NKS would be. I tried to think back what one would have imagined in 1936, when the idea of universal computing was introduced. Could one have predicted what the first killer apps for computers would be? As it was, first there were databases—which drove the mainframe computer industry, and later there were word processors—which drove the personal computer industry. Despite their tremendous practical importance, databases and word processors are really quite prosaic applications of an idea as powerful as universal computation. And both of these applications could probably have been done even without the full concept of universal computation. But the point is that the paradigm of universal computation was crucial in even imagining that either of these applications would make sense. And so it is now with NKS and Wolfram|Alpha. Wolfram|Alpha is, I believe, going to be the first killer app of NKS. And remarkable though Wolfram|Alpha is, it is at some level still prosaic relative to the full power of the ideas in NKS. Yet without the NKS paradigm, I cannot imagine I would ever have thought that Wolfram|Alpha could make sense. There is an immensely complex web of systematizable knowledge out there in the world. And before NKS, I would have assumed that to handle something of this complexity would have required building a system that is somehow correspondingly complex—and in practice completely out of reach. But from NKS we have learned that even highly complex things can have their origins in simple rules and simple programs. And this is what inspired me to believe that building Wolfram|Alpha might be possible. As a practical matter, many algorithms in Wolfram|Alpha were found by NKS methods—by searching the computational universe for programs that achieve particular purposes. And there is a curious sense in which the discoveries of NKS about computational irreducibility are what make Wolfram|Alpha possible. For one of the crucial features of Wolfram|Alpha is its ability to take free-form linguistic input, and to map it onto its precise symbolic representations of computations. Yet if these computations could be of any form whatsoever, it would be very difficult to recognize the linguistic inputs that represent them. But from NKS we know that computations fall into two classes: computationally reducible and computationally irreducible. NKS shows that in the abstract space of all possible computations the computationally irreducible are much the most common. But here is the crucial point: because those computations are not part of what we have historically studied or discussed, no systematic tradition of human language exists to describe them. So when we use natural human language as input to Wolfram|Alpha, we are inevitably going to be describing that thin set of computations that have long linguistic traditions, and are computationally Those computations cover the traditional sciences. But in a sense it is the very ubiquity of computational irreducibility that forces there to be only small islands of computational reducibility—which can readily be identified even from quite vague linguistic input. If one looks at Wolfram|Alpha today, much of what it computes is firmly based on OKS (the “Old Kind of Science”), and in this sense Wolfram|Alpha can be viewed as a shining example of what can be achieved with pre-NKS mathematical science. And curiously, after all these years, it is also perhaps the first clear consumerized example of universal computation at work. For now, for the first time, anyone will be able to walk up to a computer and immediately see just how diverse a range of possible computations it can do. So what about NKS? NKS is certainly crucial to the very conceptualization of Wolfram|Alpha. And even today one can use Wolfram|Alpha to do a little NKS: one can type in “rule 30″, or ask about other NKS systems that can readily be specified in linguistic terms. But in the future there is tremendous opportunity to do more with NKS in Wolfram|Alpha. Today, Wolfram|Alpha uses existing models from science and other areas, then does computations based on these models. But what if it could find new models? What if it could invent on the fly? Do science on the fly? That is precisely what NKS suggests should be possible. Exploring the computational universe on request, and finding things out there that are useful for some particular specified purpose. We started a small experiment a few years ago with WolframTones where we use NKS to invent new musical tunes. But there is vastly more that can be done—directing with ordinary language, but discovering automatically with NKS. Whether today’s computers are fast enough to do this well I do not know. But perhaps by next year, Wolfram|Alpha will not only be a killer app made possible by NKS—it will also provide an outlet for the full richness of the computational universe that has been revealed to us by NKS. But for now: tomorrow (May 15) is the day we begin to make Wolfram|Alpha live—the first killer app of NKS. (See the Wolfram|Alpha Blog to follow the launch.)
{"url":"http://blog.wolfram.com/2009/05/14/7-years-of-nksand-its-first-killer-app/","timestamp":"2014-04-19T01:51:08Z","content_type":null,"content_length":"58835","record_id":"<urn:uuid:779f384b-a9e7-4e88-9b88-72535a76f0a6>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00524-ip-10-147-4-33.ec2.internal.warc.gz"}
the definition of de gree early 13c., from O.Fr. degre "a degree, step, rank," from V.L. *degradus "a step," from L.L. degredare, from L. de- "down" + gradus "step" (see ). Most modern senses date from M.E., from notion of a hierarchy of steps. Meaning "a grade of crime" is 1670s; that of "a unit
{"url":"http://dictionary.reference.com/browse/de+gree","timestamp":"2014-04-19T14:49:31Z","content_type":null,"content_length":"110269","record_id":"<urn:uuid:8eda89be-9e84-4a58-9343-960eab98b597>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00656-ip-10-147-4-33.ec2.internal.warc.gz"}
Which Prime below 1 million can be expressed as the sum of most primes Re: Which Prime below 1 million can be expressed as the sum of most primes Hi Agnishom; I do not use pastebin so I do not know anything about it. Your program got the right answer. If you want to test it again then you could answer you own question that you posted. In mathematics, you don't understand things. You just get used to them. I have the result, but I do not yet know how to get it. All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
{"url":"http://www.mathisfunforum.com/viewtopic.php?pid=229334","timestamp":"2014-04-19T02:09:18Z","content_type":null,"content_length":"11273","record_id":"<urn:uuid:5c14ead9-c34a-4198-8042-95075a3b1746>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00208-ip-10-147-4-33.ec2.internal.warc.gz"}
December 15th 2008, 02:51 PM show that the curves 3x^2+2x-3y^2=1 and 6xy+2y=0 are orthogonal (i.e. there slopes everywhere are perpendicular) thank you, your help is very appreciated! December 15th 2008, 03:24 PM If you take the derivatives of both curves wrt x then the derivative of the first curve is (3x+1)/(3y) and the derivative of the second curve wrt x is (-3y)/(3x+1). So the slopes are opposite reciprocals. This means they are perpendicular. Hope this helps, someone else will hopefully explain it better. December 15th 2008, 03:44 PM im sorry i had a typo. the first equation i forgot to put = 1, i dont know if that changes anything at all or not.. December 15th 2008, 03:57 PM nope because the derivative of a constant is still zero December 15th 2008, 04:29 PM thank you!
{"url":"http://mathhelpforum.com/calculus/65128-curves-print.html","timestamp":"2014-04-19T13:08:55Z","content_type":null,"content_length":"4521","record_id":"<urn:uuid:d699866f-ae0a-4ff1-a9bf-75a08900d03b>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00064-ip-10-147-4-33.ec2.internal.warc.gz"}
A First Course in Machine Learning A First Course in Machine Learning Follow Author: Simon Rogers & Mark Girolami Publisher: Chapman & Hall/CRC ISBN: 978-1439824146 Aimed at: Students preparing for a course in machine learning Rating: 3 Pros: Readable explanations of statistical techniques Cons: Doesn't cover enough about machine learning Reviewed by: Mike James Given the interest in the online course on Machine Learning what could be better than a bit of background reading? This book looks as if it might be exactly the right stuff to get you started - but in practice that are a few things you need to know about it before you decide that it is for you. The most important feature of the book is that it is a statistics-oriented account. In fact many of the chapters would be just as at home in a book on classical statistics. For example, the first chapter is on Linear Modelling and it is essentially about least squares linear regression. It is true that it is packaged up in some of the language of machine learning but what you are presented with could be found in any introduction to statistics. This said, it is well presented and the mathematics is broken down into manageable chunks that you should be able to follow. There is also a web site with MATLAB scripts that lets you try out the models, view the graphs and tweak the parameters. Chapter 2 is more linear modelling, but from the maximum likelihood point of view. Again, this is well explained, but it isn't the stuff that makes machine learning an exciting subject. You could argue that the prospective student of machine learning needs to know all of this before moving on but this isn't really true. There are lots of machine learning techniques that don't need much statistics theory. Chapter 3 introduces the Bayesian approach to machine learning, but this doesn't cover much more than basic Bayesian stats. After going over an example of coin tossing, it moves on to introduce the basic techniques of Bayesian stats. The next chapter pushes this further to some areas where it does look more like machine learning than classical stats. Chapter 5 is about classification, but it is a very narrow approach. We learn about the Bayes classifier and logistic regression but not about discriminant analysis. The second half of the chapter introduces non-probabilistic methods and at this point the book more or less abandons the classical stats approach - it really has no choice because the majority of machine learning algorithms don't have firm theoretical foundations. However, they do have probabilistic heuristics underlying them and, for example, the K nearest neighbour classifier can be viewed as an estimate of the Bayes classifier constructed using the sample density. The chapter concludes with a look at support vector machines. Chapter 6 is about clustering, predominantly the K means approach and the K means augmented by kernel estimation. Again clustering is mostly based on heuristics rather than deep statistical theory so there isn't much justification to make use of the approach used in the earlier parts of the book. The final chapter returns to classical statistics, multivariate statistics this time with a look at principal components and other latent variable models. Again, the presentation is quite good but more suited to a general statistics book than machine learning. At the end of the day the problem with this introduction is that it really doesn't cover the subject it claims to. There are so many missing techniques - the perceptron, neural networks, discriminant analysis, decision trees, Bayesian networks, reinforcement learning, the genetic algorithm and so on. You can argue that some of these techniques are too advanced for a first course, but leaving out so many simply robs the book of any real machine learning flavor. Even the classical statistics that are presented aren't particularly applied to machine learning problems and examples. Instead they relate to data analysis problems that aren't really anything much to do with machine learning. You also don't get any feeling for the way the techniques might be used in programs as online learners. The approach is static, you get some data, analyse it, derive a model, use the model - this really isn't machine learning. Having said all this, I have to admit that I enjoyed reading many of the chapters but because of what I learned about standard statistical analysis rather than machine learning. If you are looking for a book that introduces model fitting in a sort of machine learning context then this is a really good book. If on the other hand you want a first course on machine learning then this one just doesn't cover the ground. SQL Server 2012 Query Performance Tuning Author: Grant Fritchley Publisher: Apress, 2012 ISBN: 978-1430242031 Audience: Database developers and administrators Rating: 4.5 Reviewed by: Kay Ewbank The title of this book is a pretty good summary of the contents, especially with its subtitle of ‘Troubleshoot and optimize query performa [ ... ] + Full Review Joomla! Explained Author: Stephen Burge Publisher: Addison-Wesley ISBN: 978-0321703781 Aimed at: New to intermediate users of Joomla! Rating: 4 Pros: Good introductory explanations Cons: Not good on management issues Reviewed by: Ian Elliot A step-by-step guide to Joomla! 1.6 - does it succeed in explaining this [ ... ] + Full Review More Reviews Last Updated ( Monday, 06 February 2012 ) RSS feed of book reviews only RSS feed of all content Copyright © 2014 i-programmer.info. All Rights Reserved.
{"url":"http://www.i-programmer.info/bookreviews/59-artificial-intelligence/3734-a-first-course-in-machine-learning.html","timestamp":"2014-04-19T22:08:38Z","content_type":null,"content_length":"38873","record_id":"<urn:uuid:69e95c62-f76d-44f1-91c1-8f1ccf59410a>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00010-ip-10-147-4-33.ec2.internal.warc.gz"}
SAS-L archives -- November 2001, week 3 (#132)LISTSERV at the University of Georgia Date: Fri, 16 Nov 2001 11:16:36 -0500 Reply-To: "Huang, Ya" <ya.huang@PFIZER.COM> Sender: "SAS(r) Discussion" <SAS-L@LISTSERV.UGA.EDU> From: "Huang, Ya" <ya.huang@PFIZER.COM> Subject: Re: Q on proc report, compute block? Comments: To: Perry Watts <wattsp@DCA.NET> Content-Type: text/plain; charset="iso-8859-1" Thanks for the suggestion. I believe the macro trick will work. But I still don't understand why mine did not work. According to the Online Doc the syntax for compute block is: compute after <target> and if <target> is omitted, the compute block should be excute at the end of report. And there is also an example in Online Doc for that. Richard Read Allen in a private response provide me another trick, that is to add a dummy var, and change "compute after" to "compute after dummy". It works too. I'm lost. Ya Huang -----Original Message----- From: Perry Watts [mailto:wattsp@DCA.NET] Sent: Thursday, November 15, 2001 8:44 PM To: SAS-L@LISTSERV.UGA.EDU Subject: Q on proc report, compute block? Ya Huang, I think your problem is in the COMPUTE AFTER; -- You are literally "computing after" i.e. after you have read through the entire data set -- so that all variables are re-set to missing. I can see where you cannot literally do computations in this PROC REPORT, since you need medians. I got around this problem by substituting macro variables for your Temp1 data set containing ON1, OM1, OM2. Please see below. data xx; input study $ sex $ age ncrs desc $; B F 67 8 stst B F 34 10 sdfsaf B M 55 7 sdryts B M 73 7 werwer B M 46 9 sdgsf A M 45 3 abcd A M 65 4 bdrf A F 46 3 lsdfsd A F 67 5 sdfas proc sort; by study; proc univariate noprint data=xx; by study; var age ncrs; output out=temp n=n1 median=m1 m2; proc univariate noprint data=xx; var age ncrs; output out=temp1 n=on1 median=om1 om2; proc sql noprint; select left(put(on1,3.)), left(put(om1,5.1)), left(put(om2,5.1)) into :on1, :om1, :om2 from temp1; data xx; merge xx temp; by study; proc sql; create table xx as select xx.*, temp1.* from xx, temp1 order by study options nocenter; proc report data=xx nowd headline; column n1 m1 m2 study sex age ncrs desc; define study / width=5 order; define sex / width=3 order; define age / width=3; define ncrs / width=5; define n1/noprint order; define m1/noprint order; define m2/noprint order; compute after study; line ' '; line 'N=' n1 3.; line 'Median age=' m1 5.1; line 'Median CRS=' m2 5.1; line ' '; compute after; line ' '; line "Total N= &on1"; line "Overall Median age= &om1"; line "Overall Median CRS= &om2"; line ' '; study sex age ncrs desc A F 46 3 lsdfsd 67 5 sdfas M 45 3 abcd 65 4 bdrf N= 4 Median age= 55.5 Median CRS= 3.5 B F 67 8 stst 34 10 sdfsaf M 55 7 sdryts 73 7 werwer 46 9 sdgsf N= 5 Median age= 55.0 Median CRS= 8.0 Total N= 9 Overall Median age= 55.0 Overall Median CRS= 7.0 Perry Watts
{"url":"http://listserv.uga.edu/cgi-bin/wa?A2=ind0111c&L=sas-l&F=&S=&P=15055","timestamp":"2014-04-20T00:40:06Z","content_type":null,"content_length":"12133","record_id":"<urn:uuid:a9b6535a-be01-46da-b2f0-3a6c44b48498>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00020-ip-10-147-4-33.ec2.internal.warc.gz"}
arccos of arcsin? October 9th 2008, 10:52 PM arccos of arcsin? Question: "Find the derivative of arccos(arcsin(t))". which should be same as cos^-1 (sin^-1 (t)) I have no idea how to approach this problem. I know that sin(arcsinx) = x if [-1,1] and arc(sinx) = x if [-pi/2, pi/2] but my prof never explained the arccos and thats why I really don't know how to start doing this question! October 9th 2008, 11:51 PM Question: "Find the derivative of arccos(arcsin(t))". which should be same as cos^-1 (sin^-1 (t)) I have no idea how to approach this problem. I know that sin(arcsinx) = x if [-1,1] and arc(sinx) = x if [-pi/2, pi/2] but my prof never explained the arccos and thats why I really don't know how to start doing this question! you are not asked to evaluate the function, you are asked to find its derivative. recall that $\frac d{dx} \arccos [f(x)] = - \frac {f'(x)}{\sqrt{1 - [f(x)]^2}}$ and $\frac d{dx} \arcsin x = \frac 1{\sqrt{1 = x^2}}$ now use the chain rule
{"url":"http://mathhelpforum.com/calculus/52971-arccos-arcsin-print.html","timestamp":"2014-04-17T18:45:53Z","content_type":null,"content_length":"5014","record_id":"<urn:uuid:8097676a-d4bb-4f2b-a239-2a1d76c9d440>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00586-ip-10-147-4-33.ec2.internal.warc.gz"}
Trig identities. May 26th 2010, 11:23 AM #1 Junior Member May 2010 Trig identities. Im having a lot of trouble solving the following trig identities: a) secē x-2 sec x cos x + cosē = tanē x - sinē x b) $tan x + 1/ tanx = 1/sin x cos x$ My working: For a): Trigonometric identity: secx cosx=1 secēx-2 sec x cos x + cosē =secēx + cosē-2 Identity: secēx = 1/cosēx Use the Identity: secēx + cosē- 2 1/cos^2x +cosēx-2 Now I am very confused, really unsure on how to prove this identity. I would greatly appreciate any helpful tips. For b) tanx+1/tanx =1/(sinx cosx ) Identity: 1/tanx = cotx : Now im completely lost, the R.H.S side is turning out to be "csc x sec x" . I would greatly appreciate any helpful tips, since I'm very confused here. Last edited by mr fantastic; June 5th 2010 at 04:44 PM. Reason: Edited post title. Im having a lot of trouble solving the following trig identities: a) secē x-2 sec x cos x + cosē = tanē x - sinē x b) $tan x + 1/ tanx = 1/sin x cos x$ My working: For a): Trigonometric identity: secx cosx=1 secēx-2 sec x cos x + cosē =secēx + cosē-2 Identity: secēx = 1/cosēx Use the Identity: secēx + cosē- 2 1/cos^2x +cosēx-2 Now I am very confused, really unsure on how to prove this identity. I would greatly appreciate any helpful tips. For b) tanx+1/tanx =1/(sinx cosx ) Identity: 1/tanx = cotx : Now im completely lost, the R.H.S side is turning out to be "csc x sec x" . I would greatly appreciate any helpful tips, since I'm very confused here. (b) $tanx+\frac{1}{tanx} = \frac{sinx}{cosx} +\frac{1}{\frac{sinx}{cosx}} = \frac{sinx}{cosx}+\frac{cosx}{sinx}$ $= \frac{sin^2x+cos^2x}{sinx.cosx} = \frac{1}{sinx.cosx}$ (a) $sec^2x-2secx.cosx+cos^2x$ use the identity $sec^2x-tan^2x=1$ and $cos^2x=1-sin^2x$ $= (1+tan^2x)-2+(1-sin^2x)$ $= 1+tan^2x-2+1-sin^2x$ Thanks harish, the $cos^2 x$ in bold is actually only $cos^2$..would this make a difference? or would the same method you described work? May 26th 2010, 11:57 AM #2 May 26th 2010, 12:04 PM #3 May 26th 2010, 01:09 PM #4 Junior Member May 2010
{"url":"http://mathhelpforum.com/trigonometry/146534-trig-identities.html","timestamp":"2014-04-17T06:56:37Z","content_type":null,"content_length":"43082","record_id":"<urn:uuid:7e1f25ee-53d1-4aa4-91d8-2ebad1fb8651>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00451-ip-10-147-4-33.ec2.internal.warc.gz"}
Rob Roy Kelly Courses: Design Principles 3 3 Defining Space through Scale and Value A Using no more than four squares of varying dimensions, show space using scale. Exaggeration of small to large creates the most dramatic effect. B Using four squares of varying values of gray to black create a spatial composition. 4 Illustration Two Shapes as One, as Two or in Tension The next series of exercises has to do with different manifestations of tension. Tension is a very important design tool that has numerous interpretations. An old painter once described tension as a very important something between two points where there is nothing. Tension exists in color, drawing, relationship of shapes, and it is extremely important in any kind of composition typographic or otherwise. Tension is a principle that is manipulated for numerous effects or purposes. As such, it is one of the most important design tools a designer can exploit. It is essential to recognize and to know how to use tension. Divide the 10 x 10 inch picture plane horizontally into thirds with lines using a rapidiograph pen. Top section: two squares placed next to one another to appear as one shape. Middle section: move the squares to opposite edges to read as two shapes. Bottom section: slide the square back and forth until you ind that exact point where it cannot be determined if it is one or two shapes that will be the tension point. 5 Tension Relationships Using one four-inch black square (or a four-inch square from a photograph) and one linear unit (or a line of 10 point type) 1/8 x 3 1/2 inches make three arrangements either static or dynamic: A Relate linear unit to square as one shape. B Put linear unit into tension relationship with square. C Make linear unit and square two separate entities that are visually related. 1/8-inch strip must align with left side of the larger square in all three exercises. The purpose of this exercise is to demonstrate how captions can relate to photograph, illustration or any other element. This is of particular importance because too many designers do not recognize the importance of the visual dynamics of this relationship. 6 Tension to Achieve Visual Balance Using three squares and one 1/2-inch red circle, activate the entire ten-inch picture plane using tension. Arrange squares into an unbalanced format, and then create a visual balance using the red dot; at the same time, activating the entire picture plane. It works best if the red dot is slid up or down the edges. A Using one red dot B Using two red dots C Optional: You can use dots and three lines of type. 7 Preserving Integrity of a Shape with Tension A four-inch black square is cut vertically into three sections, one of which will be one-eighth inch wide,and the other cuts are made at the student's discretion. Squares can be placed in either a static or dynamic relationship to the edges. Sections can be rearranged. The sections are slid back and forth to ind the greatest amount of tension between the sections. Top and bottom edges must always align with all sides parallel. The objective is to retain the integrity of the three shapes as one shape through tension.
{"url":"http://www.rit.edu/library/archives/rkelly/html/04_cou/cou_des3.html","timestamp":"2014-04-20T16:19:18Z","content_type":null,"content_length":"34570","record_id":"<urn:uuid:d5132d76-c79d-4e6a-b646-cb6881f5c45f>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00635-ip-10-147-4-33.ec2.internal.warc.gz"}
Optimal Control Problem for Switched System with the Nonsmooth Cost Functional Abstract and Applied Analysis Volume 2013 (2013), Article ID 681862, 6 pages Research Article Optimal Control Problem for Switched System with the Nonsmooth Cost Functional ^1Department of Mathematics, Yasar University, 35100 İzmir, Turkey ^2Department of Business Administration, Cologne University, 50931 Cologne, Germany Received 2 July 2013; Revised 15 August 2013; Accepted 3 September 2013 Academic Editor: Nazim Idrisoglu Mahmudov Copyright © 2013 Shahlar Meherrem and Rufan Akbarov. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. We examine the relationships between lower exhausters, quasidifferentiability (in the Demyanov and Rubinov sense), and optimal control for switching systems. Firstly, we get necessary optimality condition for the optimal control problem for switching system in terms of lower exhausters. Then, by using relationships between lower exhausters and quasidifferentiability, we obtain necessary optimality condition in the case that the minimization functional satisfies quasidifferentiability condition. 1. Introduction A switched system is a particular kind of hybrid system that consists of several subsystems and a switching law specifying the active subsystem at each time instant. There are some articles which are dedicated to switching system [1–8]. Examples of switched systems can be found in chemical processes, automotive systems, and electrical circuit systems, and so forth. Regarding the necessary optimality conditions for switching system in the smooth cost functional, it can be found in [1, 4, 6]. The more information connection between quasidifferential, exhausters and Hadamard differential are in [8–10]. Concerning the necessary optimality conditions for discrete switching system is in [5], and switching system with Frechet subdifferentiable cost functional is in [3]. This paper addresses the role exhausters and quasi-differentiability in the switching control problem. This paper is also extension of the results in the paper [5] (additional conditions are switching points unknown, and minimization functional is nonsmooth) in the case of first optimality condition. The rest of this paper is organized as follows. Section 2 contains some preliminaries, definitions, and theorems. Section 3 contains problem formulations and necessary optimality conditions for switching optimal control problem in the terms of exhausters. Then, the main theorem in Section 3 is extended to the case in which minimizing function is quasidifferentiable. 2. Some Preliminaries of Non-Smooth Analysis Let us begin with basic constructions of the directional derivative (or its generalization) used in the sequel. Let , be an open set. The function is called Hadamard upper (lower) derivative of the function at the point in the direction if there exist limit such that where means that and . Note that limits in (1) always exist, but there are not necessary finite. This derivative is positively homogeneous functions of direction. The Gateaux upper (lower) subdifferential of the function at a point can be defined as follows: The setis called, respectively, the upper (lower) Frechet subdifferential of the function at the point . As observed in [9, 10], if is a quasidifferentiable function then its directional derivative at a point is represented as where are convex compact sets. From the last relation, we can easily reduce that This means that for the function the upper and lower exhausters can be described in the following way: It is clear that the Frechet upper subdifferential can be expressed with the Hadamard upper derivative in the following way; see [9, Lemma 3.2]: Theorem 1. Let be lower exhausters of the positively homogeneous function . Then, , where is the Frechet upper subdifferential of the at , and for the positively homogeneous function the Frechet superdifferential at the point zero follows Proof. Take any . Then by using definition an lower exhausters we can write Consider now any Let us consider . Then, there exists where . Then, by separation theorem, there exists such that It is conducts (3) and for every and due to arbitrary. This means that . The proof of the theorem is complete. Lemma 2. The Frechet upper and Gateaux lower subdifferentials of a positively homogeneous function at zero coincide. Proof. Let be a positively homogenous function. It is not difficult to observe that every and every : Hence, the Gateaux lower subdifferential of at takes the forms which coincides with the representation of the Frechet upper subdifferential of the positively homogenous function (see [11, Proposition 1.9]). 3. Problem Formulation and Necessary Optimality Condition Let investigating object be described by the differential equation with initial condition and the phase constraints at the end of the interval and switching conditions on switching points (the conditions which determine that at the switching points the phase trajectories must be connected to each other by some relations): The goal of this paper is to minimize the following functional: with the conditions (14)–(16). Namely, it is required to find the controls , switching points , and the end point (here are not fixed) with the corresponding state satisfying (14)–(16) so that the functional in (18) is minimized. We will derive necessary conditions for the nonsmooth version of these problems (by using the Frechet superdifferential and exhausters, quasidifferentiable in the Demyanov and Rubinov sense). Here , and are continuous, at least continuously partially differentiable vector-valued functions with respect to their variables, are continuous and have continuous partial derivative with respect to their variables, has Frechet upper subdifferentiable (superdifferentiable) at a point and positively homogeneous functional, and are controls. The sets are assumed to be nonempty and open. Here ( 16) is switching conditions. If we denote this as follows: , , , then it is convenient to say that the aim of this paper is to find the triple which solves problem (14)–(18). This triple will be called optimal control for the problem (14)–(18). At first we assume that is the Hadamar upper differentiable at the point in the direction of zero. Then, is upper semicontinuous, and it has an exhaustive family of lower concave approximations of . Theorem 3 (Necessary optimality condition in terms of lower exhauster). Let be an optimal solution to the control problem (14)–(18). Then, for every element from intersection of the subsets of the lower exhauster of the functional , that is, , , there exist vector functions , for which the following necessary optimality condition holds:(i)State equation: (ii)Costate equation: (iii)At the switching points, , (iv)Minimality condition: (v)At the end point , here is a Kronecker symbol, , is a Hamilton-Pontryagin function, is lower exhauster of the functional , , are the vectors, and is defined by the conditions (ii) and (iii) in the process of the proof of the theorem, later. Proof. Firstly, we will try to reduce optimal control problem (14)–(18) with nonsmooth cost functional to the optimal control problem with smooth minimization functional. In this way, we will use some useful theorems in [12, 13]. Let us note that smooth variational descriptions of Frechet normals theorem in [12, Theorem 1.30] and its subdifferential counterpart [12, Theorem 1.88] provide important variational descriptions of Frechet subgradients of nonsmooth functions in terms of smooth supports. To prove the theorem, take any elements from intersection of the subset of the exhauster, , where , . Then by using Theorem 1, we can write that . Then, apply the variational description in [12, Theorem 1.88] to the subgradients . In this way, we find functions for satisfying the relations in some neighborhood of , and such that each is continuously differentiable at with , . It is easy to check that is a local solution to the following optimization problem of type (14)–( 18) but with cost continuously differentiable around . This means that we deduce the optimal control problem (14)–(18) with the nonsmooth cost functional to the smooth cost functional data: taking into account that We use multipliers to adjoint to constraints , , and , to : by introducing the Lagrange multipliers . In the following, we will find it convenient to use the function , called the Hamiltonian, defined as for . Using this notation, we can write the Lagrange functional as Assume is optimal control. To determine the variation , we introduce the variation , , , and . From the calculus of variations, we can obtain that the first variation of as If we follow the steps in [3, pages 5–7] then, the first variation of the functional takes the following form:The latter sum is known because and it is easy to check that If the state equations (14) are satisfied, is selected so that coefficient of and is identically zero. Thus, we have The integrand is the first-order approximation to the change in caused by Therefore, If is in a sufficiently small neighborhood of then the high-order terms are small and the integral in last equation dominates the expression of . Thus, for to be a minimizing control it is necessary that for all admissible . We assert that in order for the last inequality to be satisfied for all admissible in the specified neighborhood, it is necessary that for all admissible and for all . To show this, consider the control where is an arbitrarily small, but nonzero time interval and are admissible control variations. After this, if we consider proof description of the maximum principle in [4], we can come to the last inequality. According to the fundamental theorem of the calculus of the variation, at the extremal point the first variation of the functional must be zero, that is, . Setting to zero, the coefficients of the independent increments , , and , and taking into account that yield the necessary optimality conditions (i)–(v) in Theorem 3. This completes the proof of the theorem. Theorem 4 (Necessary optimality conditions for switching optimal control system in terms of Quasidiffereniability). Let the minimization functional be positively homogenous, quasidifferentiable at a point , and let be an optimal solution to the control problem (14)–(18). Then, there exist vector functions , , and there exist convex compact and bounded set , in which for any elements , the necessary optimality conditions (i)–(v) in Theorem 3 are satisfied. Proof. Let minimization functional be positively homogenous and quasidifferentiable at a point . Then, there exist totally bounded lower exhausters for the [9, Theorem 4]. Let us make the substitution ; take any element , then also, and if we follow the proof description and result in Theorem 3 in the current paper, we can prove Theorem 4. If we use the relationship between the Gateaux upper subdifferential and Dini upper derivative [9, Lemma 3.6], substitute , then we can write the following corollary (here , is the Hadamard upper derivative of the minimizing functional in the direction ). Corollary 5. Let the minimization functional be positively homogenous, and let the Dini upper differentiable at a point and be an optimal solution to the control problem (14)–(18). Then for any elements , there exist vector functions , in which the necessary optimality conditions (i)–(v) in the Theorem 3 hold. Proof. Let us take any element . Then by using the lemma in [9, Lemma 3.8] we can write . Next, if we use the lemma in [9, Lemma 3.2], then we can put . At least, if we follow Theorem 1 (relationship between upper Frechet subdifferential and exhausters) and Theorem 3 (necessary optimality condition in terms of exhausters) in the current paper, we can prove the result of Corollary 5. 1. P. J. Antsaklis and A. Nerode, “Special issue on hybrid system,” IEEE Transactions on Automatic Control, vol. 43, no. 4, pp. 540–554, 1998. 2. A. Bensoussan and J. L. Menaldi, “Hybrid control and dynamic programming,” Dynamics of Continuous, Discrete and Impulsive Systems, vol. 3, no. 4, pp. 395–442, 1997. View at Zentralblatt MATH · View at MathSciNet 3. S. F. Maharramov, “Necessary optimality conditions for switching control problems,” Journal of Industrial and Management Optimization, vol. 6, no. 1, pp. 47–55, 2010. View at Publisher · View at Google Scholar · View at Zentralblatt MATH · View at MathSciNet 4. V. Boltyanski, “The maximum principle for variable structure systems,” International Journal of Control, vol. 77, no. 17, pp. 1445–1451, 2004. View at Publisher · View at Google Scholar · View at Zentralblatt MATH · View at MathSciNet 5. S. F. Magerramov and K. B. Mansimov, “Optimization of a class of discrete step control systems,” Zhurnal Vychislitel'noi Matematiki i Matematicheskoi Fiziki, vol. 41, no. 3, pp. 360–366, 2001. View at Zentralblatt MATH · View at MathSciNet 6. R. M. Caravello and B. Piccoli, “Hybrid necessary principle,” in Proceedings of the 44th IEEE Conference on Decision and Control, pp. 723–728, 2002. 7. S. F. Maharramov, “Optimality condition of a nonsmooth switching control system,” Automotic Control and Computer Science, vol. 42, no. 2, pp. 94–101, 2008. 8. V. F. Demyanov and V. A. Roshchina, “Constrained optimality conditions in terms of proper and adjoint exhausters,” Applied and Computational Mathematics, vol. 4, no. 2, pp. 114–124, 2005. View at Zentralblatt MATH · View at MathSciNet 9. V. F. Demyanov and V. Roshchina, “Exhausters and subdifferentials in non-smooth analysis,” Optimization, vol. 57, no. 1, pp. 41–56, 2008. View at Publisher · View at Google Scholar · View at Zentralblatt MATH · View at MathSciNet 10. V. F. Demyanov and V. A. Roshchina, “Exhausters, optimality conditions and related problems,” Journal of Global Optimization, vol. 40, no. 1–3, pp. 71–85, 2008. View at Publisher · View at Google Scholar · View at Zentralblatt MATH · View at MathSciNet 11. F. Tröltzsch, Optimal Control of Partial Differential Equations, Theory, Methods and Applications, American Mathematical Society, 2010. View at MathSciNet 12. B. S. Mordukhovich, Variational Analysis and Generalized Differentiation I: Basic Theory, Springer, Berlin, Germany, 2006. View at MathSciNet 13. B. S. Mordukhovich, Variational Analysis and Generalized Differentiation, II: Applications, Springer, Berlin, Germany, 2006. View at MathSciNet
{"url":"http://www.hindawi.com/journals/aaa/2013/681862/","timestamp":"2014-04-18T06:07:30Z","content_type":null,"content_length":"575878","record_id":"<urn:uuid:28a81310-cf60-4528-981e-5eed742b4105>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00604-ip-10-147-4-33.ec2.internal.warc.gz"}
Bellaire, TX Algebra 1 Tutor Find a Bellaire, TX Algebra 1 Tutor ...I later went to Howard University, in Washington, D.C., where I majored in Biology, but I later discovered that I didn't quite satiate my curiosity for science. After college, I went to school part-time, at the University of Houston. While there, I completed several classes, such as Biochemistry I, Biochemistry II, Endocrinology and Human Physiology. 8 Subjects: including algebra 1, chemistry, physics, biology I have taught math and science as a tutor since 1989. I am a retired state certified teacher in Texas both in composite high school science and mathematics. I offer a no-fail guarantee (contact me via WyzAnt for details). I am available at any time of the day; I try to be as flexible as possible. 35 Subjects: including algebra 1, chemistry, physics, calculus ...I have assisted both FBISD and LCISD students in Geometry. I took 4 semesters of physics in college. I have used physics in my engineering career. 11 Subjects: including algebra 1, chemistry, English, physics ...I worked as a tutor at a community college, where I was tutoring my fellow classmates and students with chemistry (General Chemistry I & II, Analytical Chemistry, Organic Chemistry I & II and Inorganic chemistry I & II) as well as mathematics (Algebra, Geometry, Trigonometry, Calculus I and II). ... 19 Subjects: including algebra 1, chemistry, calculus, physics ...I have been tutoring for close to 5 years now on most math subjects from Pre-Algebra up through Calculus 3. I have done TA jobs where I hold sessions for groups of students to give them extra practice on their course material and help to answer any questions that they might have, I have tutored ... 7 Subjects: including algebra 1, calculus, statistics, algebra 2
{"url":"http://www.purplemath.com/bellaire_tx_algebra_1_tutors.php","timestamp":"2014-04-18T04:18:21Z","content_type":null,"content_length":"24346","record_id":"<urn:uuid:66336229-aa22-4cae-b67a-bbbca7db1bab>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00142-ip-10-147-4-33.ec2.internal.warc.gz"}
Beaverton Math Circle | National Association of Math Circles Beaverton Math Circle is a student-led organization that aims to provide students who have exceptional talent or interest in math with opportunities to meet like-minded peers and to explore exciting mathematics beyond the school curriculum. This year, we will host a variety of events including lectures from guest speakers and classes on problem solving.
{"url":"http://www.mathcircles.org/content/beaverton-math-circle","timestamp":"2014-04-19T06:52:38Z","content_type":null,"content_length":"19220","record_id":"<urn:uuid:cd7b2e58-f71c-4f53-8a83-e88ac914f37f>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00098-ip-10-147-4-33.ec2.internal.warc.gz"}
Artistic Intelligence Assignment Week 4 Because of confusions Generated by this weeks videos There are additional materials for this week The first is here Another Look 1. Using an image of your choice (photo, former assignment, or from an artist you admire) resize in at least three different new formats with at least one of those formats being a square. Upload the original and your three new variations. If you don't have an image processing program on your computer look at these free options. http://pixlr.com/ http://www.gimp.org/ http://www.getpaint.net/ http://www.photopos.com/ http://www.astronomie.be/registax/ 2. Take one of the images from the Gallery below and using the golden ratio rearrange the elements so that the proportions between objects and the overall rectangle are related. You can move and resize each element as you wish but the exterior rectangle proportions much remain unchanged. Upload two files one with your gridlines and your measurements and one Each image is the same size 1280 by 720 pixels. Is there as smaller fractional way to express this proportion? How many proportional relationships do you think are necessary for each image? Phi or the golden ratio is not a whole number. The golden ratio is an irrational mathematical constant, approximately 1.61803398874989. The Fibonacci numbers are the numbers in the following integer sequence (0,1,1,2,3,5,8,13,21,34,55... etc) Each new Fibonacci number is generated by adding together the last two number in the sequence. As the Fibonacci number approaching infinity the proportion between two consequective Fibonacci numbers is the golden ration. However at low values it is a very poor approximation of phi. In fact excluding the first seed zero value, taking the next two values makes a square. Many everyday objects have proportions found in the Fibonacci series. Why? Bonus: Take a second image and using another design idea rearrange elements using the golden ratio. Upload the image with and without your gridlines and measurements. Explain in the description why this design is different than question two. Can the golden ratio lead to many different arrangements or only a few? One Response to Assignment Week 4 1. Uh oh, I have a feeling I’m losing my Dada courage this week, lol. This entry was posted in Introduction to Artistic Intelligence. Bookmark the permalink.
{"url":"http://other-ai.org/2011/11/06/assignment-week-4/","timestamp":"2014-04-18T08:04:07Z","content_type":null,"content_length":"39534","record_id":"<urn:uuid:cfa7f10d-573f-4473-946b-1408d422898f>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00404-ip-10-147-4-33.ec2.internal.warc.gz"}
vector space t/f question January 28th 2007, 09:25 AM #1 Junior Member Dec 2006 vector space t/f question T/F prove... a)The intersection of any two subsets of V is a subspace of V b) If V is a vector space other than the zero vector space, then V contains a subpace W such that W doesnt equal V. can sum1 provide a proof for these? The intersection of two subspaces is again a subspace, correct. b) If V is a vector space other than the zero vector space, then V contains a subpace W such that W doesnt equal V. can sum1 provide a proof for these? Not true, consider a vector space with dimension 1, then any subspace must have dimension 1 also, because it cannot be larger and it cannot be less because it is 1. b) If V is a vector space other than the zero vector space, then V contains a subpace W such that W doesnt equal V. True! If V is not {0}, then take W={0}. January 28th 2007, 09:39 AM #2 Global Moderator Nov 2005 New York City January 28th 2007, 01:01 PM #3 February 11th 2007, 05:55 AM #4
{"url":"http://mathhelpforum.com/advanced-algebra/10759-vector-space-t-f-question.html","timestamp":"2014-04-16T14:14:05Z","content_type":null,"content_length":"40909","record_id":"<urn:uuid:e549c07a-c24d-4823-a6ae-7cefa7427e68>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00376-ip-10-147-4-33.ec2.internal.warc.gz"}
MathGroup Archive: March 2009 [00984] [Date Index] [Thread Index] [Author Index] Re: Re: Selecting left hand sides from assignments • To: mathgroup at smc.vnet.net • Subject: [mg97970] Re: [mg97803] Re: [mg97614] Selecting left hand sides from assignments • From: "E. Martin-Serrano" <eMartinSerrano at telefonica.net> • Date: Thu, 26 Mar 2009 05:44:22 -0500 (EST) • References: <200903221047.FAA08220@smc.vnet.net> Hi again, The following works as I wanted: it takes apart *lhs* and *rhs* of *Set* expressions while keeping the relation between the *lhs* to the *rhs* for further use. But, to get it to work one has to start working with a fresh kernel as to ensure the *lhs*(s) have not been assigned any value before the sequence of assignments *assign* is sent to the function *SplitSetSides*. However, since I am to use it as a meta-programming tool (not to be include in end user programs) this is not a big drawback, tough a bit artificial. The first sublist in *Out[4]* contains the *lhs*(s) while the second contains the *rhs*(s). In[1]:= Clear@frhs frhs[rhs_][symbols_] := Module[{step}, step = Fold[Flatten[#1, Infinity, #2] &, rhs, symbols]; step = Union@Flatten@FixedPoint[Apply[List, #, Infinity] &, step]; step = List @@ step; SymbolName[#] & /@ Select[Flatten[step], Head[#] === Symbol &] In[2]:= Clear@SplitSetSides SplitSetSides[holdassignments_] := Quiet@Module[{symbols = Symbol[#] & /@ Names["System`*"]}, {SymbolName[#] & /@ (List @@ Map[First, Map[Unevaluated, holdassignments]]), frhs[#][symbols] & /@ Apply[List, Map[Hold, holdassignments, {1}]]}] In[3]:= assign = Hold[a1 = Log[b1*c1^12], a2 = Sqrt[b2*c2], a3 = Log[b1*c1^12]*Sqrt[a2*c2]^b3*c3 ]; In[4]:= SplitSetSides[assign] Out[4]= {{"a1", "a2", "a3"}, {{"a1", "b1", "c1"}, {"a2", "b2", "c2"}, {"a3", "c3", "b1", "c1", "a2", "c2", "b3"}}} For now, this solution seems to be good enough for my needs. For instance, it will help me in building the data dependency graph for a large notebook meant to make dense computations involving many dynamic interrelated objects. The simplification of the data dependency graph helps in to improving efficiency by eliminating superfluous or redundant computation paths which would slow down significantly the performance. Also would help in pointing out circularities. And, although it needs some additional testing it seems to be working ok under my conditions. One thing which is still left (among many others) is to take out the names in *symbols* which do not correspond to true symbols (operators) in the list generated by the expression *symbols = Symbol[#] & /@ Names["System`*"]*} in the module of the function *SplitSetSides*. Any feed back will be appreciated. I hope it will be of any use to someone else. Cheers and many thanks for your help. E. Martin-Serrano >I need help on the following. >From the list of assignments: >assignmentslist = {LHS1 = RHS1, LHS2 = RHS2, LHS3 = RHS3, ..., LHSi = RHSi, ..., LHSn = RSHn} >I need to extract all the left hand sides (symbols) within a list like: >lhslist = {LHS, LHS2, LHS3, ..., LHSi, ..., LHSn} >Where the left hand sides are all symbols and the right hand sides are any >E. Martin-Serrano • References:
{"url":"http://forums.wolfram.com/mathgroup/archive/2009/Mar/msg00984.html","timestamp":"2014-04-18T18:43:04Z","content_type":null,"content_length":"28679","record_id":"<urn:uuid:653a5776-b8b0-45cc-9c2a-0428852743ab>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00412-ip-10-147-4-33.ec2.internal.warc.gz"}
Itasca, IL ACT Tutor Find a Itasca, IL ACT Tutor ...I have tutored math to many grade school kids successfully. I hold a bachelor of science in electrical engineering with emphasis in mathematics. I teach with solid math acumen, coupled with encouragement and positive reinforcement. 18 Subjects: including ACT Math, geometry, ASVAB, algebra 1 ...I went through the process myself, successfully gaining admission to Northside College Prep, and I am fully up to date on the current CPS protocol for applying to these schools. I am able to walk you through the application process, give you an overview of prospective schools, and provide academ... 38 Subjects: including ACT Math, Spanish, reading, statistics ...I have over 5 years teaching students of all ages. I have also taught adult learners from various backgrounds. I enjoy sharing Chinese culture and history to the students and I make learning Chinese fun, easy, and sustainable. 4 Subjects: including ACT Math, Chinese, algebra 1, algebra 2 ...Then we can work to address your specific needs. My goal is to help you genuinely understand the concepts you need to learn and give you the skills to use them. You can become a stronger, more confident problem-solver. 18 Subjects: including ACT Math, chemistry, writing, GRE ...I look forward to hearing from you.I took discrete math undergraduate at Tufts and received an A. Topics included set theory, graph theory, combinatorics. Since then I have worked as a TA for "Finite Mathematics for Business" which had a major component of counting (combinations, permutations) problems, and linear programming, both of which are common in discrete math. 22 Subjects: including ACT Math, calculus, statistics, precalculus Related Itasca, IL Tutors Itasca, IL Accounting Tutors Itasca, IL ACT Tutors Itasca, IL Algebra Tutors Itasca, IL Algebra 2 Tutors Itasca, IL Calculus Tutors Itasca, IL Geometry Tutors Itasca, IL Math Tutors Itasca, IL Prealgebra Tutors Itasca, IL Precalculus Tutors Itasca, IL SAT Tutors Itasca, IL SAT Math Tutors Itasca, IL Science Tutors Itasca, IL Statistics Tutors Itasca, IL Trigonometry Tutors
{"url":"http://www.purplemath.com/itasca_il_act_tutors.php","timestamp":"2014-04-18T23:42:09Z","content_type":null,"content_length":"23604","record_id":"<urn:uuid:664dd908-6d5a-4c9e-bf33-3da599cd86f8>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00036-ip-10-147-4-33.ec2.internal.warc.gz"}
IntroductionRelated WorkSolution OutlineSolution DescriptionShape DescriptionHypotheses Validation and MatchingEyeglasses DatabaseEyes LocalizationMonte Carlo SamplingShape AdjustmentTests and ResultsDiscussionConclusions and Future WorkConflicts of InterestReferencesFigures and Tables Modern eyeglasses lens manufacturing requires a complex process, called centering, that involves the measurement of several morphological parameters of the patient's face and their correlation with morphological parameters of the frames that the patient will wear. The most important morphological parameters regarding the eyeglasses are: the frame bridge (the distance between the left and right rims of the eyeglasses), the boxing size (height and width) for each lens, the fitting height (the distance between the bottom of the lens and the eye pupil) and the vertex distance (the distance between the interior surface of the lenses and the cornea). The classical tools and methods for measuring these morphological parameters usually involve manual processing; therefore, they are error prone and do not provide the required accuracy. Recently, most of the large eyeglasses producers ([1,2]) developed computer-aided systems that can accurately measure several morphological parameters by capturing and processing a digital image of the An eyeglasses detection algorithm can be used to automatically detect the key features needed by these systems in the measurement process. For example, the bridge, the boxing and the fitting height can be easily measured with the use of a computer vision system if the eyeglasses and the lens contours are accurately detected and extracted from the input image. Eyeglasses are one the most important factors that affect the performance of face recognition algorithms, which are crucial in a variety of applications, such as security and surveillance systems and human computer interaction. The robustness of the facial recognition systems can be increased if the eyeglasses are accurately detected and removed from the subject's face. Finally, from an esthetic perspective, the eyeglasses rim contours can be fed to an augmented reality system that would allow a person to observe his or her appearance with different colors of the eyeglasses rims or even without glasses (if eyeglasses removal is performed). In addition, if the contour of the eyeglasses is accurately determined, its shape can be modified, so that the patient can customize the shape of the rims. We propose a novel approach for accurately determining the position of the eyeglasses and the exact contour and size of the lenses based on a multistage Monte Carlo method. The first step of the algorithm is to determine the approximate region of the eyes; based on the position of the eyes, the eyeglasses search region is determined to be similar to the method presented in [3]. Next, we use a Monte Carlo search method to determine the approximate position and size of the eyeglasses. The search space is explored by morphing between a finite set of key hypotheses stored in a database. In the final step of the algorithm, a random walk is performed around the previously determined solution in order to finely tune the position, size and shape of the eyeglasses. The algorithm is mainly designed for optometric measuring devices, such as [1,2,4], which process one or multiple high resolution facial images of the patient in order to compute several morphological parameters needed by eyeglasses prescriptions. This work will highlight the following contributions: An original method for accurately detecting the position of the eyeglasses and the accurate shape and size of the eyeglasses lenses; An original eyeglasses model: the shape of the eyeglasses rims is described using Fourier descriptors; their size is controlled by a single parameter, namely, the interpupillary distance, while their position is determined by the localization of the eyes. This model can be used both for detection and tracking of the eyeglasses rims; An original method for morphing between shapes using Fourier descriptors . As the input domain is large, we only use a small set of representative shapes and dynamically generate the search space by morphing elements of this set; A lenses contour extraction method based on a multistage Monte Carlo search, using the proposed model. The remainder of this paper is organized as follows. Section 2 presents a general review of the most relevant research literature on eyes and eyeglasses detection, localization and removal. In Section 3, we present the outline of the proposed solution, and in Section 4, we detail this solution. Experimental results on eye detection and eyeglasses rim extraction are presented in Section 5. In Section 6, we perform a comparative study between the previous methods presented in the literature and our eyeglasses extraction algorithm. We conclude this paper in Section 7. Eyeglasses detection was pioneered by Jiang et al. [3], who proposed six measures for determining the presence of eyeglasses in facial images based on edge information from two regions around the eyes. This method requires that the eyes are detected in advance and does not provide any information regarding the localization or the shape of the eyeglasses. Jing et al. [5] extended the methods for detecting the presence of eyeglasses presented in [3], and extracted the eyeglasses based on deformable contours, using geometrical and edge features. The final position of the eyeglasses is determined by means of dynamic programming. This method is based only on edge information and gives inaccurate results, because of the presence of eyebrows, wrinkles, and so on. To overcome this limitation, Jing et al. [6] proposed a method for eyeglasses detection and extraction using Bayes' rule. Prior to glasses detection, a supervised learning scheme is used to learn the visual features of the glasses. Feature vectors are extracted for each edge point in the test image, and a Bayesian classifier is used to determine the points that belong to the glasses class. This method has a better performance than the work presented in [5], but requires knowledge about the eye position, and the faces in the test images must be normalized. Park et al. ([7,8]) presented a method for eyeglasses removal in facial images based on PCA (Principal Component Analysis) reconstruction. The eyeglasses points are extracted from a color input image using the Generalized Skin Color Distribution ([9,10]) transformation and edge information. The facial image without glasses is determined by example-based learning, and finally, the image is updated by means of recursive error compensation using PCA reconstruction. Wu et al. ([11]) proposed a method for eyeglasses detection that does not require any information regarding the position of the eyes, the face pose or the shape of the eyeglasses. The proposed method uses a trinocular vision system and makes use of the 3D Hough Transform in order to determine the 3D plane passing through the rims of the eyeglasses. The eyeglasses rims are extracted based on the determined 3D plane and additional geometrical constraints. This method requires a more complex vision system and has a greater computational complexity than 2D methods. More recently, Wu et al. ([12]) presented a system for automatically removing eyeglasses in facial images. First, the approximate region of the eyes is detected, and then, a Markov Chain Monte Carlo method is used to determine 15 key points on the eyeglasses by searching for the global minimum of the a posteriori probability distribution. Finally, the eyeglasses are removed from the input image using statistical methods to learn the mapping between pairs of face images with and without eyeglasses from a database. Xiao et al. [13] proposed a method based on Delaunay triangulation to detect the eyeglasses in frontal facial images. A Delaunay triangulation is performed on the binarized input image, and dense points in the facial region are clustered by measuring Delaunay triangles of different sizes. The eye region corresponds to small-sized triangles and triangle chains with a short length and narrow width. Eyeglasses are immediate neighbors of the eyes and are detected by specific Delaunay triangles surrounding the eye regions. Xiaodong et al. [14] proposed an eyeglasses detection and removal algorithm, which is not sensitive to illumination and contrast variations, based on phase congruency and progressive inpainting In [15], Zheng presents a method for deciding whether the eyeglasses are present or not in a thermal infrared image. The method is based on the fact that glasses appear as dark areas in a thermal image. Using this property, the eyeglasses are detected based on geometrical conditions (area, shape and location of the eyeglasses). The lens extraction algorithm we propose requires that the location of the eyes is estimated within a certain error margin in the facial image. In the past several years, active research and significant progress has been made in the area of eye and facial feature detection in general. For example, in [16], the authors present a detailed survey of the recent eye detection techniques, emphasizing the challenges they pose, as well as the importance of eye detection in many problems in computer vision and other areas. The work presented in this paper aims at extending the functionality and performance of the state-of-the-art systems, by estimating the exact position of the eyeglasses rims, as well as the exact contour and size of the lenses. The eyeglasses rims come in very different shapes; therefore, a parametric description of their contour proves to be very difficult. In order to represent the rim shapes, Fourier descriptors are used The Fourier descriptors are computed by taking the Fourier transform of a shape, where every point on the contour (x, y) is mapped to a complex number z(i) = x(i)+jy(i), where the x-axis is considered the real axis and the y-axis is considered the imaginary axis: c ( k ) = ∑ i = 0 N − 1 z ( i ) e − j 2 π k i / N The complex coefficients, c(k), are called the shape's Fourier descriptors. The initial shape can be restored by applying the inverse Fourier transform on these descriptors: z ( i ) = 1 N ∑ k = 0 N − 1 c ( k ) e − j 2 π k i / N This representation describes the frequency information of the shape: lower coefficients provide information about the rough shape, while higher order coefficients account for shape details. Fourier descriptors can be easily interpreted. The first coefficient, c(0), represents the centroid of the shape and is the only coefficient that encodes information about the shape position. The second frequency component, c(1), describes the size of the shape; eliminating all but the first two Fourier descriptors, the reconstructed shape will always result in a circle (an n-sided polygon). When reconstructing the shape, if only a few lower frequency coefficients are chosen, then an approximate outline of the initial shape will be obtained, while if higher frequency descriptors are used, then the shape will be described in more detail. The same number of points exists in the shape's representation, but less information is used for representing each point [18]. Figure 2 shows the boundary of an eyeglasses rim reconstructed based on increasing the number of Fourier descriptors. The main reason for the popularity of the Fourier descriptors is their invariance to common geometric transformations, such as translation, scaling and rotation. In order to translate a shape with a point Δ[xy] = Δx + jΔy, this point must be added to each point on the shape contour: z[t](i) = [x(i) + Δx] + j[y(i) + Δy]. All information regarding the shape position is contained in the first descriptor of the shape c(0), so translation only affects this descriptor: c[t](0) = c(0) + Δ[xy]. Translation-invariant Fourier descriptors are obtained by leaving out the first descriptor: c(0). If the contour of a shape is scaled with a coefficient, α, z[s](i) = αz(i), the magnitudes of the coefficients are also scaled with this coefficient: c[s](k) = αc(k). Scaling invariance is achieved by normalizing all Fourier descriptors by |c(1)|. In order to generate a new shape out of two existing shapes, we make use of the closure property of Fourier series: the sum of any two Fourier series is itself a Fourier series. This means that we can obtain an intermediary shape between any two given shapes by using a linear combination of the boundaries of the two initial shapes: C m = β C s 1 + ( 1 − β ) C s 2 , β ∈ [ 0 , 1 ]where C[s][1] represents the Fourier descriptors of the first shape, C[s][2] represents the Fourier descriptors of the second shape and β is the morphing factor. Figure 3 shows the result of morphing between two shapes with different morphing factors. We heuristically determined that the number of Fourier coefficients necessary to describe the rims boundaries is 14. The main advantage of this shape representation is that with a limited number of variables (14 Fourier descriptors), we were able to accurately describe a variety of eyeglasses rims. The following parameters are sufficient for describing a hypothetical eyeglasses rim: v = [ class − shape class ( x , y ) − shape position ( the centroid ) size − shape size { S 1 , S 2 } − reference shapes β − interpolation factor ] The shape of a hypothesis is determined by the two shapes, {S[1], S[2]}, selected from the database, which are morphed with the morphing factor, β. The shapes {S[1], S[2]} belong to the same class, class, and are represented by Fourier descriptors. The shape and position of the hypothesis are controlled by the size attribute and the shape's centroid coordinates (x, y), respectively. All the geometrical transformations (scaling, translation, flipping, morphing) are performed in the Fourier space. In order to estimate the extent to which the generated hypothesis matches the test image, we perform an edge detection [19], followed by a distance transformation (DT) [20]. The boundary of the hypothetical shape is then superimposed onto the DT image, and the matching score of the hypothesis is computed as the average of all the pixels from the distance transform image that lie under its mathing _ score ( ϑ ) = 1 | ϑ | ∑ ( x , y ) ∈ ϑ D T ( x , y )where ϑ represents the contour of the shape. The final matching score of a hypothesis is computed as the average between the matching score of the left rim and the matching score of the right rim. score ( hypothesis ) = ϑ 1 + ϑ 2 2where ϑ[1] and ϑ[1] represent the contour of the left eyeglasses rim and right eyeglasses rim, respectively. Figure 4 depicts the Canny edge and the corresponding distance transformation. Here, the edge image is close to the ideal case, where the shape and the position of the lenses seem easy to extract. However, in the majority of cases, the contours of the lenses have large gaps and many occlusions from other elements, such as eyebrows, light reflection, and so on. In addition, a hypothesis must comply with several geometrical conditions in order to be considered a potential solution. These conditions regard: The position of the rims: the eyes must be enclosed by the rims, and the eyeglasses centroid must be close to the eyes center; The size of the rims: the size of the rims must be larger than the size of the eyes. The neoclassical canon for facial proportions ([21]) divides the face vertically into five equal parts, assuming that the intercanthal distance (which occupies the middle fifth) is equal to the nasal width and widths of the eyes. We used this relation to further restrain the size and the region in which the eyeglasses rims can be localized. The eyeglasses' vertical sides must lie in the first and middle fifth region of the face for the left lens and in the middle and fifth region for the right lens, as depicted in Figure 5. For the generation of a new hypothesis shape, we use a database that contains the Fourier descriptors of the left eyeglasses rims; the corresponding right rim is automatically generated by horizontally flipping the left rim. In this way, the symmetry between the left and the right rim is implicitly contained in the model. To create the database, the boundaries of different eyeglasses rims were manually selected from several facial images. Next, each boundary was described using Fourier descriptors, which were normalized to scaling and translation. The database contains 40 samples of eyeglasses rims. The search space of the eyeglasses shapes is covered by morphing between the shapes contained in this Eyeglasses rims are grouped into three different classes based on their shape similarity: symmetrical rectangular rims, elliptic symmetrical rims and asymmetrical rims. Table 1 presents samples from each one of the eyeglasses class: the contour of the eyeglasses rim, as well as the Fourier descriptors of the contour. In our experiments, we used 14 Fourier descriptors to describe the contour of the eyeglasses rims, as previously stated in Section 4.1. Each time a new hypothesis is generated, two random shapes (belonging to the same class) are selected from this database and morphed with a randomly selected factor. For determining the approximate position of the eyes, we used the well-known Viola-Jones [22] framework, as it provides high accuracy in real time. The algorithm is based on several key features: the use of simple rectangular features, called Haar-like features, a novel image representation technique, Integral image representation, which allows the features to be computed very quickly, the AdaBoost algorithm [23] and a method for combining increasingly complex classifiers into a cascade, which rapidly discards the background pixels. Although this algorithm was initially proposed for face detection, it can be trained for any object. We used the training data provided by the OpenCV framework for the eye detection. The first step of the eye detection algorithm is to detect all the candidate regions for the eyes (separately for the left and for the right eye) using the Viola-Jones algorithm. Next, all the regions that do not have a left eye-right eye pair are discarded, and the candidate regions are split into left and right groups with respect to the center of all the candidate regions. After this step, all the eye zones that overlap are merged into their enclosing rectangle. Finally, the best matching group and the localization of the eyes are computed based on the the number of candidate eye zones contained and the dimension of the enclosing rectangle. The center of each eye is determined by computing the centroid of the overlapping left (or right) eye zones, and the area of the eye is computed from the sequence of Haar-detected eye zones, as a value between the size of the smallest eyes zone and the mean of all the detected eye zones. The interpupillary distance (IPD) is approximated as the distance between the center of the rectangles corresponding to each eye. The information regarding the interpupillary distance is used to delimit the region in which the eyeglasses are likely to be found, in a similar manner as presented in [3]. Figure 6 depicts the computed search area of the eyeglasses based on the eyes position and the approximation of the interpupillary distance. We experimentally determined that δ = 1.5 provides enough accuracy to detect the eyeglasses ROI for most of the human faces. Increasing the value of δ would unnecessarily search for solutions too far away from the eyes, while decreasing it would fail to encompass the entire contour of bigger-sized eyeglasses. Alternate methods could be used for both detecting the eyes ([16]) and establishing the eyeglasses search area. For example, the eyeglasses region of interest could be robustly extracted using Active Appearance Models ([24]) or Constrained Local Models ([25]). We chose, however, to compute this zone heuristically, as presented in [3,5], because it is very simple, does not require a training phase and complies with the conditions we imposed for the hypotheses generation space. This region must be large enough to comprise the contours of the eyeglasses and sufficiently narrow, such as the search space to be limited. Of course, the method of computing the search region of the eyeglasses and the method of localizing the eyes can be replaced at any time, as the eyeglasses detection algorithm (the main contribution of this work) takes as input the eyeglasses search area. Monte Carlo Methods ([26]) refer to a broad class of algorithms that solve a problem by generating random samples from a probability distribution over the input domain and by observing the fraction of the samples obeying several properties. These algorithms are often used for obtaining numerical solutions to problems that are too complex to be solved analytically or are infeasible for deterministic algorithms. Eyeglasses rims have different shapes, and an analytical description of their contour is impractical. Moreover, the eyeglasses region is often occluded by several other elements, and it is very hard to distinguish between the pixels that belong to the eyeglasses lenses and those belonging to other elements. To overcome these problems, we apply a Monte Carlo method to determine the position of the eyeglasses and the contour of the lenses, by randomly generating a large number of hypotheses and clustering them based on their position and size, as shown in Algoritm 1. Algorithm 1: Monte Carlo sampling. Input: eyes position, interpupillary distance Define the possible domain of inputs for each dimension of the hypothesis. whilenumber_of_generated_hypotheses ≤ max_hypothesesdo Generate random hypothesis by sampling from a uniform probability distribution over the domain. Match the hypothesis over the input image. number_of_generated_hypotheses ← number_of_generated_hypotheses + 1 end while Cluster the hypotheses based on their position and size. Select the best cluster based on its matching score. Prior to this sampling procedure, the eyeglasses search space is restricted to an area that surrounds the eyes (as described in Section 4.4). Figure 7 depicts the eyeglasses search area; all the hypotheses will be generated within this region. As stated in Equation (4), a hypothetical rim is defined by the following parameters: the position (the centroid of each one of the eyeglasses lenses), the size of the lenses and the Fourier coefficients that describe the contour of the lenses. For the construction of a new hypothesis, two reference shapes are selected from the database; next, the position, the size and the interpolation factor between the reference shapes are drawn from a uniform probability distribution. The centroids of the eyeglasses lenses are obtained by sampling points from a region determined by the position of the eyes and the interpupillary distance. The size of the lenses is drawn from an interval determined by the interpupillary distance. To determine the shape of the new hypothesis, we choose two shapes from the database and morph them with an interpolation factor within [0, 1], in order to obtain the Fourier coefficients that describe the contours of the lenses. Each hypothesis is validated and assigned a non-negative score by matching it over the Distance transform image (Section 4.2). As stated in Section 4.3, the eyeglasses rims are classified into three distinct classes: rectangular symmetrical rims, elliptic symmetrical rims and asymmetrical rims. The same Monte Carlo sampling procedure is performed for each one of these classes. At the end of the sampling procedure, all the valid hypothesis are grouped into clusters, depending on their position and size, as shown in Algorithm 2. We used an agglomerative clustering approach: initially, all data points belong to a separate cluster, and then, clusters are successively merged based on their similarity. Algorithm 2: Hypotheses clustering algorithm. Input: sequence of hypothesis H Output: clustering of hypotheses based on position and size: C C ← H while clusters can be created do pick c[i], c[j] ∈ H the two nearest clusters c[k]← cluster (c[i], c[j]) C ← C — {c[i], c[j]} C ← C ∪ { c[k]} end while The standard hierarchical clustering algorithm terminates when all the clusters have been merged into a single cluster or after a predefined number of steps. Two clusters are susceptible candidates for merging if the overlapping area of the bounding rectangles of each cluster is at least α%. Based on this criterion, the clustering procedure ends when there are no more clusters that can be grouped into a new one. In our experiments, we used α = 90% in order to ensure the compactness of the clusters. The determined clusters provide information regarding the regions where the eyeglasses are likely to be located. Clusters with good matching scores are more likely to contain the eyeglasses. This clustering process eliminates false positives: for a false positive, even if it has a good matching score, the other hypotheses from its cluster are more likely to have a bad score; while in the cluster that contains the most similar shape to the eyeglasses all shapes have a good matching score. The hypothesis used in the next step of the algorithm is the best hypothesis from the cluster with the best matching score. At the end of this step, we determined the approximate position and size of the eyeglasses rims. The result of the Monte Carlo search is illustrated in Figure 8 for each one of the three eyeglasses classes. After the Monte Carlo search, the approximate location, size and shape of the eyeglasses rims are determined. The problem now is to finely tune these parameters in order to obtain a solution as close as possible to the actual eyeglasses in the image. To address this problem, we perform a random walk around the solution found in the Monte Carlo search, as shown in Algorithm 3. Algorithm 3: Shape adjustment algorithm. Input: x[0] initial solution after Monte Carlo search, max_iterations number of iterations Output: x' final solution of the algorithm x ← x[0] iteration ← 0 while iteration < max_iterations do generate x' by sampling from the distribution P(x'∣x) evaluate x' if score(x) < score(x') then (x) ← x' end if iteration ← iteration + 1 end while At each step of the algorithm, a new sample, x', is generated by randomly picking a value from the normal probability distribution centered in the current solution, x: P(x'∣x). P(x'∣x) is used as a motion model in the state space and suggests a candidate for the next sample, x', given the previous sample, x. We chose to use the Gaussian distribution centered at x, so that the points closer to x are more likely to be visited next. If the new sample, x', is more likely than the previous one, x, the current solution is set to x'. The generated samples are evaluated by matching them over the Distance transform image and validated as presented in Section 4.2. To generate the new candidate, x', we change only one of the dimensions (i.e., the size, the position or the shape) of the current sample, x, by drawing from the Gaussian distribution, P(x'∣x). The algorithm is run in an iterative way for max_iterations = 30. The value for the maximum number of iteration was determined heuristically, after multiple tests. We observed that increasing the maximum number of iterations over 30 has little to no impact in the majority of the cases. The algorithm is run for each one of the classes (elliptic, rectangular and asymmetrical eyeglasses rims). The solution with the best matching score is selected as the final result. An example of the eyeglasses detection algorithm is depicted in Figure 9. The eyeglasses contours extraction algorithm is targeted for optometric measurement systems: we plan to determine the exact shape of the rims and to accurately measure several morphological parameters ([27]): the boxing size, the vertex distance and the wrap angle. Part of this research grant is supported by one of the major optometric measurement systems manufacturers ([4]). The measuring devices produced by the company determine the morphological parameters needed by any eyeglasses prescription by processing a digital image of the patient. For the measurement process, the patient (wearing eyeglasses) stands still in front of the device at a distance varying from 1 to 2 m, and the device captures an image of the patient's face using a high resolution camera, as shown in Figure 10. The morphological parameters are determined by detecting the interest points (the center of the pupils, the frames of the eyeglasses) on this image and computing the relationship between them in real world coordinates (mm or degrees). All these devices have been validated by comparing the result of the measurements obtained using this computer-aided method and those obtained using the classical methods from the optometry field. For the validation process, several optician stores participated in a trial study and provided the company images of patients captured in real life conditions. Currently, for the extraction of the eyeglasses lenses, these devices use an algorithm that provides only an approximation of the rim shape. The optician must manually adjust the contour of the eyeglasses for each image. The dataset we used for testing our algorithm is a subset of the database used in the validation process, over 300 facial images of people wearing different types of glasses. An XMLfile containing the position of the pupils, the points of the contour of the eyeglasses (manually marked by the optician), the measurements computed by the machine and the measurements performed by the optician is attached to each image from the database. We used a subset of this database to test the eyeglasses lens contour extraction algorithm. As our solution is targeting this type of measuring devices, our testing database contains no facial images without eyeglasses. The metric we used for the testing procedure of the eyeglasses extraction algorithm is the distance between the detected eyeglasses rim and the actual rim from the input image. An outcome of the algorithm is considered a true positive if the overlapping area between the extracted contour and the real contour of the lens from the input image is larger or equal to γ = 0.95 of the actual eyeglasses rims and a false positive otherwise. The detection rates of our algorithm are shown in Table 2. The eye region was not detected in images where the lenses of the eyeglasses contained multiple large reflections. These results demonstrate that our method works for a variety of samples with different face profiles, eyeglasses shapes and illumination. Eyeglasses rim extraction results are shown in Figure 11; one can notice, on each vertical group of images, the region of the eyes, as well as the search area of the eyeglasses and the extracted eyeglasses lens contours on the distance transform image and on the original image, respectively. The execution time of our eyes and eyeglasses detection algorithm on a regular PC (Intel Core 2 CPU T7200, 2.00 GHz, 2 GB memory) is 1.1 s for images with a resolution of 1,600 × 1,200 pixels. The detection process is based only on edge information, as the generated hypotheses are matched against the Distance transformation of the Canny edge image; therefore, the algorithm does not yield accurate results on images with inadequate edge information, such as is the case for rimless eyeglasses. Figure 12 presents some failure cases: In this section, we perform a comparative study between the previous methods proposed in the literature and our solution for extracting the eyeglasses. Much of the work presented in the literature is based on eyeglasses removal from facial images ([7,8,12,14]), and other articles are focused on eyeglasses detection to decide whether the eyeglasses are present or not in the input image ([3,15]). The scope of this paper is to precisely extract the contour of the eyeglasses lenses, and not to remove glasses or to determine their presence in the input images. Given this goal difference, we could not establish any relevant comparison criterion between these operations on eyeglasses. Even for the previous eyeglasses contour extraction methods, the metrics used for deciding the true positive and false positive cases differ from one work to another or are even not specified. Moreover, the testing databases used for the validation of these algorithms are not publicly accessible, standard datasets, and therefore, we could not test our implementation on the same images to compare the algorithms' performances. Table 3 illustrates the comparison between the state-of-the-art papers on eyeglasses detection, contour extraction and removal and our method, as well as the performance rate of these algorithms, even though they have been tested on different datasets. Based on this comparison, we can conclude that the proposed algorithm for eyeglasses contour extraction yields very good results, which are at least comparable with the ones obtained in the previous reported implementations, and in some cases, even better. The eyeglasses detection algorithm is targeted for optometric measurement systems, such as [1,2,4]. Therefore, we did not intend to decide whether the eyeglasses are present or not in the image, but to extract an as precise as possible contour of the eyeglasses lenses. Our algorithm assumes that the eyeglasses are present in the input image. In such systems, in order to perform an accurate measurement, the patient should stay as still as possible, with the head in a vertical position. Therefore, we did not necessarily focus on facial pose variation: the algorithm is designed to work on frontal facial images taken under these conditions and not for profile face images or for images under extreme face variations. However, we took into consideration the heads horizontal tilt angle, as most persons' natural standing pose implies a certain (horizontal) tilt of the head. Both the eye detection algorithm and the eyeglasses detection algorithm perform well if the head horizontal tilt angle is less than 10 degrees: when searching for the eyes and the eyeglasses, we imposed a condition on the maximum slope between the left and the right eye/eyeglasses lens. If the head horizontal tilt angle is bigger than 10 degrees, the image is no longer useful for the measurement process, because it induces aberrations for which they can no longer be compensated.
{"url":"http://www.mdpi.com/1424-8220/13/10/13638/xml","timestamp":"2014-04-21T05:55:14Z","content_type":null,"content_length":"85654","record_id":"<urn:uuid:ea2100bb-fc68-4a26-854c-6245808129d5>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00091-ip-10-147-4-33.ec2.internal.warc.gz"}
Fachbereich Mathematik 30 search hits On strategies and implementations for computations of free resolutions (1996) Thomas Siebert Introducing Reduction to Polycyclic Group Rings - A Comparison of Methods (1996) Birgit Reinert t is well-known that for the integral group ring of a polycyclic group several decision problems are decidable. In this paper a technique to solve themembership problem for right ideals originating from Baumslag, Cannonito and Miller and studied by Sims is outlined. We want to analyze, how thesedecision methods are related to Gröbner bases. Therefore, we define effective reduction for group rings over Abelian groups, nilpotent groups and moregeneral polycyclic groups. Using these reductions we present generalizations of Buchberger's Gröbner basis method by giving an appropriate definition of"Gröbner bases" in the respective setting and by characterizing them using concepts of saturation and s-polynomials. Structure and Construction of Instanton Bundles on P3 (1996) Thomas Nüßler Heuristics for the K-Cardinality Tree and Subgraph Problems (1996) Matthias Ehrgott Horst. W. Hamacher J. Freitag F. Maffioli In this paper we consider the problem of finding in a given graph a minimal weight subtree of connected subgraph, which has a given number of edges. These NP-hard combinatorial optimization problems have various applications in the oil industry, in facility layout and graph partitioning. We will present different heuristic approaches based on spanning tree and shortest path methods and on an exact algorithm solving the problem in polynomial time if the underlying graph is a tree. Both the edge- and node weighted case are investigated and extensive numerical results on the behaviour of the heuristics compared to optimal solutions are presented. The best heuristic yielded results within an error margin of less than one percent from optimality for most cases. In a large percentage of tests even optimal solutions have been found. Spherical Wavelet Transform and its Discretization (1996) Willi Freeden U. Windheuser A continuous version of spherical multiresolution is described, starting from continuous wavelet transform on the sphere. Scale discretization enables us to construct spherical counterparts to Daubechies wavelets and wavelet packets (known from Euclidean theory). Essential tool is the theory of singular integrals on the sphere. It is shown that singular integral operators forming a semigroup of contraction operators of class (Co) (like Abel-Poisson or Gauß-Weierstraß operators) lead in canonical way to (pyramidal) algorithms. An Adaptive Hierarchical Approximation Method on the Sphere Using Axisymmetric Locally Supported Basis Functions (1996) Willi Freeden J. Fröhhlich R. Brand The paper discusses the approximation of scattered data on the sphere which is one of the major tasks in geomathematics. Starting from the discretization of singular integrals on the sphere the authors devise a simple approximation method that employs locally supported spherical polynomials and does not require equidistributed grids. It is the basis for a hierarchical approximation algorithm using differently scaled basis functions, adaptivity and error control. The method is applied to two examples one of which is a digital terrain model of Australia. Deformation Analysis Using Navier Spline Interpolation (1996) Willi Freeden E. Groten Michael Schreiner W. Söhhne M. Tüccks The static deformation of the surface of the earth caused by surface pressure like the water load of an ocean or an artificial lake is discussed. First a brief mention is made on the solution of the Boussenesq problem for an infinite halfspace with the elastic medium to be assumed as homogeneous and isotropic. Then the elastic response for realistic earth models is determinied by spline interpolation using Navier splines. Major emphasis is on the derteminination of the elastic field caused by water loads from surface tractions on the (real) earth" s surface. Finally the elastic deflection of an artificial lake assuming a homogeneous isotropic crust is compared for both evaluation methods. On the Vanishing Displacement Current Limit for Time-Harmonic Maxwell Equations (1996) P. Quell M. Reißligeel This paper considers a transmission boundary-value problem for the time-harmonic Maxwell equations neglecting displacement currents which is frequently used for the numerical computation of eddy-currents. Across material boundaries the tangential components of the magnetic field H and the normal component of the magnetization müH are assumed to be continuous. this problem admits a hyperplane of solutions if the domains under consideration are multiply connected. Using integral equation methods and singular perturbation theory it is shown that this hyperplane contains a unique point which is the limit of the classical electromagnetic transmission boundary-value problem for vanishing displacement currents. Considering the convergence proof, a simple contructive criterion how to select this solution is immediately derived. The C Programmes for "Numerical Methods (Programmes and Implementation)" (1996) Michael Schreiner Gradiometry - an Inverse Problem in Modern Satellite Geodesy (1996) Willi Freeden F. Schneider Michael Schreiner Satellite gradiometry and its instrumentation is an ultra-sensitive detection technique of the space gravitational gradient (i.e. the Hesse tensor of the gravitational potential). Gradeometry will be of great significance in inertial navigation, gravity survey, geodynamics and earthquake prediction research. In this paper, satellite gradiometry formulated as an inverse problem of satellite geodesy is discussed from two mathematical aspects: Firstly, satellite gradiometry is considered as a continuous problem of harmonic downward continuation. The space-borne gravity gradients are assumed to be known continuously over the satellite (orbit) surface. Our purpose is to specify sufficient conditions under which uniqueness and existence can be guaranteed. It is shown that, in a spherical context, uniqueness results are obtainable by decomposition of the Hesse matrix in terms of tensor spherical harmonics. In particular, the gravitational potential is proved to be uniquely determined if second order radial derivatives are prescribed at satellite height. This information leads us to a reformulation of satellite gradiometry as a (Fredholm) pseudodifferential equation of first kind. Secondly, for a numerical realization, we assume the gravitational gradients to be known for a finite number of discrete points. The discrete problem is dealt with classical regularization methods, based on filtering techniques by means of spherical wavelets. A spherical singular integral-like approach to regularization methods is established, regularization wavelets are developed which allow the regularization in form of a multiresolution analysis. Moreover, a combined spherical harmonic and spherical regularization wavelet solution is derived as an appropriate tool in future (global and local) high-presision resolution of the earth" s gravitational potential.
{"url":"https://kluedo.ub.uni-kl.de/solrsearch/index/search/searchtype/collection/id/15997/start/0/rows/10/yearfq/1996/sortfield/year/sortorder/desc","timestamp":"2014-04-18T11:22:11Z","content_type":null,"content_length":"44738","record_id":"<urn:uuid:53c004bd-b397-44b0-b0fb-325cbcf806da>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00276-ip-10-147-4-33.ec2.internal.warc.gz"}
Real Functions in One Variable - Integrals... ISBN: 978-87-7681-238-6 1 edition Pages : 154 Price: Free Download for FREE in 4 easy steps... We are terribly sorry, but in order to download our books or watch our videos, you will need a browser that allows JavaScript. After entering your email address, a confirmation email will be sent to your inbox. Please approve this email to receive our weekly eBook update. We will not share your personal information with any third party. You can also read this in Bookboon.com Premium This series consists of six book on the elementary part of the theory of real functions in one variable. 300+ Business books exclusively in our Premium eReader • No adverts • Advanced features • Personal library More about Premium Users who viewed this item also viewed About the book This series consists of six book on the elementary part of the theory of real functions in one variable. It is basic in the sense that Mathematics is the language of Physics. The emhasis is laid on worked exammples, while the mathematical theory is only briefly sketched, almost without proofs. The reader is referred to the usual textbooks. The most commonly used formulæ are included in each book as a separate appendix. In this volume I present some examples of Integrals, cf. also Calculus 1a, Functions of One Variable. Since my aim also has been to demonstrate some solution strategy I have as far as possible structured the examples according to the following form A Awareness, i.e. a short description of what is the problem. D Decision, i.e. a reflection over what should be done with the problem. I Implementation, i.e. where all the calculations are made. C Control, i.e. a test of the result. This is an ideal form of a general procedure of solution. It can be used in any situation and it is not linked to Mathematics alone. I learned it many years ago in the Theory of Telecommunication in a situation which did not contain Mathematics at all. The student is recommended to use it also in other disciplines. One is used to from high school immediately to proceed to I. Implementation. However, examples and problems at university level are often so complicated that it in general will be a good investment also to spend some time on the first two points above in order to be absolutely certain of what to do in a particular case. Note that the first three points, ADI, can always be performed. This is unfortunately not the case with C Control, because it from now on may be difficult, if possible, to check one’s solution. It is only an extra securing whenever it is possible, but we cannot include it always in our solution form above. It is my hope that these examples, of which many are treated in more ways to show that the solutions procedures are not unique, may be of some inspiration for the students who have just started their studies at the universities. Finally, even if I have tried to write as careful as possible, I doubt that all errors have been removed. I hope that the reader will forgive me the unavoidable errors. Leif Mejlbro 1. Partial integration 2. Integration by simple substitutes 3. Integration by advanced substitutions 4. Decomposition 5. Integration by decomposition 6. Trigonometric integrals 7. MAPLE programmes 8. Moment of inertia 9. Mathematical models About the Author Leif Mejlbro was educated as a mathematician at the University of Copenhagen, where he wrote his thesis on Linear Partial Differential Operators and Distributions. Shortly after he obtained a position at the Technical University of Denmark, where he remained until his retirement in 2003. He has twice been on leave, first time one year at the Swedish Academy, Stockholm, and second time at the Copenhagen Telephone Company, now part of the Danish Telecommunication Company, in both places doing research. At the Technical University of Denmark he has during more than three decades given lectures in such various mathematical subjects as Elementary Calculus, Complex Functions Theory, Functional Analysis, Laplace Transform, Special Functions, Probability Theory and Distribution Theory, as well as some courses where Calculus and various Engineering Sciences were merged into a bigger course, where the lecturers had to cooperate in spite of their different background. He has written textbooks to many of the above courses. His research in Measure Theory and Complex Functions Theory is too advanced to be of interest for more than just a few specialist, so it is not mentioned here. It must, however, be admitted that the philosophy of Measure Theory has deeply in uenced his thinking also in all the other mathematical topics mentioned above. After he retired he has been working as a consultant for engineering companies { at the latest for the Femern Belt Consortium, setting up some models for chloride penetration into concrete and giving some easy solution procedures for these models which can be applied straightforward without being an expert in Mathematics. Also, he has written a series of books on some of the topics mentioned above for the publisher Ventus/Bookboon. Embed Frame - Terms of Use The embed frame is free to use for private persons, universities and schools. It is not allowed to be used by any company for commercial purposes unless it is for media coverage. You may not modify, build upon, or block any portion or functionality of the embed frame, including but not limited to links back to the bookboon.com website. The Embed frame may not be used as part of a commercial business offering. The embed frame is intended for private people who want to share eBooks on their website or blog, professors or teaching professionals who want to make an eBook available directly on their page, and media, journalists or bloggers who wants to discuss a given eBook If you are in doubt about whether you can implement the embed frame, you are welcome to contact Thomas Buus Madsen on tbm@bookboon.com and seek permission.
{"url":"http://bookboon.com/en/calculus-analyse-1c-3-ebook","timestamp":"2014-04-18T00:12:46Z","content_type":null,"content_length":"43581","record_id":"<urn:uuid:f8599a01-f9ba-4312-8b18-4d5cdf6c4cd4>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00574-ip-10-147-4-33.ec2.internal.warc.gz"}
Network re-encoding method and device for re-encoding encoded symbols to be transmitted to communication equipments Patent application title: Network re-encoding method and device for re-encoding encoded symbols to be transmitted to communication equipments Sign up to receive free email alerts when patent applications with chosen keywords are published SIGN UP A network re-encoding device is intended for re-encoding encoded symbols to be transmitted to at least one communication equipment connected to a network. This network re-encoding device comprises a re-encoding means arranged for re-encoding output nodes, defined by LT code symbols representative of encoded symbols and representative respectively of the results of XOR Boolean operation between input nodes defining decoded symbols whose values have to be discovered and to which they are linked into a Tanner graph, by combining chosen input node and/or output node having known values, in order to produce new LT code symbols defining generated output nodes ready to be transmitted. Network re-encoding method for re-encoding encoded symbols to be transmitted to at least one communication equipment (CE) connected to a network, comprising the step of re-encoding output nodes, defined by LT code symbols representative of encoded symbols and representative respectively of the results of XOR Boolean operation(s) between input nodes defining decoded symbols whose values have to be discovered and to which they are linked into a Tanner graph, by combining chosen input node(s) and/or output node(s) having known values and called partially decoded nodes, in order to produce new LT code symbols defining generated output nodes ready to be transmitted;wherein during said combining step one combines said partially decoded nodes by means of XOR Boolean operations therebetween;and wherein during said combining step one starts by determining among degrees of a current distribution of degrees of the generated output nodes the one having the highest difference with a same degree of a first chosen reference distribution, and which allows to produce a generated output node having this degree, then one produces a generated output node having this degree by combining at least one partially decoded node. Method according to claim 1, wherein in said combining step one determines a degree allowing to produce a generated output node having this degree by means of a chosen heuristic. Method according to claim 1, wherein, after having produced said generated output node, one compares the current distribution of said input nodes with a second chosen reference distribution in order to determine if at least one input node has been too much used to produce generated output nodes and in the affirmative if this input node allows to let unchanged the degree of said generated output node when combined to input nodes linked to said generated output node, then one replaces at least one of said too much used input nodes into said generated output node by an input node having been too rarely used, in order to normalize said current distribution of said input nodes. Method according to claim 3, wherein one combines said generated output node with an output node comprising said too much used input node and having a degree equal to 2. 5. Method according to claim 1, wherein in said combining step after having produced said generated output node ready to be transmitted one updates said current distribution of degrees of the generated output nodes and said current distribution of the input nodes. Method according to claim 2, wherein in said combining step said heuristic consists in checking that the degree of the node to be generated is lower or equal to a number of covered input nodes comprising the decoded input nodes and the input nodes having at least one neighbor. Method according to claim 6, wherein said heuristic further consists in checking if the condition d ≦ 1 d k n ( k ) ##EQU00004## is verified, where d is the determined degree of the output node to be generated and n(k) is the number of partially decoded nodes of degree k smaller than d that are known. Network re-encoding device for re-encoding encoded symbols to be transmitted to at least one communication equipment connected to a network, comprising a re-encoding means arranged for re-encoding output nodes, defined by LT code symbols representative of encoded symbols and representative respectively of the results of XOR Boolean operation between input nodes defining decoded symbols whose values have to be discovered and to which they are linked into a Tanner graph, by combining chosen input node and/or output node having known values and called partially decoded nodes, in order to produce new LT code symbols defining generated output nodes ready to be transmitted;wherein said re-encoding means is arranged for combining said partially decoded nodes by carrying out XOR Boolean operations therebetween;wherein said re-encoding means is arranged for determining among degrees of a current distribution of degrees of the generated output nodes the one having the highest difference with a same degree of a first chosen reference distribution, and which allows to produce a generated output node having this degree, then for combining at least one partially decoded node to produce a generated output node having this degree. Network re-encoding device according to claim 8, wherein said re-encoding means is arranged for applying a chosen heuristic to determine a degree allowing to produce a generated output node having this degree. Network re-encoding device according to claim 8, wherein said re-encoding means is arranged, after having produced said generated output node, for comparing the current distribution of said input nodes with a second chosen reference distribution in order to determine if at least one input node has been too much used to produce generated output nodes and in the affirmative if this input node allows to let unchanged the degree of said generated output node when combined to input nodes linked to said generated output node, then for replacing at least one of said too much used input nodes into said generated output node by an input node having been too rarely used, in order to normalize said current distribution of said input nodes. Network re-encoding device according to claim 10, wherein said re-encoding means is arranged for combining said generated output node with an output node comprising said too much used input node and having a degree equal to 2. 12. Network re-encoding device according to claim 8, wherein said re-encoding means is arranged, after having produced said generated output node ready to be transmitted, for updating said current distribution of degrees of the generated output nodes and said current distribution of the input nodes. Network re-encoding device according to claim 8, wherein said re-encoding means is arranged for applying a heuristic consisting in checking that the degree of the node to be generated is lower or equal to a number of covered input nodes comprising the decoded input nodes and the input nodes having at least one neighbor. Network re-encoding device according to claim 13, wherein said re-encoding means is arranged for applying a heuristic further consisting in checking if the condition d ≦ 1 d k n ( k ) ##EQU00005## is verified, where d is the determined degree of the output node to be generated and n(k) is the number of partially decoded nodes of degree k smaller than d that are known. TECHNICAL FIELD [0001] The present invention relates to symbol data processing and more precisely to decoding of received encoded symbol and encoding of symbol data to be transmitted to communication equipments connected to a network. One means here by "symbol" a block or packet of data. BACKGROUND OF THE INVENTION [0003] As it is known by the man skilled in the art, it may occur that data be lost or corrupted during their transmission between communication equipments. In this case the receiver may require from the sender that it transmits again the lost or corrupted data, or two copies of the data may be initially transmitted. Another soliton consists in encoding the data to be transmitted by means of codes and more precisely erasure correcting codes. In this case, it is not necessary to wait having received every data of a content to be able to decode them, because only a (sufficient) part of these content data is required to rebuild all the data transmitted by the sender. Among the encoding method the one named "network coding" offers several advantages. This encoding method has been proposed by Rudolf Ahlswede and al, in "Network information flow", IEEE Transactions On Information Theory 2000. It may be used in wireless and/or internet networks, for instance. Network coding allows internal (or intermediate) routers of a network to send combinations of data of the type c=f(a,b) when it receives data a and b, instead of only forwarding the received data a or b. So, network coding allows reaching a maximal flow over a network whereas routing appears to be not powerful enough to reach a maximal flow in some networks. But this requires that routers be capable of performing computation on the received data to encode them before transmitting them, and that each final receiver be capable of decoding the encoded data it receives. As computing the set of functions f( ) that allows to reach a maximal flow is proven to be NP-Hard, some probabilistic schemes have been proposed. For instance, a scheme using rateless random linear network codes (RLNC) has been proposed by T. Ho and al, in "A random linear network coding approach to multicast", IEEE Transaction on Information Theory 2006. This scheme has several advantages: it is rather simple to implement and can be fully distributed. According to this scheme, each router of a network forwards a random linear combination of the data it receives (inputs) to the other routers of its network. The receiver also receives a matrix of coefficient and data that allow it to decode the received data through a Gauss or Gauss-Jordan elimination when this matrix is invertible. Network coding allowing to generate symbols independently, infinite is streams of symbols can be generated. However, random linear network codes involve complex computations not only during encoding but also during decoding. Moreover, as the RLNCs operate on the Gallois Field GF(2 ), they are not suitable for encoding and decoding over general purpose processors that lack arithmetic on finite fields. Another scheme, using Raptor codes, has been proposed by N. Thomos and P. Frossard, in "Collaborative video streaming with Raptor network coding", ICME 2008. This scheme introduces a re-encoding method consisting in combining encoded symbols of a pair by means of a XOR Boolean operation. But, this scheme also requires Gaussian elimination during decoding, and therefore a raptor network coding loses its advantages in term of performance and properties. Another scheme has also been proposed by Puducheri S. et al. in "Coding Schemes for an erasure relay channel" Proc. IEEE International Symposium on Information Theory, ISIT 2007, 24 Jun. 2007, pages SUMMARY OF THE INVENTION [0010] So the object of this invention is to propose network re-encoding method and device using rateless codes named Luby Transform codes (or LT codes) whose structure allows the use of low complexity encoders and decoders. It is recalled that LT codes are parity codes which have been proposed by Michael Luby in IEEE Symposium on Foundations of Computer Science 2002. LT codes can be represented by a Tanner graph establishing a correspondence between input nodes and output nodes. Each output node of the tanner graph is an encoded symbol (or LT code symbol (or else LT Codeword)) to be decoded and which has been received from the network and is linked (through edges) to one or more non-encoded (or decoded) to symbols to discover, called input nodes, and is representative of the result of XOR Boolean operation(s) between these input nodes. So, when a decoder of a communication equipment (such as a router or a user terminal, for instance) receives encoded symbols (i.e. LT codes symbols) with data representative of their respective links, these encoded symbols constitute the output nodes of a Tanner graph that must be decoded, for instance by means of a "belief-propagation (BP) decoding method", to produce non-encoded (or decoded) symbols constituting input nodes of this Tanner graph. The number of links (or edges) between an output node and input nodes defines the degree of this output node. So, it is possible to build the distribution of degrees of the output nodes of a Tanner graph. The ability of LT codes to be decoded efficiently by means of a belief-propagation decoding method relies on their particular distribution of degrees. This efficiency is all the more important as the distribution of degrees of the LT codes is the so-called "robust soliton" distribution. The invention provides a network re-encoding method, intended for re-encoding encoded symbols (or data) to be transmitted to at least one communication equipment of a network, and comprising the step of re-encoding output nodes, defined by LT code symbols representative of encoded symbols and representative respectively of the results of XOR Boolean operation(s) between input nodes defining decoded symbols whose values have to be discovered and to which they are linked into a Tanner graph, by combining chosen input node(s) and/or output node(s) having known values and called partially decoded nodes, in order to produce new LT codes defining generated output nodes ready to be transmitted. The method according to the invention may include additional characteristics considered separately or combined, and notably: one may combine the partially decoded nodes by means of XOR Boolean operations therebetween; one may start by determining among degrees of a current distribution of degrees of the generated output nodes the one having the highest difference with a same degree of a first chosen reference distribution, and which allows to produce a generated output node having this degree, then one may produce a generated output node having this degree by combining at least one partially decoded node (i.e. a node which is temporarily generated during internal re-encoding steps); one may determine a degree allowing to produce a generated output node having this degree by means of a chosen heuristic; after having produced the generated output node, one may compare the current distribution of the input nodes with a second chosen reference distribution in order to determine if at least one input node has been too much used to produce generated output nodes and in the affirmative if this input node allows to let unchanged the degree of the generated output node when combined to input nodes linked to this generated output node, then one may replace at least one of these too much used input nodes into the generated output node by an input node having been too rarely used, in order to normalize the current distribution of the input nodes; one may combine the generated output node with an output node which comprises the too much used input node and having a degree equal to 2; after having produced the generated output node ready to be transmitted, one may update the current distribution of degrees of the generated output nodes and the current distribution of the input the heuristic may consist in checking that the degree of the node to be generated is lower or equal to a number of covered input nodes comprising the decoded input nodes and the input nodes having at least one neighbor (i.e. an output node to which it is linked in the Tanner graph); the heuristic may further consist in checking if the condition d ≦ 1 d k n ( k ) ##EQU00001## is verified , where d is the determined degree of the output node to be generated and n(k) is the number of partially decoded nodes of degree k smaller than d that are known. The invention also provides a network re-encoding device, intended for re-encoding encoded symbols (or data) to be transmitted to at least one communication equipment of a network, and comprising a re-encoding means arranged for re-encoding output nodes, defined by LT code symbols representative of encoded symbols and representative respectively of the results of XOR Boolean operation(s) between input nodes defining decoded symbols whose values have to be discovered and to which they are linked into a Tanner graph, by combining chosen input node(s) and/or output node(s) having known values and called partially decoded nodes, in order to produce new LT code symbols defining generated output nodes ready to be transmitted. The network re-encoding device according to the invention may include additional characteristics considered separately or combined, and notably: its re-encoding means may be arranged for combining the partially decoded nodes by carrying out XOR Boolean operations therebetween; its re-encoding means may be arranged for determining among degrees of a current distribution of degrees of the generated output nodes the one having the highest difference with a same degree of a first chosen reference distribution, and which allows to produce a generated output node having this degree, then for combining at least one partially decoded node to produce a generated output node having this degree; its re-encoding means may be arranged for applying a chosen heuristic to determine a degree allowing to produce a generated output node having this degree; its re-encoding means may be arranged, after having produced the generated output node, for comparing the current distribution of the input nodes with a second chosen reference distribution in order to determine if at least one input node has been too much used to produce generated output nodes and in the affirmative if this input node allows to let unchanged the degree of this generated output node when combined to input nodes linked to this generated output node, then for replacing at least one of these too much used input nodes into the generated output node by an input node having been too rarely used, in order to normalize the current distribution of the input nodes; its re-encoding means may be arranged for combining the generated output node with an output node which comprises the too much used input node and having a degree equal to 2; its re-encoding means may be arranged for combining the generated output node with an output node which comprises the two too much used input nodes having a degree equal to 1; its re-encoding means may be arranged, after having produced the generated output node ready to be transmitted, for updating the current distribution of degrees of the generated output nodes and the current distribution of the input nodes; its re-encoding means may be arranged for applying a heuristic consisting in checking that the degree of the node to be generated is lower or equal to a number of covered input nodes comprising the decoded input nodes and the input nodes having at least one neighbor; its re-encoding means may be arranged for applying a heuristic further consisting in checking if the condition d ≦ 1 d k n ( k ) ##EQU00002## is verified , where d is the determined degree of the output node to be generated and n(k) is the number of partially decoded nodes of degree k smaller than d that are known. The invention also provides a decoder, intended for equipping a communication equipment that can be connected to a network, and comprising a decoding means arranged for: applying a chosen decoding method to received output nodes, defined by LT code symbols representative of encoded symbols, and representative respectively of the results of XOR Boolean operation(s) between input nodes defining decoded symbols whose values have to be discovered and to which they are linked into a Tanner graph, in order to get their respective linked input nodes, and storing data defining the input nodes and the output nodes in correspondence with a degree representative of the number of links these input nodes and output nodes have with other output nodes and input nodes of the same Tanner graph (in other words it maintains an index allowing a random access to nodes of chosen degrees). This decoder may further comprise a network re-encoding device of the type presented above and coupled to its decoding means. This decoder may also further comprise a detection means arranged, in the presence of an output node to be decoded, for determining if it has been previously received by its decoding means, and, in the affirmative, for generating a message signalling that it has been previously received (and possibly previously obtained during a decoding step) and does not have to be inserted again into the Tanner graph. BRIEF DESCRIPTION OF THE FIGURES [0038] Other features and advantages of the invention will become apparent on examining the detailed specifications hereafter and the appended drawings, wherein: [0039]FIG. 1 schematically and functionally illustrates three user communication equipments connected therebetween through a network and each comprising an encoder and an example of embodiment of a decoder according to the invention, [0040]FIG. 2 schematically illustrates a Tanner graph of a decoder according to the invention, FIG. 3 is a graph illustrating an example of robust soliton distribution of degrees (black) and an example of actual (or current) computed distribution of degrees of generated output nodes (grey), FIG. 4 is a graph illustrating an example of uniform distribution of input nodes (horizontal line) and an example of current (presented) distribution of generated input nodes (black), and [0043]FIG. 5 schematically illustrates method sub steps allowing to refine a generated output node of degree 4 with an output node of degree 2. DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENT [0044] The appended drawings may serve not only to complete the invention, but also to contribute to its definition, if need be. The invention aims at offering a network re-encoding method and a corresponding network re-encoding device (D), intended for re-encoding LT code symbols in order to allow the use of low complexity encoders and decoders into communication equipments that are connected therebetween through a network. In the following description it will be considered that the network is a mobile (or cellular or else radio) communication network (CN) in which communication equipments (CEi) are capable of transmitting contents therebetween at least in a broadcast or ad-hoc mode. But the invention is not limited to this type of network. Indeed, the network may be also of the wired (or fixed) type, such as a DSL network or an optical fiber network or else a cable network, notably if it allows communication equipments to communicate therebetween in a peer-to-peer (P2P) mode. Moreover, the communication equipments (CEi) may be of any type as soon as they are capable of establishing communications therebetween. So a communication equipment (CEi) may be a router, a fixed personal computer, a laptop, a content receiver (for instance a home gateway or a set-top box (STB) located in a user's home premise), a mobile or cellular telephone, a fixed telephone, or a personal digital assistant (PDA), provided that it comprises a communication modem (or any equivalent communication means). In the following description it will be considered that the communication equipments (CEi) belong to users and are mobile telephones. In FIG. 1 only three mobile telephones CE1 to CE3 (i=1 to 3) have been illustrated, but in a mobile network much more communication equipments are usually capable of exchanging at least part of encoded contents therebetween. In the illustrated example, each mobile telephone CEi comprises an encoder ED of a classical type and a decoder DC according to the invention. But it is important to note that some communication equipments CEi, and notably those which initially provide the contents, may only comprise an encoder ED of a classical type and possibly a decoder of a classical type, while some other communication equipments CEi, and notably those which receive and forward contents, may only comprise a decoder DC according to the invention or a decoder of a classical type, adapted according to the invention, and a network re-encoding device D according to the invention. One means here by "encoder of a classical type" an encoder which is capable of encoding non-encoded (content) data (or symbol (content) data) in order to produce LT code symbols. Moreover one means here by "decoder of a classical type" a decoder capable of decoding LT code symbols produced by an encoder of a classical type ED or by a network re-encoding device D according to the invention, by means of a known and classical decoding method. More one means here by "decoder according to the invention" a decoder of a new type, i.e. capable of decoding LT code symbols by means of a known and classical decoding method, and adapted to simplify the working of a network re-encoding device D according to the invention to which it is locally coupled or that it comprises. In the following description it will be considered, as non limiting example, that the decoding method is the so-called "belief-propagation (BP) decoding method". But the invention is not limited to this decoding As illustrated a network re-encoding device D according to the invention comprises a re-encoding module RM which is arranged for accessing to the associated decoder DC, and notably to its internal state (and therefore to its Tanner graph and associated data), for re-encoding symbols previously received by its mobile telephone CEi from one or more other mobile telephones CEi'. As mentioned before these LT code symbols are representative of encoded symbols. They are transmitted into blocks of data with associated data representative of their respective links with non-coded symbol data having known values. An LT code symbol is the result of the combination of the values of one or more symbol data, and more precisely of XOR Boolean operation(s) between symbol data. In other words the links of an output node designate the non-encoded symbols that have been combined by means of XOR to Boolean operation(s) to produce it. So, when a decoder DC of a communication equipment CEi receives encoded symbols (i.e. LT code symbols) with data representative of their respective links, it has to decode these encoded symbols with the associated data to recover the corresponding non-encoded symbols. For this purpose the decoding module DDM of the decoder DC feeds a Tanner graph with the received LT code symbols (or encoded symbols) which then define output nodes ON. In the same time, the non-encoded symbols to be recovered define input nodes IN of the Tanner graph which are linked to the associated output nodes. A limited example of a Tanner graph is illustrated in FIG. 2 . In this example, ten output nodes ON (a-j) are linked to one or more input nodes IN of a group of eight (A-H). More precisely: the output node a is linked to input nodes A, B and C, and then is the result of their combination by means of two XOR Boolean operations (a=A⊕B⊕C), the output node b is linked to input node B, and then is equal to B, the output node c is linked to input nodes D and E, and then is the result of their combination by means of one XOR Boolean operation (c=D⊕E), the output node d is linked to input nodes A and F, and then is the result of their combination by means of one XOR Boolean operation (d=A⊕F), the output node e is linked to input nodes E and H, and then is the result of their combination by means of one XOR Boolean operation (e=E⊕H), the output node f is linked to input nodes F and G, and then is the result of their combination by means of one XOR Boolean operation (f=F⊕G), the output node g is linked to input nodes B and G, and then is the result of their combination by means of one XOR Boolean operation (g=B⊕G), the output node h is linked to input nodes D, E and F, and then is the result of their combination by means of two XOR Boolean operations (h=D⊕E⊕F), the output node i is linked to input nodes G and H, and then is the result of their combination by means of one XOR Boolean operation (i=G⊕H), and the output node j is linked to input node C, and then is equal to C. It is important to note that the number of links (or edges) between an output node ON and input nodes IN defines the degree of this output node ON. So, in the example mentioned above: the degree of a is equal to 3, the degree of b is equal to 1, the degree of c is equal to 2, the degree of d is equal to 2, the degree of e is equal to 2, the degree of f is equal to 2, the degree of g is equal to 2, the degree of h is equal to 3, the degree of i is equal to 2, and the degree of j is equal to 1. By definition in the following description: an "input node" is a node representing original data in the Tanner graph, a "decoded input node" is an input node whose value is known. It will never have any links (or edges) and will never be present in output nodes ON of the Tanner graph. A non-decoded input node does not have a known value and cannot be used while re-encoding, a "covered input node" is either an input node that is decoded or an input node with at least one neighbor (i.e. an output node to which it is linked in the Tanner graph), an "output node" is a node representing encoded symbols. It may have been received or may result from decoding steps, a "partially decoded node" (or "known node") is either a decoded input node or an output node with a known value, a "generated output node" is a node generated by a network re-encoding device D and ready to be transmitted to one or more communication equipments CEi, and a "partially generated node" is a node which is temporarily generated by a network re-encoding device D during internal re-encoding steps. The decoder DC stores the Tanner graph into a storing means, such as a memory, for instance. The re-encoding module RM is arranged for re-encoding output nodes by combining chosen partially decoded nodes (or known nodes), i.e. input node(s) and/or output node(s) having known values, in order to produce new LT code symbols defining generated output nodes ready to be transmitted. These combinations consist preferably in XOR Boolean operations between partially decoded nodes. But it could be also, for instance, a linear combination in a finite field (GF(p )). However, this requires to store a coefficient for the linear combination on all edges (or links). The network re-encoding device D according to the invention (and notably its re-encoding module RM) is preferably made of software modules, at least partly. But it could be also made of electronic circuit(s) or hardware modules, or a combination of hardware and software modules (in this case the re-encoding device D comprises also a software interface allowing interworking between the hardware and software modules). In case where it is exclusively made of software modules it can be stored in a memory of the communication equipment CEi (for instance in its decoder DC), or in any computer software product, such as a CD-ROM, for instance, which can be read by a communication equipment CEi. To re-encode output nodes ON the re-encoding module RM implements a network re-encoding method which is described hereafter. For instance, the method comprises a first step consisting in determining among degrees dj of the current (or actual) distribution of degrees of the output nodes ON, which have been generated up to now, the one having the highest difference with the same degree of a first chosen reference distribution, and which allows to produce a generated output node having this degree. This first chosen reference distribution can be the so-called robust soliton distribution. It is of interest to note that the distributions are over discrete sets. An example of robust soliton distribution of degrees of generated output nodes (in black) and an example of actual (or current) distribution of degrees of generated output nodes (in grey) are both illustrated in the graph of FIG. 3. For instance, the re-encoding module RM may first compute the difference between the current distribution and the first chosen reference distribution for each degree. Then it can sort the results. In the example illustrated in FIG. 3, one may observe that the highest difference between the same degree of the two distributions appears on the degree equal to 4, then on the degree equal to 2, then on the degree equal to 16, then on the degree equal to 5, and so on. The current distribution of degrees of generated output nodes can be computed by the re-encoding module RM from information relative to the output nodes it has previously generated and that it stores into a storing means, such as a memory, for instance. The computation of the degree differences between the current distribution and the first chosen reference distribution aims at determining which degree of the current distribution must be used preferentially for the next output node to be generated in order to make this current distribution closer to the first chosen reference distribution. Indeed, as it is known by the man skilled in the art performances of LT codes depend on their distribution, so these performances are optimal when the current distribution of LT code symbols is maintained close to an optimal distribution, such as the robust soliton distribution. So, once the re-encoding module RM has computed the degree differences between the current distribution and the first chosen reference distribution it can sort the results to get a list of degree suggestions for enhancing the current distribution. Then, for each suggestion of degree dj, it can check if it can generate a node of degree dj. For this purpose it can use a heuristic. For instance the heuristic may consist in checking that the degree d of the output node to be generated is lower or equal to a number of covered input nodes (i.e. the decoded input nodes and the input nodes having at least one neighbor). The value of this number is kept up-to-date by the decoder DC. In order to give better results this heuristic may be completed with another condition, such as the one described hereafter. As one cannot use an output node of degree r to generate an output node of degree d smaller than r (d<r), the re-encoding module RM may check if the condition ≦ 1 d k n ( k ) ##EQU00003## is verified , where d is the determined degree of the output node to be generated and n(k) is the number of partially decoded nodes of degree k smaller than d that are known. In other word it has to check if there are enough partially decoded nodes of degree k smaller than d (k<d) to generate an output node of degree d. The above heuristic does not consider the fact that when two nodes of degree 3 are combined by means of an XOR Boolean operation, it is not sure that this will give a node of degree 6. Indeed if one combines a first node s=a⊕b⊕c with a second node t=d⊕e⊕b, one obtains a node (s⊕t) of degree 4 (s⊕t=a⊕d⊕c⊕e, because b⊕b=0). So the re-encoding module RM may use a more complex heuristic which computes an expected number of conflicts. It is important to note that during the decoding process of LT code symbols, once an encoded symbol (or input node) has been decoded, every link (or edge) it has with an output node of the Tanner graph is removed from this Tanner graph. Therefore, if an input node is decoded, it will not be involved anymore in another node and will not produce any conflict. Let n be the number of decoded input nodes, n be the number of covered input nodes, and n be the total number of partially decoded nodes. With these definitions, the number of conflicts, when a node of degree k>1 is added to a generated node of degree D , follows an hyper-geometric law of parameters (k, D , n ) whose mean C +1 is given by: Therefore, the degree D +1 of a node once a s+1-th node has been added is given by D +1, with D (it is also possible to set D =0 and to add n at the last step, but this requires to use the following definition for C +1: C In order to check if a node of degree d can be generated, one may implement the two following sub steps. A first sub step consists in checking that the degree d of the node to be generated is lower or equal to the number of covered input nodes. A second sub step consists in adding all nodes of degree k smaller or equal to the wished (or expected) degree d (k≦d) to compute the degree D (which is the resulting degree once every combinations have been carried out). If D≧d, one can deduce that it is possible to generate a node of degree d. D can be computed recursively with an algorithm consisting in checking if it is possible to generate a node of degree L. One does not aggregate all nodes in "one shot" but one only aggregates a node if this may increase the current degree. Indeed, if the nodes are aggregated in one shot the result of this aggregation may be a node of degree smaller than anyone of them (for instance, (a⊕b⊕c⊕d)⊕(a⊕b⊕c)=d). Therefore, at each iteration one only keeps the maximum of the previous value and the value once a node is added. This can be done by means of a routine such as the following one: -US-00001 D=0 For k=d to 2 For i=1 to n(k) [where n(k) is the number of known nodes of degree k] D=max(D,D+k-2*k*(D)/(n )) EndFor EndFor Return D+n ≦ L. The preceding heuristic can be improved by replacing its second sub step with another one in order to consider possible conflicts while avoiding to exceed a chosen objective. This other second sub step consists in progressively adding nodes from high degree to low degree, given that a node is added only if its degree is lower or equal to the difference between L and D. In other words this node must not exceed the remaining space in the re-encoded symbols to be generated because one does not want to generate an encoded symbols of too high degree. Therefore, if, for instance, one wants to generate a node of degree 6, one cannot add two nodes of degree 5, but one must add one node of degree 5 and one node of degree 1. This can be done by means of a is routine such as the following one: -US-00002 D=0 For k=d to 2 For i=1 to n(k) [where n(k) is the number of known nodes of degree k] D=max(D,D+k-2*E (K,D)) [where E (k,D) is equal to k*D/(n ) and corresponds to the number of conflicts one can expect when adding a node of degree k to a partially generated node of degree D, when all nodes are taken in a Tanner graph with n covered input nodes and n decoded input nodes. As there are no conflicts for k=1, a node of degree D must not have any link (or edge) with a decoded input node. Therefore, one will always add n at the last step with E (0,D) = 0.] If L ≦ D+n Then Return TRUE If k > L - D Then Break (for i=1 to n(k) loop) EndFor EndFor Return FALSE. The preceding heuristic can still be improved in order to consider possible conflicts and to avoid to exceed a chosen objective while considering conflicts. Indeed, it is possible to consider that one can add a node of degree k greater than L-D (k>L-D) and one can have a node of degree L if enough conflicts occur. This can be done by means of a routine such as the following one: -US-00003 D=0 For k=d to 2 For i=1 to n(k) [where n(k) is the number of known nodes of degree k] D=max(D,D+k-2*E (k,D)) If L ≦ D+n Then Return TRUE If k-2*E (k,D) > L - D Then Break (for i=1 to n(k) loop) EndFor EndFor Return FALSE. It is also important to note that the choice of the heuristic will have an impact on the computation cost and on the performance of the LT codes. The simpler the heuristic is, the lower the computation cost will be and the lower the performance will be. If the heuristic(s) used show(s) that an output node of degree d can be generated, the re-encoding module RM finishes the first method step by combining at least one partially decoded node to produce a generated output node having this degree d. Otherwise it tries the following degree suggestion until it finds a degree suggestion that can be satisfied. This last situation will occur because, at least, it is possible to copy one of the nodes previously received. So, it is of interest to try to generate encoded symbols having degrees with the highest deficit of generated symbols. In order to generate a node of degree d the re-encoding module RM may proceed as follows. It can start from partially decoded nodes of degree d and add partially decoded nodes until having a node of degree d. Each time it adds a partially decoded node, it gets a new resulting value which is equal to the previous value XOR the value of the partially decoded node added. Then, it adds the input node(s), contributing to the partially decoded node added, to the list of nodes contributing to the resulting node. During this node generation, the re-encoding module RM starts with degree d nodes and follows with nodes having decreasing degrees up to 1. Indeed it is preferable to first use the biggest symbols and then to use small symbols for completing a generated node in order to succeed to reach exactly the expected degree d. So, one never allows a combination to decrease the degree of a node under generation. Moreover, it is preferable to only try to add a partially decoded node if its degree is lower or equal to the difference between the expected degree d and the current degree of the node under generation. As soon as a node under generation has the expected degree d, the re-encoding module RM stops the first method step. The above described node generation can be done by means of a routine such as the following one, where L is the expected degree determined in the first part of the first method step and G is a partially generated node: -US-00004 L=Result of the first part of first method step G=O For k=d to 1 While some node N of degree k as not been tried and k ≦ L - degreeOf(G) do N = Choose a random node of degree L If degreeOf (G+N) > degreeOf(G) then G=G+N (XOR their Value, add input nodes and remove input nodes present twice) EnfIf EndWhile EndFor Return G. It is important to note that when the re-encoding module RM uses a heuristic which considers possible occurrences of conflicts, it is possible to allow the degree of the node under generation to fall temporarily. That amounts to relax one of the constraints of the node generation mechanism described above. But, one still keeps on adding a partially decoded node if its degree is lower or equal to the difference between the expected degree and the current degree of the node under generation. This variant of node generation can be done by means of a routine such as the following one: -US-00005 L=Result of the first part of the first step G=O For k=d to 1 While some node n of degree k as not been tried and k ≦ L - degreeOf(G) do N = Choose a random node of degree L G=G+L (XOR their Value, add input nodes and remove input nodes present twice) EndWhile EndFor Return G. The preceding node generation mechanism may be enhanced by relaxing the constraint consisting in only adding a partially decoded node if its degree is lower or equal to the difference between the expected degree and the current degree of the node under generation. In this variant, conflicts are still taken into account. For instance, when a node N of degree L is under generation, the re-encoding module RM may try to add a partially decoded node of degree d to a generated node of degree D only if L-D≧d-2.E (d,D). This variant of node generation is more useful if the last described heuristic is used. It can be done by means of a routine such as the following one: -US-00006 L= Result of the first part of the first step G=O For k=d to 1 While some node n of degree k as not been tried and k - 2.E ) ≦ L - degreeOf(G) and degreeOf(G) < L do N = Choose a random node of degree L G=G+N (XOR their Value, add input nodes and remove input nodes present twice) EndWhile EndFor Return G. In case where there is a lot of nodes of a low degree (such as 1, 2 or 3) it is possible to decide to only use low degree nodes to build high degree nodes instead of starting by adding nodes of the highest possible degree. For this purpose, one can, for instance, use into the last routine a "For loop" going from 1 to 1 (in order to use only nodes of degree 1), or from 2 to 1 (in order to use only nodes of degrees 1 and 2), or else from 3 to 1 (in order to use only nodes of degrees 1, 2 and 3). It is also possible, for instance, to choose a node of degree 1 to be added depending on its score (i.e. its difference with a second chosen reference distribution) instead of choosing a random node of this degree 1. This only requires that the nodes of degree 1 be sorted depending on their score which can be deduced from the second chosen reference distribution. After having produced a generated output node with the first method step, this generated output node can be refined by means of a second method step. This second method step may consist in normalizing the current distribution of the input nodes of the Tanner graph in order it could be maintained close to an optimal distribution (or second chosen reference distribution), such as a uniform distribution, for instance. For this purpose, the re-encoding module RM may compare the current distribution of the input nodes with a second chosen reference distribution in order to determine at least one input node which has been too much used to produce generated output nodes and allows to let unchanged the degree of the generated output node when combined to input nodes linked to the generated output node. The current distribution of input nodes can be computed by the re-encoding module RM from information relative to the output nodes it has previously generated and that it stores into a storing means, such as a memory, for instance. An example of uniform distribution of input nodes (horizontal line) and an example of current (presented) distribution of degrees of generated input nodes (in black) are both illustrated in the graph of FIG. 4. A uniform input node distribution is a distribution in which any input node is used the same number of time. In the example illustrated in FIG. 4, one may observe that one should avoid to send the input node A or F again and to prefer send the input node C or E instead. When the re-encoding module RM has determined at least one too much used input node, it may replace at least one of these too much used input nodes (it has determined) into the generated output node by an input node having been too rarely used, in order to normalize the current distribution of the input nodes. It is advantageous to use output nodes comprising a too much used input node and having a chosen degree equal to 2. Indeed, it is recalled that LT code symbols have more than 50% of (encoded) output nodes having a degree equal to 2 (when they are quite long). So, as x⊕x=0, if one has a partially generated node s=a⊕b⊕c⊕d and a degree 2 output node t=a⊕e, it is possible to produce a generated output node r=s⊕t=e⊕b⊕c⊕d (the input node a (here considered as too much used) has been removed and replaced by the input node e (here considered as too rarely used)). An example of method sub steps allowing to refine a generated output node δ of degree 4 (δ=A⊕B⊕C⊕D) with an output node γ of degree 2 (γ=A⊕E) is illustrated in FIG. 5 . As A⊕A=0, the refined generated output node β is still of degree 4 but the too much used input node A has been replaced by the too rarely used input node E (δ=E⊕B⊕C⊕D). To proceed to the refinement of a generated output node, the re-encoding module RM may proceed as follows. For each input node N included in the generated output node to be refined, it searches all output nodes of degree 2 that contains this input node N. Then it may search amongst these output nodes the one which is capable of best enhancing the score of the generated output node to be refined. If the enhancement is positive, it performs the combination (XOR). Then, it can go to another input node N included in the generated output node to be refined and repeat this process. If a conflict occurs (i.e. if the degree of the generated output node to be refined may be decreased, it does not perform the combination (XOR)). It is important to note that instead of simply forbidding combination to occur in case of the above mentioned conflict, it is possible to allow the combination to be performed provided one finds two input nodes of degree 1 which offer together a better score than the conflicting output node of degree 2. This allows to introduce more diversity and to use nodes of degree 1 more often in After having produced a (refined) generated output node ready to be transmitted, the re-encoding module RM may update the information representative of the current distribution of degrees of the generated output nodes and the current distribution of the input nodes of the Tanner graph, that it stores into a storing means, in order to use the updated distributions during the following re-encoding. The update of the distribution of input nodes can be done by adding one occurrence for each input node included in the generated output node. Instead of saving the really generated degrees, it is possible to save the degree one has wanted (or expected) to generate (i.e. the one determined during the first part of the first method step), in case where the real degree of the generated output node differs from the wanted degree and where the degree of the generated output node is allowed to fall during the second method step. This action is unnecessary when the heuristic is sufficiently precise. In order to ease the task of the network re-encoding device D it is advantageous to adapt the classical decoding module DDM of the decoder DC. More precisely, when a decoding module DDM has decoded received LT code symbols it updates its Tanner graph, which is used by the network re-encoding device D for searching output and input nodes depending on their respective degrees. So, the decoding module DM of the decoder DC is preferably modified for storing data defining the input nodes and output nodes in correspondence with their respective degree in its Tanner graph. These data may be stored in the form of indexes, for instance, in order to be easily accessible to the network re-encoding device D. So the decoding module DM maintains a table of indexes allowing the re-encoding module RM to choose randomly nodes of a particular degree and also to know how much nodes of each degree are present into the Tanner graph of the decoder DC. The modified decoding module DDM according to the invention is preferably made of software modules, at least partly. But it could be also made of electronic circuit(s) or hardware modules, or a combination of hardware and software modules (in this case it comprises also a software interface allowing interworking between the hardware and software modules). In case where it is exclusively made of software modules it can be stored in a is memory of the decoder DC, or in any computer software product, such as a CD-ROM, for instance, which can be read by a communication equipment CEi. Moreover, the decoder DC may be further modified in order to be capable of detecting node redundancies and therefore to simplify the task of the network re-encoding device D. Indeed, the node redundancy is increased by the network re-encoding method according to the invention, because the latter tends to produce more redundant blocks of LT code symbols which increases the decoding complexity (both computational and space) and decreases the performance of the network re-encoder device D (which uses the decoder's data). So, the invention proposes to add to a decoder DC a detection module DTM arranged, in the presence of an output node to be decoded, for determining if it has been previously received by the decoding module DDM, and, in the affirmative, for generating a message signalling that it has been previously decoded and do not have to be inserted again into the decoder Tanner graph. It is important to note that the detection module DTM can be used when an output node is received (to determine if it has been previously received and decoded) or during decoding (for instance when an output node x=a⊕b⊕c⊕d is partially decoded to produce another output node y=a⊕c⊕d). In this last case the detector module DTM can check if y is already known or not. For instance, the detection module DTM may compute quickly a key for a received output node and look into the stored data structures with fast read and insert accesses, such as binary search trees (for instance RB trees ("Red Black trees"--a kind of self balancing binary tree)), or hash tables. If it can find that another key has already been inserted, it can conclude that the same output node has already been received and that it does not have to be decoded again. So, it generates a message in order the received output node be simply dropped instead of being inserted into the Tanner graph of the decoder DC. As most of the output nodes have a low degree (equal to 2 or 3) and that the probability that two output nodes of degree 2 are the same is much higher than the probability that two output nodes of degree 4 are the same, it is possible to restrain the redundancy detection to output nodes of degree 1, 2 or 3. In this case, the detection module DTM may implement a hash method intended for computing a key h(x) for any output node x of degree 1, 2 or 3 such that h(x)=h(x')x=x'. This hash method may be as follows. First, the detection module DTM may sort the original symbols that compound an encoded symbol (or output node) in increasing order (for instance by considering their identifiers). Then, it may compute the key h(x)=s where x=a⊕b⊕c, s +1, s +1, s +1, L is the symbol (or LT code symbols) length, and i is an integer identifying a symbol (or input node) x and taking values between 0 and L-1. If a symbol is of degree 2, one simply sets s =0, and if a symbol is of degree 1, one simply sets s =0 and s This hash method requires only few (constant) additions and multiplications to compute the key h(x). Moreover, this computed key does not require a lot of storing space as one can show that its length is equal to 3 log (L+1). More, the redundancy detection method involves a low cost (for instance, for an output code of length L=65536, it will involves a 64 bit comparison, which is generally offered by general purpose processors). The detection module DTM according to the invention is preferably made of software modules, at least partly. But it could be also made of electronic circuit(s) or hardware modules, or a combination of hardware and software modules (in this case it comprises also a software interface allowing interworking between the hardware and software modules). In case where it is exclusively made of software modules it can be stored in a memory of the communication equipment CEi (for instance in its decoder DC), or in any computer software product, such as a CD-ROM, for instance, which can be read by a communication equipment CEi. The invention offers several advantages, and notably: it allows to generate low complexity network codes, it can be used in a wide range of applications because it allows to generate network codes which are more efficient in terms of computation than random linear network codes (RLNCs) as they may use belief propagation decoding instead of a Gaussian elimination and therefore avoid the use of Gallois Field (GF(2 )) arithmetic. The invention is not limited to the embodiments of network re-encoding method, network re-encoding device and decoder described above, only as examples, but it encompasses all alternative embodiments which may be considered by one skilled in the art within the scope of the claims hereafter. Patent applications by Mary-Luc Champel, Marpire FR Patent applications by Nicolas Le Scouarnec, Rennes FR Patent applications in class Adaptive coding Patent applications in all subclasses Adaptive coding User Contributions: Comment about this patent or add new information about this topic:
{"url":"http://www.faqs.org/patents/app/20100188271","timestamp":"2014-04-17T23:39:48Z","content_type":null,"content_length":"88671","record_id":"<urn:uuid:92fa5d77-e72d-4fb1-83a0-a4d837b77c09>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00472-ip-10-147-4-33.ec2.internal.warc.gz"}
SAS-L archives -- October 2007, week 3 (#275)LISTSERV at the University of Georgia Date: Wed, 17 Oct 2007 16:45:06 -0500 Reply-To: Mary <mlhoward@avalon.net> Sender: "SAS(r) Discussion" <SAS-L@LISTSERV.UGA.EDU> From: Mary <mlhoward@AVALON.NET> Subject: Re: Use all numeric variables as predictors in proc reg Comments: To: ginanicolosi@HOTMAIL.COM Content-Type: text/plain; charset="iso-8859-1" Hi, Gina, You could try something like this (not exact): proc sql; describe table dictionary.columns; proc sql; create table num_vars as select name, informat from dictionary.columns where memname='RESULTS5' and {then get a random number of observations from the data set num_vars; let's say we put it in a data set called num_vars_subset } {Then put the names in num_vars_subset into a macro variable } proc sql; select name into :random_vars from num_vars_subset; Then use your macro variable to run against the model: proc reg; model result= &random_vars; ----- Original Message ----- From: Gina Nicolosi To: SAS-L@LISTSERV.UGA.EDU Sent: Tuesday, October 16, 2007 3:44 PM Subject: Use all numeric variables as predictors in proc reg I am writing a program which randomly selects predictor variables to be included in a regression. Since the predictors during any one iteration can change, I can't write them out in the MODEL statement. Therefore, I was wondering if there was any way to specify that all numeric variables in the data set act as regressors? Kind of like putting _ALL_ or _NUMERIC_ after the equals sign when running PROC REG (but this approach doesn't work). Any guidance would be greatly
{"url":"http://listserv.uga.edu/cgi-bin/wa?A2=ind0710c&L=sas-l&F=&S=&P=31665","timestamp":"2014-04-20T00:39:41Z","content_type":null,"content_length":"10577","record_id":"<urn:uuid:8857340d-f5b6-4408-a2c5-4e70ab0408cb>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00154-ip-10-147-4-33.ec2.internal.warc.gz"}
the first resource for mathematics Dunkl operators, Bessel functions and the discriminant of a finite Coxeter group. (English) Zbl 0778.33009 Let $G$ be a finite Coxeter group acting on the Euclidean space $𝔞$ and $R$ the corresponding root system. A complex valued $G$- invariant function on $R$ is called a multiplicity function. To each multiplicity function $k$ and $\xi \in {𝔞}_{ℂ}$ one can associate a differential-difference operator ${T}_{\xi }\left(k\right)$ on ${𝔞}_{ℂ}$, the so called Dunkl operator. The Dunkl operators commute so one can extend the construction to arbitrary polynomials $\xi$ on ${𝔞}_{ℂ}^{*}$. When $\xi$ is $G$-invariant the restriction ${D}_{\xi }$ of ${T}_{\xi }\left(k\right)$ of the $G$-invariant polynomials on ${𝔞}_{ℂ}$ is a partial differential operator. Let $S\left(k\right)$ be the algebra of differential operators obtained in this way. The map $\xi \to {D}_{\xi }$ is an isomorphism $ℂ\ left[{𝔞}_{ℂ}^{*}\right]\to S\left(k\right)$ whose inverse is denoted by $\gamma \left(k\right)$. Thus for $\lambda \in {𝔞}_{ℂ}^{*}$ one has the eigenvalue problem $\left(D-\gamma \left(k\right)\left(D\right)\left(\lambda \right)\right)f=0\phantom{\rule{1.em}{0ex}}\forall D\in S\left(k\right)$ on the space of $G$-invariant polynomials on ${𝔞}_{ℂ}$. This system of equations is called the Bessel equations on $G\setminus {𝔞}_{ℂ}$. When restricted to the regular points in ${𝔞}_{ℂ}$ its local holomorphic solutions form a locally constant sheaf of vector spaces and hence one has an associated monodromy representation. The paper under review gives a detailed study of this monodromy representation and, as an application solves a conjecture of Macdonald’s concerning the evaluation of certain integrals involving the discriminant ${I}_{G}$ of $G$. 33C80 Connections of hypergeometric functions with groups and algebras 20F55 Reflection groups; Coxeter groups
{"url":"http://zbmath.org/?q=an:0778.33009&format=complete","timestamp":"2014-04-19T12:27:18Z","content_type":null,"content_length":"25397","record_id":"<urn:uuid:62e4ef63-06fe-4e32-b58c-dce6e656a6e7>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00030-ip-10-147-4-33.ec2.internal.warc.gz"}
Scientific Notation In the early 1970s scientific calculators, often referred to as "slide rule calculators", became commonplace. Most of these calculators had the ability to display from eight to ten digits, depending on the manufacturer. Scientific calculations, however, often involve numbers that contain more than eight or ten digits. To overcome the limitation of an eight-or-ten-digit display, slide rule calculators depend on scientific notation. When a number becomes too large for the calculator to display, scientific notation is used automatically. Scientific Notations are used when numbers are too large or too small in the concise form. Scientific notation is a way of writing numbers that look like this. A number with just one digit to the left of the decimal point times 10 to a power. For example, the number 5.280 could be written in scientific notation as 5.280 x 10$^{3}$ the exponent is 3 because 10$^{3}$ = 1.000 1.000 x 5.280 = 5.280 In all cases, scientific notation represents figures in terms of a number that is 1 or greater and less than 10, multiplied by 10, and raised to a power. Examples on Scientific Notation 1. 595 = 5.95 x 10$^{2}$ 2. 88.500 = 8.85 x 10$^{4}$ 3. 0.1590 = 1.590 x 10$^{-1}$ The exponent is negative since the decimal will have to go to the left to get it back to its original location. 0.000039 = 3.9 x 10$^{-4}$
{"url":"http://www.mathcaptain.com/number-sense/scientific-notation.html","timestamp":"2014-04-18T21:00:54Z","content_type":null,"content_length":"85319","record_id":"<urn:uuid:f61e493b-05e1-4310-bc78-04cbd1e392bc>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00576-ip-10-147-4-33.ec2.internal.warc.gz"}
Linear Algebra - Finding the point of intersection, first year uni maths... October 16th 2010, 02:08 PM Linear Algebra - Finding the point of intersection, first year uni maths... I have been stumped on this question having done all other exercise question and jsut can't do it. Can I have some help please... Find the point of intersection of the lines L1 and L2 if it exists: The line L1 through (5,1,3), parallel to the vector 2i + j + 2k and the line L2 through (2, -3, 0) and (0, 1, -2) October 16th 2010, 02:39 PM mr fantastic I have been stumped on this question having done all other exercise question and jsut can't do it. Can I have some help please... Find the point of intersection of the lines L1 and L2 if it exists: The line L1 through (5,1,3), parallel to the vector 2i + j + 2k and the line L2 through (2, -3, 0) and (0, 1, -2) Start by getting the parametric equations of the two lines.
{"url":"http://mathhelpforum.com/advanced-algebra/159862-linear-algebra-finding-point-intersection-first-year-uni-maths-print.html","timestamp":"2014-04-24T16:14:39Z","content_type":null,"content_length":"4633","record_id":"<urn:uuid:be71e812-d705-4a43-b1b0-9295e1413533>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00631-ip-10-147-4-33.ec2.internal.warc.gz"}
Paramus Trigonometry Tutor Dear Student, I am a medical doctor who will be working at NYP - Weill Cornell starting June. My passion has always been to help people - why is why I chose medicine as a career. Just as rewarding is helping students reach there academic goals. 29 Subjects: including trigonometry, reading, statistics, biology ...I have also recently passed the Praxis II Exam in Math Content Knowledge while earning Recognition of Excellence for scoring in the top 15 percent over the last 5 years. My most valuable quality, however, is my ability to relate to students of all ages, and make even the most difficult subjects ... 22 Subjects: including trigonometry, reading, English, chemistry ...In addition, I try to foster a learning environment that motivates and builds success. I take pleasure in showing students that with appropriate instruction and a little hard work they are capable of significantly more than they imagined. My goal is to eventually get to a point where a student can be self-motivated to tackle any problem they come up against. 12 Subjects: including trigonometry, calculus, geometry, statistics Hello my name is Andres. I was a language teacher in my native country teaching English as a second language for native students and Spanish as a second language for foreign students. I am currently finishing my second major in engineering science. 9 Subjects: including trigonometry, Spanish, calculus, geometry ...My name is Lawrence and I would like to teach you math! Since 2004, I have been tutoring students in mathematics one-on-one. My approach to mathematics tutoring is creative and 9 Subjects: including trigonometry, calculus, geometry, algebra 1
{"url":"http://www.purplemath.com/paramus_nj_trigonometry_tutors.php","timestamp":"2014-04-18T18:39:56Z","content_type":null,"content_length":"23950","record_id":"<urn:uuid:9edf1ab0-dacd-4b70-8075-543fb6541810>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00608-ip-10-147-4-33.ec2.internal.warc.gz"}
st: marginal effects Notice: On March 31, it was announced that Statalist is moving from an email list to a forum. The old list will shut down at the end of May, and its replacement, statalist.org is already up and [Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index] st: marginal effects From Maarten Buis <maartenlbuis@gmail.com> To statalist@hsphsun2.harvard.edu Subject st: marginal effects Date Tue, 13 Nov 2012 10:27:18 +0100 ---- "Qianru Song" wrote me privately: > I would like to ask you a question about derivation of marginal effect of > non-lineal model in stata and how to interpret the results. > I have dos models: probit and possion, and some variables: > x1(dummy),x2(dummy),x3 (continuous),x1*x2,x1*x3: > I have three types for the regression “probit”: > 1. > probit y x1##x2 x1##c.x3 > margins, dydx(*) > 2. > probit y x1#x2 x2 x1##c.x3 > margins, dydx(*) > 3. > gen x1_x2=x1*x2 > gen x1_x3=x1*x3 > probit y x1 x2 x3 x1_x2 x1_x3 > margins, dydx(*) > what´s is the difference among three expressions, although they describe > the same model? Which model is correct if I want to obtain the global > marginal effect on y (for example how does the variable x1 (three parts: > x1, x1*x2 and x1*x3)impact on y ? > I think the possion model has the same structure to derive the marginal > effect because of one member of non linear model. is correct? These questions need to be sent directly to the statalist not to individual members. The reasons for this are clearly explained in the Statalist FAQ: <http://www.stata.com/support/faqs/resources/statalist-faq/#private> The point of a non-linear model is that there is not one marginal effect but many, so you really need to choose: either you want one marginal effect but than you have to live with a linear model or you want a non-linear model but than you have to live with multiple marginal effects. It is logically impossible to have both. So what happens when people report a "global marginal effect"? In that case they have in essence turned their non-linear model into a linear probability model. This may not fit well to their data, but the "indirect route" of first estimating a probit and than computing a global marginal effect does not solve that lack of fit; it just hides it, which is much worse. My answer is to forget about global marginal effects, instead decide what kind of "effect" you want and choose a model that has it as its natural metric: So if you want differences in probabilities choose a linear probability model (-regres varlist, vce(robust)-), if you want ratios of probablities use -poisson varlist, vce(robust) irr-, and if you want ratios of odds use -logit varlist, or-. Notice that only the last model is guaranteed to result in predictions between 0 and 1, so you need to be careful when choosing one of the first two options and check very carefully if the model fits to your data. Also note that -probit- is not part of this list; it just does not have a meaningful natural metric. Models 1 and 2 are equivalent and will produce the same marginal effects. Model 3 is not equivalent as far as Stata is concerned as the information that x1_x2 is an interaction term is not stored in the model, and -margins- will thus incorrectly assume that it is just another variable. -- Maarten Maarten L. Buis Reichpietschufer 50 10785 Berlin * For searches and help try: * http://www.stata.com/help.cgi?search * http://www.stata.com/support/faqs/resources/statalist-faq/ * http://www.ats.ucla.edu/stat/stata/
{"url":"http://www.stata.com/statalist/archive/2012-11/msg00524.html","timestamp":"2014-04-19T12:26:15Z","content_type":null,"content_length":"10303","record_id":"<urn:uuid:0db4c2a6-2edf-42cd-9596-54544498230a>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00538-ip-10-147-4-33.ec2.internal.warc.gz"}
Job Mix Concrete Volume Posted in Concrete Engineering Email This Post A trial batch of concrete can be tested to determine how much concrete is to be delivered by the job mix. To determine the volume obtained for the job, add the absolute volume V [ a ] of the four components—cements, gravel, sand, and water. Find the V [ a ] for each component from V [ a ] = W [ L ] / ( SG ) W [ u ] where V [ a ] = absolute volume, ft ^ 3 (m ^ 3 ) W [ L ] = weight of material, lb (kg) S [ G ] = specific gravity of the material W [ u ] = density of water at atmospheric conditions (62.4 lb/ft ^ 3 1000 kg /m ^ 3 ) Then, job yield equals the sum of V [ a ] for cement, gravel, sand, and water. • Im searching this formula for how many weeks, in the internet thanks a lot for the posting. Actually I have a pocketbook reviewer (Besavilla) and it is states only direct to the computation I want proof or either formula, which I found here. You know as a civil enginner sometimes our clients or friends ask us ‘How many volume of concrete can one bag of cement produce? Because I forgot already this formula. For the second time thank you very big. • Dear All, I just start working in Concrete company, I had no experience with the formula in making /mixing all material, Is it possible to get the standart mix design for Quality 225 (that’s haw we called in Indonesia). Your information will be highly appreciated. • It’s very imported for our company and i want to know better than and value about mix design. • dear all, i has work exp in site but i am week in mix design can u help me • Dear all, I dont have experience in using the formula .. Can u pls help me how to use the formula for getting volume of cement for 1m3 of concrete… • sir i am fresher civil engineer sie can u help me how t use this formule by any examples.. • in job mix formula no need to consider sand bulkage; • Dear Sir Pls give me details concrete super Plasticizers name and with composition. 2: what is means of Old and cold concretepls give me details. Pradeep Srivastava • It is good &interesting concret. PleaseWould you like to Fined job for me Alemayehu (Ethiopia).immidiate Response I need • Requirements of concrete mix design The requirements which form the basis of selection and proportioning of mix ingredients are : a ) The minimum compressive strength required from structural consideration b) The adequate workability necessary for full compaction with the compacting equipment available. c) Maximum water-cement ratio and/or maximum cement content to give adequate durability for the particular site conditions d) Maximum cement content to avoid shrinkage cracking due to temperature cycle in mass concrete • please send me procedure for desiging of multistoried buildings • can you send me the procedure of concrete mix design including it’s formulas • How to mix the buildig concrete? What is the standardaized formula? • really i need the names of cement test • hi friends please send me mix ratio’s for M20grade of concrete in 1:2:3 • hi frnds please help me concrete mix with p.f.a in detailed • please can you give me tutorials on concrete job design. pls very urgent thank you. • sir plz give me information of 1m3 how many cement,sand& aggrigate □ Lokesh, Depend on the grade of concrete. Usually sand is 44% of 1 cum. and gravel is 88% of 1 cum. Cement are ranging from 12bags, 10 bags, 9 bags per 1 cum. Water Cement ratio usually used 0.40. Hope it helps. • Post a comment 1 2 »
{"url":"http://www.engineeringcivil.com/job-mix-concrete-volume.html/comment-page-1","timestamp":"2014-04-16T10:13:55Z","content_type":null,"content_length":"47819","record_id":"<urn:uuid:3dbc9a3e-fcbe-456e-b33d-93cf352aa490>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00244-ip-10-147-4-33.ec2.internal.warc.gz"}
Circumference calculator The following circumference calculator is easy to use. Just enter the value of the radius and hit calculate button. Recall that the formula to get the circumference of a circle is 2 ×pi×r with r = 3.141592653589793 Therefore, the circumference depends only on the size of r. That is why you only need to enter r here to get the circumference The calculator will only accept positive value for r since a distance cannot be negative As a reminder, if r = 4 cm for instance, circumference = 2 ×3.14 ×4 = 25.12 cm Fun math game: Destroy numbered balls by adding to 10
{"url":"http://www.basic-mathematics.com/circumference-calculator.html","timestamp":"2014-04-21T09:55:27Z","content_type":null,"content_length":"32070","record_id":"<urn:uuid:27ad14ec-3eb2-49dd-b4d8-13b2114f2b08>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00116-ip-10-147-4-33.ec2.internal.warc.gz"}
Lyapunov stability Lyapunov stability From Exampleproblems Lyapunov stability is applicable to only unforced (no control input) dynamical systems. It is used to study the behaviour of dynamical systems under initial perturbations around equilibrium points. Let us consider that the origin is an equilibrium point (EP) of the system. A system is said to be stable about the equilibrium point "in the sense of Lyapunov" if for every ε, there is a δ such that: $\|x(t_o)\| < \delta \quad \implies \quad \|x(t)\| < \epsilon \quad \forall t \in R^{+}$ The system is said to be asymptotically stable if as $t \rightarrow \infty, \quad \|x(t)\| \rightarrow 0 (EP)$ Lyapunov stability theorems Lyapunov stability theorems give only sufficient condition. Lyapunov second theorem on stability Consider a function V(x) : R^n → R such that • $V(x) > 0 : \forall{x} eq 0$ (positive definite) • $\dot{V}(x) < 0$ (negative definite) Then V(x) is called a Lyapunov function candidate and the system is asymptotically stable in the sense of Lyapunov. It is easier to visualise this method of analysis by thinking of a physical system (e.g. vibrating spring and mass) and considering the energy of such a system. If the system loses energy over time and the energy is never restored then eventually the system must grind to a stop and reach some final resting state. This final state is called the attractor. However, finding a function that gives the precise energy of a physical system can be difficult, and for abstract mathematical systems, economic systems or biological systems, the concept of energy may not applicable. Lyapunov's realisation was that stability can be proven without requiring knowledge of the true physical energy, providing a Lyapunov function can be found to satisfy the above constraints. Stability for state space models A state space model $\dot{\textbf{x}} = A\textbf{x}$ is asymptotically stable if A^TM + MA + N = 0 has a solution where N = N^T > 0 and M = M^T > 0 (positive definite matrices). (The relevant Lyapunov function is V(x) = x^TMx.) Consider the Van der Pol oscillator equation: $\ddot{y} + y -\epsilon \left( \frac{\dot{y}^{3}}{3} - \dot{y}\right) = 0< 0$ Let $x_{1} = y , \dot{x_{1}} = x_{2}$ so that the corresponding system is $\dot{x_{1}} = x_{2} , \dot{x_{2}} = -x_{1} + \epsilon \left( \frac{\dot{x_{2}}^{3}}{3} - \dot{x_{2}}\right)$ Let us choose as a Lyapunov function $V = \frac {1}{2} \left(x_{1}^{2}+x_{2}^{2} \right)$ which is clearly positive definite. Its derivative is $\dot{V} = x_{1} \dot x_{1} +x_{2} \dot x_{2}$$= x_{1} x_{2} - x_{1} x_{2}+\epsilon \left(\frac{x_{2}^4}{3} -{x_{2}^2}\right)$$= -\epsilon \left(\frac{x_{2}^4}{3} -{x_{2}^2}\right)$ If the parameter ε is positive, stability is asymptotic for $x_{2}^{2} < 3$ Barbalat's lemma and stability of time-varying systems Assume that f is function of time only. • If $\dot{f}(t) \to 0$ does not imply that f(t) has a limit at $t\to\infty$ • If f(t) has a limit as $t \to \infty$ does not imply that $\dot{f}(t) \to 0$. • If f(t) is lower bounded and decreasing ($\dot{f}\le 0$), then it converges to a limit. But it does not say whether $\dot{f}\to 0$ or not as $t \to \infty$. Barbalat's Lemma says that If f(t) has a finite limit as $t \to \infty$ and if $\dot{f}$ is uniformly continuous (or $\ddot{f}$ is bounded), then $\dot{f}(t) \to 0$ as $t \to\infty$. But why do we need a Barbalat's lemma? Usually, it is difficult to analyze the *asymptotic* stability of time-varying systems because it is very difficult to find Lyapunov functions with a *negative definite* derivative. What's the big deal about it? We have invariant set theorems when $\dot{V}$ is only NSD. Agreed! We know that in case of autonomous (time-invariant) systems, if $\dot{V}$ is negative semi-definite (NSD), then also, it is possible to know the asymptotic behaviour by invoking invariant-set But this flexibility is not available for *time-varying* systems. This is where "Barbalat's lemma" comes into picture. It says: IF V(x,t) satisfies following conditions: (1) V(x,t) is lower bounded (2) $\dot{V}(x,t)$ is negative semi-definite (NSD) (3) $\dot{V}(x,t)$ is uniformly continuous in time (i.e, $\ddot{V}$ is finite) then $\dot{V}(x,t)\to 0$ as $t \to \infty$. But how does it help in determining asymptotic stability? There is a nice example on page 127 of "Slotine Li's book on Applied Nonlinear control" consider a non-autonomous system $\dot{e}=-e + g\cdot w(t)$ $\dot{g}=-e \cdot w(t)$ This is non-autonomous because the input w is a function of time. Let's assume that the input w(t) is bounded. If we take V = e^2 + g^2 then $\dot{V}=-2e^2 \le 0$ This says that V(t) < = 0 by first two conditions and hence e and g are bounded. But it does not say anything about the convergence of e to zero. Moreover, we can't apply invariant set theorem, because the dynamics is non-autonomous. Now let's use Barbalat's lemma: $\ddot{V}= -4e(-e+g\cdot w)$. This is bounded because e, g and w are bounded. This implies $\dot{V} \to 0$ as $t\to\infty$ and hence $e \to 0$. If we are interested in error convergence, then our problem is solved. See also
{"url":"http://www.exampleproblems.com/wiki/index.php/Lyapunov_stability","timestamp":"2014-04-20T16:04:35Z","content_type":null,"content_length":"27983","record_id":"<urn:uuid:e28e63da-30e0-4ec9-8c0b-96413c832876>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00034-ip-10-147-4-33.ec2.internal.warc.gz"}
[Scipy-tickets] [SciPy] #1634: scoreateprecentile return the worng value when used on pandas.Series SciPy Trac scipy-tickets@scipy.... Tue Mar 27 09:57:06 CDT 2012 #1634: scoreateprecentile return the worng value when used on pandas.Series Reporter: imrisofer | Owner: somebody Type: defect | Status: new Priority: normal | Milestone: Unscheduled Component: scipy.stats | Version: 0.10.0 Keywords: scoreateprecentile, pandas | Comment(by josefpktd): I'm not sure what the answer is. My impression: it's a Pandas or user problem It looks like the function is written for np.asanyarray and not for asarray. It returns the same array subclass as the input for matrix and masked arrays. (I never checked this function in detail, and I just saw that limit doesn't work with 2d arrays.) As basic policy we need to be able to assume something about array subclasses. For example when I rewrote stats.zscore, I used np.asanyarray and made sure it works for matrices and masked arrays. I would think that we can assume that basic algebraic operations and indexing/slicing works in the same way as for numpy arrays. (besides matrix multiplication. as aside stats.zscore returns different values if it is called with an ndarray or with a pandas series (because of different default ddof) >>> stats.zscore(a).std() >>> stats.zscore(b).std() >>> type(stats.zscore(b)) <class 'pandas.core.series.Series'> >>> np.std(np.asarray(stats.zscore(b))) Ticket URL: <http://projects.scipy.org/scipy/ticket/1634#comment:1> SciPy <http://www.scipy.org> SciPy is open-source software for mathematics, science, and engineering. More information about the Scipy-tickets mailing list
{"url":"http://mail.scipy.org/pipermail/scipy-tickets/2012-March/005100.html","timestamp":"2014-04-20T08:40:47Z","content_type":null,"content_length":"5019","record_id":"<urn:uuid:71c6aeec-5331-443b-985f-3dc24ed705a0>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00007-ip-10-147-4-33.ec2.internal.warc.gz"}
What does Zero Times Infinity Equal? What does Zero Times Infinity Equal? At first, you may think that zero times infinity equals zero. After all, zero times any number is equal to zero, however infinity is not a number. Logic dictates that zero multiplied by itself no matter how many times will always equal zero. However, I am going to proof that this answer is not quite correct when dealing with infinity. First, I am going to define this axiom that any number divided by infinity is equal to zero: Where c is any real number. Someone pointed out this is axiom is incorrect. Stating that any real number divided by infinity should be equal to 0.000..1. Well, 0.000..1 is equal to zero for the same reason that 1 equals 0.999... Take a look at these proofs for more information. However, any real number divided by infinity is equal to undefined, because you can never finish dividing something into infinite number of parts. Therefore, the axiom above is false. So, lets prove what zero times infinity equals: The first step is to substitute for zero with the axiom: Therefore, when the infinities cancel each other out, we get: Two friends of mine just proved to me that infinity divided by infinity does NOT equal to one, therefore my proof does not work. If you are interested, here is the proof that infinity divided by infinity does not equal to one. In actuality, when any number (including zero) is multiplied with infinity, then the results are always undefined. Therefore, zero times infinity is undefined. This can be rewritten as: So, zero times infinity is an undefined real number. This is the definition of undefined. Therefore, zero times infinity is undefined. Another way of looking at this is that no one can EVER finish multiplying zero times infinity, therefore the answer will always be undefined. Even though logic dictates that the answer will never not be zero, this answer will never be reached. Therefore, trying to multiple zero times infinity is undefined. by Phil B.
{"url":"http://www.philforhumanity.com/Zero_Times_Infinity.html","timestamp":"2014-04-20T10:54:38Z","content_type":null,"content_length":"11938","record_id":"<urn:uuid:e4f96f3f-2446-4bda-a756-d8b5c1c13493>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00022-ip-10-147-4-33.ec2.internal.warc.gz"}
Statistics halp [Archive] - Order of the Blue Gartr This is a practice test, not homework (I already know the answer, but I want to know how to find it) An article in a journal reports that 34% of American fathers take no responsibility for child care. A researcher claims that the figure is higher for fathers in the town of Cheraw. A random sample of 225 fathers from Cheraw, yielded 97 who did not help with child care. Find the P-value for a test of the researcher's claim. The answer is 0.0019. I don't really know this section that well :( I kinda get the null/alternate hypothesis stuff, for example, H0 here is that mu (would it be mu in this problem?) = .34, and H1: mu(?) > .34 Yeah I'm kinda lost, and this book is a piece of shit that isn't really helping, any help would be appreciated, thanks! edit: Here's 2 more problems I don't really get: In a poll of 278 voters in a certain city, 67% said that they backed a bill. The margin of error in the poll was reported as 6 percentage points (w/ a 95% degree of confidence) Which statement is A) The sample size is too small to achieve the stated margin of error B) For the given sample size, the margin of error should be larger than stated C) The reported margin of error is consistent with the sample size (correct answer) D) There is not enough info to determine whether the margin of error is consistent w/ the sample size E) For the given sample size, the margin of error should be smaller than stated. Also another problem, except it's a poll of 390, 77% backed it, ME is 5% w/ 95% degree of confidence, and the answer is "The stated margin of error could be achieved with a smaller sample size" Don't really get what's going on with those 2, any help would be appreciated :x thanks again
{"url":"http://www.bluegartr.com/archive/index.php/t-90889.html","timestamp":"2014-04-21T06:35:51Z","content_type":null,"content_length":"15510","record_id":"<urn:uuid:59d0064e-b30d-4ff5-95ab-30652d6d2245>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00304-ip-10-147-4-33.ec2.internal.warc.gz"}