Bias in trials comparing paired continuous tests can cause researchers to choose the wrong screening modality

BMC Med Res Methodol. 2009; 9: 4.

To compare the diagnostic accuracy of two continuous screening tests, a common approach is to test the difference between the areas under the receiver operating characteristic (ROC) curves. After study participants are screened with both screening tests, the disease status is determined as accurately as possible, either by an invasive, sensitive and specific secondary test, or by a less invasive, but less sensitive approach. For most participants, disease status is approximated through the less sensitive approach. The invasive test must be limited to the fraction of the participants whose results on either or both screening tests exceed a threshold of suspicion, or who develop signs and symptoms of the disease after the initial screening tests. The limitations of this study design lead to a bias in the ROC curves we call paired screening trial bias. This bias reflects the synergistic effects of inappropriate reference standard bias, differential verification bias, and partial verification bias. The absence of a gold reference standard leads to inappropriate reference standard bias. When different reference standards are used to ascertain disease status, differential verification bias arises. When only suspicious screening test scores trigger a sensitive and specific secondary test, the result is a form of partial verification bias. For paired screening tests with bivariate normally distributed scores, we give formulas and programs to quantify the effect of paired screening trial bias on a paired comparison of areas under the curves. We fix the prevalence of disease, and the chance that a diseased subject manifests signs and symptoms. We derive the formulas for true sensitivity and specificity, and those for the sensitivity and specificity observed by the study investigator. The observed area under the ROC curves is quite different from the true area under the ROC curves. The typical direction of the bias is a strong inflation in sensitivity, paired with a concomitant slight deflation of specificity. In paired trials of screening tests, when area under the ROC curve is used as the metric, bias may lead researchers to make the wrong decision as to which screening test is better.

Paired trials designed to compare the diagnostic accuracy of screening tests using area under the receiver operating characteristic (ROC) curve may fall victim to a strong bias that renders the conclusions of the trial incorrect. In English, "bias" often has a pejorative connotation, implying that those who conduct the study prefer one scientific conclusion over another. We use the term "bias" in the epidemiological and statistical sense, as the difference between the results obtained in a study and the true results. The bias occurs because limitations in the trial design may differentially affect the area under the ROC curve for each screening test. Many competing statistical approaches have been suggested for comparing the diagnostic accuracy of two continuous tests. We consider area under the ROC curve, because it continues to be used as the standard in prominent medical journals [1-3].
A common design for the comparison of two continuous screening tests is to evaluate participants with both screening tests. The disease status is then determined by either an invasive secondary test, or by a less invasive, but less sensitive approach. Ethically and practically, the invasive secondary test must be reserved only for those participants who have a suspicious result on one or both screening tests, or for those who have signs and symptoms of disease. For those who have a normal result on both screening tests, a less sensitive process is used to approximate the disease status. Because the true disease status is not known for all participants, the observed disease status is used for calculations of diagnostic accuracy. For potentially lethal diseases like cancer, where the invasive test is biopsy, this design is the best available. The imperfections of the study design occur because the disease is difficult to diagnose, since it is clinically occult, and the study designers must keep the risk of potential harm to subjects as low as possible. The limitations of this design lead to a previously undescribed bias we call paired screening trial bias. This bias results from the synergistic effects of inappropriate reference standard bias, differential verification bias, and partial verification bias [4]. Here, verification is used to describe the process of ascertaining the disease status. In classical partial verification bias, only some participants undergo determination of disease status. A variant of partial verification bias is extreme verification bias, in which only strongly abnormal results on one of the screening tests lead to secondary testing [5]. In the paired screening trial design we discuss here, an effect similar to partial verification bias operates. A disease status is assigned for all participants, but determined with great sensitivity and specificity only for those with strongly abnormal results on an initial screening test. Because different methods are used to ascertain disease status, depending on the results of the initial screening tests, the trial is subject to differential verification bias. Finally, paired screening trials often yield fewer observed than true cases of disease. Some cases of disease are missed because the ascertainment of disease status is not perfect. Thus, the trial is subject to inappropriate reference standard bias. All three of these biases interact to inflate the sensitivity and to slightly deflate the specificity, in potentially differential amounts for each screening test. When differentially biased estimates of sensitivity and specificity are used to construct receiver operating characteristic (ROC) curves for the two screening tests, the resulting areas under the ROC curves are also incorrect. Therefore, when tests are used to compare the areas under the ROC curves, the conclusions drawn regarding the relative diagnostic accuracy of the two tests may be wrong. This potential pitfall has strong clinical implications, because a paired comparison of areas under ROC curves is one of the most common tests used to compare screening modalities. Thus, paired screening trial bias may have a large impact on the design and interpretation of screening trials. We provide formulas to quantify the bias. We describe the conditions that cause incorrect scientific conclusions as to which screening modality is better.
We also demonstrate that paired screening trial bias may not affect the scientific conclusion, and explain when the scientific conclusion is likely to be correct.

Study design

We consider a hypothetical trial in which each subject receives two screening tests at the same time in a paired design. In the trial, the disease status is determined either by a secondary, sensitive and specific but invasive test, like biopsy, or approximated by a less sensitive process, like follow-up for a certain time period. The diagnostic accuracy of the two screening tests is to be compared using a paired comparison of the difference in area under the ROC curve for each screening test. There are two possible viewpoints for the trial. One is omniscient, in which the true disease status is known for each subject. The other is the viewpoint of the study investigator, who observes the disease status with error due to the limitations of the trial design. Because we use a mathematical model, we can derive the probability of all outcomes from each point of view. A flow chart of the hypothetical study is shown in Figure 1. Disease is observed by the study investigator in one of four ways. 1) A patient has an abnormal result on screening Test 1 only and then has an abnormal secondary test, leading to the diagnosis of disease. 2) A patient has an abnormal result on screening Test 2 only and then has an abnormal secondary test, leading to the diagnosis of disease. 3) A patient has abnormal results on both screening Test 1 and screening Test 2 and then has an abnormal secondary test, leading to the diagnosis of disease. 4) A patient has normal results on both screening Test 1 and screening Test 2, and thus no secondary test, but later presents with signs and symptoms, which lead to an abnormal secondary test, and the subsequent diagnosis of disease.

Figure 1. Paired screening trial flowchart. Trial design, and observed and true outcomes for a paired screening trial of two continuous tests, with two possible secondary tests used to determine disease status. Cases of disease which escape detection during the study ...

In this analysis, we will refer to the disease status observed in the study as the observed disease status, and to the actual disease status as the true disease status, with observed and true as shorthand, respectively. We quantify bias by examining the difference between the ROC curves drawn using the observed disease status, and those drawn using the true disease status.

Model, Assumptions and Definitions

We model the potential errors due to paired screening trial bias for this hypothetical trial. A series of assumptions allow us to examine the potential impact of paired screening trial bias in a situation with no experimental noise. First, we assume that the results of screening Test 1 and screening Test 2 have a bivariate normal distribution for the participants with disease, and a potentially different bivariate normal distribution for the participants without disease. While normally distributed data are not typically observed in studies, the assumption of normality underlies the popular ROC analysis technique of Metz et al. [6]. Suppose that the variance, σ², is the same for both distributions. The equal variance assumption prevents the true ROC curves from crossing. Let μ_C1 and μ_C2 be the mean scores for participants with disease given by screening Test 1 and screening Test 2, and μ_N1 and μ_N2 be the mean scores for participants without disease given by screening Test 1 and screening Test 2, respectively.
Suppose ρ_C is the correlation between the two test scores for participants with disease, and ρ_N is the correlation between the two test scores for participants without disease. Scores for different participants are assumed to be independent. We assume that a high score on a screening test results in an increased level of suspicion. We define x to be the cutpoint for each 2 × 2 table that defines the ROC curve. Scores above x are declared positive on each test, while scores below x are declared negative. We assume that the invasive, yet sensitive and specific secondary test never misses disease when disease is present. Likewise, if a subject has no disease, the invasive, yet sensitive and specific secondary test always correctly indicates that the subject is disease free. We also assume that all test scores above a pre-specified threshold lead to the invasive, yet sensitive and specific secondary test. θ is the value of the test score above which participants must have the invasive, yet sensitive and specific secondary test. We will call θ the threshold for recall. All participants who do not undergo the invasive, yet sensitive and specific secondary test have a less sensitive, but less invasive secondary test, such as follow-up. For convenience in the derivation, we use the same value of the threshold for recall, θ, for both screening tests. Because ROC analysis is invariant to translation, choosing the same value of θ for each screening test, and then shifting the means of the screening test scores, has the same mathematical result as choosing different values of θ for each screening test. During the follow-up period, some participants will experience signs and symptoms of disease. We assume that only participants with disease will experience signs and symptoms of disease. Participants who experience signs and symptoms of disease are then given the invasive, yet sensitive and specific secondary test, which we have previously assumed is infallible. For participants with signs and symptoms, the study investigator always observes the correct outcome. The study investigator incorrectly specifies that a participant has no disease when all three of the following conditions are met: 1) the participant has disease, 2) the participant scores below θ on both screening tests, avoiding the invasive, yet sensitive and specific secondary test, and 3) the participant never experiences signs or symptoms during the follow-up period. The prevalence of disease in the population is r. The proportion of participants with disease who experience signs and symptoms within the study follow-up period, but not at study entry, is ψ. We write Φ(x) to indicate the cumulative distribution function of a normal distribution with mean 0 and standard deviation 1, evaluated at the point x, and Φ(x, y, ρ) to indicate the cumulative distribution function of a bivariate normal distribution with mean vector [0, 0], standard deviations both 1, and correlation ρ, evaluated at the points x and y. That is, if X and Y have a bivariate normal distribution, we write Φ(x, y, ρ) to indicate Pr(X ≤ x and Y ≤ y | σ_X² = σ_Y² = 1, ρ). The data are paired, so there are two observed test scores for each subject. By assumption, the two scores are correlated. Each test score could fall above or below θ, the threshold value for referral to the invasive, yet sensitive and specific secondary test.
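As a point of reference, under the equal-variance binormal model just described, the true (bias-free) sensitivity, specificity, and AUC of screening Test 1 at cutpoint x take the standard closed forms below. These are textbook binormal identities implied by the assumptions above, not the observed-quantity formulas derived from Tables 1–6; Test 2 is analogous with μ_C2 and μ_N2.

$$\mathrm{Se}_1(x) = 1 - \Phi\!\left(\frac{x - \mu_{C1}}{\sigma}\right), \qquad \mathrm{Sp}_1(x) = \Phi\!\left(\frac{x - \mu_{N1}}{\sigma}\right), \qquad \mathrm{AUC}_1 = \Phi\!\left(\frac{\mu_{C1} - \mu_{N1}}{\sigma\sqrt{2}}\right).$$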
Thus, for each value of x, we can describe a series of events cross-classified by the Test 1 score, the Test 2 score, the true disease status of the subject and the presence of signs or symptoms. We classify each event both as it truly occurs, and as it is observed by the study investigator. There are 22 possible situations when x < θ (Table 1), and 19 such situations when x > θ (Table 2).

Table 1. For x < θ, observed screening test results, and observed and true disease status.

Table 2. For x > θ, observed screening test results, and observed and true disease status.

For each screening test and each value of the test cutpoint x, we can define a table that cross-classifies the response of the test (positive or negative), and the truth (the presence or absence of disease). The cell and marginal probabilities for this cross-classification are shown in Tables 3 and 4. We obtain the probabilities in two steps. First, we use our model, assumptions and definitions to assign probabilities to each situation shown in Tables 1 and 2. Then, using the disease status and screening test results to classify the events in Tables 1 and 2 into the appropriate four groups, we sum the appropriate event probabilities to obtain the cell and marginal probabilities shown in Tables 3 and 4. For example, in Table 3, the screening Test 1 +, true disease + cell has the probability formed by summing all entries in Table 1 where screening Test 1 is + and the subject has disease.

Table 3. True disease status and Test 1 results.

Table 4. True disease status and Test 2 results.

We then calculate the true sensitivity for each test as the number of true positives identified by that screening test divided by the total number of true cases. The true specificity for each test is the number of true negatives correctly identified as negative by that screening test divided by the total number of true non-cases. The true ROC curve is generated by plotting the true sensitivity on the vertical axis versus one minus the true specificity on the horizontal axis. We use a similar technique to calculate the observed sensitivity and observed specificity. In order to generate the observed ROC curves, for each test and each value of the cutpoint x, we define a table that cross-classifies the response of the test (positive or negative), and the observed disease status (the presence or absence of observed disease). The cell and marginal probabilities for this cross-classification are shown in Tables 5 and 6. We then calculate the observed sensitivity and the observed specificity. The observed sensitivity is the fraction of participants observed to have disease who have a positive screening test result. The observed specificity is the fraction of participants who apparently have no disease who have a negative screening test result. Some participants actually may have disease, but the disease is not detected in the trial.

Table 5. Observed disease status and Test 1 results.

Table 6. Observed disease status and Test 2 results.

The observed ROC curve is generated by plotting the observed sensitivity on the vertical axis versus one minus the observed specificity on the horizontal axis. Simpson's rule numerical integration [7, p. 608] with accuracy of 0.001 is used to calculate the area under the ROC curve (AUC) for each screening test. We calculate the theoretically correct ROC curves and AUCs (ignoring the error of integration), using our mathematical derivations.
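The closed-form expressions for the observed quantities depend on the event probabilities in Tables 1–6, which are not reproduced in this extract. As a rough, self-contained illustration of the mechanism only, the sketch below simulates the design directly: bivariate normal scores, recall to the infallible secondary test when either score exceeds θ, and detection of a diseased participant who is not recalled only with probability ψ. Empirical AUCs are computed with a rank-based (Mann-Whitney) estimator rather than the Simpson's-rule integration used by the authors, and every numeric parameter below is an arbitrary placeholder, not a value from the paper's figures.

import numpy as np
from scipy.stats import rankdata

def auc(scores, labels):
    """Rank-based (Mann-Whitney) estimate of the area under the empirical ROC curve."""
    labels = np.asarray(labels, dtype=bool)
    ranks = rankdata(scores)
    n_pos, n_neg = labels.sum(), (~labels).sum()
    return (ranks[labels].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

rng = np.random.default_rng(0)

# --- placeholder parameters (NOT the values used in the paper's figures) ---
n, prev, psi, theta, sigma = 200_000, 0.01, 0.3, 2.0, 1.0
mu_c, rho_c = np.array([1.0, 1.2]), 0.6   # means / correlation for diseased
mu_n, rho_n = np.array([0.0, 0.0]), 0.3   # means / correlation for non-diseased

diseased = rng.random(n) < prev
cov_c = sigma**2 * np.array([[1, rho_c], [rho_c, 1]])
cov_n = sigma**2 * np.array([[1, rho_n], [rho_n, 1]])
scores = np.where(diseased[:, None],
                  rng.multivariate_normal(mu_c, cov_c, n),
                  rng.multivariate_normal(mu_n, cov_n, n))
t1, t2 = scores[:, 0], scores[:, 1]

# Verification mechanism: the infallible secondary test is applied only if either
# screening score exceeds theta; otherwise a diseased subject is observed only if
# signs and symptoms appear during follow-up (probability psi).
recalled = (t1 > theta) | (t2 > theta)
symptomatic = rng.random(n) < psi
observed_disease = diseased & (recalled | symptomatic)

for name, t in (("Test 1", t1), ("Test 2", t2)):
    print(name, "true AUC %.3f" % auc(t, diseased),
          "observed AUC %.3f" % auc(t, observed_disease))

Running a sketch like this typically shows the observed AUCs inflated relative to the true AUCs, because the diseased participants who escape detection tend to be those with low scores on both tests.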
In a real trial, the study investigator would use a hypothesis test and a p-value to compare the difference in AUCs. Depending on the sample size chosen for the trial, the precision of the estimates and the accuracy of the decision may change. To illustrate the effect of the bias, we present the theoretical results. To illustrate the effect of sample size on the precision of the estimates, we conduct a simulation. For the simulation, we suppose that the study investigator decided to test the null hypothesis of no difference between the areas under the ROC curves, using a non-parametric AUC test for paired data [8], and fixing the Type I error rate at 0.05. To ensure adequate power, for a fixed set of parameters, we set the sample size so that 90% of the time, if the true state of disease were known, the null hypothesis would be rejected. For that fixed set of parameters and sample size, we simulate 10,000 sets of data. For both the true state of disease, and the observed state of disease, we record the magnitude of the differences in AUCs, and the decision whether to reject the null. The proportion of rejections for the true and observed data is estimated by the number of rejections, divided by 10,000. Ten thousand replications are chosen so that the maximum half width of the confidence interval for the proportion rejected is no more than 0.01.

Our derivations demonstrate that the observed ROC curve differs from the true ROC curve, with the amount of bias depending on the correlation between the screening tests for participants with disease, ρ_C, the rate of signs and symptoms, ψ, and the threshold for recall, θ. In some cases, the bias equally affects the observed ROC curve for both screening tests, and the scientific conclusion is the same as it would have been had the true disease state been observed. In other cases, the bias causes a change in the direction of the scientific conclusion. The scientific conclusion only changes direction when, among participants with disease, a higher proportion of scores on one screening test leads to recall than on the other screening test. Thus, for that screening test, a larger percentage of participants with true disease go on to have their disease status correctly ascertained, and observed in the study, than for the other screening test. Figure 2 and Figure 3 demonstrate the possible effects of bias on the scientific conclusions. In Figure 2, the study investigator will draw the wrong scientific conclusion. In Figure 3, the study investigator will draw the correct scientific conclusion, despite the presence of bias. For Figure 3, the participants with the highest 8% of both screening Test 1 and screening Test 2 scores will be recalled for the sensitive and specific secondary test. For Figure 2, the participants with the highest 34% of the screening test scores for Test 1 will be recalled, but only the highest 8% for Test 2. In general, the scientific conclusion is correct when both screening tests lead to a secondary test at the same rate. The scientific conclusion may be wrong when the chance of proceeding to the secondary test depends on which screening test produced a high score.

Figure 2. True and observed ROC curves for a hypothetical example where bias changes the scientific conclusion. The parameters for this example were chosen to illustrate a case where paired screening trial bias may cause an incorrect scientific conclusion. The ...
Figure 3. True and observed ROC curves for a hypothetical example where bias did not change the scientific conclusion. The parameters for this example were chosen to illustrate a case where paired screening trial bias did not change the direction of the difference ...

As shown in Figure 2 and Figure 3, the observed curves have inflection points, where the slope changes. There is no inflection point in the true ROC curves for either test, because the formulae that govern the sensitivity and specificity for the true curves are the same no matter what the ROC cutoff points are (see Tables 3 and 4). By contrast, as shown in Tables 5 and 6, the formulae for the observed ROC curves change depending on whether the cutpoint is above or below θ. This causes a change in slope for the observed ROC curve. The inflection point is more obvious for Test 2 than for Test 1. The inflection point for Test 1 occurs at a specificity of about 0.80, and is obscured in Figure 2. In general, as θ increases relative to the mean of the test score distribution, the point of inflection occurs at higher values of specificity. In Figure 2, the true ROC curve for screening Test 2 is higher than the true ROC curve for screening Test 1. Thus, screening Test 2 has better true diagnostic accuracy than screening Test 1. However, the observed ROC curve for screening Test 1 is higher than the observed ROC curve for Test 2. In Figure 2, bias in the observed ROC curves leads to a bias in the observed AUC for each test. Recall that in reality, screening Test 2 has better diagnostic accuracy than screening Test 1. The true AUC of screening Test 1 is 0.64, and the true AUC of screening Test 2 is 0.70. However, the observed AUC tells a different story. The observed AUC for screening Test 1 is 0.82, and the observed AUC for screening Test 2 is 0.75. Since Test 2 truly has better diagnostic accuracy than Test 1, the true difference in AUC between screening Test 2 and Test 1 is positive (Test 2 true AUC – Test 1 true AUC = 0.70 - 0.64 = 0.06). However, in Figure 2, the observed difference in AUC between Test 2 and Test 1 is negative (Test 2 observed AUC – Test 1 observed AUC = 0.75 – 0.82 = -0.07). If the study investigator were to observe these exact theoretical results, the study investigator would conclude that screening Test 1 has better diagnostic accuracy than Test 2, when in fact the opposite is true. Study investigators never observe the true state of nature. They observe data, and make estimates, the precision of which depends on the sample size. They decide which screening test is better using hypothesis tests. To see which conclusion the hypothesis tests would suggest, both for the true and observed disease status, we conducted a simulation. For the parameters of Figure 2, for a Type I error rate of 0.05, if the true disease status were known, a non-parametric test [8] would have 90% power with 33,000 participants. With the true disease status known, we would reject the null roughly 90% of the time. The remaining 10% of the time, we would conclude no difference in AUC between Test 1 and Test 2. If the true disease status were known, every time we rejected the null, we would conclude correctly that Test 2 is better than Test 1. If we conduct the same simulation experiment from the point of view of the study investigator, for the experimental situation of Figure 2, we see only the observed state of disease.
In that case, the study investigator will reject the null hypothesis only 71% of the time. The remaining 29% of the time, the study investigator will conclude that there is no difference in AUC between Test 1 and Test 2. The lower power is due to greater variance in the observed data than in the true data. Every time the study investigator rejects the null, she concludes incorrectly that Test 1 is better than Test 2. The incorrect conclusion in Figure 2 is the result of a cascade of errors. The observed sensitivity for Test 1 is inflated more than the observed sensitivity for Test 2. The increase in observed sensitivity makes the observed ROC curve higher for Test 1 than for Test 2. A higher observed ROC curve means a higher observed AUC for Test 1 than for Test 2. To understand how and why paired screening trial bias occurs, consider a single specificity value on the true and observed ROC curves shown in Figure 2. Choose the value of specificity where there is the greatest increase in observed sensitivity relative to true sensitivity for Test 1. This occurs when specificity is 0.82. For a hypothetical study of 10,000 participants, and specificity of 0.82, the observed and true 2 × 2 tables for Test 1 and Test 2 are shown in Figure 4.

Figure 4. For the hypothetical example of Figure 2, true and observed 2 × 2 tables. Numbers were rounded to the nearest whole number. All tables were calculated at specificity of about 0.82. This point was chosen because the maximum difference between the ...

Each one of the four tables uses a slightly different ROC cutpoint. For the observed table, Test 1 is positive if it exceeds 2.511; for the true table, Test 1 is positive if it exceeds 2.515. For the observed table, Test 2 is positive if it exceeds 1.269; for the true table, Test 2 is positive if it exceeds 1.265. The tables have different ROC cutpoints because they were chosen to have the same specificity, not the same cutpoint. Also, the number of cases of disease observed in the study, 45, is much smaller than the true number of cases of disease in the population, 100. The observed number of cases of disease is smaller than the true number because not every participant undergoes the invasive, yet sensitive and specific secondary test, and thus some cases of disease are missed. The observed number of cases of disease is the denominator of the observed sensitivity. Because the denominator is smaller for observed sensitivity than for true sensitivity, the observed sensitivity is strongly inflated for both tests. When specificity is 0.82, the observed sensitivity of Test 1 is 0.72, with true sensitivity of 0.33. For Test 2, the observed sensitivity is 0.52, with true sensitivity of 0.43. Yet if the bias only affected the denominator, the inflation in sensitivity would be the same for both tests. After all, the same number of observed cases is used as the denominator for both tests. The differential inflation for Test 1 compared to that for Test 2 must be due to the numerator of the observed sensitivity. For Test 2, the numerator of the observed sensitivity is the number of study participants who are positive on Test 2, and who are observed to have disease in the study. For Test 2, the numerator for observed sensitivity, 23, is smaller than the true numerator, 43. The difference occurs because disease can only be observed if the invasive, yet sensitive and specific secondary test is used.
Even though these participants have a score that exceeds the ROC cutpoint for Test 2, they do not all undergo the invasive, yet sensitive and specific secondary test. Thus, they do not all yield observed cases of disease. By contrast, for Test 1, because the ROC cutpoint is higher than the threshold which leads to the invasive, yet sensitive and specific secondary test, every participant positive on Test 1 undergoes the secondary test, and is shown to have disease. For each test, there is a different proportion of participants who exceed the cutpoint, who truly have disease, and who proceed to secondary testing. This is the source of the differential bias that causes the curves to reverse order in Figure 2. Paired screening trial bias also increases as the proportion of participants with disease who have signs and symptoms (ψ) decreases. If all the cases of the disease were observed during the trial, there would be no difference between true and observed disease status, and no bias. Yet, in every screening trial, some cases of disease are not identified by either screening test, and never present with signs and symptoms. As the proportion of participants presenting with signs and symptoms (ψ) decreases, fewer cases of disease are discovered during the trial in the interval after screening, and the difference between observed and true disease status grows. Paired screening trial bias increases with the increase in correlation between the results of the screening tests for participants with disease, ρ_C. The bias in the observed ROC curves increases because, as the two index tests become more highly correlated, the number of observed cases of disease becomes smaller relative to the number of true cases of disease. When the two index tests are highly correlated, they essentially produce the same information as to whether a participant has disease. When the index tests are independent, each test makes diagnoses on its own that the other test misses. Thus, when the tests are independent, and ρ_C is 0, the number of observed cases is highest, relative to the number of true cases. The percentage of participants receiving the infallible secondary test increases as ρ_C decreases. The bias lessens as the true disease status is ascertained for more participants. In general, paired screening trial bias tends to strongly increase the sensitivity, while slightly decreasing the estimate of specificity. The increase in observed sensitivity compared to true sensitivity is expected with verification bias [9]. In this paper, we define a new type of bias that is a result of the interaction between a particular design for a paired screening trial, and the choice of a particular statistical test. Specifically, the bias occurs when the diagnostic accuracy of two continuous tests is compared using area under the ROC curve in a design with two limitations. First, different methods are used to ascertain disease status, depending on the results of the initial screening tests. Second, only some subjects undergo an invasive, yet sensitive and specific secondary test. Thus, some cases of disease are missed, because the method used to ascertain disease status for those who test negative on both initial screening tests may not be 100% sensitive. Both the statistical test and the trial design we considered were modeled closely after recently completed and published trials [1-3]. These trials compared the diagnostic accuracy of two modalities for breast cancer detection.
Although authors have suggested the use of other statistical approaches to compare screening modalities [10,11], the area under the full ROC curve remains the most commonly used test for paired screening trials in major American journals [1-3]. Although we modeled our trial design on real trials, we made simplifying assumptions, which may not accurately reflect reality. We assumed that there was a method for determining disease status which was infallible. In reality, all methods of determining disease status may be fallible. In breast cancer, for example, diagnostic mammography, biopsy and follow-up all make errors. Too short a follow-up time may miss cases of disease. While a longer follow-up time will reveal a larger fraction of occult disease, it may also reveal increasing numbers of cases of disease that developed after the initial screening period, thus confusing the results. We assumed that all cases of disease are harmful. In screening studies, cases of disease may resolve, or proceed so slowly as to be considered harmless. We assumed that a test to determine disease status would be conducted any time a screening test result exceeded a given threshold. However, in cancer screening, because other factors may be taken into consideration when deciding a course of clinical action, there is a range of scores that may result in further testing. We also made the simplifying assumption that the scores of the screening tests followed a bivariate normal distribution. In real paired cancer trials, the scores have a conditional probability structure driven by the fact that real observers miss cancers (and score a screening test as if no disease were present), and see cancer where there is none (and then score a non-cancerous finding as abnormal). The resulting distribution of scores is far from the bivariate normal distribution we assumed. There is some theoretical justification that our results will still hold even if the data are non-normal. Hanley [12] points out that single-test ROC analysis is robust to the violation of the normality assumption if there exists a monotonically increasing transformation of the test scores that yields a normally distributed result. Thus, the results described in the paper should hold whenever there is a transformation for screening Test 1, and another for screening Test 2, such that the transformed data have a bivariate normal distribution. The previous literature on bias provides some hint of the plethora of possible designs and tests used for statistical analysis. Most previous statistical literature dealt with biases that occur for single, as opposed to paired, tests. A complete summary of biases is given in [4]. Extreme verification bias may occur when the diagnostic test is invasive or dangerous [5]. Verification bias has been studied in binary tests [13,14], and in ordinal tests [15,16]. Alonzo and Pepe [17] described using imputation and re-weighting to correct verification bias for single continuous tests. Alonzo [18] suggested corrections for verification bias in paired binary testing situations. We were unable to find published techniques to quantify or correct for paired screening trial bias. Cancer screening trials in particular are susceptible to paired screening trial bias, because the secondary test is typically biopsy. Negative screening results cannot lead to biopsy because there is no visible lesion to be biopsied.
Because biopsy is painful and invasive, it is infeasible and unethical to do a biopsy unless there are suspicious screening test results. Also, one can only biopsy what one can see: one cannot put a needle in an invisible lesion. Negative screening test results are verified, but typically by follow-up, which has lower sensitivity than biopsy. Our research suggests that in many published paired screening trials, bias did not affect the scientific conclusion. For example, in Pisano et al. [2], digital and film mammography led to the recall of a very similar proportion of cases for the secondary test, diagnostic imaging. Thus, the trial design was more like Figure 3, in which bias occurs, but does not change the scientific conclusion, rather than Figure 2, in which bias occurs differentially, and changes the scientific conclusion. Why criticize a trial design that, though imperfect, cannot be improved because of ethical constraints? It is our philosophy that it is preferable to understand all the causes of bias. With mathematical formulae for bias, we can defend trials that are fundamentally correct, and reserve doubt for those trials that may be subject to incorrect conclusions. In addition, models for bias are the necessary first step toward mathematical corrections for bias in sensitivity and specificity, and toward designing new clinical trial methodologies. Using a simplified paradigm, we have shown that paired screening trial bias has the potential to subvert the results of paired screening trials, especially when the fraction of the population recalled for secondary testing differs for each screening test. The bias is affected by the rate at which diseased participants experience signs and symptoms of disease, and the chance of recall for a sensitive secondary test. The bias is also influenced by the distributions of the scores for the cases and non-cases for each screening test, and by the correlation between the screening tests. Further research on this bias is needed, so that mathematical corrections for paired screening trial bias can be developed. Programs implemented in SAS and Mathematica to calculate the true and observed sensitivity, specificity, ROC curves, and areas under the curves are available by request from the authors.

ROC: Receiver operating characteristic; AUC: Area under the receiver operating characteristic curve.

Competing interests

Financial support for this study was provided by NCI K07CA88811, a grant from the National Cancer Institute to the Colorado School of Public Health, Deborah Glueck, Principal Investigator. The funding agreement ensured the authors' independence in designing the study, interpreting the data, writing, and publishing the report. The authors declare that they have no competing interests.

Authors' contributions

DHG conceived of the idea and derived the mathematics. The first draft of the manuscript was a collaborative effort of DHG and MML. MML provided epidemiological expertise. CIO programmed the formulae and produced graphs. KEM provided advice about the mathematics. BMR and JTB aided in a thorough revision of the manuscript. JML and EDP provided clinical expertise and suggestions for how to relate the topic to medical studies. TAA assisted in the literature review, and in checking the math, and collaborated on the revision of the manuscript. All authors read and approved the final manuscript.

For those readers not familiar with ROC analysis, we give a short tutorial.
For a complete discussion, see [5, Chapter 4, pages 66–94] or [19, Chapter 4, pages 137–153]. The ROC curve is estimated by selecting a series of cutpoints. By convention, for each test, scores below the cutpoint are considered negative, and scores above the cutpoint are considered positive. The cross-classification of test results and disease status yields a set of two by two tables. Each table gives a paired estimate of sensitivity (the number of true positives correctly identified as positive by the test divided by the total number of cases) and specificity (the number of true negatives correctly identified as negative by the test divided by the total number of non-cases). The ROC curve for each test is graphed with sensitivity on the vertical axis, and 1 – specificity on the horizontal axis. The area under the curve (AUC) is a measure of the diagnostic accuracy of the test. A non-informative test follows the 45° line and has an AUC of 0.5. A perfect test follows the top and left boundaries of the ROC plot area, and has an AUC of 1.

Thanks to Gary Grunwald for reading and providing comments on versions of this paper.

• Lewin JM, D'Orsi CJ, Hendrick RE, Moss LJ, Isaacs PK, Karellas A, Cutter GR. Clinical comparison of full-field digital mammography and screen-film mammography for detection of breast cancer. AJR Am J Roentgenol. 2002;179:671–677.
• Pisano ED, Gatsonis C, Hendrick E, Yaffe M, Baum JK, Acharyya S, Conant EF, Fajardo LL, Bassett L, D'Orsi C, Jong R, Rebner M. Diagnostic performance of digital versus film mammography for breast-cancer screening. N Engl J Med. 2005;353:1773–1783. doi: 10.1056/NEJMoa052911.
• Berg WA, Blume JD, Cormack JB, Mendelson EB, Lehrer D, Böhm-Vélez M, Pisano ED, Jong RA, Evans WP, Morton MJ, Mahoney MC, Larsen LH, Barr RG, Farria DM, Marques HS, Boparai K, for the ACRIN 6666 Investigators. Combined screening with ultrasound and mammography vs mammography alone in women at elevated risk of breast cancer. J Am Med Assoc. 2008;299:2151–2163. doi: 10.1001/jama.299.18.2151.
• Whiting P, Rutjes AW, Reitsma JB, Glas AS, Bossuyt PM, Kleijnen J. Sources of variation and bias in studies of diagnostic accuracy: a systematic review. Ann Intern Med. 2004;140:189–202.
• Pepe MS. The Statistical Evaluation of Medical Tests for Classification and Prediction. New York: Oxford University Press; 2003.
• Metz C, Wang P, Kronman HA. New approach for testing the significance of differences between ROC curves measured from correlated data. In: Deconinck F, editor. Information Processing in Medical Imaging. The Hague, the Netherlands: Nijhoff; 1984. pp. 432–445.
• Apostol TM. Calculus: Multivariable Calculus and Linear Algebra, With Applications to Differential Equations and Probability. 2nd edition, Volume II. New York: Wiley and Sons; 1969.
• DeLong ER, DeLong DM, Clarke-Pearson DL. Comparing the areas under two or more correlated receiver operating characteristic curves: a nonparametric approach. Biometrics. 1988;44:837–845. doi: 10.2307/2531595.
• Begg CB, McNeil BJ. Assessment of radiologic tests: control of bias and other design considerations. Radiology. 1988;167:565–569.
• Baker SG, Pinsky P. A proposed design and analysis for comparing digital and analog mammography: special ROC methods for cancer screening. J Am Stat Assoc. 2001;96:421–428. doi: 10.1198/016214501753168136.
• Li CR, Liao CT, Liu JP. A non-inferiority test for diagnostic accuracy based on the paired partial areas under ROC curves. Stat Med. 2008;10:1762–1776. doi: 10.1002/sim.3121.
• Hanley JA. The robustness of the 'binormal' assumptions used in fitting ROC curves. Med Decis Making. 1988;8:197–203. doi: 10.1177/0272989X8800800308.
• Begg CB, Greenes RA. Assessment of diagnostic tests when disease verification is subject to selection bias. Biometrics. 1983;39:207–215. doi: 10.2307/2530820.
• Begg CB. Biases in the assessment of diagnostic tests. Stat Med. 1987;6:411–423. doi: 10.1002/sim.4780060402.
• Gray R, Begg CB, Greenes RA. Construction of receiver operating characteristic curves when disease verification is subject to selection bias. Med Decis Making. 1984;4:151–164. doi: 10.1177/0272989X8400400204.
• Rodenberg C, Zhou X-H. ROC curve estimation when covariates affect the verification process. Biometrics. 2000;56:1256–1262. doi: 10.1111/j.0006-341X.2000.01256.x.
• Alonzo TA, Pepe MS. Assessing accuracy of a continuous screening test in the presence of verification bias. J R Stat Soc Ser C Appl Stat. 2005;54:173–190. doi: 10.1111/j.1467-9876.2005.00477.x.
• Alonzo TA. Verification bias-corrected estimators of the relative true and false positive rates of two binary screening tests. Stat Med. 2005;24:403–417. doi: 10.1002/sim.1959.
• Zhou X-H, Obuchowski NA, McClish DK. Statistical Methods in Diagnostic Medicine. New York: John Wiley and Sons; 2002.
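As a hands-on companion to the short ROC tutorial above, the sketch below builds an empirical ROC curve by sweeping cutpoints, exactly as the tutorial describes, and computes the AUC with the trapezoidal rule. The scores and disease labels are made-up toy values used only to exercise the construction; they are not data from the paper.

import numpy as np

# Made-up toy scores and disease labels, purely for illustration
scores = np.array([3.1, 2.4, 1.9, 1.2, 0.7, 2.0, 1.5, 1.0, 0.4, -0.3])
disease = np.array([True, True, True, True, True,
                    False, False, False, False, False])

# Sweep cutpoints from above the largest score to below the smallest;
# scores above the cutpoint are called positive, as in the tutorial.
cuts = np.concatenate(([np.inf], np.sort(np.unique(scores))[::-1], [-np.inf]))
sens = np.array([(scores[disease] > c).mean() for c in cuts])
fpr = np.array([(scores[~disease] > c).mean() for c in cuts])   # 1 - specificity

# Trapezoidal area under the empirical ROC curve
auc = np.sum(np.diff(fpr) * (sens[1:] + sens[:-1]) / 2)
for c, se, fp in zip(cuts, sens, fpr):
    print(f"cutpoint {c:>6}: sensitivity {se:.2f}, 1 - specificity {fp:.2f}")
print(f"AUC = {auc:.3f}")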
{"url":"http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2657218/?tool=pubmed","timestamp":"2014-04-20T16:27:18Z","content_type":null,"content_length":"120483","record_id":"<urn:uuid:21c17cc4-f666-47ba-b38d-c7120611fdce>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00273-ip-10-147-4-33.ec2.internal.warc.gz"}
A History of Mechanics

"A remarkable work which will remain a document of the first rank for the historian of mechanics." — Louis de Broglie

In this masterful synthesis and summation of the science of mechanics, René Dugas, a leading scholar and educator at the famed Ecole Polytechnique in Paris, deals with the evolution of the principles of general mechanics chronologically from their earliest roots in antiquity through the Middle Ages to the revolutionary developments in relativistic mechanics, wave and quantum mechanics of the early 20th century. The present volume is divided into five parts: the first treats of the pioneers in the study of mechanics, from its beginnings up to and including the sixteenth century; the second section discusses the formation of classical mechanics, including the tremendously creative and influential work of Galileo, Huygens and Newton. The third part is devoted to the eighteenth century, in which the organization of mechanics finds its climax in the achievements of Euler, d'Alembert and Lagrange. The fourth part is devoted to classical mechanics after Lagrange. In Part Five, the author takes up the relativistic revolutions in quantum and wave mechanics. Writing with great clarity and sweep of vision, M. Dugas follows closely the ideas of the great innovators and the texts of their writings. The result is an exceptionally accurate and objective account, especially thorough in its accounts of mechanics in antiquity and the Middle Ages, and the important contributions of Jordanus of Nemore, Jean Buridan, Albert of Saxony, Nicole Oresme, Leonardo da Vinci, and many other key figures. Erudite, comprehensive, replete with penetrating insights, A History of Mechanics is an unusually skillful and wide-ranging study that belongs in the library of anyone interested in the history of mechanics.

Reprint of the Editions du Griffon, Neuchatel, Switzerland, 1955 edition.

Availability: Usually ships in 24 to 48 hours
ISBN 10: 0486656322
ISBN 13: 9780486656328
Author/Editor: René Dugas
Format: Book
Page Count: 688
Dimensions: 5 3/8 x 8 1/2
{"url":"http://store.doverpublications.com/0486656322.html","timestamp":"2014-04-18T03:16:51Z","content_type":null,"content_length":"45114","record_id":"<urn:uuid:26490ff8-a47e-4f25-86d8-0b71982ff62d>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00192-ip-10-147-4-33.ec2.internal.warc.gz"}
Trig: θ = 2π/3, convert from polar to rectangular

[–]kami_inu
I just used a/b instead of the actual values because I'm lazy, nothing special there. Your new x and y are correct, so we can do the following:
x/y = r(-1/2) / r(√3/2)
Now we can simplify the r out top and bottom
x/y = (-1/2) / (√3/2)
Next is the halves
x/y = (-1) / (√3)
Now multiply the bottom part on each side away
x√3 = -y
And last step is to add across
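The conversion being worked out in the reply above can be restated compactly (taking the angle from the thread title, θ = 2π/3, with r left free):

$$x = r\cos\frac{2\pi}{3} = -\frac{r}{2}, \qquad y = r\sin\frac{2\pi}{3} = \frac{\sqrt{3}}{2}\,r, \qquad \frac{x}{y} = -\frac{1}{\sqrt{3}} \;\Rightarrow\; \sqrt{3}\,x + y = 0.$$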
{"url":"http://www.reddit.com/r/cheatatmathhomework/comments/1jeewb/trig_theta_2pi3_convert_from_polar_to_rectangular/","timestamp":"2014-04-18T03:17:35Z","content_type":null,"content_length":"57589","record_id":"<urn:uuid:eed7c233-31ee-4774-9303-716a61252e20>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00194-ip-10-147-4-33.ec2.internal.warc.gz"}
Factoring Examples, Page 2

Find the quotient:

There's more stuff going on there than in a Salvador Dali painting. We'll turn this thing into a calmer, more peaceful Monet if we can. Shall we?

We can factor the numerator by grouping to find this crowded mess:

Then we cancel (y + 3) from the numerator and denominator and find xy + 12 as our final answer. Ah. We are reminded of lily pads and haystacks.
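The displayed expressions in this example did not survive extraction, so only the steps and the final answer (xy + 12, after cancelling y + 3) remain. A hypothetical quotient consistent with those steps, offered purely as an illustration of factoring by grouping and not as the original problem, would be:

$$\frac{xy^{2} + 3xy + 12y + 36}{y + 3} = \frac{xy(y + 3) + 12(y + 3)}{y + 3} = \frac{(y + 3)(xy + 12)}{y + 3} = xy + 12.$$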
{"url":"http://www.shmoop.com/polynomial-division-rational-expressions/factoring-examples-2.html","timestamp":"2014-04-19T22:23:50Z","content_type":null,"content_length":"36721","record_id":"<urn:uuid:9479b25a-725e-4622-8794-f054959d123a>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00089-ip-10-147-4-33.ec2.internal.warc.gz"}
Posts from November 2008 on Güzin Bayraksan's OR Blog

Scheduling Zoo
November 21, 2008

The blog is about 20 days old and, as part of the blog entries, I would like to share some OR websites from time to time. These will be tagged as “Useful OR websites”. The first one of these entries is… the Scheduling Zoo. I was looking for the computational complexity of different scheduling problems for my research and came across this website. The Scheduling Zoo is a searchable bibliography on the complexity of scheduling problems by Peter Brucker and Sigrid Knust, and the website is maintained by Christoph Duerr. At the zoo, you pick your environment (e.g., single machine, job shop, etc.), then the problem characteristics (e.g., precedence constraints or not) and the objective function (e.g., minimize makespan, minimize the number of late jobs, etc.), and it returns the known complexity results along with the related problems, their complexity and references for each! Given the elephantine literature on scheduling, it is a pretty neat, specialized bibliography search website. You can find a related website here. They collect statistics on the searches. Turns out:

* The most popular objective function is to minimize makespan (1936 searches when I looked), followed by a distant second, to minimize the sum of completion times (517 searches).
* The most popular machine environment is single machine scheduling (1362 searches), followed by parallel machine scheduling with identical machines (665).
* And, the two most popular variants are “release times” (1505 searches) and “precedence constraints” (1038).

Update on: “Election Results: OR Wins…”
November 17, 2008

On election night, I had my TV, several news websites and the website of the election prediction model of Sheldon Jacobson and colleagues on (the website has been updated since), comparing the model’s predictions to the actual election results as the numbers were coming in… I had to sleep at some point, so it ended abruptly (see the original entry). It has been a while, but I wanted to write a quick update on how the model performed. You can find the details at the website, but here’s a synopsis:

“Our model predicted 50 of the 51 states (including the District of Columbia). The Strong Democrat Swing scenario was the closest to the actual results . . . All of these states, except Indiana, were correctly predicted. . . From these results, Indiana was the most difficult state to predict, closely followed by Missouri.”

Yes, OR indeed won… Also, take a look at the cartogram, a map in which the sizes of states are rescaled according to their population, in Mike Trick’s blog entry.

LP Rocks!
November 11, 2008

This semester I am teaching Linear Programming (LP). This year, we have students from many different backgrounds and departments taking the class, which I think is great news for our field. The more we make OR accessible to other fields, the better. Anyway, this year I decided to do something different and wanted to show the students all the different uses of linear programming in real life. So, I searched INFORMS journals + more for applications of LP. Even though many real-world problems involve nonlinearities and integer decision variables, the list of applications of LP turned out to be pretty impressive. Take a look:

∙ This is an application in health care: Romeijn, H., Ahuja, R., Dempsey, J. and A. Kumar, “A New Linear Programming Approach to Radiation Therapy Treatment Planning Problems,” Operations Research, 54(2): 201-216, 2006.
∙ This is an application in alternative energy (specifically wind energy). I came across this while reading GreenOR. Here’s the original entry on it. This application has been developed by the US National Renewable Energy Laboratory (NREL) to determine the expansion of wind electric generation and transmission capacity. It passes information from a Geographic Information System (GIS) into a linear program. Here’s the link to the Wind Deployment System (WinDS) model and link to the LP model.
∙ This application is in finance: Chalermkraivuth, K. C., Bollapragada, S., Clark, M. C., Deaton, J., Kiaer, L., Murdzek, J.P., Neeves, W., Scholz, B.J. and D. Toledano, “GE Asset Management, Genworth Financial, and GE Insurance Use a Sequential-Linear-Programming Algorithm to Optimize Portfolios,” Interfaces, 35: 370–380, September-October 2005.
∙ This application is in data mining (classification, statistics): Eva K. Lee, Richard J. Gallagher, David A. Patterson, “A Linear Programming Approach to Discriminant Analysis with a Reserved-Judgment Region,” INFORMS Journal on Computing, 15(1): 23–41, 2003.
∙ And, this is an application in sports: I. Horowitz, “Aggregating Expert Ratings Using Preference-Neutral Weights: The Case of the College Football Polls,” Interfaces, 34(4): 314–320, July–August 2004.

There are many more of course, but the above sample alone makes a strong case for the versatility and power of LP.

Election Results: OR wins…
November 5, 2008

Now that the election results (or, estimates) are in, I was curious to see how the election prediction model of Sheldon H. Jacobson, along with co-authors Steven E. Rigdon, Edward C. Sewell and Christopher J. Rigdon, has performed. OR bloggers Michael Trick and Laura McLay wrote about this in their blogs earlier. If you are not familiar with it, here is a link to their website and here is a link to their paper that explains the model. From their paper:

It uses a Bayesian estimation approach that incorporates polling data, including the effect of third party candidates and undecided voters, as input to a dynamic programming algorithm … to build the probability distribution of the total number of Electoral College votes for each candidate.

Their results were last updated on Tuesday, Nov 4th, using the latest polling data. Of course, the model is only as correct as the polling data that is given as an input to the model. However, take a look at this:

11:45pm AZ time: Obama has 338 and McCain has 159 electoral votes. The prediction model has 338 Safe Electoral Votes for Obama and 157 Safe Electoral Votes for McCain. They define “safe” when they predict the candidate has a 0.85 chance or better of winning. Montana, Missouri, Indiana and N. Carolina still not decided. In the prediction model, these are the states that are not “safe”, along with N. Dakota.

12:15am AZ time: Indiana goes Blue… Hmm… Their model tends to Red… (the only time it does not match) —>> Can’t wait to see the final results. . .

INFORMS Computing Society Student Paper Competition
November 2, 2008

Guanghui Lan took the ICS Student Paper Award for his paper titled “Efficient Methods for Stochastic Composite Optimization”. Last year’s winner was Amit Partani for his paper titled “Adaptive Jackknife Estimators for Stochastic Programming”. Both papers deal with stochastic optimization, which is great news for the future of the field.

Microchip Embedded Cactus
November 1, 2008

One of the greatest perks of living in AZ is to experience the wonderful saguaro cacti every day.
I read this in Time the other day: “National Park Service officials will soon embed microchips in Arizona’s signature saguaro cactus plants to deter thieves who dig them up and sell them to landscapers and nurseries. The microchips, which are inserted with a syringe, will help authorities identify stolen plants.” Apparently, they sell for about $1,000 each. Talking about microchips, there is substantial research on effective use of RFID chips in OR/MS, for instance, for inventory tracking and control.

Hello world!
November 1, 2008

Quoting Mike Trick: “Like everyone and their dog, I have a blog.” I’ve been reading so many blogs these days (mainly on OR) that I decided to start one…
{"url":"http://opsres.wordpress.com/2008/11/","timestamp":"2014-04-20T11:23:14Z","content_type":null,"content_length":"43121","record_id":"<urn:uuid:78489b39-28c3-431a-bac4-fe685a161a89>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00576-ip-10-147-4-33.ec2.internal.warc.gz"}
Simulation, the game we all can play!

Washington University's (St. Louis, MO) Robert Heider provides a game plan for using simulation in process control.

This simple simulation demonstrates the potential problems with commercial process simulator packages.

Integration Algorithms

So how can you integrate with a spreadsheet? I’ll bet you think it is too complicated. Writing differential equations is not a very complicated exercise. The integration algorithms are found in most college math texts or on the Internet. Writing a differential equation is just a matter of writing a difference equation and solving it with an algorithm. Remember: write the difference equations first, then give them to the algorithm to solve. Integrators come in two varieties relative to step size: variable or fixed. Examples of variable-step algorithms are Gear’s Stiff and Runge-Kutta-Fehlberg. These algorithms reduce the integration interval to minimize the error. Simpson’s rule states that the error in an algorithm is related to the size of the integration interval raised to the power of the number of evaluations. Variable-step algorithms keep reducing the interval until the error is less than that specified by the user. These algorithms are frequently used by those who are making a space shot or determining the reaction rate constants for some process chemistry. They are of little use for control simulations because, in control, fixed intervals are required. Even if the process is static, control must continue, and mixing different algorithms is seldom worth the effort. The Runge-Kutta level 4, meaning four levels or intermediate solutions per iteration, is the best for most control purposes. Even a simple Euler algorithm can give good results. To write a differential equation, just remember you only need to write the difference, which is the input minus the output. Also remember that solving a differential equation requires an initial value, so you have to start somewhere.

The following is Visual Basic code for a differential equation program solving a simple pressure control loop using the ideal gas law and the Universal Gas Sizing Equations. In this case, a tank is blanketed with nitrogen. The nitrogen is supplied through a regulator upstream of an orifice plate. The tank is vented through a control valve. x is a three-element array: x(1) = mass in the tank, x(2) = the lagged valve position (first-order valve dynamics with time constant tau_v), x(3) = integrated error term. x is set to the initial condition values. The differential equations solve for a new x term based on the differences. In this example, it is best to determine the mass in the tank by assuming an initial pressure.
Runge-Kutta level 4 integration example

' first set the existing variables to xnew
For j = 1 To ssv
    xnew(j) = x(j)
Next j
' begin the four-step process
For j = 1 To 4
    ' calculate the initial mass in tank, p = nRT/V
    P_tank = (xnew(1) / MW) * R_gas * t / vol
    ' calculate the change in pressure
    ' first we need the outlet flow, n_out
    dP = P_tank - p_atm
    If (dP > P1_orifice - p_atm) Then
        dP = P1_orifice - p_atm
    End If
    If (dP < 0#) Then
        dP = 0#
    End If
    Qout = Cg_valve(Index) * P_tank * _
        ((520 / (spgr * t)) ^ 0.5) * _
        Sin((59.64 / C1_valve) * ((dP / P_tank) ^ 0.5))
    Qout = Qout / minperhr
    n_out = Qout * spgr / 13.1
    ' then calculate the flow across the orifice plate, n_in
    dP = P1_orifice - P_tank
    If (dP > P1_orifice - p_atm) Then
        dP = P1_orifice - p_atm
    End If
    If (dP < 0#) Then
        dP = 0#
    End If
    Qin = Cg_orifice * P1_orifice * _
        ((520 / (spgr * t)) ^ 0.5) * _
        Sin((59.64 / C1_orifice) * ((dP / P1_orifice) ^ 0.5))
    Qin = Qin / minperhr
    n_in = Qin * spgr / 13.1
    ' for x(1), the mass difference is the inlet minus the outlet
    ' for x(2), the first-order lag simulating valve travel
    ' for x(3), the integration of the control error
    x_dot(1) = n_in - n_out
    x_dot(2) = (1 / tau_v) * (u(Index) - xnew(2))
    x_dot(3) = (Psp(Index) - P_tank)
    ' These are the integration equations
    If j = 1 Then
        For i = 1 To ssv
            k1(i) = step * x_dot(i)
            xnew(i) = x(i) + k1(i) / 2
        Next i
    End If
    If j = 2 Then
        For i = 1 To ssv
            k2(i) = step * x_dot(i)
            xnew(i) = x(i) + k2(i) / 2
        Next i
    End If
    If j = 3 Then
        For i = 1 To ssv
            k3(i) = step * x_dot(i)
            xnew(i) = x(i) + k3(i)
        Next i
    End If
    If j = 4 Then
        For i = 1 To ssv
            k4(i) = step * x_dot(i)
            x(i) = x(i) + (1 / 6) * (k1(i) + 2 * k2(i) + 2 * k3(i) + k4(i))   ' calculated next state
        Next i
    End If
Next j
' calculate the new outlet pressure by calculating the mass in tank, p = nRT/V
P_tank = (x(1) / MW) * R_gas * t / vol

Using the Control System as a Simulator

In many cases where the user is only interested in simulating hydraulic or thermal systems, or where chemical reactions are simplified or ignored, the control system itself can be modified to provide the simulated process. The following example illustrates this; the controller is diagrammed in Figure 2 and the simulation is shown in Figure 3.

Assume that a heat-jacketed vessel is cooled using an internal cooling coil. The temperature is controlled by the amount of cooling water through the coil, and the heat is controlled to keep the flow at a high enough value to facilitate good heat transfer. Modifying the control program can be done simply by adding the required function blocks to calculate the heat transfer. The cooling heat value can be subtracted from the heating heat value and totalized. The resulting total functions as an integrator of the heat value in BTU/minute, which can be converted to a temperature reading.

In the simulation blocks the following functions are calculated:

The cooling flow, F_COIL, is equal to K*(TV-104), where TV-104 is in percent.

The heat transfer coefficient, U_COIL, is equal to K0 + K1*(TV-104).

The coil outlet temperature is calculated from a heat balance across the coil: the heat transferred across the coil surface equals the heat carried away by the coil flow:

Q_COIL1 = U_COIL * A * ((TI-104R) - (TOUTCOIL + Tincoil)/2) = F_COIL * (TOUTCOIL - Tincoil)

Solving the above for TOUTCOIL:

TOUTCOIL = (U_COIL * A * ((TI-104R) - Tincoil/2) + F_COIL * Tincoil) / (F_COIL + U_COIL * A / 2)

The heat transferred through the coil, Q_COIL1, is equal to F_COIL * (TOUTCOIL - Tincoil) / 60. Tincoil is a constant of 529.7 DegR.

The electrical power percentage, P-104, is converted to heat in BTUs.
This value is then lagged by a first-order lag with a time constant equal to the thermal time constant across the jacket, tau:

Q_ELECT = K*(P-104) * (1.0 - e^(-t/tau))

Q_COIL is a lagged value of the calculated Q_COIL1. Once the electrical heat and cooling heat values are calculated, the subtraction element, D, subtracts the cooling heat from the electrical heat. This heat value is then totalized. The controlled variable, TI-104, is finally calculated in degrees C; the temperature in degrees R, TI-104R, is calculated as well.

Robert L. Heider, PE, is Adjunct Professor in the Chemical Engineering Department of Washington University, St. Louis, MO. He can be reached by phone at 314-935-6070; by fax at 314-935-7211; or by e-mail at heider@wuche.che.wustl.edu.
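As a rough illustration of the scheme just described, the same coil-and-heater heat balance can be sketched in a few lines of Visual Basic outside of a control system. This is only a sketch: the constants K_heat, K_flow, K0, K1, area, massCp and tau are made-up placeholders rather than values from the article, and the first-order lag is approximated with a simple Euler update instead of the exact exponential response.

Sub SimulateCoilAndHeater()
    ' State and intermediate variables
    Dim i As Long
    Dim P104 As Double        ' electrical power, percent (P-104)
    Dim TV104 As Double       ' cooling valve position, percent (TV-104)
    Dim F_coil As Double      ' cooling water flow through the coil
    Dim U_coil As Double      ' heat transfer coefficient
    Dim ToutCoil As Double    ' coil outlet temperature, DegR
    Dim Q_elect As Double     ' lagged electrical heat, BTU/min
    Dim Q_coil As Double      ' heat removed by the coil, BTU/min
    Dim heatTotal As Double   ' totalized net heat, BTU
    Dim TI104R As Double      ' vessel temperature, DegR (TI-104R)

    ' Placeholder constants -- assumed for illustration, not taken from the article
    Const TinCoil As Double = 529.7   ' coil inlet temperature, DegR (given in the text)
    Const stepMin As Double = 1#      ' integration step, minutes
    Const tau As Double = 10#         ' assumed jacket thermal time constant, minutes
    Const K_heat As Double = 10#      ' assumed BTU/min per percent electrical power
    Const K_flow As Double = 2#       ' assumed coil flow per percent valve opening
    Const K0 As Double = 2#           ' assumed base heat transfer coefficient
    Const K1 As Double = 0.2          ' assumed coefficient gain with valve position
    Const area As Double = 50#        ' assumed coil area term, A
    Const massCp As Double = 500#     ' assumed lumped thermal capacitance, BTU/DegR

    ' Initial conditions
    P104 = 40#
    TV104 = 30#
    TI104R = 560#
    heatTotal = massCp * TI104R       ' start the totalizer from the initial temperature
    Q_elect = 0#

    For i = 1 To 600                  ' ten hours of one-minute steps
        ' first-order lag on the electrical heat (Euler form of Q_ELECT = K*(P-104)*(1 - e^(-t/tau)))
        Q_elect = Q_elect + (stepMin / tau) * (K_heat * P104 - Q_elect)

        ' coil flow and heat transfer coefficient from the valve position
        F_coil = K_flow * TV104
        U_coil = K0 + K1 * TV104

        ' coil outlet temperature from the heat balance solved in the text
        ToutCoil = (U_coil * area * (TI104R - TinCoil / 2) + F_coil * TinCoil) / _
                   (F_coil + U_coil * area / 2)
        Q_coil = F_coil * (ToutCoil - TinCoil) / 60

        ' subtract cooling from heating and totalize; the running total acts as the integrator
        heatTotal = heatTotal + (Q_elect - Q_coil) * stepMin
        TI104R = heatTotal / massCp   ' convert the totalized heat back to a temperature
    Next i

    Debug.Print "Final vessel temperature (DegR): "; TI104R
End Sub

Each pass through the loop mirrors the function blocks described above: compute the lagged electrical heat, compute the coil heat removal from the valve position, subtract, totalize, and convert the running total back to a temperature.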
Statistical properties of semiclassical solutions of the non-stationary Schrödinger equation on metric graphs
Seminar Room 1, Newton Institute

The talk is devoted to the development of semiclassical theory on quantum graphs. For the non-stationary Schrödinger equation, the propagation of Gaussian packets initially localized at one point on an edge of the graph is described. Emphasis is placed on the statistical behavior of asymptotic solutions with increasing time. It is proven that determining the number of quantum packets on the graph is associated with a well-known number-theoretic problem: counting the number of integer points in an expanding polyhedron. An explicit formula for the leading term of the asymptotics is presented. It is proven that for almost all incommensurable passage times, Gaussian packets are distributed asymptotically uniformly with respect to the passage times of the edges on a finite compact graph. The distribution of the energy on infinite regular trees is also studied. The presentation is based on joint work with A.I. Shafarevich.
Pinpointing Utility - Less Wrong
Comments (154)

What I took away from this post is that confusing a decision-theoretic utility function with hedonic utility will make you very sick, and you might have to go to the hospital. I like this. Stay safe!

It would be interesting to see more exposition and discussion of hedonic utility. For example, why is there a distinction between positive and negative hedonic utility (i.e., hedons vs dolors), which do not seem to make decision theoretic sense? Has anyone proposed a design for an AI or reinforcement learning agent that can be said to make use of hedonic utility, which might help explain its evolutionary purpose?

I think I'm not quite understanding your question. If I've understood you correctly, you're asking why we're wired to respond differently to avoiding changes that make us less happy (that is, avoiding dolors) than to seeking out changes which make us more happy (that is, seeking out hedons), even if the magnitude of the change is the same. For example, why avoiding a loss motivates us differently than gaining something of equivalent value. If that's right, can you clarify why you expect an answer beyond "historical reasons"? That is, we have a lot of independent systems for measuring "hedons" and "dolors" in different modalities; we respond to grief with different circuits than we respond to pain, for example. We create this hypothetical construct of an inter-modal "hedon/dolor" based on people's lottery behavior... do I prefer a 50% chance of losing my husband or having an elephant jump up and down on my leg for ten minutes, and so forth. And we know that people have inconsistent lottery behaviors and can be Dutch booked, so a "hedon/dolor" is at best an idealization of what are in humans several different inconsistently-commensurable units of happiness and unhappiness. Is there anything else that needs to be explained here? It sounds like you're assuming that this jury-rigged system was specifically selected for, and you want to know what exerted the selection pressure, when there doesn't seem to be any reason to assume that it's anything more than the best compromise available between a thousand independently-selected-for motivational systems operating on the same brain. Or have I misunderstood your question?

It's not clear that the two can be reconciled.

It's also not clear that the two can't be reconciled. Suppose for simplicity there are just hedons and dolors into which every utilitarian reaction can be resolved and which are independent. Then every event occupies a point in a plane. Now, ordering real numbers (hedons with no dolorific part or dolors with no hedonic part) is easy and more or less unambiguous. However, it's not immediately obvious whether there's a useful way to specify an order over all events. A zero-hedon, one-dolor event clearly precedes a one-hedon, zero-dolor event in the suck--->win ordering. But what about a one-hedon, one-dolor event vs. a zero-hedon, zero-dolor event? It might seem that we can simply take the signed difference of the parts (so in that last example, 1-1=0-0, so the events are 'equal'), but the stipulation of independence seems like it forbids such arithmetic (like subtracting apples from oranges). Orders on the complex numbers that have been used for varying applications (assuming this has been done) might shed some light on the matter. Clearly a CEV over all complex (i.e.
consisting of exactly a possibly-zero hedonic part and possibly-zero dolorific part) utilities would afford comparison between any two events, but this doesn't seem to help much at this point. Beyond knowledge of the physical basis of pleasure and pain, brain scans of humans experiencing masochistic pleasure might be a particularly efficient insight generator here. Even if, say, pure pleasure and pure pain appear very differently on an MRI, it might be possible to reduce them to a common unit of utilitarian experience that affords direct comparison. On the other hand, we might have to conclude that there are actually millions of incommensurable 'axes' of utilitarian experience. Orders on the complex numbers that have been used for varying applications (assuming this has been done) might shed some light on the matter. It can be proven that there is no ordering of complex numbers which is compatible with the normal conventions of multiplication and addition. It's not even possible to reliably seperate complex numbers into "positive" and "negative", such that multiplying two positive numbers gives a positive number, multiplying a positive number by -1 gives a negative number, multiplying a negative number by -1 gives a positive number, and -1 is negative. To further complicate the matter, I don't think that hedons and dolors are fully independant; if you place the 'hedons' line along the x-axis, the 'dolors' line may be a diagonal. Or a curve. That settled that quickly. Thanks. Then I suppose the next question in this line would be: To what extent can we impose useful orders on R^2? (I'd need to study the proof in more detail, but it seems that the no-go theorem on C arises from its ring structure, so we have to drop it.) I'm thinking the place to start is specifying some obvious properties (e.g. an outcome with positive hedonic part and zero dolorific part always comes after the opposite, i.e. is better), though I'm not sure if there'd be enough of them to begin pinning something down. Edit: Or, oppositely, chipping away at suspect ring axioms and keeping as much structure as possible. Though if it came down to case-checking axioms, it might explode. The most useful order on R^2 seems to be an order by the absolute value. (That is to say, the distance from the origin.) This is an ordering that has many uses, and gives us certain insights into the structure of the space. (Note though that it is only a partial order, not a complete one, as you can have two different points with the same absolute value.) Yeah, absolute value is the second-most obvious one, but I think it breaks down: It seems that if we assume utility to be a function of exactly (i.e. no more and no less than) hedons and dolors in R^2, we might as well stipulate that each part is nonnegative because it would then seem that any sense of dishedons must be captured by dolors and vice versa. So it seems that we may assume nonegativity WLOG. Then given nonnegativity of components, we can actually compare outcomes with the same absolute value: Given nonnegativity, we can simplify (I'm pretty sure, but even if not, I think a slightly modified argument still goes through) our metric from sqrt(h^2+d^2) (where h,d are the hedonic and dolorific parts) to just d+h. Now suppose that (h1,d1) and (h2,d2) are such that h1+d1=h2+d2. 
Then: 1) If h1<h2, then d1>d2 and so (h1,d1) is clearly worse than (h2,d2) 2) If h1=h2, then d1=d2 and equipreferable 3) If h1>h2, then d1<d2 and so (h1,d1) is clearly better than (h2,d2) So within equivalence classes there will be differing utilities. Moreover, (0,2)<<(0,0)<<(2,0) but the LHS and RHS fall in the same equivalence classs under absolute value. So the intervals of utility occupied by equivalence classes can overlap. (Where e.g. 'A<<B' means 'B is preferable over A'.) Hence absolute value seems incompatible with the requirements of a utility ordering. The most obvious function of (h,d) to form equivalence classes is h minus d as in my earlier comment, but that seems to break down (if we assume every pair of elements in a given equivalence class has the same utility) by its reliance on fungibility of hedons and dolors. A 'marginal dolor function' that gives the dolor-worth of the next hedon after already having x hedons seems like it might fix this, but it also seems like it would be a step away from practicality. You are correct, it does break down like that. Actually, for some reason I wasn't thinking of a space where you want to maximize one value and minimize another, but one where you want to maximize both. That is a reasonable simplification, but it does not translate well to our problem. Another potential solution if you want to maximize hedons and dolors, you could try sorting by the arguments of points. (i.e. maximize tan(hedons/dolors) or in other words, (given that both hedons and dolors are positive), maximize hedons/dolors itself.) Ultimately, I think you need some relation between hedons and dolors, something like "one hedon is worth -3.67 dolors" or similar. In the end, you do have have to choose whether (1 hedon, 1 dolor) is preferable to (0 hedons, 0 dolors). (And also whether (2 hedons, 1 dolor) is preferable to (1 hedon, 0 dolors), and whether (1 hedon, 2 dolors) is preferable to (0 hedons, 1 dolor), and so forth.) I suspect this relation would be linear, as the way we have defined hedons and dolors seems to suggest this, but more than that has to be left up to the agent who this utility system belongs to. And on pain of lack of transitivity in his or her preferences, that agent does seem to need to have one relation like this or another. maximize hedons/dolors Then 0.002 hedons and 0.00001 dolors is 20 times better than 10 hedons and 1 dolor. This would be surprising. Ultimately, I think you need some relation between hedons and dolors, something like "one hedon is worth -3.67 dolors" or similar. That's linear, with a scaling factor. If it is linear, then the scaling factor doesn't really matter much ('newdolors' can be defined as 'dolors' times the scaling factor, then one hedon is equal to one newdolor). But if it's that simple, then it's basically a single line that we're dealing with, not a plane. There are any number of possible alternative (non-linear) functions; perhaps the fifth power of the total number of dolors is equivalent to the fourth power of the number of hedons? Perhaps, and this I consider far more likely, the actual relationship between hedons and dolors is nowhere near that neat... I would suspect that there are several different, competing functions at use here; many of which may be counter-productive. For example; very few actions produce ten billion hedons. 
Therefore, if I find a course of action that seems (in advance) to produce ten billion or more hedons, then it is more likely that I am mistaken, or have been somehow fooled by some enemy, than that my estimations are correct. Thus, I am automatically suspicious of such a course of action. I don't dismiss it out-of-hand, but I am extremely cautious in proceeding towards that outcome, looking out for the hidden we might have to conclude that there are actually millions of incommensurable 'axes' of utilitarian experience. Yeah, I guess I more or less take this for granted. Or, rather, not that they're incommensurable, exactly, but that the range of correspondences -- how many Xs are worth a Y -- is simply an artifact of what set of weighting factors was most effective, among those tested, in encouraging our ancestors to breed, which from our current perspective is just an arbitrary set of historical factors. I think it might be due to the type of problem we are facing as living entities. We have a consistent never ending goal of "not killing ourselves" and "not mucking up our chances of reproduction". Pain is one of the signs that we might be near doing these things. Every day we manage not to do these things is in some way a good day. This presents a baseline of utility where anything less than it is considered negative and anything more than that positive. So it just might be what this type of algorithm feels like from the inside. Sometimes people refer to this relativity of utilities as "positive definite affine structure" or "invariant up to a scale and shift", as if there were some background quantities that could scale and shift. This is a misrepresentation of the mathematical point of view. In particular, the word "were" is misleading: when I say things like this, I am referring to a property of the map, not a property of the territory. The mathematical keyword is equivalence relation; when I say that utility functions are only well-defined up to positive affine transformations, what I mean is that "utility function" does not mean "function on outcomes," it means "equivalence class of functions on outcomes," where positive affine transformations define the equivalence relation. There are other equivalent ways of describing what a utility function is that don't require working with equivalence classes, but it doesn't matter which one you pick in the sense that the resulting mathematical theory has the same mathematical consequences. Thanks for correcting me! I changed that paragraph. Is it less offensive to people who know what they are talking about now? Sometimes people refer to this relativity of utilities as "positive affine structure" or "invariant up to a scale and shift", which confuses me by making me think of a utility function as a set of things with numbers coming out, which don't agree on the actual numbers, but can be made to agree with a linear transform, rather than a space I can measure distances in. It's somewhat confusing to me; you're using words like "set," "space," and "measure distances" that have mathematically precise meanings but in a way which appears to disagree with those mathematically precise meanings (I don't know what you mean when you say that a utility function is a space). It might be helpful to non-mathematicians, though. I mean set as in set-theory. As in the utility function is a set of equivalent functions. If I'm disagreeing with math use, please correct me. (on second thought that wording is pretty bad, so I might change it anyway. 
Still, are my set-intuitions wrong?) I mean space as in a 1-dimensional space (with a non-crazy metric, if crazy metrics even exist for 1d). By "measure distance" I mean go into said space with a tape measure and see how far apart things are. I call it a space because then when I visualize it as such, it has all the right properties (scale/shift agnosticism). If I call it a real-valued function, I imagine the real number line, which has a labeled axis, so to speak, so it tempts me to do numerology. You can think of a utility function as defining a measure of "signed distance" on its domain. Utilities have some similarity to distance in physical space, in that to give coordinates to all objects you need to select some origin and system of units for your coordinate system, but the physical reality is the same regardless of your coordinate system. A member of a particular utility function's equivalence class, can then be thought of as a function that gives the coordinates of each thing in the domain (world-states, presumably), in some particular coordinate system. For an example, if I prefer to have three ice creams over zero, three times as much as I prefer one ice cream over zero, then we can write that as a "utility function" u(no ice cream) = 0; u(one ice cream) = 1; u(three ice creams) = 3. In this case we have chosen arbitrarily no ice cream as the origin of our coordinate system, and "distance between one ice cream and none" as the basic unit of Is this what you mean by a 1-dimensional space? That's exactly what I mean. Four servings of ice cream would have me ill. I mean set as in set-theory. As in the utility function is a set of equivalent functions. If I'm disagreeing with math use, please correct me. (on second thought that wording is pretty bad, so I might change it anyway. Still, are my set-intuitions wrong?) Got it. This is strictly speaking true, but "equivalence class of functions" would be a more precise way of putting it. I mean space as in a 1-dimensional space (with a non-crazy metric, if crazy metrics even exist for 1d). By "measure distance" I mean go into said space with a tape measure and see how far apart things are. So there are some technical points I could go into here, but the short story is that most equivalence classes under positive affine transformations are 2-dimensional, not 1-dimensional, and also aren't naturally endowed with a notion of distance. the short story is that most equivalence classes under positive affine transformations are 2-dimensional, not 1-dimensional, and also aren't naturally endowed with a notion of distance. I can see how distance would be trouble in 2d affine-equivalent spaces, but distance seems to me to be a sensible concept in a 1d space, even with positive-scale and shift. And utility is 1d, so it's safe to call it a "distance" right? Maybe you're referring to distance-from-A-to-B not having a meaningful value without defining some unit system? Maybe we should call them "relative distances", except that to me, "distance" already connotes relativeness. And utility is 1d I'm not sure what you mean by this. Maybe you're referring to distance-from-A-to-B not having a meaningful value without defining some unit system? Maybe we should call them "relative distances", except that to me, "distance" already connotes relativeness. This is a totally sensible point of view but disagrees with the mathematical definition. It also doesn't apply directly to the 2-dimensional equivalence classes, as far as I can tell. 
For example, suppose we're talking about utilities over two possible outcomes {heads, tails}. There are three equivalence classes here, which are u(heads) > u(tails), u(heads) = u(tails), and u(heads) < u(tails). The first and third equivalence classes are 2-dimensional. What is the distance between the two functions (u(heads) = 2, u(tails) = 1) and (u(heads) = 3, u(tails) = 2) in the first case, even in a relative sense?

Ohhhhhhh, do you mean 2d as in 2 degrees of freedom? I mean it as in spatial coordinates. As an aside, I just realized that "displacement" is more accurate for what I'm getting at than "distance". The thing I'm talking about can be negative. And distance/displacement isn't between equivalent utility functions, it's between two outcomes in one utility function. "X is 5 tasty sandwiches better than Y" is what I'm referring to as a displacement. And the displacement numbers will be the same for the entire equivalence class, which is why I prefer it to picking one of the equivalent functions out of a hat. If you only ever talk about measured distances, there is only one utility function in the equivalence class, because all the scales and shifts cancel out: This way, the utility function can scale and shift all it wants, and my numbers will always be the same. Equivalently, all agents that share my preferences will always agree that a day as a whale is "400 orgasms better than a normal day", even if they use another basis themselves. Was that less clear than I thought? If there are only two points in a space, you can't get a relative distance because there's nothing to make the distance relative to. For that problem I would define U(heads) = 1 and U(tails) = 0, as per my dimensionless scheme.

Ohhhhhhh, do you mean 2d as in 2 degrees of freedom? I mean it as in spatial coordinates.

What's the difference?

And distance/displacement isn't between equivalent utility functions, it's between two outcomes in one utility function. "X is 5 tasty sandwiches better than Y" is what I'm referring to as a displacement.

Your use of the word "in" here disagrees with my usage of the word "utility function." Earlier you said something like "a utility function is a space" and I defined "utility function" to mean "equivalence class of functions over outcomes," so I thought you were referring to the equivalence class. Now it looks like you're referring to the space of (probability distributions over) outcomes, which is a different thing. Among other things, I can talk about this space without specifying a utility function. A choice of utility function allows you to define a ternary operation on this space which I suppose could reasonably be called "relative displacement," but it's important to distinguish between a mathematical object and a further mathematical object you can construct from it.

Your use of the word "in" here disagrees with my usage of the word "utility function."

Yes, it does. You seem to understand what I'm getting at.

it's important to distinguish between a mathematical object and a further mathematical object you can construct from it.

I don't think anyone is making mathematical errors in the actual model, we are just using different words, which makes it impossible to communicate. If you dereference my words in your model, you will see errors, and likewise the other way. Is there a resource where I could learn the correct terminology? I feel confused.
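For what it's worth, the invariance this exchange keeps circling can be written in one line. This is only a sketch of the point, with u any representative of the equivalence class and a > 0, b the permitted scale and shift:

\[
u'(x) \;=\; a\,u(x) + b \quad (a > 0)
\qquad\Longrightarrow\qquad
\frac{u'(X) - u'(Y)}{u'(A) - u'(B)}
\;=\; \frac{a\,[\,u(X) - u(Y)\,]}{a\,[\,u(A) - u(B)\,]}
\;=\; \frac{u(X) - u(Y)}{u(A) - u(B)} .
\]

So a "displacement" such as "X is 5 tasty-sandwich-units better than Y" picks out the same number for every member of the equivalence class, even though the raw values u(X) and u(Y) do not.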
"a space I can measure distances in" is a strong property of a value, and it does not follow from your initial 5 axioms, and seems contrary to the 5th axiom. In fact, your own examples given further seem to provide a counterexample - i.e., if someone prefers being a whale to 400 actual orgasms, but prefers 1/400 of being a whale to 1 orgasm, then both "being a whale" and "orgasm" have some utility value, but they cannot be used as units to measure distance. If you're in a reality where a>b and 2a<2b, then you're not allowed to use classic arithmetic simply because some of your items look like numbers, since they don't behave like numbers. "Hawaii" can't be used as a unit to measure distance, nor can "the equator", but "the distance from Hawaii to the equator" can. Similarly, "the difference between 0 orgasms and 1 orgasm" can be used as a unit to measure utilities (you could call this unit "1 orgasm", but that would be confusing and silly if you had nonlinear utility in orgasms: 501 orgasms could be less than or more than "1 orgasm" better than 500). Also, did you mean to have these the other way around?: but prefers 1/400 of being a whale to 1 orgasm While this is a basic point, it's one people seem to screw up around here a lot, so I'm glad someone wrote an article going over this in detail. Upvoted. I have one nitpick: You say, "We have to take the ratio between two utility differences", but really, because only positive affine transformations are OK, what we really have to take is the ratio between a utility difference and the absolute value of a utility difference. Tangentially, I'd also like to point out the article Torsors Made Easy by John Baez. OK, to be honest, I'm not sure how understandable this really is to someone who doesn't already know a bit. But "torsor" is a useful concept to have when thinking about things like this, and there probably isn't a better quick explanation out there. Tangentially, I'd also like to point out the article Torsors Made Easy by John Baez. OK, to be honest, I'm not sure how understandable this really is to someone who doesn't already know a bit. Having read that article years ago, without any previous exposure to the concept of torsors (other than the implicit exposures Baez notes, that everyone's had), torsors also came to mind for me when reading nyan_sandwich's article. I have one nitpick: You say, "We have to take the ratio between two utility differences", but really, because only positive affine transformations are OK, what we really have to take is the ratio between a utility difference and the absolute value of a utility difference. Why? Positive affine transformations are OK, and they don't affect the sign of utility differences. Yes; the point of making this change is to exclude negative affine transformations. what we really have to take is the ratio between a utility difference and the absolute value of a utility difference. Ooops, you are totally right. Your units have to be absolute value. Thank you, I'll maybe fix that. Your "dimensionless" example isn't dimensionless; the dimensions are units of (satandate - whalefire). You only get something like a reynolds number when the units cancel out, so you're left with a pure ratio that tells you something real about your problem. Here you aren't cancelling out any units, you're just neglecting to write them down, and scaling things so that outcomes of interest happen to land at 0 and 1. Expecting special insight to come out of that operation is numerology. 
Great article other than that, though. I hadn't seen this quote before: "We have practically defined numerical utility as being that thing for which the calculus of mathematical expectations is legitimate." For me that really captures the essence of it. Here you aren't cancelling out any units, you're just neglecting to write them down, and scaling things so that outcomes of interest happen to land at 0 and 1. Expecting special insight to come out of that operation is numerology. Hmm. You are right, and I should fix that. When we did that trick in school, we always called it "dimensionless", but you are right it's distinct from the pi-theorem stuff (reynolds number, etc). I'll rethink it. Edit: Wait a minute, on closer inspection, your criticism seems to apply to radians (why radius?) and reynolds number (characteristic length and velocity are rather arbitrary in some problems). Why are some unit systems "dimensionless", and others not? More relevently, taboo "dimensionless", why are radians better (as they clearly are) than degrees or grads or arc-minutes? Why is it useful to pick the obvious characteristic lengths and velocities for Re, as opposed to something else. For radians, it seems to be something to do with euler's identity and the mathematical foundations of sin and cos, but I don't know how arbitrary those are, off the top of my head. For Re, I'm pretty sure it's exactly so that you can do numerology by comparing your reynolds number to reynolds numbers in other problems where you used the same charcteristic length (if you used D for your L in both cases, your numerology will work, if not, not). I think this works the same in my "dimensionless" utility tricks. If we are consistent about it, it lets us do (certain forms of) numerology without hazard. Why are some unit systems "dimensionless", and others not? Some ratios are dimensionless because the numerator and denominator are in the same dimension, so they cancel. for example, a P/E (price to earnings) ratio of a stock. The numerator & denominator are both in $ (or other currency). Radians are a ratio of lengths (specifically, arc length to radius) whereas degrees are the same ratio multiplied by an arbitrary constant (180/pi). We could imagine that halfradians (the ratio of arc length to diameter) might also be a natural unit, and then we'd have to go into calculus to make a case for radians, but degrees and arc-minutes are right out. Lengths offer one degree of freedom because they lack units but not an origin (all lengths are positive, and this pinpoints a length of 0). For utilities, we have two degrees of freedom. One way to convert such a quantity to a dimensionless one is to take (U1 - U2)/(U1 - U3), a dimensionless function of three utilities. This is more or less what you're doing in your "dimensionless utility" section. But it's important to remember that it's a function of three arguments: 0.999 is the value obtained from considering Satan, paperclips, and whales simultaneously. It is only of interest when all three things are relevant to making a decision. Incidentally, there's a typo in your quote about Re: 103 should be 10^3. We could imagine that halfradians (the ratio of arc length to diameter) might also be a natural unit, and then we'd have to go into calculus to make a case for radians I was actually thinking of diameter-radians when I wrote that, but I didn't know what they were called, so somehow I didn't make up a name. Thanks. For utilities, we have two degrees of freedom. 
One way to convert such a quantity to a dimensionless one is to take (U1 - U2)/(U1 - U3), a dimensionless function of three utilities. This is more or less what you're doing in your "dimensionless utility" section. But it's important to remember that it's a function of three arguments: 0.999 is the value obtained from considering Satan, paperclips, and whales simultaneously. It is only of interest when all three things are relevant to making a decision. Ok good, that's what I was intending to do, maybe it should be a bit clearer? Incidentally, there's a typo in your quote about Re: 103 should be 10^3. Shamelessly ripped from wikipedia; their typo. 10^3 does seem more reasonable. Thanks. I was actually thinking of diameter-radians when I wrote that, but I didn't know what they were called, so somehow I didn't make up a name. For the record, I was also making up a name when I said "halfradians". And now that I think about it, it should probably be "twiceradians", because two radians make one twiceradian. Oops. This post is excellent. Part of this is the extensive use of clear examples and the talking through of anticipated sticking points, objections, and mistakes, and its motivating, exploratory approach (not plucked out of thin vacuum). For example, if we have decided that we would be indifferent between a tasty sandwich and a 1/500 chance of being a whale for tomorrow, and that we'd be indifferent between a tasty sandwich and a 30% chance of sun instead of the usual rain, then we should also be indifferent between a certain sunny day and a 1/150 chance of being a whale. I think you didn't specify strong enough premises to justify this deduction; I think you didn't rule out cases where your utility function would depend on probability and outcome in such a way that simply multiplying is invalid. I might have missed it. Edit: D'oh! Never mind. This is the whole point of an Expected Utility theorem... ...{50%: sunny+sandwich; 50% baseline} and {50%: sunny; 50%: sandwich}, and other such bets. (We need a better solution for rendering probability distributions in prose). I doubt that significantly better compression is possible. I expect that communicating, uncompressed, the outcome and the probability is necessary, so stronger compression seems doubtful than what you did, which seems minimal with respect to those constraints. However, you might have been referring to clarity more generally. I would avoid the use of some of the more grim examples in this context. Putting nonconsensual, violent sex, torture, and ruination of vulnerable people through mental manipulation alongside ice cream, a day as a whale, and a sunny day would overstep my flippant-empathetic-Gentle-depressing threshold, and it seems like it would be possible to come up with comparably effective examples that didn't. Make of that what you will. (I encourage others to reply with their own assessment, particularly those who also felt (slightly) uncomfortable on this point, since I imagine their activation energy for saying so would be highest.) Yeah, the violent rape and torture jarred unpleasantly with me as well. I liked the other examples and the post in general. I see what you guys are getting at, but it was useful to go somewhere hellish to demonstrate certain invariances, and the quoted comment was too good to pass up. I could have used more sensitive examples, but it did go through my mind that I wanted to make it horrible for some reason... I won't change it, but will steer away from such examples in the future. 
That said, it's interesting that people react to the thought of rape and torture, but not the universe getting paperclipped, which is many many orders of magnitude worse. I get more angry at a turtle getting thrown against the wall than I do at genocides... I guess some things just hit you hard out of proportion to their actual value. That said, it's interesting that people react to the thought of rape and torture, but not the universe getting paperclipped, which is many many orders of magnitude worse. I guess rape and torture hit closer to home for some people... no one has ever actually experienced the universe getting paperclipped, nor is it remotely likely to happen tomorrow. Lots of very real people will be raped and tortured tomorrow, though. Thanks for taking on board the remarks! That said, it's interesting that people react to the thought of rape and torture, but not the universe getting paperclipped, which is many many orders of magnitude worse. I get more angry at a turtle getting thrown against the wall than I do at genocides... I guess some things just hit you hard out of proportion to their actual value. Ooops, you tried to feel a utility. Go directly to type theory hell; do not pass go, do not collect 200 utils. Ooops, you tried to feel a utility. Go directly to type theory hell; do not pass go, do not collect 200 utils. I don't think this example is evidence against trying to 'feel' a utility. You didn't account for scope insensitivity and the qualitative difference between the two things you think you're comparing. You need to compare the feeling of the turtle thrown against the wall to the cumulative feeling when you think about EACH individual beheading, shooting, orphaned child, open grave, and every other atrocity of the genocide. Thinking about the vague concept "genocide" doesn't use the same part of your brain as thinking about the turtle incident. What does it mean for something to be labeled as a certain amount of "awesome" or "good" or "utility"? "Awesome" is an emotional reaction whereas "utility" (as you point out in this post) is a technical and not entirely intuitive concept in decision theory. Maybe one ought to be derived from the other, but it's not right to just implicitly assume that they are the same thing. Each outcome has a utility Unlike some recent posts about VNM, you don't say what "outcome" means. If we take an outcome to be a world history, then "being turned into a whale for a day" isn't an outcome. As far as I can tell, the great project of moral philosophy is an adult problem, not suited for mere mortals like me. I'm having trouble reconciling this with 'You already know that you know how to compute "Awesomeness", and it doesn't feel like it has a mysterious essence that you need to study to discover.' VNM says nothing about your utility function. Consequentialism, hedonism, utilitarianism, etc are up to you. I'm pretty sure VNM, or just the concept of utility function, implies consequentialism (but not the other two). (These comments occurred to me as I read the OP right after it was posted, but I waited a while to see if anyone else would make the same points. No one did, which makes me wonder why.) not right to just implicitly assume that they are the same thing. Yes, good point. I was just listing words that people tend to throw around for that sort of problem. "awesome" is likewise not necessarily "good". I wonder how I might make that clearer... 
If we take an outcome to be a world history, then "being turned into a whale for a day" isn't an outcome. Thanks for pointing this out. I forgot to substantiate on that. I take "turned into a whale for a day" to be referring to the probability distribution over total world histories consistent with current observations and with the turned-into-a-whale-on-this-day constraint. Maybe I should have explained what I was doing... I hope no one gets too confused. I'm having trouble reconciling this "Awesomeness" is IMO the simplest effective pointer to morality that we currently have, but that morality is still inconsistent and dynamic. I take the "moral philosophy" problem to be working out in explicit detail what exactly is awesome and what isn't, from our current position in morality-space, with all its meta-intuitions. I think this problem is incredibly hard to solve completely, but most people can do better than usual by just using "awesomeness". I hope this makes that clearer? VNM, or just the concept of utility function, implies consequentialism In some degenerate sense, yes, but you can easily think up a utility function that cares what rules you followed in coming to a decision, which is generally not considered "consequentialism". It is after all part of the world history and therefor available to the utility function. We may have reached the point where we are looking at the problem in more detail than "consequentialism" is good for. We may need a new word to distinguish mere VNM from rules-don't-matter type I take "turned into a whale for a day" to be referring to the probability distribution over total world histories consistent with current observations and with the turned-into-a-whale-on-this-day I don't think this works for your post, because "turned into a whale for a day" implies I'm probably living in a universe with magic, and my expected utility conditional on that would be mostly determined by what I expect will happen with the magic for the rest of time, rather the particular experience of being a whale for a day. It would no longer make much sense to compare the utility of "turned into a whale for a day" with "day with an orgasm" and "day without an orgasm". but most people can do better than usual by just using "awesomeness" It's possible that I judged your previous post too harshly because I was missing the "most people" part. But what kind of people do you think can do better by using "awesomeness"? What about, for example, Brian Tomasik, who thinks his morality mostly has to do with reducing the amount of negative hedons in the universe (rather than whales and starships)? I don't think this works for your post, because ... Ooops. I suppose to patch that, we have to postulate that we at least believe that we live in a world where a wizard turning you into a whale is normal enough that you don't totally re-evaluate everything you believe about reality, but rare enough that it would be pretty awesome. Thanks for catching that. I can't believe I missed it. What about, for example, Brian Tomasik, who thinks his morality mostly has to do with reducing the amount of negative hedons in the universe (rather than whales and starships)? I would put that guy in the "needs awesomeism" crowd, but maybe he would disagree, and I have no interest in pushing it. I don't much like his "morality as hostile meme-warfare" idea either. In fact, I disagree with almost everything in that post. 
Last night, someone convinced me to continue on this writing trend that the OP is a part of, and end up with a sane attack, or at least scouting mission, on moral philosophy and CEV or CEV-like strategies. I do have some ideas that haven't been discussed around here, and a competent co-philosopher, so if I can merely stay on the rails (very hard), it should be interesting. EDIT: And thanks a lot for your critical feedback; it's really helpful given that so few other people come up with useful competent criticism. I don't much like his "morality as hostile meme-warfare" idea either. In fact, I disagree with almost everything in that post. What do you mean by "don't like"? It's epistemically wrong, or instrumentally bad to think that way? I'd like to see your reaction to that post in more detail. And thanks a lot for your critical feedback; it's really helpful given that so few other people come up with useful competent criticism. It seems to me that people made a lot more competent critical comments when Eliezer was writing his sequences, which makes me think that we've driven out a bunch of competent critics (or they just left naturally and we haven't done enough to recruit replacements). "Awesomeness" is IMO the simplest effective pointer to morality that we currently have, but that morality is still inconsistent and dynamic. The more I think about "awesomeness" as a proxy for moral reasoning, the less awesome it becomes and the more like the original painful exercise of rationality it looks. tl;dr: don't dereference "awesome" in verbal-logical mode. It's too late for me. It might work to tell the average person to use "awesomeness" as their black box for moral reasoning as long as they never ever look inside it. Unfortunately, all of us have now looked, and so whatever value it had as a black box has disappeared. You can't tell me now to go back and revert to my original version of awesome unless you have a supply of blue pills whenever I need them. If the power of this tool evaporates as soon as you start investigating it, that strikes me as a rather strong point of evidence against it. It was fun while it lasted, though. It's too late for me. It might work to tell the average person to use "awesomeness" as their black box for moral reasoning as long as they never ever look inside it. Unfortunately, all of us have now looked, and so whatever value it had as a black box has disappeared. You seem to be generalizing from one example. Have you attempted to find examples of people who have looked inside the box and not destroyed its value in the process? I suspect that the utility of this approach is dependent on more than simply whether or not the person has examined the "awesome" label, and that some people will do better than others. Given the comments I see on LW, I suspect many people here have looked into it and still find value. (I will place myself into that group only tentatively; I haven't looked into it in any particular detail, but I have looked. OTOH, that still seems like strong enough evidence to call "never ever look inside" into question.) That was eminently readable. Thank you. I hope you don't mind if I ask for elaboration? I'm fairly unlikely to read a dry, mathy post pointing out mistakes that people make when wielding utility in making decisions. Clear, humorous examples help, as does making abstract things concrete when possible -- the radioactive utilities made me laugh. The post was fairly long, but the summary wrapped things up nicely. 
If you are really paying attention, you may be a bit confused, because it seems to you that money or time or some other consumable resource can force you to assign utilities even if there is no uncertainty in the system. That issue is complex enough to deserve its own post, so I'd like to delay it for now. It seems simple enough to me- when making decisions under certainty, you only need an acyclic preference ordering. The reals are ordered and acyclic, but they also have scale. You don't need that scale under certainty, but you need it to encode probabilistic preferences under uncertainty. You don't need that scale under certainty, but you need it to encode probabilistic preferences under uncertainty. Well put, but there is a way that scale-utilities partially show up in economics when you try to factor outcomes, even without uncertainty. It does all cash out to just a preference ordering on the monolithic outcome level, though. What it means is that I'd be indifferent between a normal day with a 1/400 chance of being a whale, and a normal day with guaranteed extra orgasm. "Not tonight honey, I've determined that I have a 1/399 chance of being a whale!" "What if I give you two orgasms?" "Sorry, my utility function isn't linear in orgasms!" Doesn't have to be. Two orgasms is almost certainly better than a 1/399 whale-day if you are indifferent between one orgasm and 1/400 whale day. In other words, that's some pretty intense nonlinearity you've got there. Can't wait till I have time to write my linearity post... Should be straightforward. Only locally. "Sure, but surely 2 orgasms are better than 1, so, since you're at 1/399 for turning into a whale, and a single orgasm is equal to 1/400 chance of turning into a whale, so wouldn't two orgasms be good enough to at least require 1/398 chance of turning into a whale?" I'd like that, but let's stay on topic here. I don't trust the transitivity axiom of VNM utility. Thought I should mention this to make it clear that the "most of us" in your post is not a rhetorical device and there really are actual people who don't buy into the VNM hegemony. Thanks for pointing that out. I did try to make it clear that the essay was about "if you trust VNM, here's what it means". I, for one, trust the transitivity axiom. It seems absurd to value going in circles, but only if you run into the right lotteries. Maybe you could give an example of a preference cycle you think is valuable, so they rest of us could see where our intuitions diverge? Out of curiosity, why don't you trust the transitivity axiom? Because when I introspect on my preferences it doesn't seem to hold. Answering for myself, my unreflective preferences are nontransitive on problems like dust specks vs torture. I prefer N years of torture for X people to N years minus 1 second of torture for 1000X people, and any time of torture for X people over the same time of very slightly less painful torture for 1000X people, and yet I prefer a very slight momentary pain for any number of people, however large, to 50 years of torture for one person. If I ever reverse the latter preference, it will be because I will have been convinced by theoretical/abstract considerations that non transitive preferences are bad (and because I trust the other preferences in the cycle more), but I don't think I will ever introspect it as a direct preference by itself. 
If I ever reverse the latter preference, it will be because I will have been convinced by theoretical/abstract considerations that non transitive preferences are bad (and because I trust the other preferences in the cycle more), but I don't think I will ever introspect it as a direct preference by itself. Nicely put. So suppose we use the dust specks vs. torture situation to construct a cycle of options A1, A2, ..., An, in which you prefer A1 to A2, A2 to A3, and so on, and prefer An to A1. (For example, say that A1 is 50 years of torture for one person, and the other options spread things out over more people up until An is dust specks for lots of people.) If you were asked to choose between any of the options A1 through An, which one do you pick? And why? That might depend strongly on the filling-in details and on how the choice is framed. I can't visualize all the options and compare them together, so I always end up comparing the nearby cases and then running through the loop. I suspect that forced to make the choice I would say An (the dust specks) but more because of it being a Schelling point than any substantial, defensible reason. And I would say it while still endorsing A(n-1) to be better than An. Can you give an example? I am having a hard time imagining preferences contradicting that axiom (which is a failure on my part). Typo: it's meditation, not mediation. What a disaster! Thank you. It might be interesting to learn if anyone active in this community, has actually defined their utility function, stated it publicly and attempted to follow through. Thanks nyan, this was really helpful in comprehending what you told me last time. So if I understand you correctly, utilities are both subjective and descriptive. They only identify what a particular single agent actually prefers under uncertain conditions. Is this right? If so, how do we take into account situations where one is not sure what one wants? Being turned into a whale might be as awesome as being turned into a gryphon, but since you don't (presumably) know what either would be like, how do you calculate your expected payoff? Can you link me to or in some way dereference "what I told you last time"? one is not sure what one wants? how do you calculate your expected payoff? If you have a probability distribution over possible utility values or something, I don't know what to do with it. It's a type error to aggregate utilities from different utility functions, so don't do that. That's the moral uncertainty problem, and I don't think there's a satisfactory solution yet. Though Bostrom or someone might have done some good work on it that I haven't seen. For now, it probably works to guess at how good it seems relative to other things. Sometimes breaking it down into a more detailed scenario helps, looking at it a few different ways, etc. Fundamentally though, I don't know. Maximizing EU without a real utility function is hard. Moral philosophy is hard. My bad, nyan. You were explaining to me the difference between utility in Decision theory and utility in utilitarianism. I will try to find the thread later. Being turned into a hale [sic] might be as awesome as being turned into a gryphon Are all those ostensibly unintentional typos an inside joke of some kind? No, they are due solely to autocorrect, sloppy writing and haste. I will try to be more careful, apologies. You know you can go back and fix them right? 
...Am I the only who is wondering how being turned into a hale would even work and whether or not that would be awesome? Probably not possible since it isn't even a noun. Amartya Sen argues (it's discussed in his Nobel prize lecture: http://www.nobelprize.org/nobel_prizes/economics/laureates/1998/sen-lecture.pdf) that social choice theory requires making some interpersonal comparisons of utility, as without some such comparisons there is no way to evaluate the utility of total outcomes. However, the interpersonal comparisons do not need to be unlimited; just having some of them can be enough. Since interpersonal comparisons certainly do raise issues, they doubtless require some restrictions similar to those you mention for the individual case, which seems to be why Sen takes it as a very good thing that restricted interpersonal comparisons may be sufficient. I think that interpersonal "utility" is a different beast from VNM utility. VNM is fundamentally about sovereign preferences, not preferences within an aggregation. Inside moral philosophy we have an intuition that we ought to aggregate preferences of other people, and we might think that using VNM is a good idea because it is about preferences too, but I think this is an error, because VNM isn't about preferences in that way. We need a new thing built from the ground up for utilitarian preference aggregation. It may turn out to have similarities to VNM, but I would be very surprised if it actually was VNM. Are you familiar with the debate between John Harsanyi and Amartya Sen on essentially this topic (which we've discussed ad nauseam before)? In response to an argument of Harsanyi's that purported to use the VNM axioms to justify utilitarianism, Sen reaches a conclusion that broadly aligns with your take on the issue. If not, some useful references here. ETA: I worry that I've unduly maligned Harsanyi by associating his argument too heavily with Phil's post. Although I still think it's wrong, Harsanyi's argument is rather more sophisticated than Phil's, and worth checking out if you're at all interested in this area. Oh wow. Giving one future self u=10 and another u=0 is equally as good as giving one u=5 and another u=5. This is the same ethical judgement that an average utilitarian makes when they say that, to calculate social good, we should calculate the average utility of the population No, not at all. You can't derive mathematical results by playing word games. Even if you could, it doesn't even make sense to take the average utility of a population. Different utility functions are not commensurable. This is clearer if you use a many-worlds interpretation, and think of maximizing expected value over possible futures as applying average utilitarianism to the population of all possible future No. That is not at all how it works. A deterministic coin toss will end up the same in all everett branches, but have subjective probability distributed between two possible worlds. You can't conflate them; they are not the same. Having your math rely on a misinterpreted physical theory is generally a bad sign... Therefore, I think that, if the 4 axioms are valid when calculating U(lottery), they are probably also valid when calculating not our private utility, but a social utility function s(outcome), which sums over people in a similar way to how U(lottery) sums over possible worlds. Really? Translate the axioms into statements about people. Do they still seem reasonable? 1. Completeness. Doesn't hold. Preferred by who? 
The fact that we have a concept of "pareto optimal" should raise your suspicions. 2. Transitivity. Assuming you can patch Completeness to deal with pareto-optimality, this may or may not hold. Show me the math. 3. Continuity. Assuming we let population frequency or some such stand in for probability. I reject the assumption that strict averaging by population is valid. So much for reasonable assumptions. 4. Independence. Adding another subpopulation to all outcomes is not necessarily a no-op. Other problems include the fact that population can change, while the sum of probabilities is always 1. The theorem probably relies on this. Assuming you could construct some kind of coherent population-averaging theory from this, it would not involve utility or utility functions. It would be orthogonal to that, and would have to be able to take into account egalitarianism and population change, and varying moral importance of agents and such. It is even more shocking that it is thus possible to prove, given reasonable assumptions, which type of utilitarianism is correct. Shocking indeed. While I'm in broad agreement with you here, I'd nitpick on a few things. Different utility functions are not commensurable. Agree that decision-theoretic or VNM utility functions are not commensurable - they're merely mathematical representations of different individuals' preference orderings. But I worry that your language consistently ignores an older, and still entirely valid use of the utility concept. Other types of utility function (hedonic, or welfarist more broadly) may allow for interpersonal comparisons. (And unless you accept the possibility of such comparisons, any social welfare function you try to construct will likely end up running afoul of Arrow's impossibility theorem). Translate the axioms into statements about people. Do they still seem reasonable? I'm actually pretty much OK with Axioms 1 through 3 being applied to a population social welfare function. As Wei Dai pointed out in the linked thread (and Sen argues as well), it's 4 that seems the most problematic when translated to a population context. (Dealing with varying populations tends to be a stumbling block for aggregationist consequentialism in general.) That said, the fact that decision utility != substantive utility also means that even if you accepted that all 4 VNM axioms were applicable, you wouldn't have proven average utilitarianism: the axioms do not, for example, rule out prioritarianism (which I think was Sen's main point). But I worry that your language consistently ignores an older, and still entirely valid use of the utility concept. Other types of utility function (hedonic, or welfarist more broadly) may allow for interpersonal comparisons. I ignore it because they are entirely different concepts. I also ignore aerodynamics in this discussion. It is really unfortunate that we use the same word for them. It is further unfortunate that even LWers can't distinguish between an apple and an orange if you call them both "apple". "That for which the calculus of expectation is legitimate" is simply not related to inter-agent preference aggregation. I'm hesitant to get into a terminology argument when we're in substantive agreement. Nonetheless, I personally find your rhetorical approach here a little confusing. (Perhaps I am alone in that.) 
Yes, it's annoying when people use the word 'fruit' to refer to both apples and oranges, and as a result confuse themselves into trying to derive propositions about oranges from the properties of apples. But I'd suggest that it's not the most useful response to this problem to insist on using the word 'fruit' to refer exclusively to apples, and to proceed to make claims like 'fruit can't be orange coloured' that are false for some types of fruit. (Even more so when people have been using the word 'fruit' to refer to oranges for longer than they've been using it to refer to apples.) Aren't you just making it more difficult for people to get your point that apples and oranges are different? On your current approach, every time you make a claim about fruit, I have to try to figure out from context whether you're really making a claim about all fruit, or just apples, or just oranges. And if I guess wrong, we just end up in a pointless and avoidable argument. Surely it's easier to instead phrase your claims as being about apples and oranges directly when they're intended to apply to only one type of fruit? P.S. For the avoidance of doubt, and with apologies for obviousness: fruit=utility, apples=decision utility, oranges=substantive utility. "Fruit" is a natural category; apples and oranges share interesting characteristics that make it useful to talk about them in general. "Utility" is not. The two concepts, "that for which expectation is legitimate", and some quantity related to inter-agent preference aggregation do not share very many characteristics, and they are not even on the same conceptual abstraction layer. The VNM-stuff is about decision theory. The preference aggregation stuff is about moral philosophy. Those should be completely firewalled. There is no value to a superconcept that crosses that As for me using the word "utility" in this discussion, I think it should be unambiguous that I am speaking of VNM-stuff, because the OP is about VNM, and utilitarianism and VNM do not belong in the same discussion, so you can infer that all uses of "utility" refer to the same thing. Nevertheless, I will try to come up with a less ambiguous word to refer to the output of a "preference function". The VNM-stuff is about decision theory. The preference aggregation stuff is about moral philosophy. Those should be completely firewalled. There is no value to a superconcept that crosses that But surely the intuition that value ought to be aggregated linearly across "possible outcomes" is related to the intuition that value ought to be aggregated linearly across "individuals"? I think it basically comes down to independence: how much something (a lottery over possible outcomes / a set of individuals) is valued should be independent of other things (other parts of the total probabilistic mixture over outcomes / other individuals who exist). When framed this way, the two problems in decision theory and moral philosophy can be merged together as the question of "where should one draw the boundary between things that are valued independently?" and the general notion of "utility" as "representation of preference that can be evaluated on certain objects independently of others and then aggregated linearly" does seem to have There is no value to a superconcept that crosses that boundary. This doesn't seem to me to argue in favour of using wording that's associated with the (potentially illegitimate) superconcept to refer to one part of it. 
Also, the post you were responding to (conf) used both concepts of utility, so by that stage, they were already in the same discussion, even if they didn't belong there. Two additional things, FWIW: (1) There's a lot of existing literature that distinguishes between "decision utility" and "experienced utility" (where "decision utility" corresponds to preference representation) so there is an existing terminology already out there. (Although "experienced utility" doesn't necessarily have anything to do with preference or welfare aggregation either.) (2) I view moral philosophy as a special case of decision theory (and e.g. axiomatic approaches and other tools of decision theory have been quite useful in to moral philosophy), so to the extent that your firewall intends to cut that off, I think it's problematic. (Not sure that's what you intend - but it's one interpretation of your words in this comment.) Even Harsanyi's argument, while flawed, is interesting in this regard (it's much more sophisticated than Phil's post, so I'd recommend checking it out if you haven't already.) Conversely, because VNM utility is out here, axiomized for the sovereign preferences of a single agent, we don't much expect it to show up in there, in a discussion if utilitarian preference aggregation. In fact, if we do encounter it in there, it's probably a sign of a failed abstraction barrier. Money is like utility - it is society's main represenation of utility. Be careful, money is an economics and game theory thing that can give you evidence about people's preferences, but I would not directly call it a representation of utility. It is likewise not directly relevant to utilitarianism. I'd like to take a crack at discussing how money (and fungiblish consumablish resources in general) relate to utility, but that's a big topic on it's own, and I think it's beyond the scope of this I'd like to take a crack at discussing how money (and fungiblish consumablish resources in general) relate to utility Something like “Money: The Unit of Caring” by EY? Money is a form of value. It has an equivalent to wireheading - namely hyperinflation. And we have maximisers of share value - namely companies. So: money is a kind of utility - though obviously not the utility of people. money is a kind of utility Is it? Are there any expected-money maximizers in the world? (risk aversion is not allowed; utility is linear in utility, so if money is utility, it must have linear utility. Does it?) Does anyone value money for its own sake? Or do they value what it can buy? Is money a quantity associated with an entire world-history? It seems accurate to say that it's treated in a utility-like way within certain incentive systems, but actually calling it a form of utility seems to imply a kind of agency that all the money-optimizers I can think of don't have. Except perhaps for automated trading systems, and even those can have whatever utility curves over money that their designers feel like setting. actually calling it a form of utility seems to imply a kind of agency that all the money-optimizers I can think of don't have. You don't think economic systems have "agency"? Despite being made up of large numbers of humans and optimising computer systems? Not really, no. They have goals in the sense that aggregating their subunits' goals gives us something of nonzero magnitude, but their ability to make plans and act intentionally usually seems very limited compared to individual humans', never mind well-programmed software. 
Where we find exceptions, it's usually because of an exceptional human at the helm, which of course implies more humanlike and less money-optimizerlike behavior. Where we find exceptions, it's usually because of an exceptional human at the helm, which of course implies more humanlike and less money-optimizerlike behavior. Right. So, to a first approximation, humans make reasonable money-optimizers. Thus the "Homo economicus" model. I think it is perfectly reasonable to say that companies have "agency". Companies are powerfully agent-like entities, complete with mission statements, contractual obligations and reputations. Their ability to make plans and act intentionally is often superhuman. Also, in many constitutuencies they are actually classified as legal persons. So, money is a representation of utility. Representations of utilities don't have to be "linear in utility". I already said "obviously not the utility of people", so whether people value money for its own sake doesn't seem very relevant. Perhaps a better point of comparison for money would be with utility-related signals in the brain - such as dopamine. I don't like having to say "representation of utility". Representations are all we have. There is no utility apart from representations. It has an equivalent to wireheading - namely hyperinflation. This is a difference to utility. Not a similarity. Wireheading gives low utility (for most plausible utility functions) but huge measures for other things that are not utility, like 'happiness'. It is the reason it would be utterly absurd to say "The federal government can print arbitrarily large amounts of utility". And we have maximisers of share value - namely companies. You can approximate (or legislate) companies that way and. It wouldn't be quite as inaccurate as saying "we have homo economicus" but it'd be a similar error. The statements following "So" do not follow from the statements preceding it. The preceding statements are respectively negatively relevant and irrelevant. So "So" does not fit between them. money is a kind of utility - though obviously not the utility of people. There is a relationship between money and utility. It is not an "is a kind of" relationship. (If Nyan takes a crack at explaining what the actual relationship is between fungible resources and utility it will most likely be worth reading.) (If Nyan takes a crack at explaining what the actual relationship is between fungible resources and utility it will most likely be worth reading.) Thanks for the encouragement! I do plan to do that soon, but I am hardly an agent that can be described as following through on "plans". So: I was using "utility" there to mean "representation of utility". In fact I did previously say that money was a "representation of utility". This is a case where there are only really representations. Utility is defined by its representations (in its "that which is maximised" sense). Without a representation, utility doesn't really exist. To be more sepcific about about hyperinflation, that is a form of utility counterfeitting. It's on the "wireheading" side, rather than the "pornography" side. This isn't really an analogy, but an instance of another phenomenon in the same class. Hyperinflation produces poor outcomes for countries, just as wireheading produces poor outcomes for those that choose it. This is a similarity - not a difference. I am not sure why you don't recognise the relationship here. Are you sure you that have thought the issue through? 
Money is like utility - it is society's main represenation of utility. Money is like utility but different. One thing you didn't address that was uncertainty about preferences. Specifically, will I die of radiation poisoning if I use VNM utility to make decisions when I'm uncertain about what my preferences even are? I.e., maximize expected utility, where the expectation is taken over my uncertainty about preferences in addition to any other uncertainty. I thought you took a position on this and was about to comment on it but I couldn't find what you said about it in the post! Apparently my brain deduced a conclusion on this issue from your post, then decided to blame/give credit to you. Yeah I totally sidestepped that issue because I don't know how to solve it. I don't think anyone knows, actually. Preference uncertainty is an open problem, AFAIK. Specifically, will I die of radiation poisoning if I use VNM utility to make decisions when I'm uncertain about what my preferences even are? I.e., maximize expected utility, where the expectation is taken over my uncertainty about preferences in addition to any other uncertainty. Yes. You can't compare or aggregate utilities from different utility functions. So at present, you basically have to pick one and hope for the best. Eventually someone will have to build a new thing for preference uncertainty. It will almost surely degenerate to VNM when you know your utility function. There are other problems that also sink naive decision theory, like acausal stuff, which is what UDT and TDT try to solve, and anthropics, which screw up probabilities. There's a lot more work on those than on preference uncertainty, AFAIK. Specifically, will I die of radiation poisoning if I use VNM utility to make decisions when I'm uncertain about what my preferences even are? I.e., maximize expected utility, where the expectation is taken over my uncertainty about preferences in addition to any other uncertainty. Yes. You can't compare or aggregate utilities from different utility functions. So at present, you basically have to pick one and hope for the best. This is exactly what my brain claimed you said :) Now I can make my comment. Game theorists do this all the time - at least economists. They'll create a game, then say something like "now let's introduce noise into the payoffs" but the noise ends up being in the utility function. Then they go and find an equilibrium or something using expected utility. Now every practical example I can think of off the top of my hand, you can reinterpret the uncertainty as uncertainty about actual outcomes with utilities associated with those outcomes and the math goes through. Usually the situation is something like letting U($)=$ for simplicity because risk aversion is orthogonal to what they're interested in, so you can easily think about the uncertainty as being over $ rather than U($). This simplicity allows them to play fast and loose with VNM utility and get away with it, but I wouldn't be surprised if someone made a model where they really do mean for the uncertainty to be over one's own preferences and went ahead and used VNM utility. In any case, no one ever emphasized this point in any of the econ or game theory courses I've taken, grad or you can reinterpret the uncertainty as uncertainty about actual outcomes with utilities associated with those outcomes and the math goes through. If you can do that, it seems to work; Noise in the payoffs is not preference uncertainty, just plain old uncertainty. 
So I guess my question is what does it look like when you can't do that, and what do we do instead? you can reinterpret the uncertainty as uncertainty about actual outcomes with utilities associated with those outcomes and the math goes through. If you can do that, it seems to work; Noise in the payoffs is not preference uncertainty, just plain old uncertainty. So I guess my question is what does it look like when you can't do that, and what do we do instead? You can at least simplify the problem somewhat by applying VNM utility using each of the candidate utility functions, and throwing out all solutions that do not appear in any of them. If you think you like either carrots or apples, you're not going to go to the store and buy asparagus. The other thorny issue is that uncertainty in the utility function makes learning about your utility function valuable. If you think you like either carrots or apples, then taking two grocery trips is the best answer - on the first trip you buy a carrot and an apple and figure out which one you like, and on the second trip you stock up. The other thing is that I don't think it's possible to model uncertainty inside your utility function - you can only have uncertainty about how you evaluate certain events. If you don't know whether or not you like carrots, that's a fact about eating carrots and not one about how to decide whether or not to eat a carrot. I think that every uncertainty about a utility function is just a hidden uncertainty about how the being the experiences utility works. Let me be specific about the math. Suppose you have a lottery L with a 1/3rd chance of result A and a 2/3rd chance of result B. Suppose furthermore that you are uncertain about whether you enjoy things as in U1 or U2, with equal probability of each. L is equivalent to a lottery with 1/6 chance (A, U1), 1/3 chance (B, U1), etc. Now you can make the first utility function of this exercise that takes into account all your uncertainty about preferences. Note that U1 and U2 aren't numbers - it's how much you enjoy something if your preferences are as in U1. What this lets us do is convert "there's a chance I get turned into a whale and I'm not sure if I will like it" into "there's a chance that I get turned into a whale and like it, and another chance that I get turned into a whale and don't like it". experiences utility Ooops. Radiation poisoning. Utility is about planning, not experiencing or enjoying. What this lets us do is convert "there's a chance I get turned into a whale and I'm not sure if I will like it" into "there's a chance that I get turned into a whale and like it, and another chance that I get turned into a whale and don't like it". I went through the math a couple days ago with another smart philosopher-type. We are pretty sure that this (adding preference uncertainty as an additional dimension of your ontology) is a fully general solution to preference uncertainty. Unfortunately, it requires a bit of moral philosophy to pin down the relative weights of the utility functions. That is, the utility functions and their respective probabilities is not enough to uniquely identify the combined utility function. Which is actually totally ok, because you can get that information from the same source where you got the partial utility functions. I'll go through the proof and implications/discussion in an upcoming post. Hopefully. I don't exactly have a track record of following through on things... 
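Spelled out, the expanded lottery described in the comment above looks like the following (a restatement of the arithmetic already sketched there, with U1 and U2 treated as the two candidate utility functions; the notation is mine, not the commenter's):

    % The lottery L: 1/3 chance of A, 2/3 chance of B.
    % Preference uncertainty: U_1 or U_2, each with probability 1/2,
    % independent of the lottery's outcome.
    L \;\equiv\; \tfrac{1}{6}(A, U_1) + \tfrac{1}{3}(B, U_1)
               + \tfrac{1}{6}(A, U_2) + \tfrac{1}{3}(B, U_2),
    % so a combined utility function U^{*} over the enlarged outcome space gives
    \mathbb{E}[U^{*}(L)] = \tfrac{1}{6}U^{*}(A, U_1) + \tfrac{1}{3}U^{*}(B, U_1)
                         + \tfrac{1}{6}U^{*}(A, U_2) + \tfrac{1}{3}U^{*}(B, U_2).
    % The catch raised in the surrounding comments: the values U^{*}(\cdot, U_1)
    % and U^{*}(\cdot, U_2) are each only pinned down up to a separate positive
    % affine rescaling, so the probabilities alone do not fix their relative weights.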
Unfortunately, it requires a bit of moral philosophy to pin down the relative weights of the utility functions. That is, the utility functions and their respective probabilities is not enough to uniquely identify the combined utility function. Right, to get that answer you need to look inside your utility function... which you're uncertain about. Stated differently, your utility function tells you how to deal with uncertainty about your utility function, but that's another thing you're uncertain about. But luckily your utility function tells you how do deal with uncertainty about uncertainty about your utility function... I think you can see where this is going. Naively, my intuition is that simply adding uncertainty about preferences as part of your ontology isn't enough because of this regress - you still don't even know in principle how to choose between actions without more precise knowledge of your utility function. However, this regress sounds suspiciously like the sort of thing that once formalized precisely isn't really a problem at all - just "take the limit" as it were. That's not the issue we ran into. Your (partial) utility functions do not contain enough information to resolve uncertainty between them. As far as I can tell, utility functions can't contain meta-preferences. You can't just pull a correct utility function out of thin air, though. You got the utility function from somewhere; it is the output of a moral-philosophy process. You resolve the uncertainty with the same information-source from which you constructed the partial utility functions from in the first place. No need to take the limit or do any extrapolation (except that stuff like that does seem to show up inside the moral-philosophy process.) I think we're using "utility function" differently here. I take it to mean the function containing all information about your preferences, preferences about preferences, and higher level meta-preferences. I think you're using the term to refer to the function containing just object-level preference information. Is that correct? Now that I make this distinction, I'm not sure VNM utility applies to meta-preferences. Now that I make this distinction, I'm not sure VNM utility applies to meta-preferences. It doesn't, AFAIK, which is why I said your utility function does not contain meta-preference and the whole moral dynamic. "utility function" is only a thing in VNM. Using it as a shorthand for "my whole reflective decision system" is incorrect use of the term, IMO. I am not entirely sure that your utility function can't contain meta-preference, though. I could be convinced by some well-placed mathematics. My current understanding is that you put the preference uncertainty into your ontology, extend your utility function to deal with those extra dimensions, and lift the actual moral updating to epistemological work over those extra ontology-variables. This still requires some level of preliminary moral philosophy to shoehorn your current incoherent godshatter-soup into that formal I'll hopefully formalize this some day soon to something coherent enough to be criticized. I'll hopefully formalize this some day soon to something coherent enough to be criticized. I look forward to it! Nice catch on the radiation poisoning. 
Revised sentence: I think that every uncertainty about a utility function is just a hidden uncertainty about how to weigh the different experiences that generate a utility function That is, the utility functions and their respective probabilities is not enough to uniquely identify the combined utility function. This is 100% expected, since utility functions that vary merely by a scaling factor and changing the zero point are equivalent. I think we're talking about the same thing when you say "adding preference uncertainty as an additional dimension of your ontology". It's kind of hard to tell at this level of abstraction. Most of [?] agree that the VNM axioms are reasonable My problem with VNM-utility is that while in theory it is simple and elegant, it isn't applicable to real life because you can only assign utility to complex world states (a non-trivial task) and not to limited outcomes. If you have to choose between $1 and a 10% chance of $2, then this isn't universally solvable in real life because $2 doesn't necessarily have twice the value of $1, so the completeness axiom doesn't hold. Also, assuming you assign utility to lifetime as a function of life quality in such a way that for any constant quality longer life has strictly higher (or lower) utility than shorter life, then either you can't assign any utility to actually infinite immortality, or you can't differentiate between higher-quality and lower-quality immortality, or you can't represent utility as a real number. Neither of these problems is solved by replacing utility with awesomeness. Also, assuming you assign utility to lifetime as a function of life quality in such a way that for any constant quality longer life has strictly higher (or lower) utility than shorter life, then either you can't assign any utility to actually infinite immortality, or you can't differentiate between higher-quality and lower-quality immortality, or you can't represent utility as a real Could you explain that? Representing the quality of each day of your life with a real number from a bounded range, and adding them up with exponential discounting to get your utility, seems to meet all those criteria. Indeed, already figured that out here. If you have to choose between $1 and a 10% chance of $2, then this isn't universally solvable in real life because $2 doesn't necessarily have twice the value of $1, so the completeness axiom doesn't hold. Do you mean it's not universally solvable in the sense that there is no "I always prefer the $1"-type solution? Of course there isn't. That doesn't break VNM, it just means you aren't factoring outcomes properly. either you can't assign any utility to actually infinite immortality, or you can't differentiate between higher-quality and lower-quality immortality, or you can't represent utility as a real Do you mean it's not universally solvable in the sense that there is no "I always prefer the $1"-type solution? Of course there isn't. That doesn't break VNM, it just means you aren't factoring outcomes properly. That's what I mean, and while it doesn't "break" VNM, it means I can't apply VNM to situations I would like to, such as torture vs dust specks. If I know the utility of 1000 people getting dust specks in their eyes, I still don't know the utility of 1001 people getting dust specks in their eyes, except it's probably higher. 
I can't quantify the difference between 49 and 50 years of torture, which means I have no idea whether it's less than, equal to, or greater than the difference between 50 and 51 years. Likewise, I have no idea how much I would pay to avoid one dust speck (or 1000 dust specks) because there's no ratio of u($) to u(dust speck), and I have absolutely no concept how to compare dust specks with torture, and even if I had, it wouldn't be scalable. VNM is not a complete theory of moral philosophy, and isn't intended to be. I tried to make that clear in OP by discussing how much work VNM does and does not do (with a focus on what it does not All it does is prevent circular preferences and enforce sanity when dealing with uncertainty. It does not have anything at all to say about torture vs dust specs, the shape of utility curves, (in) dependence of outcome factors, or anything about the structure of your utility function, because none of those are problems of circular preference or risk-sanity. From wiki: Thus, the content of the theorem is that the construction of u is possible, and they claim little about its nature. Nonetheless, people read into it all sorts of prescriptions and abilities that it does not have, and then complain when they discover that it does not actually have such powers, or don't discover such, and make all sorts of dumb mistakes. Hence the OP. VNM is a small statement on the perhiphery of a very large, very hard problem. Moral Philosophy is hard, and there are (so far) no silver bullets. Nothing can prevent you from having to actually think about what you prefer. Yes, I am aware of that. The biggest trouble, as you have elaborately explained in your post, is that people think they can perform mathematical operations in VNM-utility-space to calculate utilities they have not explicitly defined in their system of ethics. I believe Eliezer has fallen into this trap, the sequences are full of that kind of thinking (e.g. torture vs dust specks) and while I realize it's not supposed to be taken literally, "shut up and multiply" is symptomatic. Another problem is that you can only use VNM when talking about complete world states. A day where you get a tasty sandwich might be better than a normal day, or it might not be, depending on the world state. If you know there's a wizard who'll give you immortality for $1, you'll chose $1 over any probability<1 of $2, and if the wizard wants $2, the opposite applies. VNM isn't bad, it's just far, far, far too limited. It's somewhat useful when probabilities are involved, but otherwise it's literally just the concept of well-ordering your options by preferability. Assuming you assign utility to lifetime as a function of life quality in such a way that for any constant quality longer life has strictly higher (or lower) utility than shorter life, then either you can't assign any utility to actually infinite immortality, or you can't differentiate between higher-quality and lower-quality immortality, or you can't represent utility as a real number. Turns out this is not actually true: 1 day is 1, 2 days is 1.5, 3 days is 1.75, etc, immortality is 2, and then you can add quality. Not very surprising in fact, considering immortality is effectively infinity and |ℕ| < |ℝ|. Still, I'm pretty sure the set of all possible world states is of higher cardinality than ℝ, so... (Also it's a good illustration why simply assigning utility to 1 day of life and then scaling up is not a bright idea.) 
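The bounded construction gestured at above ("1 day is 1, 2 days is 1.5, ..., immortality is 2") is just a geometric series with a discount factor; one way to make it precise, with a per-day quality term added as the earlier comment about exponential discounting suggests:

    % Discount each successive day by 1/2 (any factor 0 < \delta < 1 works):
    U(n\ \text{days}) = \sum_{k=0}^{n-1} \left(\tfrac{1}{2}\right)^{k}
                      = 2 - 2^{\,1-n} \longrightarrow 2 \quad (n \to \infty),
    % so "immortality" gets the finite value 2 while every extra day still
    % strictly increases utility. Adding quality: with bounded per-day quality
    % q_k \in [0, 1],
    U = \sum_{k=0}^{\infty} \delta^{k} q_k \;\le\; \frac{1}{1-\delta},
    % which stays finite and still distinguishes higher-quality from
    % lower-quality immortal lives.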
Another problem is that you can only use VNM when talking about complete world states. You can talk about probability distributions over world-states as well. When I say "tasty sandwich day minus normal day" I mean to refer to the expected marginal utility of the sandwich, including the possibilities with wizards and stuff. This simplifies things a bit, but goes to hell as soon as you include probability updating, or actually have to find that value. I've been very entertained by this framing of the problem - very fun to read! I find it strange that you claim the date with Satan is clearly the best option, but almost in the same breath say that the utility of whaling in the lake of fire is only 0.1% worse. It sounds like your definition of clarity is a little bit different from mine. On the Satan date, souls are tortured, steered toward destruction, and tossed in a lake of fire. You are indifferent to those outcomes because they would have happened anyway (we can grant this a premise of the scenario). But I very much doubt you are indifferent to your role in those outcomes. I assume that you negatively value having participated in torture, damnation, and watching others suffer, but it's not immediately clear if you had already done those things on the previous 78044 days. Are you taking into account duration neglect? If so, is the pain of rape only slightly worse than burning in fire? This probably sounds nitpicky; the point I'm trying to make is that computing utilities using the human brain has all kinds of strange artifacts that you probably can't gloss over by saying "first calculate the utility of all outcomes as a number then compare all your numbers on relative scale". We're just not built to compute naked utilities without reference anchors, and there does not appear to be a single reference anchor to which all outcomes can be compared. Your system seems straightforward when only 2 or 3 options are in play, but how do you compare even 10 options? 100? 1000? In the process you probably do uncover examples of your preferences that will cause you to realize you are not VNM-compliant, but what rule system do you replace it with? Or is VNM correct and the procedure is to resolve the conflict with your own broken utility function TL;DR: I think axiom #1 (utility can be represented as a single real number) is false for human hardware, especially when paired with #5. you probably can't gloss over by saying "first calculate the utility of all outcomes as a number then compare all your numbers on relative scale". We're just not built to compute naked utilities without reference anchors, and there does not appear to be a single reference anchor to which all outcomes can be compared. That was one of the major points. Do not play with naked utilities. For any decision, find the 0 anchor and the 1 anchor, and rank other stuff relative to them. In the process you probably do uncover examples of your preferences that will cause you to realize you are not VNM-compliant, but what rule system do you replace it with? Or is VNM correct and the procedure is to resolve the conflict with your own broken utility function somehow? Yep, you are not VNM compliant, or the whole excercise would be worthless. The philosophy involved in actually making your preferences consistent is hard of course. I swept that part under the rug. That was one of the major points. Do not play with naked utilities. For any decision, find the 0 anchor and the 1 anchor, and rank other stuff relative to them. 
I understood your major point about the radioactivity of the single real number for each utility, but I got confused by what you intended the process to look like with your hell example. I think you need to be a little more explicit about your algorithm when you say "find the 0 anchor and the 1 anchor". I defaulted to a generic idea of moral intuition about best and worst, then only made it as far as thinking it required naked utilities to find the anchors in the first place. Is your process something like: "compare each option against the next until you find the worst and best?" It is becoming clear from this and other comments that you consider at least the transitivity property of VNM to be axiomatic. Without it, you couldn't find what is your best option if the only operation you're allowed to do is compare one option against another. If VNM is required, it seems sort of hard to throw it out after the fact if it causes too much trouble. What is the point of ranking other stuff relative to the 0 and 1 anchor if you already know the 1 anchor is your optimal choice? Am I misunderstanding the meaning of the 0 and 1 anchor, and it's possible to go less than 0 or greater than 1? Is your process something like: "compare each option against the next until you find the worst and best?" Yes, approximately. It is becoming clear from this and other comments that you consider at least the transitivity property of VNM to be axiomatic. I consider all the axioms of VNM to be totally reasonable. I don't think the human decision system follows the VNM axioms. Hence the project of defining and switching to this VNM thing; it's not what we already use, but we think it should be. If VNM is required, it seems sort of hard to throw it out after the fact if it causes too much trouble. VNM is required to use VNM, but if you encounter a circular preference and decide you value running in circles more than the benefits of VNM, then you throw out VNM. You can't throw it out from the inside, only decide whether it's right from outside. What is the point of ranking other stuff relative to the 0 and 1 anchor if you already know the 1 anchor is your optimal choice? Expectation. VNM isn't really useful without uncertainty. Without uncertainty, transitive preferences are enough. If being a whale has utility 1, and getting nothing has utility 0, and getting a sandwich has utility 1/500, but the whale-deal only has a probability of 1/400 with nothing otherwise, then I don't know until I do expectation that the 1/400 EU from the whale is better than the 1/500 EU from the sandwich. I think I have updated slightly in the direction of requiring my utility function to conform to VNM and away from being inclined to throw it out if my preferences aren't consistent. This is probably mostly due to smart people being asked to give an example of a circular preference and my not finding any answer compelling. Expectation. VNM isn't really useful without uncertainty. Without uncertainty, transitive preferences are enough. I think I see the point you're trying to make, which is that we want to have a normalized scale of utility to apply probability to. This directly contradicts the prohibition against "looking at the sign or magnitude". You are comparing 1/400 EU and 1/500 EU using their magnitudes, and jumping headfirst into the radiation. Am I missing something? You are comparing 1/400 EU and 1/500 EU using their magnitudes You are allowed to compare. Comparison is one of the defined operations. 
Comparison is how you decide which is best. we want to have a normalized scale of utility to apply probability to. I'm uneasy with this "normalized". Can you unpack what you mean here? What I mean by "normalized" is that you're compressing the utility values into the range between 0 and 1. I am not aware of another definition that would apply here. Your rule says you're allowed to compare, but your other rule says you're not allowed to compare by magnitude. You were serious enough about this second rule to equate it with radiation death. You can't apply probabilities to utilities and be left with anything meaningful unless you're allowed to compare by magnitude. This is a fatal contradiction in your thesis. Using your own example, you assign a value of 1 to whaling and 1/500 to the sandwich. If you're not allowed to compare the two using their magnitude, then you can't compare the utility of 1/400 chance of the whale day with the sandwich, because you're not allowed to think about how much better it is to be a whale. There's something missing here, which is that "1/400 chance of a whale day" means "1/400 chance of whale + 399/400 chance of normal day". To calculate the value of "1/400 chance of a whale day" you need to assign a utility for both a whale day and a normal day. Then you can compare the resulting expectation of utility to the utility of a sandwhich = 1/500 (by which we mean a sandwich day, I guess?), no sweat. The absolute magnitudes of the utilities don't make any difference. If you add N to all utility values, that just adds N to both sides of the comparison. (And you're not allowed to compare utilities to magic numbers like 0, since that would be numerology.) I notice we're not understanding each other, but I don't know why. Let's step back a bit. What problem is "radiation poisoning for looking at magnitude of utility" supposed to be solving? We're not talking about adding N to both sides of a comparison. We're talking about taking a relation where we are only allowed to know that A < B, multiplying B by some probability factor, and then trying to make some judgment about the new relationship between A and xB. The rule against looking at magnitudes prevents that. So we can't give an answer to the question: "Is the sandwich day better than the expected value of 1/400 chance of a whale day?" If we're allowed to compare A to xB, then we have to do that before the magnitude rule goes into effect. I don't see how this model is supposed to account for that. 1. You can't just multiply B by some probability factor. For the situation where you have p(B) = x, p(C) = 1 - x, your expected utility would be xB + (1-x)C. But xB by itself is meaningless, or equivalent to the assumption that the utility of the alternative (which has probability 1 - x) is the magic number 0. "1/400 chance of a whale day" is meaningless until you define the alternative that happens with probability 399/400. 2. For the purpose of calculating xB + (1-x)C you obviously need to know the actual values, and hence magnitudes of x, B and C. Similarly you need to know the actual values in order to calculate whether A < B or not. "Radiation poisoning for looking at magnitude of utility" really means that you're not allowed to compare utilities to magic numbers like 0 or 1. It means that the only thing you're allowed to do with utility values is a) compare them to each other, and b) obtain expected utilities by multiplying by a probability distribution. 
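For concreteness, the whale/sandwich comparison from a few comments up, done the only allowed way (compute both expectations over the full lotteries, then compare; the numbers are the ones already given, with "nothing" as the explicit alternative):

    % U(whale) = 1, U(nothing) = 0, U(sandwich) = 1/500 (the sandwich is certain).
    \mathbb{E}[U(\text{whale deal})]
      = \tfrac{1}{400}\cdot 1 + \tfrac{399}{400}\cdot 0
      = \tfrac{1}{400} \;>\; \tfrac{1}{500}
      = \mathbb{E}[U(\text{sandwich})].
    % Adding a constant N to every utility (or rescaling by a positive factor)
    % shifts both expectations identically, so the comparison survives:
    \left(\tfrac{1}{400} + N\right) - \left(\tfrac{1}{500} + N\right)
      = \tfrac{1}{400} - \tfrac{1}{500} \;>\; 0.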
(And you're not allowed to compare utilities to magic numbers like 0, since that would be numerology.) Unless you rescale everything so that magic numbers like 0 and 1 are actually utilities of possibilities under consideration. But that's like cutting corners in the lab; dangerous if you don't know what you are doing, but useful if you do. requiring my utility function to conform to VNM If you don't conform to VNM, you don't have a utility function. I think you mean to refer to your decision algorithms. No, I mean if my utility function violates transitivity or other axioms of VNM, I more want to fix it than to throw out VNM as being invalid. if my utility function violates transitivity or other axioms of VNM then it's not a utility function in the standard sense of the term. I think what you mean to tell me is: "say 'my preferences' instead of 'my utility function'". I acknowledge that I was incorrectly using these interchangeably. I do think it was clear what I meant when I called it "my" function and talked about it not conforming to VNM rules, so this response felt tautological to me.
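The "rescale so that 0 and 1 are actual options under consideration" move mentioned earlier in this exchange is just an affine change of units, which the VNM representation explicitly permits; one way to write it (my notation, not from the thread):

    % Pick two options under consideration, W (the 0 anchor) and B (the 1 anchor),
    % with u(W) < u(B). Define the rescaled utility
    \tilde{u}(x) = \frac{u(x) - u(W)}{u(B) - u(W)},
    % so that \tilde{u}(W) = 0 and \tilde{u}(B) = 1. Positive affine
    % transformations preserve both preference order and expected-utility
    % comparisons, so nothing decision-relevant changes; 0 and 1 are now
    % utilities of real options rather than "magic numbers."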
{"url":"http://lesswrong.com/lw/ggm/pinpointing_utility/","timestamp":"2014-04-18T00:14:51Z","content_type":null,"content_length":"582563","record_id":"<urn:uuid:502cb312-092d-40ca-8d42-4612cde57ae6>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00224-ip-10-147-4-33.ec2.internal.warc.gz"}
Topic: Simple question about irrationals, with a short note in the margin. :-)

Re: Simple question about irrationals, with a short note in the margin. :-)
Posted: Jan 13, 2014 10:39 PM

On Tue, 14 Jan 2014, Port563 wrote:

> CAN AN ALGEBRAIC IRRATIONAL RAISED TO (the power of) AN ALGEBRAIC IRRATIONAL
> (not necessarily the same one) BE RATIONAL?
> Prove the answer.
> This proof must be _very_ short.

Gelfond and Schneider.

> Given you've found the above quick-and-dirty technique, what are the answers
> to these:

STOP YOUR RUDE SHOUTING; REST IGNORED.

> TRANSCENDENTAL?
> IRRATIONAL?
> IRRATIONAL?
> Reminders:
> Reals only, everywhere
> Where types are alike, there's no requirement the power and base must be the
> same number
> Some may not be simple
> Proofs welcomed
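The two-word reply compresses the whole proof; a sketch of the standard argument it points to, in the real setting the reminders insist on (this is the usual route, not necessarily the exact form the replier had in mind):

    % Gelfond–Schneider theorem: if a and b are algebraic, a \neq 0, a \neq 1,
    % and b is irrational, then a^b is transcendental.
    %
    % Suppose a and b are both algebraic irrationals (so automatically
    % a \neq 0, 1) and, to keep a^b real, a > 0. The theorem applies directly:
    a^{b}\ \text{is transcendental, hence irrational, hence not rational.}
    %
    % So the answer to the capitalised question is "no": in the reals, an
    % algebraic irrational raised to an algebraic irrational is never rational;
    % it is always transcendental. For example, \sqrt{2}^{\sqrt{2}} is transcendental.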
{"url":"http://mathforum.org/kb/thread.jspa?threadID=2615491&messageID=9363359","timestamp":"2014-04-17T10:08:30Z","content_type":null,"content_length":"53695","record_id":"<urn:uuid:3c7834d5-54bd-4516-b9f1-865c3de0bc5a>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00099-ip-10-147-4-33.ec2.internal.warc.gz"}
Add numbers

Suppose you want to calculate a price total for the inventory of a store or the total gross profit margins for all departments that are under budget for the year. There are several ways to add numbers.

Add numbers in a cell

Use the + (plus sign) arithmetic operator in a formula. For example, a formula such as =5+10 entered in a cell displays the result 15.

Add all contiguous numbers in a row or column

If you have a range of contiguous numbers (that is, there are no blank cells), you can use the AutoSum button.

1. Click a cell below the column of numbers or to the right of the row of numbers.
2. On the Home tab, in the Editing group, click AutoSum.

Add noncontiguous numbers

If you have a range of numbers that might include blank cells or cells containing text instead of numbers, use the SUM function in a formula. Even though they might be included in the range that is used in the formula, any blank cells and cells that contain text are ignored.

The example may be easier to understand if you copy it to a blank worksheet:

1. Create a blank workbook or worksheet.
2. Select the example in the Help topic. (Do not select the row or column headers.)
3. Press CTRL+C.
4. In the worksheet, select cell A1, and press CTRL+V.
5. To switch between viewing the results and viewing the formulas that return the results, press CTRL+` (grave accent), or on the Formulas tab, in the Formula Auditing group, click Show Formulas.

       A            B
  1    Salesperson  Invoice
  2    Buchanan     15,000
  3    Buchanan      9,000
  4    Suyama        8,000
  5    Suyama       20,000
  6    Buchanan      5,000
  7    Dodsworth    22,500

  Formula              Description (Result)
  =SUM(B2:B3,B5)       Adds two invoices from Buchanan, and one from Suyama (44,000)
  =SUM(B2,B5,B7)       Adds individual invoices from Buchanan, Suyama, and Dodsworth (57,500)

Note: The SUM function can include any combination of up to 30 cell or range references. For example, the formula =SUM(B2:B3,B5) contains one range reference (B2:B3) and one cell (B5).

Add numbers based on one condition

You can use the SUMIF function to create a total value for one range based on a value in another range. In the following example, you want to create a total only for the values in column B (Invoice) that correspond to values in column A (Salesperson) for the salesperson named Buchanan.

The example may be easier to understand if you copy it to a blank worksheet, following the same steps as above.

       A            B
  1    Salesperson  Invoice
  2    Buchanan     15,000
  3    Buchanan      9,000
  4    Suyama        8,000
  5    Suyama       20,000
  6    Buchanan      5,000
  7    Dodsworth    22,500

  Formula                              Description (Result)
  =SUMIF(A2:A7,"Buchanan",B2:B7)       Sum of invoices for Buchanan (29000)
  =SUMIF(B2:B7,">=9000",B2:B7)         Sum of large invoices greater than or equal to 9,000 (66500)
  =SUMIF(B2:B7,"<9000",B2:B7)          Sum of small invoices less than 9,000 (13000)

The SUMIF function uses the following arguments: range (the cells to evaluate against the criteria), criteria, and sum_range (the cells to add). [Figure in the original: a SUMIF formula with these arguments labeled.]

Add numbers based on multiple conditions

To do this task, use the SUMIFS function. The example may be easier to understand if you copy it to a blank worksheet, following the same steps as above.
       A       B            C          D
  1    Region  Salesperson  Type       Sales
  2    South   Buchanan     Beverages  3571
  3    West    Davolio      Dairy      3338
  4    East    Suyama       Beverages  5122
  5    North   Suyama       Dairy      6239
  6    South   Dodsworth    Produce    8677
  7    South   Davolio      Meat        450
  8    South   Davolio      Meat       7673
  9    East    Suyama       Produce     664
 10    North   Davolio      Produce    1500
 11    South   Dodsworth    Meat       6596

  Formula                                              Description (Result)
  =SUMIFS(D2:D11,A2:A11,"South",C2:C11,"Meat")         Sum of Meat sales in the South region (14719)
  =SUM(IF((A2:A11="South")+(A2:A11="East"),D2:D11))    Sum of sales where the region is South or East (32753)

Note: The second formula in the example must be entered as an array formula (an array formula performs multiple calculations on one or more sets of values, and then returns either a single result or multiple results; array formulas are enclosed between braces { } and are entered by pressing CTRL+SHIFT+ENTER). After copying the example to a blank worksheet, select the formula cell, press F2, and then press CTRL+SHIFT+ENTER. If the formula is not entered as an array formula, the error #VALUE! is returned.

How the functions are used in the preceding example

The SUMIFS function is used in the first formula to find rows in which "South" is in column A and "Meat" is in column C. There are three cases of this: in rows 7, 8, and 11. The function first looks at column A, which contains the regions, to find a match for "South." It then looks at column C, which contains the food type, to find a match for "Meat." Finally, the function looks in the range that contains the values to sum, D2:D11, and sums only the values in that column that meet those two conditions.

The second formula, which uses the SUM and IF functions, is entered as an array formula (by pressing CTRL+SHIFT+ENTER) to find rows in which either one or both of "South" or "East" is in column A. There are seven cases of this: in rows 2, 4, 6, 7, 8, 9, and 11. Because this formula is an array formula, the + operator isn't used to add values; it is used to check for two or more conditions, at least one of which must be met. Then, the SUM function is used to add the values that meet these criteria.

Add numbers based on criteria stored in a separate range

To do this task, use the DSUM function. The example may be easier to understand if you copy it to a blank worksheet, following the same steps as above.
       A       B            C          D
  1    Region  Salesperson  Type       Sales
  2    South   Buchanan     Beverages  3571
  3    West    Davolio      Dairy      3338
  4    East    Suyama       Beverages  5122
  5    North   Suyama       Dairy      6239
  6    South   Dodsworth    Produce    8677
  7    South   Davolio      Meat        450
  8    South   Davolio      Meat       7673
  9    East    Suyama       Produce     664
 10    North   Davolio      Produce    1500
 11    South   Dodsworth    Meat       6596
 12    Region  Salesperson  Type       Sales
 13    South                Meat
 14                         Produce

  Formula                              Description (Result)
  =DSUM(A1:D11, "Sales", A12:D13)      Sum of Meat sales in the South region (14719)
  =DSUM(A1:D11, "Sales", A12:D14)      Sum of Meat and Produce sales in the South region (25560)

The DSUM function uses three arguments: database, field, and criteria. [Figure in the original: a DSUM formula with these arguments labeled.]

What happened to the Conditional Sum Wizard?

This add-in is no longer included with Excel 2010. In earlier versions of Excel, you could use the Conditional Sum Wizard to help you write formulas that calculate the sums of values that met specified conditions. This functionality has been replaced by the Insert Function dialog box (Formulas tab, Function Library group) and other existing worksheet functions, such as SUMIFS and using a combination of SUM and IF together in a formula. For more information about using these functions to conditionally sum columns or rows of data, see the section Add numbers based on multiple conditions, earlier in this article.

Add only unique values

To do this task, use the SUM, IF, and FREQUENCY functions. The following example uses the:

● FREQUENCY function to identify the unique values in a range. For the first occurrence of a specific value, this function returns a number equal to the number of occurrences of that value. For each occurrence of that same value after the first, this function returns a 0 (zero).
● IF function to assign a value of 1 to each true condition.
● SUM function to add the unique values.

Tip: To see a function evaluated step by step, select the cell containing the formula, and then on the Formulas tab, in the Formula Auditing group, click Evaluate Formula.

The example may be easier to understand if you copy it to a blank worksheet, following the same steps as above.

  Formula                                          Description (Result)
  =SUM(IF(FREQUENCY(A2:A10,A2:A10)>0,A2:A10))      Add the unique values in cells A2:A10 (2289)
{"url":"http://office.microsoft.com/en-us/excel-help/add-numbers-HP010342146.aspx","timestamp":"2014-04-16T19:57:47Z","content_type":null,"content_length":"57319","record_id":"<urn:uuid:243be521-7d14-4862-8d5d-ce9b291f38e7>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00002-ip-10-147-4-33.ec2.internal.warc.gz"}
Basic question about GSL ODE func RK4

I am looking for some help on rk4 and coulomb repulsion. It is a technical question, but one has to go through the story to understand what I am trying to achieve. Thank you already.

I am trying to simulate the Coulomb repulsion between charged particles. I am using RK4 to do this, which requires a function. The physics formula of the function is:

    dx/dt = k*qi*(xi-xj)/sqrt((xi-xj)^2+(yi-yj)^2+(zi-zj)^2)

where qi is the charge of the i-th particle, xi, yi, zi are the coordinates of the i-th particle, and xj, yj, zj are the coordinates of the j-th particle.

As an input for the initial values of the positions and charge for each particle, I am using arrays (currently only for 4 particles):

    q[4]  = { 1.0, 1.0, -1.0, 1.0 }   /* charge      */
    x0[4] = { 1.0, 2.3, 3.2, 4.0 }    /* init posn x */
    y0[4] = { 1.0, 2.4, 3.0, 4.0 }    /* init posn y */
    z0[4] = { 1.0, 2.5, 3.3, 4.2 }    /* init posn z */

By using two "for" loops I can loop through each particle and compute the value of the total effect of all the particles on each other.

My question is: since there are two values for each coordinate (i and j), how can I write the function in an acceptable form for RK4? I did try the following:

    int func (double t, const double y[], double f[], void *params){
        double mu = *(double *)params;
        double dx = (y[0]-y[1])/r;
        double dy = (y[2]-y[3])/r;
        double dz = (y[4]-y[5])/r;
        f[0] = mu*y[6]*dx/(r*r);
        f[1] = mu*y[6]*dy/(r*r);
        f[2] = mu*y[6]*dz/(r*r);
        return GSL_SUCCESS;
    }

However, I am not convinced this is correct (at all). I would appreciate any feedback.
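The reply that followed is not preserved here. For illustration only, the sketch below shows one common way to structure the right-hand-side callback for N interacting particles with the GSL ODE function signature quoted in the post. Everything beyond that signature is an assumption of mine: the state layout (y[3*i+0..2] holds the position of particle i), the first-order dynamics implied by the poster's formula, the inverse-square 1/r^3 per-component factor (which matches the poster's own attempted code rather than the single 1/r in the quoted formula), and the coulomb_params struct, which is a name I introduce to pass the charges through the void *params pointer; it is not part of GSL or of the original thread.

    #include <math.h>
    #include <gsl/gsl_errno.h>   /* for GSL_SUCCESS */

    /* Hypothetical parameter struct: just a way to hand the charges and the
       prefactor to the callback through void *params. */
    typedef struct {
        int n;            /* number of particles                 */
        double k;         /* Coulomb constant (or any prefactor) */
        const double *q;  /* q[i] = charge of particle i         */
    } coulomb_params;

    /* Assumed state layout: y[3*i+0], y[3*i+1], y[3*i+2] are x, y, z of
       particle i; f[] receives dx/dt, dy/dt, dz/dt in the same order.
       Velocities are taken proportional to the net Coulomb force, i.e.
       first-order dynamics as in the formula quoted in the post. */
    static int coulomb_rhs(double t, const double y[], double f[], void *params)
    {
        const coulomb_params *p = (const coulomb_params *)params;
        (void)t;  /* the right-hand side has no explicit time dependence */

        for (int i = 0; i < p->n; i++) {
            f[3*i + 0] = 0.0;
            f[3*i + 1] = 0.0;
            f[3*i + 2] = 0.0;
            for (int j = 0; j < p->n; j++) {
                if (j == i)
                    continue;                 /* no self-interaction */
                double dx = y[3*i + 0] - y[3*j + 0];
                double dy = y[3*i + 1] - y[3*j + 1];
                double dz = y[3*i + 2] - y[3*j + 2];
                double r  = sqrt(dx*dx + dy*dy + dz*dz);
                double c  = p->k * p->q[i] * p->q[j] / (r*r*r);
                f[3*i + 0] += c * dx;         /* accumulate the pairwise terms */
                f[3*i + 1] += c * dy;
                f[3*i + 2] += c * dz;
            }
        }
        return GSL_SUCCESS;
    }

With a layout like this, the system dimension handed to the solver would be 3*n, with the x0, y0, z0 arrays interleaved into a single initial state vector, and the charges staying in params rather than in the state.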
{"url":"http://cboard.cprogramming.com/game-programming/89587-basic-question-about-gsl-ode-func-rk4.html","timestamp":"2014-04-17T17:40:36Z","content_type":null,"content_length":"44476","record_id":"<urn:uuid:12e29c45-1808-4d42-810a-27144b0384eb>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00180-ip-10-147-4-33.ec2.internal.warc.gz"}
Standards in this domain: Reason with shapes and their attributes.

• 1.G.A.1 Distinguish between defining attributes (e.g., triangles are closed and three-sided) versus non-defining attributes (e.g., color, orientation, overall size); build and draw shapes to possess defining attributes.
• 1.G.A.2 Compose two-dimensional shapes (rectangles, squares, trapezoids, triangles, half-circles, and quarter-circles) or three-dimensional shapes (cubes, right rectangular prisms, right circular cones, and right circular cylinders) to create a composite shape, and compose new shapes from the composite shape.^1
• 1.G.A.3 Partition circles and rectangles into two and four equal shares, describe the shares using the words halves, fourths, and quarters, and use the phrases half of, fourth of, and quarter of. Describe the whole as two of, or four of the shares. Understand for these examples that decomposing into more equal shares creates smaller shares.

^1 Students do not need to learn formal names such as "right rectangular prism."
{"url":"https://www.educateiowa.gov/pk-12/standards-curriculum/iowa-core/mathematics/grade-1/geometry","timestamp":"2014-04-16T11:49:24Z","content_type":null,"content_length":"24976","record_id":"<urn:uuid:9aa4141e-86f9-4089-8609-341537ba69cc>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00210-ip-10-147-4-33.ec2.internal.warc.gz"}
Infinity—Nothing to Trifle With

Philosopher and apologist William Lane Craig walks where most laymen fear to tread. Like an experienced actor, he has no difficulty imagining himself in all sorts of stretch roles—as a physicist, as a biologist, or as a mathematician.

Since God couldn't have created the universe if it has been here forever, Craig argues that an infinitely old universe is impossible. He imagines such a universe and argues that it would take an infinite amount of time to get to now. This gulf of infinitely many moments of time would be impossible to cross, so the idea must be impossible.

But why not arrive at time t = now? We must be somewhere on the timeline, and now is as good a place as any. The imaginary infinite timeline isn't divided into "Points in time we can get to" and "Points we can't." And if going from a beginning in time infinitely far in the past and arriving at now is a problem, then imagine a beginningless timeline. Physicist Vic Stenger, for one, makes the distinction between a universe that began infinitely far in the past and a universe without a beginning.

Hoare's Dictum is relevant here. Infinity-based arguments are successful because they're complicated and confusing, not because they're accurate. One of Craig's conundrums is this:

    Suppose we meet a man who claims to have been counting from eternity and is now finishing: . . ., –3, –2, –1, 0. We could ask, why did he not finish counting yesterday or the day before or the year before? By then an infinite time had already elapsed, so that he should already have finished by then.… In fact, no matter how far back into the past we go, we can never find the man counting at all, for at any point we reach he will have already finished.

Before we study this ill-advised descent into mathematics, let's first explore the concept of infinity.

Everyone knows that the number of integers {1, 2, 3, …} is infinite. It's easy to see that if one proposed that the set of integers was finite, with a largest integer n, the number n + 1 would be even larger. This understanding of infinity is an old observation, and Aristotle and other ancients noted it. But there's more to the topic than that.

I remember being startled in an introductory calculus class at a shape sometimes called Gabriel's Horn (take the two-dimensional curve 1/x from 1 to ∞ and rotate it around the x-axis to make an infinitely long wine glass). This shape has finite volume but infinite surface area. In other words, you could fill it with paint, but you could never paint it.

A two-dimensional equivalent is the familiar Koch snowflake. (Start with an equilateral triangle. For every side, erase the middle third and replace it with an outward-facing V with sides the same length as the erased segment. Repeat forever.) At every iteration (see the first few in the drawing above), each line segment becomes 1/3 bigger. Repeat forever, and the perimeter becomes infinitely long. Surprisingly, the area doesn't become infinite because the entire growing shape could be bounded by a fixed circle. In the 2D equivalent of the Gabriel's Horn paradox, you could fill in a Koch snowflake with a pencil, but all the pencils in the world couldn't trace its outline.

Far older than these are any of Zeno's paradoxes. In one of these, fleet-footed Achilles gives a tortoise a 100-meter head start in a foot race. Achilles is ten times faster, but by the time he reaches the 100-meter mark, the tortoise has gone 10 meters. This isn't a problem, and he crosses that next 10 meters.
But wait a minute—the tortoise has moved again. Every time Achilles crosses the next distance segment, the tortoise has moved ahead. He must cross an infinite series of distances. Will he ever pass the tortoise? The distance is the infinite sum 100 + 10 + 1 + 1/10 + …, a geometric series that converges to 100/(1 − 1/10) = 1000/9, a little more than 111 meters, which means that Achilles will pass the tortoise and win the race. Some infinite sums are finite (1 + 1/2 + 1/4 + 1/8 + … = 2). And some are infinite (1 + 1/2 + 1/3 + 1/4 + … = ∞).

(And this post is getting a bit long. Read Part 2.)

Photo credit: Wikipedia

42 thoughts on "Infinity—Nothing to Trifle With"

1. To Bob the atheist, If the universe is infinite, that is, if it has an infinite amount of particles, is the amount of particles even or odd? However, if the universe is not infinite, yet has no beginning, how can it be anything else than cyclical?
□ I don't follow your logic leading to a cyclic universe, but even if I did, I fear that you're applying common sense thinking to issues at the frontier of science, where common sense is not especially reliable.
☆ I agree that in the area of mathematics, common sense is not very reliable, but I am making a philosophical objection, which rests on the experience of the physical world. An actual infinite may make sense in mathematics, but it doesn't in the real world.
○ I've seen some arguments that an actual infinite does make sense (for example, here), though this is beyond my interest. You referred to mathematics, philosophy, and the real world (which perhaps is physics). My point is that common sense isn't much help at the frontier of physics. If your common sense is violated when you study the ideas in cosmology, well, get used to it.
□ "If the universe is infinite, that is, if it has an infinite amount of particles, is the amount of particles even or odd?" Whether something is even or odd is a property of integers. To ask if "infinity" is even or odd makes no sense because it is not an integer. For a more mundane example, I could ask if pi is even or odd; that question also makes no sense because pi is also not an integer.
☆ The problem is that in the world, there is nothing real but natural numbers. We use mathematical devices to understand how it works, but they do not literally apply to the real world.
○ Well, that is simply incorrect. Natural numbers are only the positive integers. This doesn't even include negative numbers and fractions. Certainly you don't mean to say that fractions are only theoretical. Have you ever cut something in half?
○ I certainly don't claim that mathematical devices like fractions are useless to understand the world, but they don't exist literally in the world. When I cut something in half, I make two things out of one thing, but I never get a "physical" fraction, so to speak. Of course I may say that, for instance, I get half a pie by cutting it, but it's because I compare one of the current halves with what the pie WAS before it was cut or maybe with my ideal concept of a pie. But such a comparison, involving a fraction, is only possible for a thinking being. In the physical world, such a fraction does not exist as such.
○ Well, what if I don't actually cut the pie, but instead I just want to talk about the left half of it? Certainly now this exists.
○ But then we are no longer speaking of beings as wholes, we are speaking of the PARTS of beings. In that sense, yes I may grant that fractions exist in the real world.
But there is an arbitrary element, because it is our mind that decides what counts as the part and what counts as the whole. One thing may be both a whole and a part, depending on what we compare it with. On the contrary, natural numbers are fully objective. But I still maintain that actual infinites don’t exist as such in the real world. ○ I guess if you are restricting to counting discrete things, then sure, natural numbers are the only relevant numbers. I would argue that other numbers come up in the real world But lets get back to infinities because they are more fun. When you say that infinities don’t exist in the real world, I’m not so sure. You may be right, but I think you may be wrong also. It is conceivable that time is infinite, no matter how far back you go, you can keep going farther back. Granted, we have a beginning of our universe, but perhaps there was something before that which we came from. If this were the case, then we would be talking about an infinity that exists in reality. Is this really what happened? I don’t know. None of us do really. But I don’t see why it can’t be a possibility. 2. To Hausdorff, I don’t claim either that the Big Bang is the time of “divine creation”. I disagree with apologists who try to use the Big Bang for their own ends. No one knows what existed before the time limit set by “Planck’s wall”. But Big Bang or not, the concept of infinite time seems absurd. Time is logically posterior to change. In speaking of infinite time, we actually speak of an infinite amount of past changes. But a medieval philosopher, st. Bonaventure, raised an interesting paradox in his attempt to prove that the universe had a beginning. If there was an infinite amount of changes in the past, then some of them are at an finite distance from us, such was yesterday, one year ago, the formation of our planet and on on. But others must be at an infinite temporal distance from us because an infinite amount of changes or events could not fit into a finite temporal line. But in that case, since those events belong to the past, so that they have already elapsed, when did we cross the boundary between changes infinitely remote and changes finitely remote? It makes no sense because any finite amount + 1 unit will still make a finite amount… So there are only two ways out: 1) Time is linear and the universe had a beginning (and therefore a beginner). 2) Time is cyclical and contains only a finite amount of events which endlessly recur. I cannot imagine a more meaningless world than this one. □ Yes, infinite time does sound pretty crazy. But so does finite time. As for St. Bonaventure, I think he’s struggling with infinty as a number vs. infinity as a concept. ☆ “the concept of infinite time seems absurd” It does seem fairly crazy. On the other hand, the thought of there not being an infinite amount of time in the past means there must be a beginning. In other words, there is a point where something happens that wasn’t the result of something before it. That seems pretty crazy too. To have the first thing happen just spontaneously with no precursor is hard to imagine. Of the 2 things, an infinite amount of time into the past makes more sense to me. But I do see that both are strange. “Time is logically posterior to change” I’m not quite sure I understand what you are saying here. Are you saying we need time first then we can have change? Can you elaborate on this idea? As to the paradox of Bonaventure, it is the same mistake that was made above. 
And actually, the key to it is something you said “an infinite amount of changes or events could not fit into a finite temporal line” That’s true, for any fixed finite amount of time cannot hold an infinite amount of moments. But for each pair of moments, there is a finite amount of time large enough to encompass both of them. For example, suppose the finite temporal line we are talking about is between now and 1 year ago. There are a lot of points in time before that. Jan 1 2000 for example, is not in this time period, but it is not an infinite amount of time away, I just need to expand my time frame to about 12 years. If we do that, there are still points in time farther back, but again, they are not an infinite amount of time away, we just need to expand our bubble of time we are looking at. If there is infinite time, it doesn’t mean that there is a finite bubble big enough to encompass everything. It actually means the opposite, no finite amount will be enough. No matter how big of a finite span of time we consider, there will always be something outside of it. But for each specific point in time, we can find a finite line large enough for it. It might help to think of time as the number line. Every point in time is some value x on the number line. So if we set ‘right now’ to be zero, then for any moment that happened x minutes ago, we can see it is a finite amount of time ago, it fits into the finite line [-x,0] and is therefore a finite time away. There are no points on the number line and infinite distance from the origin in the same way that there are no moments in time an infinite amount of time in the past. I hope this makes sense. It would be easier if we were talking face to face, and ideally if I had a chalk board 3. Actually, now that I think about it, translating this into natural numbers might make it easier. When you said “If there was an infinite amount of changes in the past, then some of them are at an finite distance from us…But others must be at an infinite temporal distance from us” This is equivalent to saying ‘If there are an infinite number of natural numbers, then some of them are finitely large, but others must be infinitely large” But this is not true, even though no finite interval is large enough to encompass them all, every natural number will fit inside some interval of the form [1,k] and is therefore finite. (not sure if that helps, but it’s worth a shot I guess) 4. Oh, one more thing. Since we have talked about fun math stuff this long, I feel I should post a link to vi hart’s videos. If you haven’t seen them before it is totally worth your time. □ Excellent videos! Like the study of Christianity, math is something you can lose yourself in for a lifetime. 5. Hi Hausdorff, It’s hard for me to find the right words to translate my hunches. If I had studied mathematics, maybe it would help. I’m not so sure we can equate a time line with a number line, due to the ontological difference between the physical world and the mathematical realm… I mean, don’t forget that we are speaking of real PAST events, events that already took place. Suppose we associate each of this past event with a negative number. Then however far we get, we will always get finite time and a beginning. Unless at some point we posit infinitely remote events. Now you say that, however, we can always get further into the past, and that’s the genuine meaning of infinite. But though it’s adequate to speak thusly of a number line, what could it possibly mean for a time line? 
You often use expressions such as “for every pair of numbers, there will always be a finite distance. This may be correct in the realm of maths. The problem is to make it clear what we are speaking about when we apply that word “every” to the physical world. What is the range of that “every”? Finite or infinite? Remember that we are speaking of the physical world. There is also a difference between the past and the future. I can easily conceive an eternal future provided that God exists (that’s what we can heaven and hell). But what it means is that our timespan, though always finite, will endlessly grow. Which by the way is only possible if God, in some sense, is ontologically infinite. If I may, sometimes it seems to me that the mathematical infinite is less to be seen as some weird “amount” than as a RULE for generating numbers and operations according to our needs. But I may be wrong, because I have little knowledge of maths. 6. I would think equating a time line with a number line would be a pretty good analogy, I guess the only question would be is time finite (a line segment) finite in the past and infinite in the future (a ray) or infinite both ways (a line). But in any case, a line seems like the best thing we can get. As to the ontological difference making the analogy break down, there might be something there. I’m not really sure how to explore those ideas though. As far as I can tell, if time does go to infinity in both directions a line should be a good representation, but that could just mean there is some idea I’m missing. As far as the “for every pair of numbers”, since at the time I was imagining a situation in which there was infinite time, I did mean for it to be any point in the infinite amount of time. So for example, what I was claiming is that for any pair of moments, they both lie in time somewhere, and therefore they are a finite distance apart. What would be the opposite of this? That there are 2 points which are an infinite distance apart. I want to claim that this doesn’t make sense. Why? Because each point of time lies somewhere on this timeline. Since we are saying that time is infinite (we are assuming that just for this argument) then this particular moment can go back as far as it wants. Or in other words, you can never pick a point in time that it will for sure be in front of. But this moment has to lie somewhere in time. Since both moments have to lie somewhere in time, there must be a finite distance between them. I suppose this all rests on the assumption that time acts like a line (I didn’t realize I was making that assumption until your last comment). I guess I would argue that this assumption is reasonable as I can’t see what else it could be. Your last comment about infinity being more about a rule rather than being an actual quantity is interesting. It’s sorta correct, depending on the situation. Infinity actually has quite a few different meanings, and I think there is really something to that way of looking at it. 7. To Hausdorff, If we looked to the future, I would agree that it makes sense to equate a time line with a number line. But things are different as regards the past because, well, the past ACTUALLY took place, which means that it is for now ACTUALLY some specific amount of elapsed events. I mean, there is a specific amount of years between us and year 2000, or the discovery of America, or the formation of our planet. 
If we suppose an open future (linear time), then of course there is no specific amount of events between now and some “end” of the world, because there never was such an end. Or let’s imagine an indefinite number of sci-fi writers. One of them imagines the world in 1000 years, another in 5000 years, another in 200 000 years, another in 1 000 000 years, and so on. There is no limit set upon their imagination. This possibility to imagine any future date we want, that’s a genuine infinite. If however we looked to the past, I say: any finite amount of events we may imagine, however large, would imply a beginning somewhere. Since we are speaking of the past, we are not free to imagine any amount we want: there is one that IS true. While the future is the realm of the possible, the past is the realm of dead facts. There MUST be a time when Columbus made it to America. A number line is more like the realm of the possible (= the future) than the realm of dead facts (= the past). A more adequate analogy with the past would be a finite set. For the same reasons, believers must be careful when they speak of an “infinite” God. Some ways of understanding it are nonsensical. I tend to avoid using “infinite” in speaking of God, I prefer to use “perfect” or “supreme” or words like that. □ Teapot: Responding to your last paragraph: how do we know that God is perfect? Does the Bible say so or is this simply implied? One thing that bugs me is apologists listing all sorts of properties for God that aren’t explicitly stated in the Bible. Even the ones that are in the Bible could, in many cases, be challenged by statements made elsewhere in the Bible. But traits like omnibenevolent or prefect or infinite seem a unsupportable stretch if they’re not explicitly stated in the Bible. Your thoughts? ☆ To Bob S, Well, the Bible has Jesus say “be perfect as your celestial Father is perfect”. I don’t know the exact reference, though. The Bible also says that God is holy, which means moral ○ OK. Holy doesn’t mean moral perfection to me but rather spiritual goodness–that is, the focus is on the supernatural, while it is on interaction with your fellow man in the case of But to the bigger issue: do you see Christians extrapolating too much when they list the properties of God? That is, do they make claims about God that have no unambiguous support from the Bible? ○ To Bob S, Well, it’s true that theologians like Thomas Aquinas and his followers have lots of things to say about God, and their main support is pagan metaphysics (Aristotle, the Stoics, Plato and Plotinus). But those people really think that they can PROVE their claims (read the beginning of the Summa Theologiae) and as Christians, they hold that reason is compatible with But I agree that what theologians and philosophers is not always explicitly stated in the Bible. In the case of God’s impassibility, which is supposed to be a philosophical truth, it may even be contrary to the Bible. □ Teapot, What you are saying makes sense only if we start with the assumption that there is a finite amount of time in the past. If there were an infinite amount of time in the past, then the number line backwards would be a good analogy for it. It appears to me that you are starting by assuming that time is finite, then claiming that an infinite past causes problems. Instead, I think we should imagine an infinite past and see if there is a problem that comes up. If you can’t find a contradiction then such an infinite past is a possibility. 
As far as I can tell, the only contradiction you have come up with is with the other assumption that you have made, namely that there is a beginning. If we are considering the idea of an infinite past, it has to be coupled with no starting point. There is no beginning, every point in time has some other point in time preceding it. Yes, it is strange, but it does seem to make sense to me. ☆ To Hausdorff, Do we at least agree that for there to be an infinite past (whatever that may mean) there needs to be an infinite amount of particles in the universe? Because if their amount is finite, then the amount of their possible states is finite. But if we compare a finite amount of possible states with an infinite time line, we cannot escape the conclusion that there has been Let’s take an example. Suppose that the only kind of event that happens in a parallel universe is the rolling of a (fair) die. In that universe, there are only six possible states, depending on what number the die shows at each time. Isn’t it clear that the same numbers will endlessly recur in an infinite time? At least the odds are for that. ○ There is definitely something there. Things recurring over and over is certainly a possibility, although I don’t think it is necessary that we wind up in a strictly cyclical situation. Allow me to run with a few ideas for a minute here and see where it takes us. First, I like your idea of simplifying things by considering rolling a die, I’d like to take this a step further and consider flipping a coin instead. Let’s suppose we are flipping a coin an infinite number of times. You might be inclined to say that at some point, the overall pattern must recur, I say not necessarily. This is easy to see with the following HT HHT HHHT HHHHT… (I added some spaces to try to make it clear what I’m doing, just after each tails add one more heads than last time before the next tails) As you can see, this pattern can continue forever without ever repeating in it’s entirety. Furthermore, there will be certain sequences that will show up once, and only one, for example THHT. On the other hand, there are other sequences that will show up an infinite number of times, for example HTH. This must happen, if we look at any specific number of flips (in this case we looked at 3 flips) there is a finite number of possibilities, so at least one of those possibilities must happen an infinite number of times. (in this example, HHT, HTH, THH , HHH happen an infinite number of times, and THT, TTH, HTT, TTT never happen) I know this sequence is going forward in time, but we can send the same sequence backward in time and we can say a lot of the same things. I could just make the sequence: or something like that. Ok, now let’s think about the universe. As you suggested, let’s assume that we have an infinite amount of time in the past, and that we have a finite amount of particles, and I want to also assume we have a finite amount of space. Even though there is an unimaginable amount of data (huge numbers of particles, each has position and velocity with respect to each other and who knows what else) this data appears to be finite. And since there is an infinite amount of time, some organization of all of that stuff has to come up over and over again. In fact, as with our simple coin flipping example, some organization of that data has to come up an infinite amount of times. So if we think about the current organization of the universe, if there was an infinite amount of time in the past, can we say it happened before? 
It is certainly possible, but what if it corresponds to the THHT from the example? If that is the case, this moment in time has never been before and will never happen again. On the other hand, it could be like HHH and it has happened an infinite number of times before and will happen an infinite number of times in the future. Sorry for the super long comment. I could go on and on like this (I’m having fun here) but I think I’ll just stop for now and hope I’m making sense to you guys. ○ Yep, I’m listening! I don’t have much to add, except to note that it’s refreshing to see a discussion continue politely, as this one has. Doesn’t always happen … ○ To Hausdorff, I’m considering writing a little paper (in French) about my views about how atheism seems to imply eternal recurrence. So I find our dialogue helpful. Let’s stick to the coin which is flipped, and let it represent a simplified model for our own universe, assuming it is finite. You say that some combinations will recur an infinite amount of times. Others will recur only once. I have to disagree. If you assume that the series of H/T had a beginning and stretches infinitely forward, then yes, you would be right. That’s why heaven will not be cyclical, even though it lasts But we are dealing with something more bizarre: an infinite past. You say that sequences like THHT will happen only once. True, if the series has a beginning. Wrong, if it has no beginning. If it has no beginning, it has happened an infinite amount of times, and here is why. Suppose I granted you that THHT happened only once. Let’s say it happened one billion years ago. But since we are dealing with an infinite past, the timespan before one billion years ago is also infinite. But if we hold that THHT has a nonzero probability, then over an infinite timespan, its probability of manifestation becomes one. Which means that it must have happened more than one time, it must have happened before one billion years ago. And whenever it happened, the timespan before that time must be infinite, because it’s a property of the infinite past. If it were not a property of the infinite past, then the infinite would consist of the sum of finite amounts, which is absurd. Infinite + any finite amount = Infinite, therefore Infinite – any finite amount = Infinite. Is that correct in your view? The second equation, if correct, represents the infinite past. What I am getting at is that, over an infinite time (without a beginning), any combination of H/T with nonzero probability must have happened an infinite amount of times… And that’s what I mean by a cyclical time. ○ This is really good stuff, I’ve been thinking about what you have said, in particular the idea that any event that has a non-zero probability over an infinite amount of time actually has a probability of 1. This certainly sounds correct, but something has been tugging at my brain about it that doesn’t sound quite right. Part of the problem I was having is when we talk about flipping a coin, we are usually thinking of a fair coin, one where each flip has a 50% chance of heads and a 50% chance of tails. On the other hand, my example from before has a very regular function for what we get. That seems like it couldn’t happen. But it is one of the possible results you could get when you flip a coin an infinite number of times. (fun fact: if you list all of the ways that you could flip a coin an infinite number of times, you get a bigger kind of infinity) I feel like I’m rambling a bit, sorry. 
Let’s return to my previous example list, In my hypothetical, I have perfect knowledge of the flips in the future and in the past, and in this list, THT only happened once. So let’s think about what I have in this situation. I have a single object being flipped an infinite number of time. There are only a finite number of states that object can hold (2 states, H and T). And yet, we have a situation where a particular pattern only happened once. You might object that this is not a fair coin. I might agree with you (perhaps, perhaps not, a discussion for another time perhaps), but I would say that is beside the point. Because my understanding is that you were making the following claim: IF there are a finite number of states for a finite number of particles and time goes to negative infinity THEN every possible state has happened an infinite number of times. There is nothing in there about the probabilities or anything that would lead to the coin being fair. My example satisfies the if part of this proposition but breaks the conclusion. There is one other thing I want to address in your post. You talked about something making sense in an infinite future but not in an infinite past. I don’t really differentiate the 2 things, and I don’t see why we would need to. If you have a sequence of head and tails that you can see goes into the infinite future and has some property, why can’t you just run that same sequence into the past and get the exact same property in the past? Oh! I remembered one more thing I wanted to add. You have said 2 things that are very similar although I think one is right and one is not. You mentioned that if time is infinite and the number of particles is finite, then time must be cyclical. I think this is incorrect, and the reason why I think is encapsulated above. But in the most recent post, you said “seems to imply eternal recurrence”. If by this, you mean that there have to be some patterns (at least 1) that recur an infinite number of times, then I do agree. But there doesn’t necessarily have to some point where everything effectively starts all over. ○ Hi Hausdorff, I am not sure I correctly understand your latest points well, because my mathematical skills are limited. If you suppose an infinite series with a starting point, which grows following a law, then it’s perfectly correct to say that some state will happen just a limited number of times. But the eternity the atheist considers is different, because it is supposed to be open in both directions: no beginning and no end. Our actual state of affairs (we may describe it as “discussion on Bob’s blog”) is actual, therefore it had a nonzero probability. Which means it already happened an infinite amount of times in an infinite past. Suppose the last time it happened was 1000 billion years ago, in some pre-big bang world. The fact is that, assuming that the past is infinite, there was an infinite timespan BEFORE that last event. Which means that, as the discussion on Bob’s blog has a nonzero probability, it must have happened still before (maybe 2000 billion years ago). And so on ad infinitum. However, there is a further complication. If I consider people as physical systems, either deterministic or probabilistic, then that view is correct. But if I endow them with free will, things get confused, because a free choice is irreducible to chance. Someone making a choice is essentially different from a random process in nature. In fact, it has nothing to compare it with. 
So it means that in a cyclical world, while there would be the same physical stuff and the same laws, there would be unpredictable events due to free agency. In that sense, it is perfectly sensible to hold that, though the past is infinite, some events due to free agency only happened a finite number of times. At least if I am not mistaken. So it is possible to think that in the next cycle, there would be no World War II for example, though the laws of physics would be identical. You are also skeptical of my claim that the past is not symmetrical with the future. Of course, if you represent time by number series, with 0 as the present, negative integers as past events and positive integers as future events, you will miss what I am trying to say. Did you even have the idea that in another intelligent species, their civilization would measure time starting from the Big Bang, and in that case, there would be an absolute beginning (0) followed by an indefinite series of integers. It makes sense to me. The system is less practical than ours, but it is coherent. Why think that our system is a better model for time? Let’s posit an infinite series of negative integers. 0, -1, -2, -3, … Actually, we may just as well choose to make it represent the future. It is only by convention that such a series is made to represent the past. It’s not a bad convention, but it breaks down when we are trying to explore the philosophical meaning of time. Infinite series of numbers, by their very nature, are only fit for representing the future. Another point I would like to make is that in an infinite timespan, the place we put the zero on our time line is completely arbitrary. We may just put it now, or one billion years ago, or in a billion years from now. But since the place of the zero is completely arbitrary, we can hardly say that some patterns are unique. If I try to stick to the usual convention of using a number line to represent a time, I may make the zero represent the present and say that every new day, the zero shifts one notch forward, so that a new series begins each day, and patterns we thought were unique will recur. I hope that makes some sense, though you are really pushing me to the limits of my intelligence. ○ Hi OT, I have a number of comments to make and I might not get to them all, so if there is something I seem to be skipping over feel free to point it out again. I think we actually agree on a fair bit of this, it might not seem that way because I have been focusing on points of disagreement. For example, the cyclical universe is something that I think sounds very plausible, and I think is a definite possibility. The idea that there is an infinite amount of time so anything that has any small probability of happening must happen, and in fact must happen an infinite number of times sounds very compelling. It might be right. I’m not quite convinced it has to be though. There are several nagging thoughts that I have which seem to demonstrate counterexamples to this. I’ve tried to explain some of these ideas, but I think those explanations have been confusing at best as I am still working through the ideas myself. Also, while I feel like I have counterexamples to the infinite cycles idea, I am having some trouble punching holes in the idea itself. My gut still says that an infinite cyclical past is a possibility but not a necessity. As to the 2 directional infinity versus the 1 directional infinity. I’m not convinced that for us it makes much of a difference. 
Suppose for example that you can prove that an event has happened an infinity number of times in the past, can’t you use the same logic to show that it will also happen an infinity number of times in the future? If I can show that it is possible that this moment will never happen again, couldn’t the same logic be used to show it is possible for that moment to have never happened in the past? As far as the future being symmetric with the past, I think I was just using that as a quick example of a way something could happen just once. I definitely didn’t mean to imply that I thought that time was symmetric. Another example could be to do one sequence into the future and a different one into the past. Your point about where zero goes I agree with 100%. ○ (I decided to break this into 2 comments as it was getting long) Let me try to articulate one of my ideas about why I think it is possible for an event to happen only once even in an infinite amount of time. I want to return to the flipping coins analogy, it is simple enough to understand and I think it illustrates one of the points I keep returning to in my thinking. I want to think about this in the most extreme fashion I can in this setting, so I ask the following: Q: Is it possible to flip a coin an infinite number of times and get H every time? You probably want to answer “no” right now, but let me run through some ideas and let’s work up to it. First, if I flip a coin once what are the odds I get H? 50% or 1/2. Because there are 2 possibilities H or T. What if I flip a coin twice, what are the odds of getting HH? 1/4. There are 4 possibilities HH,HT,TH,TT. So one out of four chance. What if I flip a coin 50 times? There are 2^50 possibilities and only one of them is all heads, so my probability of getting all heads is 1/2^50. In general, if I flip a coin n times, then there are 2^n possibilities and only one of them is all heads, so the probability of getting all heads is 1/2^n. If n is finite, this little formula works every time. But what happens when we start to talk about flipping an infinite number of times? One thing I might do is take a limit as n goes to infinity, then this probability goes to zero. But if we actually replace n with infinity does the probability actually become zero? Unfortunately, I think the best answer to this question is “sort of”. Let’s think about this, how many ways are there to flip a coin an infinite number of times? There are an infinite number of ways to do it. So when we ask what the probability is that we get all heads, we are asking what the odds are that we get 1 particular set of rolls out of an infinite number of choices. It might make sense to say that the probability of this is zero. But that is also true for any other set of rolls. Even one that looks fairly random to us, the odds of that particular roll are the same as the one that we recognize as significant (all heads). When I say it makes sense to call this probability zero, one way to think of this is to try to figure out what the probability is. The probability of something is a number between zero and 1 where 1 means it will happen for sure, and 0 means it can’t happen. For any fraction above zero (call it epsilon), we can say for certain that the odds of rolling all heads out of an infinite number of flips is less than epsilon. We haven’t exactly shown that the probability is zero, but we have shown that it can’t be any number above zero. 
We can usually think of this a simply being zero, but there are times when we would call this an infinitesimal. The odds of this happening are infinitely small. But it is possible. I am having trouble telling if this is clear or super confusing. Please feel free to ask for clarification, these ideas are organized fairly well in my head, but I am having trouble putting them into text. 8. * but I agree that what theologians and philosophers SAY ABOUT GOD 9. Hi Hausdorff, I think you may be onto something. It may or may not be possible over an infinite timespan to get only heads or only tails even if the coin is fair. In fact, one crazy thought is that, as long as the probability of getting heads is not one exactly, it may be possible never to get them over an infinite timespan. The difference between improbable and impossible becomes blurred. But I don’t know much about probability, so I am not sure how compelling those ideas are. It’s odd to imagine that the odds not to get heads over an infinite timespan are neither zero nor any number above zero. Perhaps one would say that there is no possibility left after that. You are also right that there are an infinite amount of possible combinations of heads and tails over an infinite time. As long as we don’t limit the span of each cycle. If we say that each cycle lasts only enough time for ten flips, then the possibilities are limited. (to be continued) 10. To Hausdorff, Two other points: First, I must be clear about what cyclical time means. It means that time oscillates: it goes backward and forward. What is being done will be undone, and what is being undone will be done. So to be accurate, it’s not exactly correct to speak of future or past cycles. It’s just a cognitive bias we have, just as people have lots of trouble understanding the theory of relativity because it runs counter to naive physics hardwired in our brain. It is just hard to use language to explain what a cycle is, because our brains are not “designed” to understand that. We say for instance: the universe will expand, THEN it will shrink, THEN it will bounce back and expand again. But those “then” don’t really make sense in a cyclical time. They are parts of speech, not parts of the idea itself. One problem is that when we consider cycles, we imagine ourselves as divine observers outside the cycles, and we may imagine ourselves counting them or comparing them. But if EVERYTHING that there is, is within a cyclical world (no external observer) then it makes no sense to think in that way. If there is an external observer, some kind of God, then from his viewpoint, time is linear and the cycles add up. But without that observer, it would be wrong to speak as if time were linear, and there were past and future cycles. We keep speaking like that, because we can’t help ourselves, but we must be careful not to mistake the finger for the thing it points to. The other point is that in Nietzsche, there is a related argument for eternal recurrence. Nietzsche says: if the universe had a goal, it would already have reached that goal. If time is infinite, it makes sense. What would be a goal that even eternity does not suffice to reach? On the other hand, if we suppose that the universe has no goal, could its evolution be directional? Should it not be cyclical? Your thoughts? 11. “It’s odd to imagine that the odds not to get heads over an infinite timespan are neither zero nor any number above zero.” yeah, it is pretty weird. 
It is starting to get outside my expertise, but my gut here tells me that the issue is probability. The concepts are all grounded in the finite and when you bring them into the infinite there is some extra care that needs to be taken. Moving on to your points about cyclical time. I’m not really sure what you mean when you say “It means that time oscillates: it goes backward and forward. What is being done will be undone, and what is being undone will be done.” What this sounds like to me is that at some point time will stop, and then move backwards. So like, everything that is happening will happen in reverse until we get back to the big bang. Almost like rewinding a VCR. I’m assuming this is not what you mean as it sounds totally bizarre to me. When you talk about cycles this makes more sense to me. If you will allow me, I will expand on what I think you are saying and we can see if our ideas match up. For the sake of argument, let’s assume the “start” of a cycle is the big bang. The universe expands for a while, then stops expanding, then starts collapsing in on itself. Eventually, the universe collapses all the way into a singularity only to explode again in the “next” big bang. The “next” cycle has started. Now, if time is cyclical, and what I have described is a cycle, then the way the universe looks at 1 second after the big bang is exactly the same every time. The way the galaxies form is exactly the same every time. The way the earth forms is exactly the same every time. And the way I am writing this blog comment is exactly the same every time. One point you have made is that if this is the case, then it doesn’t necessarily make sense to think of this as the current cycle and the one that come after this as “the next one”, they are the same cycle. It’s better to think of time as a circle that has connected to itself or something. There is no reason to think of them as 2 separate events as they really are exactly the same in every way. I’ll agree with this idea. It is actually very similar to something called modular arithmetic (which we are all familiar with even though most people don’t know it, 1 o’clock and 13 o’clock are the same for example). I actually think this idea of cyclical time is interesting and possibly true. I just don’t see it is a necessity. It’s possible that the universe collapses down and each time the new universe that spawns is different. It’s possible that we will expand forever and eventually reach heat death. It is possible that each black hole in our universe spawns another universe, each one unique. It could just be that time really did start 14.6 billion years ago and this universe is the only one. All of these things seem possible to me. As far as the last thing, I don’t think “directional” and “cyclical” are opposites. Perhaps the opposite of directional is randomness. If evolution does not have a direction, I might expect there to be an incredible amount of branching, which seems to be what happened. 12. Hi Hausdorff, I think you have correctly understood what I mean by cyclical time. It’s hard to explain in words. It’s like if time were like this number series: 1, 2, 3, 2, 1, 2, 3, 2, 1, 2, 3, and so on… Or like hours on a clock, as you suggested. I certainly don’t suggest that my view of cyclical time implies an oscillating universe (big bangs and big crunches alternatively). It does not exclude it either. In fact, cyclical time is a philosophical hypothesis which science cannot directly test. 
It’s true that on the current scientific view, the universe is slowly heading toward a heat death. But we are here talking about events which are supposed to take place in billions of years, and it would be a little bold to hold with certainty to such a cosmological hypothesis. While it’s true that biological evolution is neither cyclical nor directional insofar as it has no goal (I accept standard evolutionary theory), cosmological evolution must be either directional or cyclical. When I say “directional” I just mean that the future is unlike the past. That the future brings novelties. But here is the trick: if the amount of possible states of the universe is limited, due to its finite amount of particles, the future cannot forever be unlike the past, it cannot forever bring unexpected things. At some point, the same things will recur. Maybe not in the same order exactly, but it will still be the same things. There is no other alternative: either the future is unlike the past (linear time: directional evolution as I understand that idea) or the future is like the past (cyclical time). In the latter case, we only speak metaphorically of the future and the past. Another complication is free will. Free will is different from chance and probabilities, though we sometimes confuse the two things. If there is free will, it’s possible that the cycles become unpredictable at the human level, though the stuff and the laws of the universe will be identical. Because free will defies order and predictability. You say that it’s possible that the universe will end up in a heat death and stay in that state forever. Now I would like to ask you: why did it not happen YET if it’s true that the universe is You have raised some interesting points. It may shake my certainties about cyclical time. □ I think we are pretty much on the same page on cyclical time. When you said that things don’t necessarily occur in the same order, I agree with that point and think it is a very important point to make. I’m not sure I really like the name cyclical as it seems to imply to me that things will repeat in exactly the same fashion as before. I’m not what I would call it instead though, recurring, or repeating maybe, although repeating has the same problem. Maybe semi-cyclical or quasi-cyclical? I dunno. The other thought I had, was that this all only works if we have a bounded number of particles, space and time in each cycle. As we have been saying, if we have a fixed amount of particles and an infinite amount of time, then some arrangement of those particles has to repeat. But this isn’t true if the amount of space grows without bound. In this case, you can have a finite amount of space at every given point in time, and yet with space always expanding, there is more room for the particles to spread out and have different organization. The same arrangement doesn’t have to come up. There is a similar thing if the amount of time you are looking at keeps growing. For example, if we are considering a contracting and expanding universe, the whole thing doesn’t ever have to repeat if the length of time between singularities can be arbitrarily large. I hope this is making some kind of sense, basically, I’m looking for ways that infinities can sneak in even though locally everything is finite. BTW, I thought it worth mentioning that the reasoning you are using is called the pigeonhole principle in mathematics. As to the question about heat death and why it hasn’t happened yet, that is a really good question. 
I’m not really sure how to answer it, but I do have a couple of ideas. 1. Perhaps heat death and an infinite past are not compatible. As you have argued, there is the possibility of heat death and an infinite past means it should have happened back them at some point. (I’m not a big fan of this one honestly) 2. It is possible to have an infinite past of “interesting” things happening and at some point it results in heat death. Hard to comprehend but that just the way it is. (Note sure I like this one too much either) 3. Maybe each universe spawns new universes with their black holes. So even if they go to heat death it is not really and end of things. This is a way to have both. The biggest problem I see with this one is each new universe would have fewer particles. Fun musings, but I don’t think I really have a good answer to that question. 13. To Hausdorff, Well, I feel this friendly discussion is approaching its conclusion, because I think we have covered the essentials. There are a few extra points I would like to make. You are speaking of a universe growing forever. The problem is that to me, any though of an expanding universe implies a starting point. A state in which the universe was the smallest possible. And a starting point is precisely what is excluded by the idea of an eternal universe. Believers agree the created world will exist forever. But they don’t agree that the world has existed ab If we return to our coin to be flipped, it’s true that the series of H/T could be infinitely complex if we don’t put an end to it. Still, there are only two states which can recur, and each H is identical with the others. As you say, it may not be exactly the definition of a cycle, but it’s closely related to it. If we take a real-life example, we may count the series of years of war and years of peace since the dawn of civilization, and we would get an unpredictable series. Yet there are only two possible states with a nonzero probability: war and peace. I’m not sure the idea of an infinite past of interesting events ending in a heat death makes any sense. If heat death happens, then it was a possible state, with a nonzero probability. But over an infinite timespan, it should already have happened, not yesterday, not 10 billion years ago, but an infinite time ago. My point is that the philosopher is faced with a choice: either divine creation in a finite past, or something like cyclical time, with the same set of events recurring forever, even if their order turns out to be unpredictable. □ Yeah, I agree, this conversation is starting to get cyclical =D Seriously though, I had a really good time talking about these things with you. Looking forward to more interesting conversations in the future. ☆ Thanks. The same for me.
{"url":"http://crossexaminedblog.com/2012/07/12/infinity-nothing-to-trifle-with/","timestamp":"2014-04-20T21:01:22Z","content_type":null,"content_length":"123302","record_id":"<urn:uuid:b8c16f1a-71e3-418b-a3e3-fe04c2ef1a55>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00267-ip-10-147-4-33.ec2.internal.warc.gz"}
Ask Dr. Math Tour - The Archives

The Dr. Math Archive is among the greatest benefits of the program. As we've answered questions we've saved the best of them, together with their answers, and have put them on the Math Forum Web site. The Archive currently contains over 3000 questions, and we're adding to it every month, so that it is becoming a valuable resource for math information. The questions are organized by level (Elementary, Middle School, High School, and College and Beyond) and by subject (Addition, History, Calculus, etc.). Students can browse or use the powerful searcher to look for questions by keyword. They may find new answers to old questions, or new questions that they have not yet considered. Here we include a small sampling of the thousands of questions in the Dr. Math Archives.

Elementary School

Subtracting Big Numbers [Stanseski, 11/20/95]
What's 245715 - 105065?

Who Invented Decimals? [Beck, 11/8/94]
We are fifth grade students and one teacher. We would like to know who invented decimals.

Middle School

Height of Ripped Paper [Va, 11/17/95]
Rip a piece of 8 by 11 paper in half, then put that half over the other, then rip it again. What is the length (or how high, like 5 miles or something) when the paper is ripped 30 times?

Multiplying Negative by Negative [Spencer, 11/6/94]
I'm trying to make sense of these rules so that they'll be easier to memorize: Pos x Pos = Pos, makes sense. I've been doing it since 3rd grade. And I can even think of a situation. I get six birthday cards with $5 in each. Pos x Neg = Neg, I can think of a situation for this, too. I get four bills for $20 each so I'd owe money. But, Neg x Neg = Pos just doesn't make sense. Does it ever happen in real life? My teacher said that you could say it would be the opposite of Pos x Neg but that seems like cheating. It's not realistic.

High School

Complex Roots [Scott, 11/1/94]
We know it is possible to look at the graph of a polynomial and tell a great deal about its real roots by looking at the x-intercepts. What can be discovered about a polynomial's complex roots by looking at the graph? There seem to be some interesting "wiggles" at locations that appear to be related to the "average" of the complex pairs. It appears that the "wiggle" of these graphs is always influenced by the complex roots. What we are trying to do is develop a graphing technique that will let us find the complex roots from the real graph. (Contributions by Profs. Conway and Maurer.)

Break a dowel to form a triangle [Chen, 3/8/95]
A wooden dowel is randomly broken in 2 places. What is the probability that the 3 resulting fragments can be used to form the sides of a triangle?

College and Beyond

Analysis [MRI@ids.net, 11/29/94]
In analysis: If f:[0,1] -> [0,1] is continuous, show that there is an x in [0,1] such that f(x) = x. Problem #2: If A and B are open and closed sets, respectively, of R^n, show B\A is closed and A\B is open.
{"url":"http://mathforum.org/dr.math/office_help/archives.html","timestamp":"2014-04-18T23:39:02Z","content_type":null,"content_length":"6709","record_id":"<urn:uuid:d221916a-1e2f-45d9-bf9b-e50db9b5faf6>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00293-ip-10-147-4-33.ec2.internal.warc.gz"}
Parallel and Perpendicular Lines: Get in Line, Get in Shape Quiz

Think you've got your head wrapped around Parallel and Perpendicular Lines? Put your knowledge to the test. Good luck; the Stickman is counting on you!

Q1. How many sets of parallel line segments are there in a regular octagon?
Q2. How many sets of parallel line segments are there in a regular pentagon?
Q3. What is the sum of all the interior angles in a dodecagon (12 sides)?
Q4. What is the measure of one interior angle in a regular heptagon (7 sides)?
Q5. Which of the following is a summary of Euclid's parallel postulate?
 - If two lines are parallel and crossed by a transversal, their corresponding angles will be congruent
 - If a line crosses two other lines and their two consecutive interior angles are supplementary, the two lines are parallel
 - If a line crosses two other lines and their two consecutive interior angles are not supplementary, the two lines are not parallel
 - If two lines are parallel and crossed by a transversal, their consecutive interior angles are supplementary
 - If two lines are parallel and crossed by a transversal, their consecutive interior angles are congruent
Q6. What is the minimum number of right angles possible in a quadrilateral?
Q7. Which of the following shapes cannot have any parallel lines?
Q8. Which of the following shapes must have at least one set of parallel lines?
Q9. Which of the following shapes must have at least one set of perpendicular segments? (Right triangle / Regular pentagon / Regular octagon / Regular nonagon)
Q10. Which of the following shapes must have at least one set of perpendicular segments? (Regular quadrilateral / Regular pentagon / Regular hexagon)
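For the angle questions (Q3 and Q4), the standard polygon formulas give the answers directly; here is a quick check, a sketch using the usual (n - 2) x 180 degree identity:

```python
def interior_angle_sum(n):
    """Sum of the interior angles of an n-sided polygon, in degrees."""
    return (n - 2) * 180

def regular_interior_angle(n):
    """One interior angle of a regular n-gon, in degrees."""
    return interior_angle_sum(n) / n

print(interior_angle_sum(12))                 # dodecagon: 1800
print(round(regular_interior_angle(7), 2))    # regular heptagon: about 128.57
```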
{"url":"http://www.shmoop.com/parallel-perpendicular-lines/quiz-3.html","timestamp":"2014-04-17T07:00:37Z","content_type":null,"content_length":"45346","record_id":"<urn:uuid:18ce9830-f03e-41b1-9229-aef0ce183e04>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00617-ip-10-147-4-33.ec2.internal.warc.gz"}
Find a Bear Science Tutor

- "...I am a Delaware certified computer science teacher with more than ten years experience. I have also taught several computer engineering courses as an adjunct professor. Over the past twenty years, I have successfully helped hundreds of students prepare for the math portion of the ACT, SAT, and PRAXIS exams." 39 subjects, including chemistry, Microsoft PowerPoint, astronomy, statistics.
- "...My experience has been in both a regular school setting and in a tutoring setting. High school math areas include pre-algebra, algebra I, algebra II, geometry, and middle school math. I've also taught 5th-12th grade English and history." 21 subjects, including anatomy, English, reading, writing.
- "I have more years as a teacher/tutor than can be included in a simple resume. My first job, at the age of 13, was a math tutor for my school district. I am currently doing environmental chemistry research full time, so I do not often get the chance to share my knowledge and passion for science and math with young students." 14 subjects, including ACT Science, chemistry, geometry, physical science.
- "...For the next 12 years, I was an assistant principal in the same school. I then taught chemistry, AP chemistry, 8th grade physical science, and 6th grade Earth science at a private Catholic prep school for boys. In my teaching I have developed a strategy (not found in texts) for solving problems in chemistry and physics that will guarantee the student a correct answer." 2 subjects, including chemistry, physical science.
- "...I am an Engineer by training and I have taken and successfully passed Differential Equations in my Bachelors and during my Masters. My thesis research during my Master's also involved a lot of differential equation solving applied to phase boundaries for nano-complexates. I have extensively worked with and taught Calculus, as can be seen from my other subjects and student reviews." 18 subjects, including chemical engineering, physics, calculus, statistics.
{"url":"http://www.purplemath.com/Bear_Science_tutors.php","timestamp":"2014-04-16T19:25:46Z","content_type":null,"content_length":"23773","record_id":"<urn:uuid:1347ec30-d922-459b-99f8-fa819256af0c>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00149-ip-10-147-4-33.ec2.internal.warc.gz"}
Question: Need help, urgent, please: what does "N" mean in HCl 0.5 N?

Reply: 0.5 means it is half, in percentage, in a liter of course. If it were 1 M it would be 1 N for HCl concentrations.

Reply: The 'N' stands for 'Normal'. In acid/base chemistry, 'Normality' is used to express the concentration of protons or hydroxide ions in the solution. Here, the normality differs from the molarity by an integer value: each solute can produce n equivalents of reactive species when dissolved.

Reply: Is there any equation to measure it?

Reply: Multiply the molarity by the value of n.

Reply: Mmmm, okay. Thanks @babybaby

Reply: No probs.
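In other words, normality is molarity times the number of reactive equivalents each mole of solute supplies. A minimal sketch of that rule follows; the H2SO4 line is my own added contrast, not from the thread:

```python
def normality(molarity, equivalents_per_mole):
    """Normality (N) = molarity (M) * equivalents of H+ (or OH-) per mole of solute."""
    return molarity * equivalents_per_mole

print(normality(0.5, 1))  # HCl: one H+ per mole, so 0.5 M is 0.5 N
print(normality(0.5, 2))  # H2SO4 (added example): two H+ per mole, so 0.5 M is 1.0 N
```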
{"url":"http://openstudy.com/updates/5053bbd6e4b0a91cdf443410","timestamp":"2014-04-20T16:33:38Z","content_type":null,"content_length":"42016","record_id":"<urn:uuid:cc0b7f3d-0d09-4408-bc5a-b525cd5bce53>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00167-ip-10-147-4-33.ec2.internal.warc.gz"}
l'Hopital's rule

Find $\lim_{x \to 0^{+}}\frac{x \log{x}}{\log{(1+2x)}}$. Can I do this:
$\lim_{x \to 0^{+}}\frac{x \log{x}}{\log{(1+2x)}}= \frac{\lim_{x \to 0^{+}} x \log{x}}{\lim_{x \to 0^{+}} \log{(1+2x)}}=\frac{\lim_{x \to 0^{+}} \frac{\log{x}}{\frac{1}{x}}}{\lim_{x \to 0^{+}} \log{(1+2x)}}=\frac{\lim_{x \to 0^{+}} (-x)}{\lim_{x \to 0^{+}} \log{(1+2x)}}$
where l'Hopital's rule is applied to the numerator only. Then reapply l'Hopital's rule again:
$\lim_{x \to 0^+} \frac{-x}{\log (1+2x)}=\lim_{x \to 0^+} \frac {-1}{\frac{2}{1+2x}}=-\frac{1}{2}$
Let me know if my divide-and-conquer approach with l'Hopital's rule is legit.

Reply (same poster): $\displaystyle \lim_{x \to 0^+} \frac{x}{\log(1+2x)} \cdot \lim_{x \to 0^+} \log{x} = \lim_{x \to 0^+} \frac{1+2x}{2} \cdot \lim_{x \to 0^+} \log{x} = \frac{1}{2} \cdot \lim_{x \to 0^+} \log{x}$.
The limit DNE, so divide and conquer with l'Hopital and reapplying it is not legal?

Reply: I would say not ... the original limit does not exist. (Wolfram|Alpha: limit as x to 0 of (x*log(x))/(log(1+2x)))

Here's another problem: $\displaystyle \lim_{x \to 0} x |\log x|^a$

Reply: Rewrite it as $\displaystyle \lim_{x \to 0}\frac{|\log{x}|^a}{\frac{1}{x}}$, which is now of the form $\displaystyle \frac{\infty}{\infty}$, so you can now use L'Hospital's Rule...

Reply: I knew that, but this is what I got:
$\displaystyle \lim_{x \to 0}\frac{|\log{x}|^a}{\frac{1}{x}}=\lim_{x \to 0} \frac{a|\log{x}|^{a-1}}{\frac{-1}{x}}$
Keep differentiating until $a-n=b<0$:
$\displaystyle \lim_{x \to 0} (-1)^n k \frac{|\log{x}|^b}{\frac{1}{x}}=\lim_{x \to 0} (-1)^n k\, x|\log{x}|^b=0$
where $k=a\cdot(a-1)\cdot(a-2)\cdot(a-3)\cdots(a-n+1)$. Is this right?

Reply: The derivative of $\displaystyle |\log{x}|^a$ is not $\displaystyle a|\log{x}|^{a-1}$. You need to use the chain rule...

Reply: What exactly does this approach violate? I'm looking at the rules for l'Hopital's and I don't quite see where this goes wrong, but it obviously does violate something: http://bradley.bradley.edu/ Any ideas?

Reply (Fernando Revilla, quoting the original post): Right. With this, you prove $\displaystyle \lim_{x \to 0^{+}}\dfrac{f(x)}{g(x)}=\ldots=\dfrac{0}{0}$. But now, you find $\displaystyle \lim_{x \to 0^{+}}\dfrac{f''(x)}{g'(x)}$.

Reply: I did.

Reply: It appears I didn't, because I simplified it after I used l'Hopital's rule:
$\displaystyle \lim_{x \to 0}\frac{|\log{x}|^a}{\frac{1}{x}}=\lim_{x \to 0} \frac{\frac{a}{x}|\log{x}|^{a-1}}{\frac{-1}{x^2}}=\lim_{x \to 0}\frac{a|\log{x}|^{a-1}}{\frac{-1}{x}}$
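A quick numerical check (a sketch; it simply evaluates the original quotient near 0+) supports the "does not exist" conclusion reached in the thread: the ratio behaves like (log x)/2 and drifts toward negative infinity rather than settling at -1/2.

```python
import math

f = lambda x: (x * math.log(x)) / math.log(1 + 2 * x)

for x in (1e-2, 1e-4, 1e-6, 1e-8):
    print(f"{x:8.0e}  {f(x):10.3f}")
# the values keep decreasing (about -2.3, -4.6, -6.9, -9.2): no finite limit
```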
{"url":"http://mathhelpforum.com/calculus/172254-l-hopital-s-rule.html","timestamp":"2014-04-19T17:11:29Z","content_type":null,"content_length":"69164","record_id":"<urn:uuid:781e0895-f004-4305-8c51-fabb2448fd7c>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00380-ip-10-147-4-33.ec2.internal.warc.gz"}
STEPHEN ANCO, Brock University Symmetry analysis and exact solutions of semilinear Schrodinger equations [PDF] A novel symmetry method is used to obtain exact solutions to Schrodinger equations with a power nonlinearity in multi-dimensions. The method uses a separation technique to solve an equivalent first-order group foliation system whose independent and dependent variables consist of the invariants and differential invariants of the point symmetry generators admitted by the Schrodinger equation. Many explicit new solutions are obtained which have interesting analytical behavior connected with blow-up and dispersion. These solutions include new similarity solutions and other new group-invariant solutions, as well as new solutions that are not invariant under any point symmetries of the Schrodinger equation. In contrast, standard symmetry reduction leads to nonlinear ODEs for which few if any explicit solutions can be derived by familiar integration methods. ALEXANDER BIHLO, Centre de recherches mathématiques, Université de Montréal Invariant discretization schemes [PDF] Geometric numerical integration is a recent field in the numerical analysis of differential equations. It aims at improving the quality of the numerical solution of a system of differential equations by preserving qualitative features of that system. Such qualitative feature can be conservation laws, a Hamiltonian or variational structure or a nontrivial point symmetry group. While quite some effort has been put in the construction of conservation law preserving and Hamiltonian discretization schemes, the problem of finding invariant numerical integrators is more recent and less investigated. The main obstacle one faces when constructing symmetry-preserving approximations for evolution equations is that these discretizations generally require the usage of moving meshes. Grids that undergo an evolution in the course of numerical integration pose several theoretical challenges, especially in the multi-dimensional case. In this talk we will present three possible strategies to overcome the problem with invariant moving meshes and thus address the practicability of symmetry-preserving discretization schemes. These ways are the discretization in computational coordinates, the use of invariant interpolation schemes and the formulation of invariant meshless schemes. The different strategies will be illustrated by presenting the results obtained from invariant numerical schemes constructed for the linear heat equation, a diffusion equation and the system of shallow-water equations. ALEXEI CHEVIAKOV, University of Saskatchewan On Symmetry Properties of a Class of Constitutive Models in Two-dimensional Nonlinear Elastodynamics [PDF] We consider the Lagrangian formulation of the nonlinear equations governing the dynamics of isotropic homogeneous hyperelastic materials. For two-dimensional planar motions of Ciarlet–Mooney–Rivlin solids, we compute equivalence transformations that lead to a reduction of the number parameters in the constitutive law. Further, we classify point symmetries in a general dynamical setting and in traveling wave coordinates. A special value of traveling wave speed is found for which the nonlinear Ciarlet–Mooney–Rivlin equations admit an additional infinite set of point symmetries. A family of essentially two-dimensional traveling wave solutions is derived for that case. 
ALFRED MICHEL GRUNDLAND, Centre de Recherches Mathematiques and Universite du Quebec a Trois-Rivieres Soliton surfaces and zero-curvature representation of differential equations [PDF] A new version of the Fokas-Gel'fand formula for immersion of 2D surfaces in Lie algebras associated with three forms of matrix Lax pairs for either PDEs or ODEs is proposed. The Gauss-Mainardi-Codazzi equations for the surfaces are infinitesimal deformations of the zero-curvature representation for the differential equations. Such infinitesimal deformations can be constructed from symmetries of the zero-curvature representation considered as PDE in the matrix variables or of the differential equation itself. The theory is applied to zero-curvature reprentations of the Painleve equations P1, P2 and P3. Certain geometrical aspects of surfaces associated with these Painleve equations are discussed. Based on joint work with S. Post (University of Hawaii, USA) VERONIQUE HUSSIN, Université de Montréal Grassmannian sigma models and constant curvature solutions [PDF] We discuss solutions of Grassmannian models $G(m,n)$ and give some general results. We thus concentrate on such solutions with constant curvature. For holomorphic solutions, we give some conjectures for the admissible constant curvatures which are verified for the cases, $G(2,4)$ and $G(2,5)$. The study is extended to the case of non holomorphic solutions with constant curvatures and we show that in the case of the Veronese sequence, such curvatures are always smaller than the ones of the holomorphic solutions. This work has been done in collaboration with L. Delisle (UdM) and W. Zakrzewski (Durham, UK). WILLARD MILLER JR., University of Minnesota Contractions of 2D 2nd order quantum superintegrable systems and the Askey scheme for hypergeometric orthogonal polynomials [PDF] A quantum superintegrable system is an integrable $n$-dimensional Hamiltonian system on a Riemannian manifold with potential: $H=\Delta_n+V$ that admits 2n-1 algebraically independent partial differential operators commuting with the Hamiltonian, the maximum number possible. A system is of order $L$ if the maximum order of the symmetry operators, other than $H$, is is $L$. For $n=2$, $L=2$ all systems are known. There are about 50 types but they divide into 12 equivalence classes with representatives on flat space and the 2-sphere. The symmetry operators of each system close to generate a quadratic algebra, and the irreducible representations of this algebra determine the eigenvalues of $H$ and their multiplicity. All the 2nd order superintegrable systems are limiting cases of a single system: the generic 3-parameter potential on the 2-sphere, $S9$ in our listing. Analogously all of the quadratic symmetry algebras of these systems are contractions of $S9$. The irreducible representations of $S9$ have a realization in terms of difference operators in 1 variable. It is exactly the structure algebra of the Wilson and Racah polynomials! By contracting these representations to obtain the representations of the quadratic symmetry algebras of the other less generic superintegrable systems we obtain the full Askey scheme of orthogonal hypergeometric polynomials. This relationship provides great insight into the structure of special function theory and directly ties the structure equations to physical phenomena. 
Joint work with Ernie Kalnins and Sarah Post ROMAN POPOVYCH, Brock University Potential symmetries in dimension three [PDF] Potential symmetries of partial differential equations with more than two independent variables are considered. Possible strategies for gauging potential are discussed. A special attention is paid to the case of three independent variables. As illustrating examples, we present gauges of potentials and nontrivial potential symmetries for the (1+2)-dimensional linear heat, Schrödinger and wave equations, the three-dimensional Laplace equation and generalizations of these equations. SARAH POST, U. Hawaii Contractions of superintegrable systems and limits of orthogonal polynomials [PDF] In two dimension, all second-order superintegrable systems are limits of a generic system on the sphere. These limits in the physical systems correspond to contraction of the symmetry algebras generated by the integrals of the motion as well their function space representations. The action of these limits on the representation of the models gives the well known Askey-tableau of hypergeometric polynomials. In this talk, we focus on the top of the tableau. That is, we will discuss in depth the contractions of the generic system on the sphere to the singular isotropic oscillator of Smorodinsky and Winternitz. These limits give the limits of Wilson polynomials to Hahn, dual Hahn and Jacobi polynomials. The physical limit gives a deeper understanding of the connection between the Hahn and dual Hahn polynomials. The general theory and outline of the tableau will be discussed in a later talk of W. Miller Jr. This is joint work with W. Miller Jr. and E. Kalnins RAPHAËL REBELO, Université de Montréal Symmetry preserving discretization of partial differential equations [PDF] A definition of discrete partial derivatives on non orthogonal and non uniform meshes will be given. This definition permits the application of moving frames to partial difference equations and will be used to generate invariant numerical schemes for a heat equation with source and for the spherical Burgers' equation. The numerical precision of those schemes will be displayed for two particular solutions. Superintegrable systems on non Euclidean spaces [PDF] A Maximally Superintegrable (M.S.) system is an integrable n-dimensional Hamiltonian system which has 2n-1 integrals of motion. The (M.S.) systems share nice properties such as periodic trajectories for classical systems and degenerate spectrum for quantum mechanical systems. Aim of the talk is providing a complete classification of classical and quantum M.S. systems characterized by a radial symmetry and defined on n-dimensional non Euclidean manifold. We will achieve this result considering the only systems which are eligible to be M.S. namely all the classical radial systems which admit stable closed orbits and whose classification is given by the non-Euclidean generalization of the well known Bertrand's theorem. As in the Euclidean case the generalized Bertrand theorem still gives us two families of exactly solvable M.S. but, in contrast with the flat case, they exhibit extra integral of motion which have the remarkable property of being of higher order in the momenta. 
ZORA THOMOVA, SUNY Institute of Technology Contact transformations for difference equations [PDF] Contact transformations for ordinary differential equations are transformations in which the new variables $(\tilde x, \tilde y)$ depend not only on the old variables $(x, y)$ but also on the first derivative of $y$. The Lie algebra of contact transformations can be integrated to a Lie group. The purpose of this talk is to extend the definition of contact transformations to ordinary difference equations. We will provide an example showing that these transformations do exist. This is a joint work with D. Levi and P. Winternitz. SASHA TURBINER, Nuclear Science Institute, UNAM $BC_2$ Lame polynomials [PDF] $BC_2$ elliptic Hamiltonian is two-dimensional Schroedinger operator with double-periodic potential of a special form which does not admit separation of variables. In space of orbits of double-affine $BC_2$ Weyl group the similarity-transformed Hamiltonian takes the algebraic form of the second order differential operator with polynomial coefficients. This operator has a finite-dimensional invariant subspace in polynomials which is a finite-dimensional representation space of the algebra gl(3). This space is invariant wrt $2D$ projective transformations. $BC_2$ Lame polynomials are the eigenfunctions of this operator, supposedly, their eigenvalues define edges of the Brillouin zones (bands). FRANCIS VALIQUETTE, Dalhousie University Group foliation of differential equations using moving frames [PDF] We incorporate the new theory of equivariant moving frames for Lie pseudo-groups into Vessiot’s method of group foliation of differential equations. The automorphic system is replaced by a set of reconstruction equations on the pseudo-group jets. The result is a completely algorithmic and symbolic procedure for finding invariant and non-invariant solutions of differential equations admitting a symmetry group. Joint work with Robert Thompson. PAVEL WINTERNITZ, Universite de Montreal Symmetry preserving discretization of ordinary differential equations [PDF] We show how one can approximate an Ordinary Differential Equation by a Difference System that has the same Lie point symmetry group as the original ODE. Such a discretization has many advantages over standard discretizations. In particular it provides numerical solutions that are qualitatively better, specially in the neighborhood of singularities. THOMAS WOLF, Brock
{"url":"http://cms.math.ca/Events/winter12/res/sdd","timestamp":"2014-04-16T13:17:38Z","content_type":null,"content_length":"25185","record_id":"<urn:uuid:08ba6fb7-51f5-4336-9c82-1cc15e2aa03e>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00200-ip-10-147-4-33.ec2.internal.warc.gz"}
capping at z-near plane

I wonder if anyone has tried capping at the z-near plane? There's an old technique in the "Red Book" but it fails for non-convex objects. This would be useful with stencil shadows (capping the volume). I have an idea how to do that, but want to know others' experience.
{"url":"http://www.opengl.org/discussion_boards/archive/index.php/t-151752.html","timestamp":"2014-04-17T16:10:35Z","content_type":null,"content_length":"9212","record_id":"<urn:uuid:d63c64a2-18d8-48a6-9fc6-3fbc8719626c>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00492-ip-10-147-4-33.ec2.internal.warc.gz"}
Fractual | Informing South African individuals and communities about exploitative gas drilling

Last month saw the publication of Hansen et al's paper, "Global Surface Temperature Change", in Reviews of Geophysics. Yesterday, we received James Hansen's comments on his team's paper. It's available for download here. Those comments are intended to explain to lay people why we are experiencing extremes of weather such as the Moscow heatwave and the Texas/Oklahoma drought and what that means for the future. Whether he succeeds in his aim to bring the science home to the ordinary person is questionable. So here we try to explain the crux of the problem he sees.

This graph is from Fig. 4 in his notes and describes the temperature anomalies for the whole world for the months of June, July and August (JJA). By anomaly he means the difference between the average temperature for those months compared with those of the same months in the 30 years 1981 to 2010. Those years were chosen because they coincide with satellite (more accurate) measuring of temperature. Temperatures higher than normal have positive values, those below normal have negative values.

They first plot a statistical device known as a 'normal distribution', often called a 'bell curve', for the JJA temperatures in 1981-2010. This is the black plot that you see on the graph and it represents a useful and common statistical method of measuring probabilities. For example, if you threw a dice 100 times and added up the values you got, then repeated the test (say) ten thousand times, you would get a range of values between 100 (100x1) and 600 (100x6). As you can imagine, the chance of throwing a one 100 times in succession is pretty slim. The same is true for 100 sixes. If you then plotted the number of times a particular total occurred on the vertical axis and the value itself (100 to 600) on the horizontal axis, you would end up with a 'normal distribution' plotted on your graph. The peak would occur at the most common value, which will be the average - 350.

By arithmetic, they derived from their data a value for what is called the standard deviation (σ), which is a measure of the variability in the data. The measure is used in the graph along the horizontal axis: 2 means 2xσ above normal; -3 means 3xσ below normal.

Now they plot the temperature anomalies for each decade (the coloured plots). The earlier decades show taller plots. That means that they contained less variability than 1981-2010. The later ones show increasing variability decade by decade. So global temperatures are showing wider fluctuations in these decades. Also, you can see that the peaks are moving to the right. That shows that global warming is occurring. Finally, you can see that a 3σ event is most improbable (nearly zero chance) in the early years, but is increasingly likely as time moves on. The Moscow heatwave was assessed as a 3σ event (once in 1000 years).

Science, derived purely from measuring temperatures, is showing us that the likelihood of extreme temperature events is increasing and will continue to increase as the century unfolds. If global warming approaches 3°C by the end of the century, it is estimated that 21-52% of species will be committed to extinction. We are on track for 6°C. Scientists are increasingly worried that their message is being ignored. Here at Fractual we believe that we must forego our dependence on fossil fuels as quickly as we can.
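To put a number on "3σ": under a normal distribution the chance of a season coming out more than three standard deviations above its mean is only about 0.13 percent, roughly the "once in 1000 years" figure quoted for Moscow. The small sketch below is my own illustration, not from Hansen's paper; it also shows how even a modest shift of the mean makes such extremes far more common.

```python
import math

def prob_above(sigmas):
    """P(Z > sigmas) for a standard normal variable."""
    return 0.5 * math.erfc(sigmas / math.sqrt(2))

print(prob_above(3))   # ~0.00135: about a 1-in-740 season in a stable climate
# If the whole distribution shifts up by 1 sigma, the old +3 sigma threshold
# sits only 2 sigma above the new mean:
print(prob_above(2))   # ~0.0228: roughly a 1-in-44 season
```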
Natural gas is another fossil fuel and fracking for it probably makes it as dirty in GHG terms as coal. Ian Perrin. 6.1.2012
{"url":"http://www.fractual.co.za/variability.php","timestamp":"2014-04-20T03:09:02Z","content_type":null,"content_length":"16804","record_id":"<urn:uuid:75607672-a3bd-416a-96c5-1e06c4502f98>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00568-ip-10-147-4-33.ec2.internal.warc.gz"}
The Lemniscate The symbol of an "eight on its side" is sometimes known as the lemniscate and is a glyph for infinity. The English mathematician John Wallis (1616-1703) introduced the symbol to represent mathematical infinity in his Arithmetica Infinitorum of 1655. The term lemniscate refers to the shape itself, and the Swiss mathematician Jacob Bernoulli (1654-1705) first called the shape a lemniscus (Latin for ribbon) in an article in Acta Eruditorum in 1694. In spiritual terms, the lemniscate represents eternity, the numinous and the higher spiritual powers. The Magus, the first card in the Major Arcana of the Tarot, is often depicted with the lemniscate above his head or incorporated into a wide-brimmed hat, signifying the divine forces he is attempting to control. The use of a figure eight to represent infinity is an interesting choice, as eight is linked to pre-creational infinity through the Ogdoad and to the cyclical sense of infinity through the eight pagan festivals of the year and the octagram.
{"url":"http://www.byzant.com/Mystical/Symbols/Lemniscate.aspx","timestamp":"2014-04-19T17:01:14Z","content_type":null,"content_length":"11304","record_id":"<urn:uuid:71e930bf-07be-462b-ab95-f47599f2db10>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00268-ip-10-147-4-33.ec2.internal.warc.gz"}
Generate a Butterworth filter. Default is a discrete-space (Z) filter.

[b,a] = butter(n, Wc)                 low pass filter with cutoff pi*Wc radians
[b,a] = butter(n, Wc, 'high')         high pass filter with cutoff pi*Wc radians
[b,a] = butter(n, [Wl, Wh])           band pass filter with edges pi*Wl and pi*Wh radians
[b,a] = butter(n, [Wl, Wh], 'stop')   band reject filter with edges pi*Wl and pi*Wh radians
[z,p,g] = butter(...)                 return filter as zero-pole-gain rather than coefficients of the numerator and denominator polynomials
[...] = butter(..., 's')              return a Laplace space filter, W can be larger than 1
[a,b,c,d] = butter(...)               return state-space matrices

Reference: Proakis & Manolakis (1992). Digital Signal Processing. New York: Macmillan Publishing Company.
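For readers working in Python rather than Octave, SciPy exposes an essentially equivalent design routine. The sketch below uses the SciPy analogue, not the Octave function itself; its cutoffs are likewise normalized so that 1 corresponds to the Nyquist frequency.

```python
import numpy as np
from scipy import signal

# 4th-order low-pass with cutoff at 0.3 of Nyquist, cf. [b,a] = butter(n, Wc)
b, a = signal.butter(4, 0.3)

# band-reject between 0.2 and 0.4 of Nyquist, cf. butter(n, [Wl, Wh], 'stop')
b_bs, a_bs = signal.butter(4, [0.2, 0.4], btype='bandstop')

# apply the low-pass to a noisy sine to see the smoothing effect
t = np.arange(500)
x = np.sin(2 * np.pi * 0.02 * t) + 0.5 * np.random.randn(t.size)
y = signal.lfilter(b, a, x)
```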
{"url":"http://octave.sourceforge.net/signal/function/butter.html","timestamp":"2014-04-16T21:52:53Z","content_type":null,"content_length":"6692","record_id":"<urn:uuid:70e5233a-9632-4db0-af1d-8d3efbacdd10>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00648-ip-10-147-4-33.ec2.internal.warc.gz"}
Frequency analysis method

A frequency analysis method comprises using a window function to evaluate a temporal input signal present in the form of discrete sampled values. The windowed input signal is subsequently subjected to Fourier transformation for the purpose of generating a set of coefficients. In order to develop such a method so that the characteristics of the human ear are simulated not only with respect to the spectral projection in the frequency range, but also with respect to the resolution in the temporal range, a set of different window functions is used to evaluate a block of the input signal in order to generate a set of blocks, weighted with the respective window functions, of sampled values whose Fourier transforms have different bandwidths, before each of the simultaneously generated blocks of sampled values is subjected to a dedicated Fourier transformation in such a way that for each window function at least respectively one coefficient is calculated which is assigned the bandwidth of the Fourier transforms of this window function, and that the coefficients are chosen such that the frequency bands assigned to them essentially adjoin one another.

Inventors: Kapust; Rolf (Stegaurach, DE), Seltzer; Dieter (Erlangen, DE)
Assignee: Fraunhofer-Gesellschaft zur Forderung der Angewandten Forschung e.V. (DE)
Appl. No.: 08/241,851
Filed: May 12, 1994
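A rough sketch of the idea described above, as an illustration only and not the patented implementation: one block of samples is weighted by several windows of different lengths, so the resulting Fourier coefficients have different time/frequency resolutions; a further selection step (not shown) would then keep, for each window, the coefficients of its assigned band so that the bands essentially adjoin one another. The function name and window choice here are my own assumptions.

```python
import numpy as np

def multi_window_coefficients(block, window_lengths):
    """Weight one block of samples with several centred Hann windows of
    different lengths and Fourier-transform each weighted block."""
    n = len(block)
    spectra = []
    for length in window_lengths:
        start = (n - length) // 2
        windowed = block[start:start + length] * np.hanning(length)
        # shorter window -> coarser frequency resolution, finer time resolution
        spectra.append(np.fft.rfft(windowed))
    return spectra

block = np.random.randn(1024)                       # stand-in for one input block
coeffs = multi_window_coefficients(block, [1024, 512, 128])
```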
{"url":"http://patents.com/us-5583784.html","timestamp":"2014-04-21T02:06:19Z","content_type":null,"content_length":"61291","record_id":"<urn:uuid:d9954e47-5388-4a08-bca7-03594bc36453>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00214-ip-10-147-4-33.ec2.internal.warc.gz"}
problem with bearings

Question: From home George jogs 3.5 km NE and then 8.2 km SE. Find the bearing of his final position from the starting position. The problem with all bearing questions is that I don't know how to find the conventional bearing; sometimes you have to subtract 360 degrees and then add on 270. It's confusing, and a diagram would really help. Thanks a lot.

Reply: Check your dictionary. Almost all will have a picture showing the compass points. From your question, the NE bearing is 90 degrees to the SE bearing, thus you have a right angle. The distance back to the initial position is the square root of (3.5^2 + 8.2^2). Even without a diagram you can visualize that the return bearing will be in the NorthWest quadrant. It actually will be the NW bearing + the Arctan(3.5/8.2). After you look up and determine where the compass points are, you can simply graph (or draw) the lengths given; of course you should draw it to a reduced scale so that it will fit on a sheet of paper, as opposed to drawing the line 3.5 km long. (The real problem with drawing the line to scale is the number of pencils required, but you are free to pursue that method.) 3*3 = 9 and 4*4 = 16, and 3.5^2 is approximately halfway between 9 and 16, or about 12. 8*8 = 64 and 9*9 = 81, difference 81 - 64 = 17; 0.2, or 1/5, of the difference of 17 is about 3, so 8.2^2 is about 67 = 64 + 3. 67 + 12 = 79, which is about equal to 81, and the square root of 81 is 9, so the returning distance is going to be about 9 km. As for the bearing, 3.5 divided by 8.2 is roughly 1/2. The Arctan of 1 is 45 degrees, so half of that is 22 degrees. That means the return bearing will be NW, or 45 degrees + 22 degrees, or ROUGHLY 67 degrees West of North. You can use a calculator to determine more precise values for the answer.

Reply: No problem! All you gotta do is put this problem on the grid in standard position and convert to rectangular coordinates. NE is the same thing as saying 45 degrees, right? So you got an angle and the hypotenuse of a right triangle. So if this guy left the origin and went along the hypotenuse for 3.5, how far along the x-axis is he? Then you could figure the height above the x-axis he is by figuring out what the opposite side is. Then you're going to have (x, y). Do that again with the guy making a 90 degree turn SE and then find the distance from the origin; use the distance formula. It should be real easy because one of your points will be (0,0). Then all ya gotta do is find the angle off of the x-axis and you got your "bearing"!

Reply (Soroban): Hello, jonomantran! Why are you confused? You know which way North is, and which way East is. And I assume you know which way NorthEast is . . .

[Diagram: a right triangle with the start at A, point B reached after the 3.5 km NE leg, and point C after the 8.2 km SE leg; the 45° angles at A and B mark the NE and SE directions, and the angle at B is 90°.]

Note that $\angle B = 90^o$. George starts at $A$, facing North. He turns 45° clockwise and jogs 3.5 km to point $B$. Then he turns 90° clockwise and jogs 8.2 km to point $C$.
In right triangle $ABC\!:\;\;\tan A \:=\:\frac{8.2}{3.5} \:=\:2.342857143$, hence $\angle A \:\approx\:67^o$.
Therefore, the bearing of $AC$ is: $\angle NAC \:=\:45^o + 67^o \;=\;112^o$.
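A quick coordinate check of Soroban's answer (a sketch; bearings are measured clockwise from North):

```python
import math

# the two legs of the jog as (distance, compass bearing)
legs = [(3.5, 45.0),    # 3.5 km NE
        (8.2, 135.0)]   # 8.2 km SE

east  = sum(d * math.sin(math.radians(b)) for d, b in legs)
north = sum(d * math.cos(math.radians(b)) for d, b in legs)

distance = math.hypot(east, north)                      # about 8.9 km from home
bearing  = math.degrees(math.atan2(east, north)) % 360  # about 112 degrees
print(round(distance, 2), round(bearing, 1))
```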
{"url":"http://mathhelpforum.com/trigonometry/85651-problem-bearings.html","timestamp":"2014-04-18T09:03:49Z","content_type":null,"content_length":"44501","record_id":"<urn:uuid:e679c0af-961f-4f7a-a262-7ea56dcca7aa>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00155-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Help

Question: I have this estimator $T = (X_1 + X_2 + X_3 + \cdots + X_n)/n$. Now the variance of $T$ should be $\mathrm{Var}(X)/n^2$, right? Because my teacher has written $\mathrm{Var}(X)/n$. Where am I wrong?

Follow-up: No, I think I got it. It is because $\mathrm{Var}(X_1 + \cdots + X_n)/n^2 = (\mathrm{Var}\,X_1 + \cdots + \mathrm{Var}\,X_n)/n^2 = n\,\mathrm{Var}(X)/n^2 = \mathrm{Var}(X)/n$; it's not that $X_1 = X_2$, but their variances are equal. Okay, do not consider this post.
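The identity is easy to confirm numerically; a minimal simulation (assuming i.i.d. draws) shows the sample mean's variance matching Var(X)/n:

```python
import numpy as np

rng = np.random.default_rng(0)
n, trials = 10, 200_000
x = rng.normal(loc=0.0, scale=2.0, size=(trials, n))   # Var(X) = 4

T = x.mean(axis=1)     # the estimator T, one value per trial
print(T.var())         # ~0.4
print(4 / n)           # Var(X)/n = 0.4
```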
{"url":"http://mathhelpforum.com/advanced-statistics/25392-variance.html","timestamp":"2014-04-17T03:57:53Z","content_type":null,"content_length":"30852","record_id":"<urn:uuid:c68f42d0-e09c-48d8-88d3-65a980d92c2f>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00046-ip-10-147-4-33.ec2.internal.warc.gz"}
Non-Linearity in Wide Dynamic Range CMOS Image Sensors Utilizing a Partial Charge Transfer Technique

Suhaidi Shafie^1,*, Shoji Kawahito^2, Izhal Abdul Halin^1 and Wan Zuha Wan Hasan^1

^1 Department of Electrical and Electronic Engineering, Faculty of Engineering, Universiti Putra Malaysia, 43400 UPM Serdang, Selangor, Malaysia; E-Mails: izhal@eng.upm.edu.my (I.A.H.); wanz@eng.upm.edu.my (W.Z.W.H.)
^2 Research Institute of Electronics, Shizuoka University, 3-5-1 Johoku, Nakaku, Hamamatsu 432-8011, Japan; E-Mail: kawahito@idl.rie.shizuoka.ac.jp
* Author to whom correspondence should be addressed; E-Mail: suhaidi@eng.upm.edu.my; Tel.: +603-8946-6307; Fax: +603-8946-6327.

Sensors 2009, 9(12), 9452-9467; doi:10.3390/s91209452. Received: 22 September 2009; in revised form: 27 October 2009; accepted: 4 November 2009; published: 26 November 2009. © 2009 by the authors; licensee Molecular Diversity Preservation International (MDPI), Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution license (http://creativecommons.org/licenses/by/3.0/).

Abstract: The partial charge transfer technique can expand the dynamic range of a CMOS image sensor by synthesizing two types of signal, namely the long and short accumulation time signals. However, the short accumulation time signal obtained from the partial transfer operation suffers from non-linearity with respect to the incident light. In this paper, an analysis of the non-linearity in the partial charge transfer technique has been carried out, and the relationship between dynamic range and the non-linearity is studied. The results show that the non-linearity is caused by two factors, namely the current diffusion, which has an exponential relation with the potential barrier, and the initial condition of the photodiodes, for which the error in the high illumination region increases as the ratio of the long to the short accumulation time rises. Moreover, increasing the saturation level of the photodiodes also increases the error in the high illumination region.

Keywords: electronic imaging; CMOS image sensor; wide dynamic range; partial charge transfer; non-linearity

Introduction

Due to their ability to automatically produce clear images of an object plane that has extremely varying illumination levels, wide dynamic range image sensors are required for many applications such as cameras for security systems, automobiles and industry. There have been various approaches to enhance the dynamic range of CMOS image sensors [1-6]. However, non-linear response type wide dynamic range CMOS image sensors are not preferable for color images [1,2], while linear response types, such as the CMOS image sensor with an in-pixel lateral overflow integration capacitor, are not appropriate for small pixels due to their complex pixel structure [3]. The multiple exposures technique is one of the solutions for dynamic range enhancement. However, the conventional multiple exposures technique has a problem with motion artifacts due to signal loss during integration [4-6]. One of the most recent techniques to enhance the dynamic range of CMOS image sensors is the partial charge transfer technique [7,8].
It provides a signal at short accumulation in addition to a signal at long accumulation by continuing the charge accumulation. In this technique, there is no degradation of the sensitivity in the wide dynamic range operation [7]. The difference of the two charge accumulation times in one frame period expands the dynamic range of the sensor. This technique uses a normal four-transistor active pixel sensor (4T-APS) with an adjustable midpoint drive for the transfer gate. By using two types of accumulation time and two different drives to control the transfer gate during accumulation and readout, two sets of output signals can be obtained from a single photodiode. The wide dynamic range signal can be synthesized from the two types of signals mentioned. Besides enhancing the dynamic range of the sensor, the partial charge transfer technique also incurs less sensitivity loss compared to the conventional dynamic range expansion techniques, because it makes efficient use of the full exposure by continuation of charge integration and maximizes the fill factor. However, the short accumulation signal obtained from the partial transfer operation is non-linear with respect to the incident light [9]. In this paper, an analysis of the non-linearity due to the partial charge transfer in the synthesized signals has been carried out, and the relationship between the dynamic range and the non-linearity is discussed by considering the pixel structure, the dynamic range expansion by partial charge transfer, the principle of charge transfer, the non-linearity due to current diffusion and the non-linearity due to the initial condition of the photodiode.

Pixel Structure

The simplified layout of the pixel and its cross-section at line aa' are shown in Figures 1a,b, respectively. The pixel is a normal 4T-APS. However, the supply voltage for the transfer gate driver is designed to be adjustable for partial and whole charge transfer. To achieve high performance, the photodiode in the pixel has to be optimized. The shape of the photodiode layout, the structure of the photodiode, and the layout have significant influences on the performance of the whole imager [10,11]. In the pixel, the photodiode (PD) is designed with an octagonal shape with a small width, D, that is placed on the PD side for the purpose of improving the imager's performance. The use of a buried PD is aimed at reducing the dark current of the pixel. However, the n layer near the transfer gate is not totally covered by the p^+ layer and still contributes to the dark current. Therefore, by using a small D, the dark current can be reduced because the area of the uncovered n layer is decreased. The use of an octagonal shape for the PD can increase the speed of charge flow during the charge transfer operation, preventing image lag [12]. In a rectangular PD, the accumulated charge remains at the corners of the PD during the charge transfer operation, resulting in a slower charge transfer. Moreover, the octagonal photodiode has a 12% smaller interconnection surface and a better spectral response compared to the rectangular photodiode [13]. Besides, using the normal 4T-APS can be an advantage for this technique because a high fill factor can easily be obtained. Table 1 shows the characteristics of the simulated pixel.

Dynamic Range Expansion by Partial Charge Transfer

The principle of the dynamic range expansion by the partial charge transfer technique is discussed in this section. In this sensor, one frame of accumulation time is divided into two sub-frames. The following steps explain the operation of the wide dynamic range image sensor with partial transfer:

1. When a strong light is irradiated on the pixel, the accumulated charge in the PD reaches the saturation level ($Q_{max}$) in a short time. Since the signal is saturated, it cannot be read out at this time; therefore, the accumulated charge is partially drained and charge accumulation is repeated.
2. The newly accumulated charge is partially transferred and read out. As a result, a short accumulation time signal is obtained.
3. In the final sub-frame, the accumulated charge is partially drained and charge accumulation is repeated. Finally, a whole charge transfer operation is performed and the signal transferred to the floating diffusion (FD) is read out.

From this operation, two sets of output signals are obtained from a single photodiode: the long and the short accumulation time signals. A wide dynamic range image can be synthesized from the two sets of acquired signals, because the difference in charge accumulation time can sufficiently expand the dynamic range of the sensor. The signal from the wholly transferred charge in the final sub-frame determines which signal has to be used, whether the long accumulation or the partially transferred short accumulation time signal. A method to judge which signal is to be used is proposed: if the quantity of accumulated charge reaches a threshold value, $Q_T$, at the end of the final sub-frame when it is read out, the short accumulation signal is selected; if it is less than $Q_T$, the long accumulation time signal is used. The most important task in operating this sensor is identifying the value of $Q_T$. In the case of a weak light irradiated on the pixel, the same operations (1)-(3) are performed. However, the data read at the end of the first sub-frame is 0 because the accumulated charge in the photodiode does not exceed the threshold level $Q_T$. In the final sub-frame, the accumulated charge also does not exceed $Q_T$. Therefore, only the long accumulation signal is used in the synthesized wide dynamic range image as long as the output signal in the final sub-frame does not exceed the threshold value $Q_T$.

Principle of Charge Transfer

The charge transfer mechanism plays an important role in this sensor. Hence, the two types of charge transfer, namely the whole charge transfer and the partial charge transfer mechanisms, are described in the following subsections.
The next steps explain the operation of the wide dynamic range image sensor with partial transfer: When a strong light is irradiated on the pixel, the accumulated charge in the PD reaches the saturation level (Q[max]) in short time. Since the signal is saturated, it cannot be read out at this time, therefore, the accumulated charge is partially drained and charge accumulation is repeated. The newly accumulated charge is partially transferred and read out. As a result, a short accumulation time signal is obtained. In the final sub-frame, the accumulated charge is partially drained and charge accumulation is repeated. Finally, a whole charge transfer operation is done and the transferred signal to the floating diffusion (FD) is read out. From the operation, two set of output signals is obtained from a single photodiode, the long and short accumulation time signals. A Wide dynamic range image can be synthesized from the two setS of acquired signals because the difference of charge accumulation time can sufficiently expand the dynamic range of the sensor. The signal from wholly transferred charge in the final sub-frame determines which signals have to be used, whether the long accumulation or the partially transferred short accumulation time signals. A method to judge which signals to be used is proposed. If the quantity of accumulated charge reaches a threshold value, Q[T] at the end of final sub-frame and it is read out, the short accumulation signal is selected, if it is less than Q[T], the long accumulation time signal is used. The most important task in operating this sensor is identifying the value of Q[T]. In the case of a weak light irradiated on the pixel, the same operation (1)∼(3) is performed. However, the read data at the end of first sub-frame is 0 because the accumulated charge in photodiode does not exceed threshold level, Q[T]. In the final sub-frame, the accumulated charge also does not exceed Q[T]. Therefore, only the long accumulation signal is used in synthesized wide dynamic range image prior to the output signal in final sub-frame does not exceed the s threshold value, Q[T]. The charge transfer mechanism plays an important role in this sensor. Hence, two type of charge transfer namely the whole charge transfer and partial charge transfer mechanism are described in this The whole charge transfer is the same as a normal charge transfer in conventional 4T APS CMOS image sensors [14]. As shown in Figure 2(a), the signal charge is accumulated in the photodiode starting from the initial state. The accumulation period is one frame for a normal 4T-APS which is equal to two sub-frames in the image sensor proposed in this work. The whole charge transfer operation is done after the accumulation period end and the transferred signal charge is read out subsequently as shown in Figure 2(b). Then the next frame with a new accumulation starts all over again as in Figure 2(c). To obtain a perfect charge transfer, the transfer gate voltage must be able to increase the potential barrier under itself to be higher than the potential of photodiodes within an appropriate time until accumulated charge is perfectly transferred because the potential of photodiode increases as the accumulated charge is transferred to the FD. The saturated accumulated signals charge can cause smear and blooming [15] in the regenerated image for the conventional four transistors APS. 
To prevent this problem, some sensors use a shorter accumulation time, but this decreases the accumulated signal in the low illumination region and results in a lower signal-to-noise ratio (SNR). Therefore, the proposed technique may solve these problems.

Partial Charge Transfer

The partial charge transfer mechanism is described in Figure 3. Whereas in the whole charge transfer all accumulated charges are transferred at once, in the partial charge transfer, as its name suggests, only a part of the accumulated charge is drained, transferred and read out. In Figure 3(a), the signal charge is accumulated in the photodiode starting from the initial state, followed by a partial charge transfer for draining purposes, Figure 3(b). The charge accumulation process starts once again, Figure 3(c), followed by a partial charge transfer for read out, Figure 3(d). Then the charge accumulation process starts once again, Figure 3(e), and finally the whole charge transfer operation takes place in the final sub-frame, Figure 3(f). To assure that the partial charge transfer works properly, an appropriate transfer gate voltage must be applied to raise the potential barrier under the transfer gate higher than the potential of the photodiode until part of the accumulated charge is transferred. After a short time, the charge transfer stops because of the increase in photodiode potential as the accumulated charge is transferred to the FD. As a result, only a part of the accumulated charge is transferred. This mechanism is the same for partial charge transfer for draining and for signal read out purposes. The partial charge transfer operation for read out, as illustrated in Figure 3, is performed once, followed by the whole charge transfer operation in the final sub-frame. A simulation to check the characteristics of the partially transferred electrons, $N_T$, with respect to the accumulated electrons in the PD and $V_{TX}$ has been done using SPECTRA, a simulator built especially for simulating CMOS image sensor pixels and CCDs. The simulation starts with drawing the layout in Cadence, and the file is transferred to the SPECTRA input file. Then the parameters are specified, followed by running SPECTRA in transient mode. In the simulation, the transfer time is set to 0.5 μs. The simulation results are shown in Figure 4. From the figure, it can be said that if the number of accumulated electrons in the PD is greater than the threshold value $Q_T$, the transferred electrons have a linear response. However, in the region near $Q_T$, the response is non-linear due to carrier diffusion.
The D[n] is given by Einstein's relation: D n = k T q μ nδn[p]/δ[x] in Equation (1) is calculated as: − δ n p δ x = n p ( 0 ) − n p ( L ) Lwhere, L is the diffusion length and: n p ( 0 ) = n p 0 exp ( Φ S V T )and: n p ( L ) = n p 0 exp ( Φ S − V D V T ) ≈ 0 Calculating Equations (1)–(6), the diffusion current can be written as: J = μ n W L kTd n p 0 exp ( Φ S V T ) Note that Φ[S] is the surface potential of silicon at transfer gate and can be calculated as: Φ S = Φ b i − Φ B By substituting the Equation (8) into (7), the diffusion current can be rewritten as: J = μ n W L kTd n p 0 exp ( Φ b i − Φ B V T )and equation (9) can thus be simplified to: J = J 0 exp ( − Φ B V T )where: J 0 = μ n W L kTdn p 0 exp ( Φ b i V T ) In the equations: V T = k T q Next, the equivalent circuit shown in Figure 6 is considered. From the figure, the current flow is given by: J = C S δ Φ B δ t Differentiation of Equation (10) is: δ J δ t = J 0 ( − 1 V T ) exp ( − Φ B V T ) ⋅ δ Φ B δ t = − 1 V T δ Φ B δ t ⋅ J By substituting the Equation (13) into (14), Equation (10) is rewritten as: δ J δ t = − 1 V T C S J 2 Calculate the Equation (15): J = J ( 0 ) 1 + t / τwhere: τ = C S V T J 0and J(0) is current at t = 0. From Equation (9), by assuming that J(0) = J(Φ[bi]-Φ[B] = 0), J(0) is given by: J ( 0 ) = μ n W L kTdn p 0 Using the parameters in Table 2: J ( 0 ) = 1.31 × 10 − 20 [ A ] Within the readout time, the transferred charge is given by: Q Trans = ∫ o t R J δ t = J ( 0 ) τ ln ( 1 + t R / τ ) If 1 ≫ t[R]/τ: Q Trans = J ( 0 ) τ t R / τ = J ( 0 ) t R If t[R] = 0.5 [μs], the number of transferred electrons are: N Trans = J ( 0 ) t R q If one electron is transferred, then, from Equation (18): J 1 = 3.2 × 10 − 13 [ A ] At this time, J[0] is renamed as J[1]: From Equations (9) and (18): J 1 J ( 0 ) = exp ( Φ b i − Φ B V T ) Substituting Equations (19) and (23) in Equation (24): Φ b i − Φ B = 0.442 [ V ]and since: Φ b i = k T q ln ( N D N A n i 2 )by substituting the values in Table 2: Φ b i = 0.897 [ V ] Therefore, from Equations (25) and (27): Φ B = 0.455 [ V ] From the above calculations, it is clear that the charge is transferred continuously until the Φ[B] reach 0.455 V, then the charge transfer process stops. It also significant that the diffusion current has an exponential relationship with Φ[B] which suggests a non-linear relationship between transferred charge and potential barrier under the transfer gate. The initial condition of the photodiode can influence the partially transferred charge for the short accumulation time signal. The Initial condition is indicated by the number of initially accumulated electrons in PD, N[I1]. Figure 7 illustrates the partial charge transfer with different initial condition of photodiode. From the figure, when a photodiode with initial condition of 17,000 electrons is partially reset, and the number of drained electron, N[R1] is 2,200, some electrons above threshold value Q[T], N[RES] still remains in photodiode. Then, the re-accumulation operation takes place again and if the number of re-accumulated electron N[a] is 2,200, the read out short accumulation time signal, N[R2] in Figure 7 (c) should be the same as N[a]. Therefore, the relation between N[R1], N[a] and N[R2] can be conclude as: if N R 1 = N a → N R 2 = N aand: if N R 1 ≠ N a → N R 2 ≠ N a The relationship between N[R1], N[a] and N[R2] of Equation (30) can contribute to the readout error in the short accumulation signal of the sensor. 
Non-Linearity Due to Initial Condition of a Photodiode and Its Influences on the Dynamic Range Expansion

The initial condition of the photodiode can influence the partially transferred charge for the short accumulation time signal. The initial condition is indicated by the number of initially accumulated electrons in the PD, $N_{I1}$. Figure 7 illustrates the partial charge transfer with different initial conditions of the photodiode. From the figure, when a photodiode with an initial condition of 17,000 electrons is partially reset, and the number of drained electrons, $N_{R1}$, is 2,200, some electrons above the threshold value $Q_T$, $N_{RES}$, still remain in the photodiode. Then, the re-accumulation operation takes place again, and if the number of re-accumulated electrons $N_a$ is 2,200, the read out short accumulation time signal, $N_{R2}$, in Figure 7(c) should be the same as $N_a$. Therefore, the relation between $N_{R1}$, $N_a$ and $N_{R2}$ can be summarized as:

$$\text{if } N_{R1} = N_a \ \rightarrow\ N_{R2} = N_a \tag{29}$$

and:

$$\text{if } N_{R1} \neq N_a \ \rightarrow\ N_{R2} \neq N_a \tag{30}$$

The relationship between $N_{R1}$, $N_a$ and $N_{R2}$ in Equations (29)-(30) can contribute to the readout error in the short accumulation signal of the sensor. An analysis of the non-linearity due to the partial charge transfer has therefore been carried out. A simulation was performed to check the relations of the re-accumulated electrons and the partially transferred electrons with respect to the initial conditions of the photodiode. In the simulation, the transfer gate drive voltage and the charge transfer time are set to 0.5 V and 1.0 μs, respectively. The simulation results are shown in Figure 8, which shows that the relationship between $N_{R2}$ and $N_a$ deviates from the ideal curve. As the number of initially accumulated electrons in the PD, $N_{I1}$, is increased, the $N_{R2}$-$N_a$ curve moves upward, which means the error for a low number of accumulated electrons is increased. The simulated relationship between $N_{R2}$ and $N_a$, which deviates from the ideal curve, can contribute to the non-linearity in the synthesized wide dynamic range signal. Figure 9 shows some conditions of incident light, $I_0$, $I_T$, $I_{LM}$, $I_M$ and $2I_M$, during the accumulation of one frame. From the figure, the following equations can be derived:

$$qN_T = I_T \cdot T_L \tag{31}$$
$$qN_{LM} = I_{LM} \cdot T_L \tag{32}$$
$$q(N_{LM} - N_T) = I_{LM} \cdot T_S \tag{33}$$
$$qN_{LM} = I_{LM} \cdot T_L \tag{34}$$

where $T_L$ and $T_S$ are the long and short accumulation times, respectively. Then, the short accumulated signal, $qN_a$, and the saturation level of the accumulated signal, $N_{max}$, for incident light $I_M$ can be written as:

$$qN_a = I_M \cdot T_S \tag{35}$$
$$qN_{max} = I_M(T_L - T_S) \tag{36}$$

From Equations (35) and (36), $N_a$ can be calculated as:

$$N_a = \frac{T_S}{T_L - T_S}\cdot N_{max} \tag{37}$$

For example, if $N_{max}$ is set to 17,000 electrons and the ratio of $T_L$ to $T_S$ is set to 21:1, $N_a$ is equal to 850 electrons, and referring to the simulation results in Figure 8, the read out electrons, $N_{R2}$, at $N_a$ equal to 850 electrons are simulated to be 1,400 electrons, different from the ideal value of 850 electrons by almost 65%. A calculation has been done from the simulation data of Figure 8 to study the effects of changing the ratio of $T_L$ to $T_S$, and the saturation level of the photodiode, $N_{max}$, on the linearity in the short accumulation signal region of the wide dynamic range signals. Since the wide dynamic range signal is synthesized from the two sets of long and short accumulation time signals using the equations:

$$N_{out} = X_L \quad (\text{if } X_L < N_T) \tag{38}$$
$$N_{out} = X_S \times \frac{T_L}{T_S} \quad (\text{if } X_L \geq N_T) \tag{39}$$

the non-linearity only affects the short accumulation signal obtained from the partially transferred read out. Figure 10 shows the photo-electric conversion characteristics of the synthesized wide dynamic range signals with the ratio of $T_L$ to $T_S$ set to 17:1, 21:1 and 31:1. In the calculation, $N_{max}$ is set to 17,000 electrons. As the ratio of $T_L$ to $T_S$ becomes greater, the non-linearity in the high illumination region becomes worse. The error expressed in percentages is shown in Figure 11. The error increases as the ratio of $T_L$ to $T_S$ is increased, because $N_a$ decreases due to the shorter accumulation period and, from the simulated results in Figure 8, the error at lower values of $N_a$ is higher than the error at high values of $N_a$. Figure 12 shows the photo-electric conversion characteristics of the synthesized wide dynamic range signals with $N_{max}$ set to 15,500 electrons, 16,250 electrons and 17,000 electrons. In the calculation, the ratio of $T_L$ to $T_S$ is set to 31:1. When $N_{max}$ is large, the non-linearity in the high illumination region becomes worse. The error expressed in percentages is shown in Figure 13.
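A minimal sketch of the selection and synthesis rule in Equations (38)-(39), together with the $N_a$ value quoted in the example above; the function and variable names are my own and this is an illustration, not the authors' implementation:

```python
def synthesize_wdr(X_L, X_S, N_T, T_L, T_S):
    """Eqs. (38)-(39): use the long-exposure signal unless it has reached the
    threshold N_T; otherwise scale the short-exposure signal by T_L/T_S."""
    return X_L if X_L < N_T else X_S * (T_L / T_S)

# Eq. (37): short-exposure signal level at the switching point for the example
N_max, T_L, T_S = 17000, 21, 1
N_a = T_S / (T_L - T_S) * N_max   # = 850 electrons
```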
As $N_{max}$ increases, the curve in Figure 8 moves upwards; therefore, for the same $N_a$, the error is larger for the higher $N_{max}$. From this analysis, it is concluded that the error increases as $N_{max}$ increases. The dynamic range expansion ratio, $R_{DE}$, in this technique is given by:

$R_{DE} = \frac{T_L}{T_S}$

Therefore, the dynamic range in this technique can be expanded either by using a photodiode with a higher $N_{max}$ or by increasing the ratio of $T_L$ to $T_S$. However, as discussed, a higher $N_{max}$ and a higher ratio of $T_L$ to $T_S$ contribute to a higher error in the high illumination region, resulting in non-linearity in the synthesized wide dynamic range signals. Thus, an optimized $N_{max}$ and ratio of $T_L$ to $T_S$ must be considered to reduce this error and obtain a linear response in the synthesized wide dynamic range signals. Furthermore, solutions such as the double mid-point shutter technique [7] may be applicable for reducing this error.

The partial charge transfer technique is a countermeasure to improve the dynamic range of CMOS image sensors while at the same time maintaining a high fill factor, because only one photodiode is integrated in each pixel. The dynamic range expansion in this sensor is controlled by partial charge transfer, and if a very wide dynamic range is required, it can be achieved by taking a large accumulation ratio of the long to the short accumulation time signals. However, the technique suffers from non-linearity in the output of the synthesized wide dynamic range signals, especially if a large accumulation ratio is taken. An analysis of the non-linearity in this technique has been carried out and discussed. The calculation and simulation results show that the non-linearity can be caused by two factors: the diffusion current from the potential well and the initial conditions of the photodiode. From the calculations, it is shown that the diffusion current has an exponential relationship with the potential barrier, implying a non-linear relationship between the transferred charge and the potential barrier under the transfer gate. The simulation results show that the error in the high illumination region increases as the ratio of the long to the short accumulation time increases. Furthermore, increasing the saturation level of the photodiode also increases the error in the high illumination region.

The authors would like to thank the members of the imaging device laboratory, Shizuoka University, for their effort in the calculation, simulation and design progression.
1. Kavadias S., Dierickx B., Scheffer D., Alaerts A., Uwaerts D., Bogaerts J. A logarithmic response CMOS image sensor with on-chip calibration. 2000; 35: 1146-1152. doi:10.1109/4.859503
2. Decker S., McGrath S.D., Brehmer K., Sodini C.G. A 256 × 256 CMOS imaging array with wide dynamic range pixels and column-parallel digital output. 1998; 33: 2081-2091. doi:10.1109/4.735551
3. Sugawa S., Akahane N., Adachi S., Mori K., Ishiuchi T., Mizobuchi K. A 100 dB dynamic range CMOS image sensor using a lateral overflow integration capacitor. Proceedings of the IEEE International Solid-State Circuits Conference, San Francisco, CA, USA, February 6-10, 2005; 352-353.
4. Park J.H., Mase M., Kawahito S., Sasaki M., Wakamori Y., Ohta Y. Detailed evaluation of a wide dynamic range CMOS image sensor. 2006; 27: 95-100.
5. Mase M., Kawahito S., Sasaki M., Wakamori Y., Furuta M. A wide dynamic range CMOS image sensor with multiple exposure-time signal outputs and 12-bit column-parallel cyclic A/D converters. 2005; 40: 2787-2795. doi:10.1109/JSSC.2005.858477
6. Yadid-Pecht O., Fossum E.R. Wide intrascene dynamic range CMOS APS using dual sampling. 1997; 44: 1721-1723. doi:10.1109/16.628828
7. Oike Y., Toda A., Taura T., Kato A., Sato H., Kasai M., Narabu T. A 121.8 dB dynamic range CMOS image sensor using pixel-variation-free midpoint potential drive and overlapping multiple exposures. Proceedings of the International Image Sensor Workshop, Ogunquit, ME, USA, June 6-10, 2007; 30-33.
8. Egawa Y., Koike H., Okamoto R., Yamashita H., Tanaka N., Hosokawa J., Arakawa K., Ishida H., Harakawa H., Sakai T., Goto H. A 1/2.5 inch 5.2 Mpixel, 96 dB dynamic range CMOS image sensor with fixed pattern noise free, double exposure time read-out operation. Proceedings of the IEEE Asian Solid-State Circuits Conference, San Francisco, CA, USA, February 5-9, 2006; 135-138.
9. Shafie S. A Study on Dynamic Range Expansion Techniques with Reduced Motion Blur for CMOS Image Sensors. Shizuoka University: Shizuoka, Japan, 2008; 125-149.
10. Shcherback I., Belenky A., Yadid-Pecht O. Empirical dark current modeling for complementary metal oxide semiconductor active pixel sensor. 2002; 41: 1216-1219. doi:10.1117/1.1475995
11. Shcherback I., Yadid-Pecht O. Photoresponse analysis and pixel shape optimization for CMOS active pixel sensors. 2003; 50: 12-18. doi:10.1109/TED.2002.806966
12. Wakamori T. A Study on Wide Dynamic Range CMOS Image Sensors with Partial Charge Transfer. Shizuoka University: Shizuoka, Japan, 2007; 18-19.
13. Dubois J., Ginhac D., Paindavoine M., Heyrman B. A 10,000 fps CMOS sensor with massively parallel image processing. 2008; 43: 706-717. doi:10.1109/JSSC.2007.916618
14. Fossum E. Active pixel sensors: Are CCD's dinosaurs? Proceedings of the SPIE Charge-Coupled Devices and Solid State Optical Sensors III, San Jose, CA, USA, February 2, 1993; Vol. 1900: 1-13.
15. Nakamura J. Taylor & Francis Group Publisher: London, UK, 2006; 77-78.

Figure 1. (a) Simplified layout and (b) cross section at line aa', of the pixel.
Figure 2. The principle of Whole Charge Transfer: (a) charge accumulation, (b) charge transfer, (c) new charge accumulation.
Figure 3. The principle of Partial Charge Transfer.
Figure 4. Partially transferred electrons vs. accumulated electrons in the PD.
Figure 5. The pixel cross section and potential profile of a photodiode.
Figure 6. The equivalent circuit.
Figure 7. Initial condition influences the read-out signals in the partial transfer technique.
Figure 8. Partially transferred electrons for read-out, N_R2, versus accumulated electrons within the short accumulation time, N_a.
Figure 9. Charge accumulation in one frame.
Figure 10. Photo-electric conversion characteristics of the synthesized wide dynamic range signals.
Figure 11. Non-linearity in the high illumination region (error in %).
Figure 12. Photo-electric conversion characteristics of the synthesized wide dynamic range signals.
Figure 13. Non-linearity in the high illumination region (error in %).

Table 1. Pixel characteristics.
Technology: 0.18 μm CIS 1P4M
Pixel size: 7.5 μm × 7.5 μm
No. of photodiodes: 1
Photodiode shape: Octagonal
Fill factor: 14%

Table 2. Device parameters.
Parameter | Value
W | 1.5 [μm]
L | 0.7 [μm]
μ_n | 700 [cm^2/V·s]
d | 0.1 [μm]
N_D | 2 × 10^17 [cm^-3]
N_A | 10^18 [cm^-3]
n_i | 1.45 × 10^10 [cm^-3]
{"url":"http://www.mdpi.com/1424-8220/9/12/9452/xml","timestamp":"2014-04-19T12:01:38Z","content_type":null,"content_length":"73293","record_id":"<urn:uuid:bbeae01b-d3c7-4015-af5f-1a1f5b9b04e7>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00141-ip-10-147-4-33.ec2.internal.warc.gz"}
Posts about news on Serious Stats

All posts in category news

Posted by Thom Baguley on July 2, 2012

For an hour or so earlier today Serious Stats was #1 in the amazon.co.uk sales rank for the category: Books > Health, Family & Lifestyle > Psychology & Psychiatry > Methodology > Statistics

As of writing the rank has dropped to #3 (but I'm still quite excited, even though I know this may not imply large numbers of pre-orders). They have also increased the discount on pre-orders to 36%. The book should also be available for pre-order in other countries (though I've only checked the US store), but for some reason the discount is not as generous there. If you can't get hold of the item in your country, bookdepository.com does free worldwide delivery (to most countries as far as I can tell). I've used them to ship gifts to friends and family overseas and they seem pretty reliable (and also offer pretty good discounts).

Posted by Thom Baguley on May 29, 2012

Whilst writing the book the latest version of R changed several times. Although I started on an earlier version, the bulk of the book was written with 2.11 and it was finished under R 2.12. The final versions of the R scripts were therefore run and checked using R 2.12 and, in the main, the most recent package versions for R 2.12. When it came to proof-reading, R 2.13 was already out and therefore most of the examples were also checked with that version, but I stuck with R 2.12 on my home and work machines until last week. In general I don't see the point of updating to a new version number if everything is working fine. One advantage of this approach is that the version I install will usually have bugs from the initial release already ironed out. That said, new versions of R have (in my experience) been very stable. I tend to download the new version only when I fall several versions behind or if it is a requirement for a new package or package version. On this occasion it turned out that the latest version of the ordinal package (for fitting ordered logistic regression and multilevel ordered logistic regression models) required a newer version of R.

There are two main drawbacks with updating. The first is reinstalling all your favourite package libraries (and generally getting it set up how you like it). The second is dealing with changes in the way R behaves. For re-installing all my packages I use a very crude system. For any given platform (Mac OS, Windows or Linux) there are cleverer solutions (that you can find via google). My solution is cross-platform and fairly robust, if inelegant. I simply keep an R script with a number of install.packages() commands such as:

install.packages(c('lme4', 'exactci', 'pwr', 'arm'))

I run these in batches after installing the new R version. I find this useful because I'm forever installing R on different machines (so far Mac OS or Windows) at work (e.g., for teaching or if working away from the office or on a borrowed machine). I can also comment the file (e.g., to note if there are issues with any of the packages under a particular version of R). This usually suffices for me as I usually run a 'vanilla' set-up without customization. It would be more efficient for me to customize my set-up, but for teaching purposes I find it helps not to do that. Likewise, I tend to work with a clean workspace (and use a script file to save the R code that creates my workspaces). I should stress that this isn't advice, and I would work differently myself if I didn't use R so much for teaching.
One of the first things that happened after installing R 2.15 was that some of my own functions started producing warnings. R warnings can be pretty scary for new users but are generally benign. Some of them are there to detect behaviour associated with common R errors or common statistical errors (and thus give you a chance to check your work). Others alert you to non-standard behaviour from a function in R (e.g., changing the procedure it uses when sample sizes are small). Yet others offer tips on writing better R code. Only very rarely are they an indication that something has gone badly wrong. Thus most R warnings are slightly annoying but potentially useful.

In my case R 2.15 disliked a number of my functions of the form:

The precise warning was:

Warning message: mean() is deprecated. Use colMeans() or sapply(*, mean) instead.

All the functions worked just fine, but (after my initial irritation had receded) I realize that colMeans() is a much better function. It is more efficient but, even better, it is obvious that it calculates the means of the columns of a data frame or matrix. With the more general mean() function it is not immediately obvious what will happen when it is called with a data frame as an argument. It is also trivial to infer that rowMeans() calculates the row means. I have now re-written a number of functions to deal with this problem and to make a few other minor changes. The latest version of my functions can be loaded with the call:

I will try and keep this file up-to-date with recent versions of R and correct any bugs as they are detected. The functions can be downloaded as a text file from:

Posted by Thom Baguley on May 27, 2012

UPDATE: Some problems arose with my previous host so I have now updated the links here and elsewhere on the blog. The companion web site for Serious Stats provides R scripts for each chapter. This contains examples of R code and all my functions from the book (and a few extras). This is a convenient form for working through the examples. However, if you just want to access the functions it is more convenient to load them all in at once. The functions can be downloaded as a text file from:

More conveniently, you can load them directly into R with the following call:

In addition to the Serious Stats functions, a number of other functions are contained in the text file. These include functions published on this blog for comparing correlations or confidence intervals for independent measures ANOVA, and functions from my paper on confidence intervals for repeated measures ANOVA.

Posted by Thom Baguley on March 26, 2012

Posted by Thom Baguley on March 23, 2012

This is a blog to accompany my forthcoming book "Serious Stats", published by Palgrave.

Baguley, T. (2012, in press). Serious stats: A guide to advanced statistics for the behavioral sciences. Basingstoke: Palgrave.

The book is available for pre-order (e.g., via ) and instructors should be able to pre-order inspection copies via in the US (or in the UK). The proofs have been checked and returned and I am hoping for a publication date of May 2012.

Posted by Thom Baguley on February 1, 2012
{"url":"http://seriousstats.wordpress.com/category/news/","timestamp":"2014-04-20T05:42:41Z","content_type":null,"content_length":"50825","record_id":"<urn:uuid:d42af562-9696-4e05-91ac-85b9accdd9ba>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00192-ip-10-147-4-33.ec2.internal.warc.gz"}
64-bit floats with the Propeller as pairs of 32-bit ones [Archive] - Parallax Forums

07-20-2009, 04:08 AM

64-bit Double precision float arithmetic can be done with pairs of 32-bit Single precision floats using the Propeller's floating point software package. Each 64-bit Double operand is the unevaluated sum of two 32-bit IEEE 754 Singles, of which the first represents the leading digits, and the second the trailing digits, of the format's value. Its exponent range is almost the same as Single's. I show here, in SPIN-like pseudocode, how Pi can be represented, and how the double precision addition is carried out with (Single-Single)s on the Propeller. Each arithmetic operation (12 altogether) or intermediate result in the second pseudocode is, of course, a 32-bit, single precision float operation or value:

PUB DS_Pi(dsA_)
'This returns Pi to Double precision
'Result dsA is given by reference
'  dsA[0] represents high-order Single,
'  dsA[1] represents low-order Single
dsA[0] :=  3.141593E+00
dsA[1] := -3.464102E-07

PUB DS_Add(dsA_, dsB_, dsC_) | t1, t2, e
'Computes (Single-Single) = (Single-Single) + (Single-Single)
'              dsC        =       dsA       +       dsB
'Parameters dsA, dsB and result dsC are given by reference
'  dsA[0] represents high-order Single,
'  dsA[1] represents low-order Single
'Order of operations, defined by the brackets, counts here
t1 := dsA[0] + dsB[0]
e  := t1 - dsA[0]
t2 := ((dsB[0] - e) + (dsA[0] - (t1 - e))) + dsA[1] + dsB[1]
'The result is t1 + t2, after normalization
dsC[0] := t1 + t2
dsC[1] := t2 - (dsC[0] - t1)

The idea of improving floating point precision this way goes back to the 1960s. Nowadays, dedicated hardware, like GPUs in graphics cards or the Cell processor in the PS3, runs Single precision float operations so fast that it can provide great potential for medical imaging, aerospace and defense. The Cell is hardware optimized toward vectorized, Single precision floating point computation and reaches a peak Single precision performance of 204 Gflops/sec. Scientific, CAD or defence computing needs, however, higher precision than 7.5 digits in many situations. So, effective software implementations of Double or Quad precision arithmetic on this cheap but capable hardware are of current interest. Some types of Cells can do Double precision calculations in hardware, but with an order of magnitude performance penalty. Software implementations can compete with this by greatly reducing the performance penalty through clever programming. That means, here, using single precision math wherever possible, especially for the most compute-intensive part of the code, and falling back to Double precision only when necessary. This can be done without demolishing the Double precision of the results of many basic algorithms of high performance computing. To take advantage of this mixed Single/Double approach systematically, the code changes have to be done by hand, since the algorithm is beyond the intelligence of today's compiler technology. A software library that contains 32/64-bit twin-float procedures makes these changes available and comfortable to the programmer. To allow for similar software tricks with the Propeller, I am coding these 'Single-Single' algorithms to enhance the Prop's capabilities in software implemented floating point calculations. A Propeller object using (Single-Single) Doubles is in preparation. This object will be placed on OBEX, if any interest shows up on the forum.
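For readers who want to see the algorithm in action without a Propeller, here is a rough Python emulation (not from the original post): it forces every intermediate operation to 32-bit precision with NumPy's float32, mirroring the operation order of DS_Add, and then evaluates the resulting pair in double precision to show the extra accuracy.

```python
import math
import numpy as np

f32 = np.float32  # force rounding to single precision at every step

def ds_split(x: float):
    """Split a double into an unevaluated (hi, lo) pair of singles."""
    hi = f32(x)
    lo = f32(x - float(hi))
    return hi, lo

def ds_add(a, b):
    """(Single-Single) addition, same operation order as the SPIN pseudocode."""
    a0, a1 = a
    b0, b1 = b
    t1 = f32(a0 + b0)
    e  = f32(t1 - a0)
    t2 = f32(f32(f32(f32(b0 - e) + f32(a0 - f32(t1 - e))) + a1) + b1)
    c0 = f32(t1 + t2)
    c1 = f32(t2 - f32(c0 - t1))
    return c0, c1

pi_ds = ds_split(math.pi)
e_ds  = ds_split(math.e)
s_ds  = ds_add(pi_ds, e_ds)

approx = float(s_ds[0]) + float(s_ds[1])   # evaluate the pair in double precision
print(f"single only  : {float(f32(math.pi) + f32(math.e)):.15f}")
print(f"single-single: {approx:.15f}")
print(f"true double  : {math.pi + math.e:.15f}")
```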
My questions to the Forum members are:

Do embedded applications with the Prop need Double precision (15 digits) at all? (IBM came out lately with a hardware implemented Double precision float version of the Cell, aimed at embedded applications, in cooperation with other firms...)

Does someone know of a downloadable, ready-made, bug-free and better solution for the Propeller to do DP float math? (A free, OBEX-quality SPIN file that compiles and works correctly will do...)

Will the four basic operations be enough in Double precision? (Maybe for a lite DP package...)

Which functions are sufficient and necessary in Double precision for an enhanced, but basic, math package? (SQRT, SIN, LOG, ..., ?)

Is it worth sacrificing one, two or more COGs to make it fast?

What about making a software based, but high speed, Single/Double/Quad precision versatile FPU from a single(!) Propeller with a large EEPROM for accurate tables? (With SPI interface, in SixBladeProp, or so,...?)

Should the solution be somewhat 'Navigation' oriented? (ATAN2, WGS84/ECEF,... ?)

Will someone use such code written for the Propeller/uM-FPU combination, too? (Only one COG consumed, three to five times the speed, much less HUB RAM needed..., ?)

Post Edited (cessnapilot) : 7/19/2009 9:20:24 PM GMT
{"url":"http://forums.parallax.com/archive/index.php/t-114595.html","timestamp":"2014-04-16T16:01:08Z","content_type":null,"content_length":"34584","record_id":"<urn:uuid:90c9a1e9-a354-4390-bc2a-b73c0e70622a>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00625-ip-10-147-4-33.ec2.internal.warc.gz"}
Homework Help

Posted by Kylie on Wednesday, August 24, 2011 at 3:36pm.

What is 2 + 2? = 4
What is 2 * 2? = 4

So, the sum and the product of these two numbers is the same. Are there other numbers that have this same property? Yes

Determine all number pairs x and y such that the product of x and y and the sum of x and y are equal. Make sure to show your work and to explain and justify your solutions.

Thanks so much,

• Math - Creative, Wednesday, August 24, 2011 at 3:57pm

Hi Kylie,
The answer is no. Remember that multiplication is different than addition. When you add duplicates it is the sum of itself in ones. When you multiply duplicates you are adding the number by the number of times by itself.
1+1 = 1+1 or 2
2+2 = 4, equals 1+1, 1+1
2x2 = 4, equals 2+2 or 4
3+3 = 6, or 1+1+1, 1+1+1
3x3 = 3+3+3 or nine
4+4 = 8, or 1+1+1+1, 1+1+1+1
4x4 = 4+4+4+4 etc.
Do you see the pattern? So each number up from one will be more when the number is multiplied by itself. Make sense?
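A short worked derivation of the general condition in the question, added here for completeness (it is not part of the original thread): setting the product equal to the sum and solving for y shows that there are in fact infinitely many real pairs, with (2, 2) and (0, 0) as the integer special cases.

```latex
xy = x + y
\;\Longrightarrow\; xy - y = x
\;\Longrightarrow\; y(x - 1) = x
\;\Longrightarrow\; y = \frac{x}{x - 1} \quad (x \neq 1).

\text{Example: } x = 3 \Rightarrow y = \tfrac{3}{2},
\qquad 3 + \tfrac{3}{2} = \tfrac{9}{2} = 3 \times \tfrac{3}{2}.
```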
{"url":"http://www.jiskha.com/display.cgi?id=1314214608","timestamp":"2014-04-21T04:18:50Z","content_type":null,"content_length":"9037","record_id":"<urn:uuid:202cc973-d570-4428-ba6e-33f1771eaaa2>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00542-ip-10-147-4-33.ec2.internal.warc.gz"}
The Reflective Educator So as expected, Apple announced their new textbooks for the iPad. Looking over the specs and what is possible to create with the iPad, it doesn't look like they've offered a complete set of features for their book, but buried in their authoring features is the ability to embed HTML widgets into pages. There are some things I'd like to see improved about their digital textbook, but most schools will find the fact that they can subscribe to multiple textbook publishing companies through the same system pretty attractive. Some flaws I spotted: • The textbook does not seem to build in the ability to translate or look up definitions of words. • No discussion on the adjusting the readability (in terms of word choice and reading level) of the texts. • No discussion on interacting with other users of the textbook, either through comments, or even sharing anotations. It might be possible to share annotations, but can you share books? Can you deep link to a portion of a textbook to share a thought with someone else? • The interactivity they have included seems somewhat limited to pseudointeractivity. Being able to manipulate an image and move it around is not as big a deal (in terms of effect on student learning) as they seem to be making it out to be. You may be able to build in games and simulations, but you'll have to build them yourself as HTML 5 widgets. I'd like to see a textbook which includes the ability to graph data, manipulate it, and run simulations within the text itself. • The textbooks will be in a proprietary format which can only be created on a Mac. This means that it will be sometime before authoring tools come out for other OS, and then getting your textbook onto the iPad via those authoring tools looks very much like it will have to go through the iTunes store. Good luck trying to get a book that doesn't meet the somewhat stringent requirements of the iTunes store into the app. I can imagine that courses on human sexuality and gender may find themselves using paper textbooks for some time to come, for example. • A typical complaint with traditional mathematics textbooks is that the examples given earlier in the textbook are then replicated in the exercises the students do, and the exercise becomes not about doing mathematics, but about recognizing (and memorizing the solution to) problem types. I don't see any evidence that this will be fixed with the new textbook, especially given the companies with whom they've partnered. Maybe because the technology is improved, the pedagogy will improve? I'm not sure... • One of the comments from the video advertising the new iPad textbooks said that students wouldn't even have to think about what information they've bookmarked or annotated in the textbook. Doesn't this seem somewhat problematic, given that a purpose of education is to get students to think? I don't disagree with digital textbooks per say. For schools that can afford this option, they do have a lot of benefits. I just think we should continue to ask ourselves, how can we improve the textbook? It's been fundamentally the same for so long, and I don't see a huge benefit in spending extra money for the reading device for a textbook (aside from reduced weight in students' backpacks) if we can't also fix some of the pedagogical problems in traditional textbooks. 
Update: An important observation for Canadian markets - the Apple digital textbooks are not yet licensed for use in Canada, and the software to manage local distribution of the textbooks is not yet available here.

Comment (submitted Thu, 01/19/2012 - 23:32):
{"url":"http://davidwees.com/content/apple-ipad-textbooks?page=0%2C2","timestamp":"2014-04-17T18:24:43Z","content_type":null,"content_length":"55920","record_id":"<urn:uuid:689141af-96f4-4a44-9ea7-3a06bc0f2ffa>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00117-ip-10-147-4-33.ec2.internal.warc.gz"}
meshgrid (MATLAB Functions)

Generate X and Y matrices for three-dimensional plots

[X,Y] = meshgrid(x,y) transforms the domain specified by vectors x and y into arrays X and Y, which can be used to evaluate functions of two variables and three-dimensional mesh/surface plots. The rows of the output array X are copies of the vector x; the columns of the output array Y are copies of the vector y.

[X,Y] = meshgrid(x) is the same as [X,Y] = meshgrid(x,x).

[X,Y,Z] = meshgrid(x,y,z) produces three-dimensional arrays used to evaluate functions of three variables and three-dimensional volumetric plots.

The meshgrid function is similar to ndgrid except that the order of the first two input and output arguments is switched. That is, the statement [X,Y,Z] = meshgrid(x,y,z) produces the same result as [Y,X,Z] = ndgrid(y,x,z). Because of this, meshgrid is better suited to problems in two- or three-dimensional Cartesian space, while ndgrid is better suited to multidimensional problems that aren't spatially based. meshgrid is limited to two- or three-dimensional Cartesian space.

The following example shows how to use meshgrid to create a surface plot of a function.

See Also: griddata, mesh, ndgrid, slice, surf

© 1994-2005 The MathWorks, Inc.
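The example code from the original page did not survive extraction. As a stand-in, here is a short illustration using NumPy's meshgrid, whose default 'xy' indexing mirrors the MATLAB behaviour described above; this is an added illustration, not part of the MathWorks documentation.

```python
import numpy as np

x = np.array([1, 2, 3])
y = np.array([10, 20])

# Rows of X are copies of x; columns of Y are copies of y.
X, Y = np.meshgrid(x, y)
print(X)   # [[1 2 3]
           #  [1 2 3]]
print(Y)   # [[10 10 10]
           #  [20 20 20]]

# Evaluate a function of two variables on the grid, e.g. Z = x^2 + y^2,
# which is the typical input for a surface plot.
Z = X**2 + Y**2
print(Z)   # [[101 104 109]
           #  [401 404 409]]
```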
{"url":"http://matlab.izmiran.ru/help/techdoc/ref/meshgrid.html","timestamp":"2014-04-16T22:33:48Z","content_type":null,"content_length":"5901","record_id":"<urn:uuid:4eb56642-998b-4861-94c8-ce45a5d1cea1>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00103-ip-10-147-4-33.ec2.internal.warc.gz"}
An easy way to memorize the differentiations of trig functions?

- Can this kinda chain help? sin(x) | cos(x) | -sin(x) | -cos(x) | sin(x)
- there is no easy way. it comes with enough practice and problems. Thats how i got them. sin=cos, cos=-sin, tan = sec^2, sec = sec*tan...thats what i know...the other 2 dont get used that often. the chain/circular reference does kinda work
- Yeah, but where's the tan(x)?
- And the hyperbolic ones?
- cos = -sin ^
- ive never had to use the hyperbolic ones, and the inverse ones (i.e. sin^-1) are very difficult to memorize because they all look the same
- Hmm... okay.
- But still, is there a way to memorize all of this stuff?
- do lots of problems. Eventually you'll memorize them because you see them so often
- How about the sec and cosec?
- sec = sec*tan, csc = -cscx*cotx. theyre basically opposites, so if you know sec, you can get csc
- You can refer to such a thing: bind sin and cos, tan and sec, cot and cosec. They are together in different formulas.
- yup practice
- I would really like to know this too. Memorizing diff trig variables.
- just remember: the derivative of a cofunction is negative, i.e. d/dx of cos x is -sin x, d/dx of csc x is -csc x cot x, d/dx of cot x is -csc^2 x
- would help alot actually
- and most importantly: hate math
- I've never had to memorize them. After doing them a few times, you just know them. I'm only speaking for myself, but I just know all the differentiation rules, trig identities by heart. Actually pretty much any formula, I've never had to really memorize, but rather just understand the justification behind them. I can't explain it but I pretty much know any mathematical formula, rules, etc, by heart. I guess that's why I love MATH!
- Hmm. I'd try that way. Thanks @calculusfunctions
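For reference, the rules discussed in the thread can be collected in one place (a standard summary added here for convenience): the sine/cosine derivatives cycle with period four, and each "co-" function picks up a minus sign.

```latex
\frac{d}{dx}\sin x = \cos x \qquad
\frac{d}{dx}\cos x = -\sin x \qquad
\frac{d}{dx}\tan x = \sec^2 x

\frac{d}{dx}\sec x = \sec x \tan x \qquad
\frac{d}{dx}\csc x = -\csc x \cot x \qquad
\frac{d}{dx}\cot x = -\csc^2 x

\text{Cycle: } \sin x \to \cos x \to -\sin x \to -\cos x \to \sin x \to \cdots
```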
{"url":"http://openstudy.com/updates/508b999ae4b077c2ef2e9283","timestamp":"2014-04-18T20:48:21Z","content_type":null,"content_length":"78619","record_id":"<urn:uuid:8765e239-61f7-445a-9c3a-f25bc84f707d>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00404-ip-10-147-4-33.ec2.internal.warc.gz"}
Non Linear Pharmacokinetics Ppt Presentation

Non linear pharmacokinetics

Presentation Description: No description available.
{"url":"http://www.authorstream.com/Presentation/vicky_5593-605791-non-linear-pharmacokinetics/","timestamp":"2014-04-19T15:13:27Z","content_type":null,"content_length":"199314","record_id":"<urn:uuid:eed62543-02b5-4fb1-9864-8b3387e0d5b4>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00399-ip-10-147-4-33.ec2.internal.warc.gz"}
For a circle with center point P, cord XY is the

For a circle with center point P, cord XY is the [#permalink] 08 Feb 2012, 17:44 | enigma123
Question Stats: 55% (medium); (02:33) correct, 57% (02:35) wrong; based on 138 sessions.
Attachment: Untitled.png [ 4.33 KiB | Viewed 1986 times ]

For a circle with center point P, cord XY is the perpendicular bisector of radius AP (A is a point on the edge of the circle). What is the length of cord XY?

(1) The circumference of circle P is twice the area of circle P.
(2) The length of Arc XAY = \frac{2\pi}{3}.

How come the answer is D? I have drawn these pictures as they were not provided with the question. Even with my guesswork I selected A, which is incorrect. Can someone please let me know how to solve this? Also, I understand this will involve the concept of a 30-60-90 degree triangle - any idea which angles to assign 30 and 60?

Spoiler: OA
Best Regards, E.
Last edited by on 17 Jul 2013, 09:22, edited 1 time in total. Edited the question.

Re: Length of a Chord [#permalink] 08 Feb 2012, 18:33 | Bunuel (Math Expert) | Expert's post
Attachment: Chord.PNG [ 24.43 KiB | Viewed 4147 times ]

For a circle with center point P, cord XY is the perpendicular bisector of radius AP (A is a point on the edge of the circle). What is the length of cord XY?

From the diagram and the stem: AZ=ZP=r/2. In a right triangle ZPX the ratio of ZP to XP is 1:2, hence ZPX is a 30-60-90 right triangle where the sides are in ratio 1:\sqrt{3}:2. The longest leg is ZX, which corresponds with \sqrt{3} and is opposite the 60 degree angle. Thus <XPY=60+60=120.

(1) The circumference of circle P is twice the area of circle P --> 2\pi{r}=2*\pi{r^2} --> r=1 --> XZ=\frac{\sqrt{3}}{2} --> XY=2*XZ=\sqrt{3}. Sufficient.

(2) The length of Arc XAY = 2pi/3 --> \frac{2\pi}{3}=\frac{120}{360}*2\pi{r} --> r=1, the same as above. Sufficient.

Answer: D.

Re: For a circle with center point P, cord XY is the [#permalink] 11 Feb 2012, 11:29 | enigma123

Sorry Bunuel - in your explanation, how come the longest leg is ZX? I think it should be XP because that's opposite the 90 degree angle. Also, do you mind telling me how you found out which side corresponds to the 60 degree and 30 degree angles?

Re: For a circle with center point P, cord XY is the [#permalink] 11 Feb 2012, 11:45 | Bunuel (Math Expert) | Expert's post

XP is the hypotenuse, which obviously is the longest side, but the longest leg is ZX (so the second longest side).

In a right triangle where the angles are 30°, 60°, and 90° the sides are always in the ratio 1:\sqrt{3}:2. Notice that the smallest side (1) is opposite the smallest angle (30°), and the longest side (2) is opposite the largest angle (90°). Since the ratio of the leg ZP to the hypotenuse XP is 1:2, then ZP (the shortest side) corresponds to 1 and thus is opposite the smallest angle 30°, which means that the other leg ZX corresponds to \sqrt{3}.

Hope it's clear.

Re: For a circle with center point P, cord XY is the [#permalink] 11 Feb 2012, 12:52 | enigma123

Sorry Bunuel - still struggling. How did you get XZ = sqrt3/2? Apologies for being a pain.

Re: For a circle with center point P, cord XY is the [#permalink] 11 Feb 2012, 13:35 | Bunuel (Math Expert) | Expert's post

Re: For a circle with center point P, cord XY is the [#permalink] 09 Mar 2012, 23:20

enigma123 wrote: (2) The length of Arc XAY = 2p/3.

Why does it read 2p/3? Shouldn't it be 2pi/3 at least, if you aren't putting the symbol for pi? That completely threw me off and I was left wondering, "does he mean 2 (perimeter)/3? What does p stand for?"

If you like it, Kudo it!
"There is no alternative to hard work. If you don't do it now, you'll probably have to do it later. If you didn't need it now, you probably did it earlier. But there is no escaping it."

Re: For a circle with center point P, cord XY is the [#permalink] 09 Mar 2012, 23:29 | Bunuel (Math Expert) | Expert's post

Re: Length of a Chord [#permalink] 17 Jul 2013, 08:16 | keenys

Bunuel wrote: [solution quoted above]

Hi Bunuel, how did you assume ZPX is a 30-60-90 right triangle just from the ratio of ZP to XP (1:2)? How can we assume in any triangle that if two sides are in the ratio 1:2, it will be a 30-60-90 triangle? I thought we have to know that the triangle is a 30-60-90 triangle beforehand to calculate the third side based on the ratio of two given sides.

Re: Length of a Chord [#permalink] 17 Jul 2013, 09:31 | Bunuel (Math Expert) | Expert's post

Notice that since XY is perpendicular to AP, ZPX is a right triangle with the right angle at Z. So we have side:hypotenuse = 1:2, which means that we have a 30-60-90 triangle, where the ratio of the sides is 1:\sqrt{3}:2.

Re: For a circle with center point P, cord XY is the [#permalink] 17 Jul 2013, 09:50 | keenys

Thanks for your reply Bunuel. However, I still did not understand. Here we have angle XZP=90, XP=r and ZP=r/2. We do not know that the other angles are 60 and 30 respectively. How can we use the ratio of two sides, not three, to conclude that it is a 30-60-90 triangle? Should we know beforehand that it is a 30-60-90 triangle to use two sides to calculate the third one?

Re: For a circle with center point P, cord XY is the [#permalink] 17 Jul 2013, 09:54 | Bunuel (Math Expert) | Expert's post

When we know two sides in a right triangle the third one is fixed. We have side:hypotenuse = 1x:2x --> third side = \sqrt{(2x)^2-x^2}=\sqrt{3}*x, so the sides are in the ratio 1:\sqrt{3}:2 --> 30-60-90 triangle. Does this make sense?

Re: For a circle with center point P, cord XY is the [#permalink] 17 Jul 2013, 09:57 | keenys

Thanks Bunuel. Now it makes complete sense. I missed the last part in calculating the third side using Pythagoras.

Re: For a circle with center point P, cord XY is the [#permalink] 02 Sep 2013, 23:27 | obs23

We have side:hypotenuse=1x:2x --> third side = \sqrt{(2x)^2-x^2}=\sqrt{3}*x

I wonder if this is just today... that I am looking at this perfectly clear explanation and still do not get it. I did a couple of minutes later. So first of all - thanks for the detailed explanations Bunuel and others. I just wanted to add that (2x)^2-x^2 = 3x^2, for those who look at the formula with a predetermined mind, so focused on that formula that as a result they forget to calculate this basic stuff, perhaps wondering where that \sqrt{3}*x came from. It is possible it is just me, but it often appears to me that it is not. This is one of those... "duuuhhh"s.

There are times when I do not mind kudos... I do enjoy giving some for help

Re: For a circle with center point P, cord XY is the [#permalink] 03 Sep 2013, 04:19 | maaadhu

Bunuel, if 2 pi r = 2 pi r^2 then either r=0 or r=1. Since the radius is always +ve, it's safe to assume that r=1. Is that correct?

MGMAT1 - 540 ( Trying to improve )

Re: For a circle with center point P, cord XY is the [#permalink] 03 Sep 2013, 04:23 | Bunuel (Math Expert) | Expert's post

Yes, because we obviously have a circle.

Re: For a circle with center point P, cord XY is the [#permalink] 28 Feb 2014, 07:19 | damamikus

Could someone please check my calculations? I keep getting a wrong answer (I tried to solve it in a slightly different way, but nonetheless the solution should be the same) for statement 1:

So, according to statement 1 --> 2pir=2pir² <=> r=1; for the following calculations, please see the attached image below.
(1) m°+n°=90° --> n°=90°-m°
(2) w°+p°=90°
(3) n°+p°=90°
--> (1) in (2): 90°-m°+p°=90° --> m°=p°, similarly: n°=w° ----> AXZ and XZB are similar triangles; hence, their side ratios will be equal.
--> (XZ/0.5)=(0.75/XZ) <=> 2XZ=(3/4XZ) <=> XZ²=3/8 <=> XZ=0.5(3/2)^(1/2) --> XZ=2XZ=(3/2)^(1/2)

I tried the calculations again and again, but I keep getting the same wrong answer and not 3^(1/2). What did I do wrong? I know that the 30-60-90 approach is easier and probably quicker, but I am still confused about what error I made in my calculations/approach. If someone can help, please do so.

Attachment: chord-problem.jpg [ 34.19 KiB | Viewed 546 times ]

Re: For a circle with center point P, cord XY is the [#permalink] 28 Feb 2014, 07:42 | Bunuel (Math Expert) | Expert's post

\frac{XZ}{AZ} = \frac{ZB}{XZ} --> AZ = 0.5 and ZB = 1.5, not 0.75. \frac{XZ}{0.5} = \frac{1.5}{XZ} --> XZ^2 = \frac{3}{4}. Hope it helps.

Re: For a circle with center point P, cord XY is the [#permalink] 28 Feb 2014, 07:57 | damamikus

Thanks a lot Bunuel! I totally missed that number-error!

Re: For a circle with center point P, cord XY is the [#permalink] 01 Mar 2014, 04:17 | adymehta29

enigma123 wrote: [original question quoted above]

hi bunuel ! can u please explain the 2nd condition how did we get 120 degree ?
{"url":"http://gmatclub.com/forum/for-a-circle-with-center-point-p-cord-xy-is-the-127286.html","timestamp":"2014-04-19T07:16:32Z","content_type":null,"content_length":"309260","record_id":"<urn:uuid:8ef957a1-da10-40f8-97fd-ca158632c838>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00643-ip-10-147-4-33.ec2.internal.warc.gz"}
Minimum Classroom Size and Number of Students Per Classroom: C. Kenneth Tanner The University of Georgia School Design and Planning Laboratory April, 2000 Revised Findings and Conclusions: September 1, 2009 This document is protected by U. S. Copyright Laws © and may not be reproduced in any form without written permission of the C. Kenneth Tanner (cktanner@uga.edu). Overview of the Problem One of the most frequently asked questions that I get from individuals interested in the schools' physical environment is: What size should the classroom be? This is is a difficult question because there are many social, educational, and cultural variables that come into the equation. Instead of answering this question directly, lets look at the main problem. Size and specifications are adequately addressed in the classic works of Hawkins and Lilly (1998) and Castaldi (1994). However, as I review schools and achievement of students from a research standpoint, my conclusion is that the major problem may not be size, but density. How many students should we place within a given space? That is the research question. We assume that an important factor in achievement is the number of square feet per student. So we should plan for large media centers, dining halls, and courtyards that can serve as important meeting places for students and teachers and help establish identities for schools. Special areas such as science rooms, art rooms, and shops also require more space than the equation we are going to explore in this article. Most importantly of all, the curriculum (activities for learning) should be the dictator of space needs for a classroom. Because the issue of space is complex, the findings presented here should be applied only as minimum guidelines for traditional classroom activities such as lecture and small group activities, with computer terminals arranged along the walls of the classroom. In addition, evidence is pointing to natural light and outdoor learning areas adjacent to classrooms (especially in elementary schools) as factors in learning. For example, the basic classroom should have at least 72 Square feet (6.70 Square Meters) of windows for natural light, These classrooms should have views overlooking life and an exit door to the outside learning environments (Tanner, 2000). Recent research on daylighting is provided by the Heschong Mahone Group (2000). Ample egress makes sense in light of the trends in school violence (the students and teachers need to be able to get out of harms way quickly). Research Based on the Concept of Social Distance What do researchers say about space needs? Abramson (1991) found higher achievement in schools with adequate space and further noted that if those larger spaces were used for instructional purposes the achievement was even greater. The lesson is clear. Students need ample space because crowding causes problems. For example, a high-density school influences achievement negatively. The effects of high density were summarized by Wohlwill and van Vliet (1985). "It appears as though the consequences of high density conditions that involve either too many children or too little space are: excess levels of stimulation; stress and arousal; a drain on resources available; considerable interference; reductions in desired privacy levels; and loss of control (pp. 108-109). If we conclude that students need space and crowding is bad, it is our job as school planners and designers to provide an equation for architects and decision makers. 
This issue may be viewed through the psychological implications from the study of territoriality of place according to Banghart and Trull (1973). We know that the student is always dependent on the environment for psychological and sociological cues. The student is always interacting with the physical environment. Since the school is a social system within the cultural environment, we may consider social distance as a means for calculating the minimum size of the classroom. The lower middle range for social distance in men and women is 7 feet (Banghart & Trull, 1973, p. 233). With this guideline of social distance we can develop a chart that provides a guide for design and planning.

The square footage shown in Table 1 is not measured in terms of architectural or gross square feet, but the actual number of square feet or meters needed by the student within the bounds of the indoor classroom. The calculations for elementary school students were determined according to social distance research findings by using the factor of 49 square feet per person (the lower middle range). Larger students, according to the social distance concept, require 64 square feet (the upper limit of the middle range for social distance). Table 1 also reveals the minimum standard according to social distance research for upper school students.

Table 1. A Minimum Standard for Classroom Size

│ Number of Students plus 1 Teacher │ Elementary School [Square Feet (Square Meters)] │ Secondary School [Square Feet (Square Meters)] │
│ 10 │ 539 (50.13)  │ 704 (65.47)   │
│ 11 │ 564 (52.45)  │ 768 (71.42)   │
│ 12 │ 637 (59.24)  │ 832 (77.38)   │
│ 13 │ 686 (63.80)  │ 896 (83.33)   │
│ 14 │ 735 (68.36)  │ 960 (89.28)   │
│ 15 │ 784 (72.91)  │ 1024 (95.23)  │
│ 16 │ 833 (77.47)  │ 1088 (101.18) │
│ 17 │ 882 (82.03)  │ 1152 (107.14) │
│ 18 │ 931 (86.58)  │ 1216 (113.09) │
│ 19 │ 980 (91.14)  │ 1280 (119.04) │
│ 20 │ 1029 (95.70) │ 1344 (124.99) │

With the trend toward smaller classes we should consider social distance as a major factor and adjust the size of classes accordingly. For example, the recommended size of the elementary school classroom in the United States is approximately 900 square feet. If state policy allows 20 students per teacher, then with social distance as a guide, we expect to find 1029 square feet per classroom (a deficit of 129 square feet by current standards). Unfortunately these findings regarding social distance (from the field of psychology) come into conflict with the educational policy of 20-plus students per classroom in most schools (the classrooms are too small and the result is high density). From the above chart, I can conclude that no more than 17 students per average classroom is the correct number for elementary schools. This straightforward research-based calculation is also supported by the well-publicized work of Achilles, Finn, and Bain (1998). The average classroom size for secondary schools is 1024 square feet and should house approximately 14 to 15 students. These findings have strong implications for government policy. If smaller is better, then fewer students per existing classroom is the answer. We cannot simply put smaller classes in smaller spaces by dividing the spaces we already have. Such action will compound the density problem by having more students in less space. This is not educationally or psychologically sound.
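The arithmetic behind Table 1 is simple enough to capture in a few lines of Python. The sketch below is my own illustration (the function name and the metric conversion constant are mine, and the table's metric figures appear to have been rounded with a slightly different conversion factor): one teacher is added to the student count and the total is multiplied by the social-distance area factor.

    # Rule behind Table 1: occupants (students + 1 teacher) times the
    # social-distance factor (49 sq ft elementary, 64 sq ft secondary).
    SQ_FT_PER_SQ_M = 10.7639  # standard conversion factor

    def minimum_classroom_area(students, level="elementary"):
        per_person = 49 if level == "elementary" else 64
        sq_ft = (students + 1) * per_person
        return sq_ft, sq_ft / SQ_FT_PER_SQ_M

    for n in (10, 17, 20):
        sq_ft, sq_m = minimum_classroom_area(n)
        print(f"{n} students: {sq_ft} sq ft ({sq_m:.2f} sq m)")  # 20 students -> 1029 sq ft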
Revised Findings and Conclusions: September 1, 2009

It is one of the great advantages of the Internet that I am able to review my previous work and adjust the findings and conclusions based on reflective thinking and perhaps some common sense. To this end, I suggest that the reader examine the social distance issue with the formula Area = π × r². [The ratio of a circle's circumference to its diameter is π, approximately 3.14159.] Here we are concerned with "how much space does it take for people to be comfortable"; notwithstanding, there are many cultural differences to be considered in these rough calculations. If the extent of social distance for interactions among acquaintances is an average of seven (7) feet, with a range of 4 to 12 feet, then consider developing a chart with the minimum, average and maximum distances as a guide to planning space for interactions in classrooms. For one person the calculations are:

Four feet: A = 3.14159 × 4² = 50.265 square feet.
Seven feet: A = 3.14159 × 7² = 153.938 square feet.
Twelve feet: A = 3.14159 × 12² = 452.389 square feet.

Now weigh these calculations against the reality of building a facility for social interaction. The issue is not as straightforward as I indicated in 2000. Further, let's examine the issue with other definitions for "distance" as a guideline. Personal distance in Caucasian culture ranges from 1.5 feet to 4 feet, with about 2.5 feet in the middle of that range. You can do the calculations. What we have here, to quote "Cool Hand Luke" (1967), is "failure to communicate" how much learning is accomplished within boundaries defined as intimate distance, personal distance, social distance, or public distance. What we may comfortably conclude is that no one has completed definitive research on the relationship of distance among students and the amount of learning that takes place in defined spaces. One thing is certain: crowding is a negative factor for student outcomes. I have added two references by Sommer* and Tanner and Lackney** (see Chapter 4).

References

Achilles, C. M., Finn, J. D., & Bain, H. P. (1998). Using class size to reduce the equity gap. Educational Leadership, 55(4), 40-43.
Abramson, P. (1991). "Making the Grade." Architectural Review, 29(4), 91-93.
Banghart, F. W., & Trull, A., Jr. (1973). Educational Planning. New York: The Macmillan Company.
Castaldi, B. (1994). Educational Facilities Planning (4th ed.). Boston: Allyn and Bacon.
Hawkins, H. L., & Lilly, H. E. (1998). Guide for School Facility Appraisal. Phoenix, AZ: CEFPI.
Heschong Mahone Group (2000). Retrieved from the World Wide Web [http://h-m-g.com/default.htm].
* Sommer, R. (1969). Personal Space. Englewood Cliffs, NJ: Prentice-Hall.
Tanner, C. K. (2000). Essential aspects of designing a school. Retrieved from the World Wide Web [http://sdpl.coe.uga.edu/research/principlesofdesign.html].
** Tanner, C. K., & Lackney, J. (2006). Educational Facilities Planning: Leadership, Architecture, and Management. Boston, MA: Pearson, Allyn and Bacon.
Wohlwill, J. F., & van Vliet, W. (1985). Habitats for Children: The Impacts of Density. Hillsdale, NJ: Lawrence Erlbaum Associates.
{"url":"http://sdpl.coe.uga.edu/research/territoriality.html","timestamp":"2014-04-16T04:10:23Z","content_type":null,"content_length":"22083","record_id":"<urn:uuid:2d7c150c-ad76-4e0a-919c-f7b551c944d0>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00465-ip-10-147-4-33.ec2.internal.warc.gz"}
Key transmission parameters of an institutional outbreak during the 1918 influenza pandemic estimated by mathematical modelling

Theor Biol Med Model. 2006; 3: 38.

To estimate the key transmission parameters associated with an outbreak of pandemic influenza in an institutional setting (New Zealand 1918).

Historical morbidity and mortality data were obtained from the report of the medical officer for a large military camp. A susceptible-exposed-infectious-recovered epidemiological model was solved numerically to find a range of best-fit estimates for key epidemic parameters and an incidence curve. Mortality data were subsequently modelled by performing a convolution of incidence distribution with a best-fit incidence-mortality lag distribution.

Basic reproduction number (R[0]) values for three possible scenarios ranged between 1.3 and 3.1, and corresponding average latent period and infectious period estimates ranged between 0.7 and 1.3 days, and 0.2 and 0.3 days respectively. The mean and median best-estimate incidence-mortality lag periods were 6.9 and 6.6 days respectively. This delay is consistent with secondary bacterial pneumonia being a relatively important cause of death in this predominantly young male population.

These R[0] estimates are broadly consistent with others made for the 1918 influenza pandemic and are not particularly large relative to some other infectious diseases. This finding suggests that if a novel influenza strain of similar virulence emerged then it could potentially be controlled through the prompt use of major public health measures.

The 1918 influenza pandemic reached New Zealand with an initial wave between July and October [1]. This was relatively mild with only four deaths out of 3048 reported cases for the population of military camps [1]. The second wave in late October was much more severe and spread throughout the country causing over 8000 deaths [2]. One large military camp near Featherston (a town in the south of the North Island) also suffered from exposure to the second wave of the 1918 pandemic at approximately the same time as the rest of the country. Influenza cases were reported in the camp from 28 October to 22 November 1918, and reported mortality occurred between 7 November and 11 December 1918, with both incidence and mortality peaking in November 1918 [2]. A unique feature of this military camp outbreak was the systematic collection by medical staff of morbidity data as well as mortality data. We undertook modelling of these data to understand better the transmission dynamics of the 1918 influenza pandemic in New Zealand.

The population of the Featherston Military Camp was that of a large regional town, comprising approximately 8000 military personnel of whom 3220 were hospitalised [3]. The camp policy was to hospitalise all those with diagnosed influenza and so we have used these hospitalisation data as the basis for the incidence of pandemic influenza in this population. An official report indicated a total of 177 deaths attributable to the outbreak [4]. However, this figure was actually the total number of men who died in the camp in 1918 from all causes as reported by the Principal Medical Officer at the camp [3].
Further examination of data on the cause of death and date-of-death suggests the total mortality attributable to this outbreak was 163 [5]. This revision gives a fairly conservative figure for the mortality impact and it is the one that we have used in this analysis.

Mathematical modelling approach

A susceptible-exposed-infectious-recovered (SEIR) model for infectious diseases can be applied to a hypothetical isolated population, to investigate local infection dynamics [6,7]. The SEIR model allows a systematic method by which to quantify the dynamics, and derive epidemiological parameters for disease outbreaks. In this model, individuals in a hypothetical population are categorized at any moment in time according to infection status, as one of susceptible, exposed, infectious, or removed from the epidemic process (either recovered and immune or deceased). If an infected individual is introduced into the population, rates of change of the proportion of the population in each group (s, e, i, and r, respectively) can be described by four simultaneous differential equations:

\[ \frac{ds}{dt} = -\beta s i \qquad (1) \]

\[ \frac{de}{dt} = \beta s i - \nu e \qquad (2) \]

\[ \frac{di}{dt} = \nu e - \gamma i \qquad (3) \]

\[ \frac{dr}{dt} = \gamma i \qquad (4) \]

where β, ν and γ are rate constants for transformation of individuals from susceptible to exposed, from exposed to infectious, and from infectious to recovered and immune states, respectively. Once the above equations have been solved, the parameters β and γ can be utilized to calculate the basic reproduction number (R[0]) for the particular virus strain causing the outbreak. (The basic reproduction number represents the number of secondary cases generated by a primary case in a completely susceptible population.) R[0] and the average latent period (T[E]), and average infectious period (T[I]), can be calculated using the following relationships:

\[ R_0 = \frac{\beta}{\gamma} \qquad (5) \]

\[ T_E = \frac{1}{\nu} \qquad (6) \]

\[ T_I = \frac{1}{\gamma} \qquad (7) \]

Other factors that are likely to affect the observed incidence of disease in a pandemic include the following: (i) the initial proportion of the population that is susceptible (P[is]); (ii) the proportion of infected cases who develop symptoms (P[ids]); (iii) the infectivity of asymptomatic people relative to the infectivity of symptomatic people (Inf[as]); and (iv) the proportion of symptomatic cases who present (P[sp]).
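To make the numerical solution concrete, here is a minimal Python sketch (my own; the paper itself used Mathcad) that integrates equations (1)-(4) with a fixed-step fourth-order Runge-Kutta scheme and converts the fall in the susceptible fraction into daily case counts. The rate constants are illustrative placeholders chosen to lie within the ranges reported in the Results, not the fitted values, and the run assumes a fully susceptible camp population seeded with a single infectious individual.

    import numpy as np

    # Illustrative rate constants only (not the paper's fitted values): these give
    # R0 = beta/gamma = 2.5, T_E = 1/nu = 1 day, T_I = 1/gamma = 0.25 day.
    beta, nu, gamma = 10.0, 1.0, 4.0
    N = 8000                      # camp population, as in the text
    dt = 0.01                     # fixed step in days (RK4, as in the paper)

    def deriv(y):
        s, e, i, r = y
        return np.array([-beta * s * i,
                         beta * s * i - nu * e,
                         nu * e - gamma * i,
                         gamma * i])

    def rk4_step(y, h):
        k1 = deriv(y)
        k2 = deriv(y + 0.5 * h * k1)
        k3 = deriv(y + 0.5 * h * k2)
        k4 = deriv(y + h * k3)
        return y + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

    # One infectious index case; everyone else susceptible (P_is = 1 here).
    y = np.array([1.0 - 1.0 / N, 0.0, 1.0 / N, 0.0])
    s_daily = [y[0]]
    steps_per_day = int(round(1.0 / dt))
    for step in range(26 * steps_per_day):          # 26-day window, as in the fitting
        y = rk4_step(y, dt)
        if (step + 1) % steps_per_day == 0:
            s_daily.append(y[0])

    # Daily incidence from the daily fall in the susceptible fraction; equation (10)
    # in the text additionally multiplies by the reporting factors P_sp and P_ids.
    incidence = -N * np.diff(np.array(s_daily))
    print(incidence.round(1))

Repeating such a run over a grid of β, ν and γ values and comparing each resulting 26-day incidence curve with the observed counts is essentially the sum-of-squared-error search described in the next section.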
In this study, the factors listed above were incorporated into an SEIR model to generate incidence and subsequent mortality models for the influenza pandemic that swept through this military camp. These specific models and the resulting estimates of R[0], T[E] and T[I] are described below.

SEIR model of incidence

When the SEIR model was applied in this study, assumptions about additional factors that might influence the observed incidence were made. The parameters associated with these assumptions are summarised for 3 possible scenarios (Table 1). Parameters in Scenarios 1, 2, and 3 were chosen so that models would yield estimates of R[0] at the lower, mid-range and higher ends of a likely spectrum, respectively.

Table 1. Parameters used in the SEIR incidence model*.

Equations 1 and 2 were modified to take the above parameters into account, as follows:

\[ \frac{ds}{dt} = -\beta \left( P_{ids} + (1 - P_{ids}) Inf_{as} \right) s i \qquad (8) \]

\[ \frac{de}{dt} = \beta \left( P_{ids} + (1 - P_{ids}) Inf_{as} \right) s i - \nu e \qquad (9) \]

Equations 3, 4, 8 and 9 are a system of non-linear differential equations, amenable to solution by the Runge-Kutta fourth order fixed step numerical method [8]. The population size was taken to be N = 8000. The initial value for s was P[is] - 1/N, and initial values of e, i, and r were set at 0, 1/N and 1 - P[is] respectively. The differential equation system solutions were used to calculate daily incidence, taking into account parameters in Table 1, using the following equation:

\[ \text{Incidence} = P_{sp} P_{ids} N \left( s(t-1) - s(t) \right) \qquad (10) \]

in which s(t) and s(t-1) are the proportion of susceptible individuals at t and t-1 days respectively after the introduction of a single symptomatic individual into the population. For each scenario in Table 1, modelled incidence was compared to observed incidence over 26 days, and goodness of fit of the models was evaluated using sum of squared error (SSE) between modelled and empirical data. Optimum possible β, ν and γ values to one decimal place, in the range 0.1 to 20, were determined by finding values corresponding to a minimum SSE, utilizing an algorithm written in Mathcad [9]. The asymptotic variance-covariance matrix of the least squares estimates of β, ν and γ was computed using the method described by Chowell et al. [10]. Equations 5, 6, and 7, together with elements of the variance-covariance matrix, and a Taylor series approximation for variance of quotients [11], were subsequently used to estimate best-fit values of R[0], T[E] and T[I], with associated standard deviations and confidence intervals.

Associated mortality model

As morbidity and mortality data are not linked at the individual level, case-fatality lag was modelled by using convolution. A least-squares gamma distribution was fitted to the observed incidence curve. A gamma distribution with the same scale parameter was then fitted to mortality data. Utilising these distributions and the convolution formula, a gamma distributed incidence-mortality lag distribution, with the same scale parameter, was obtained. Gamma distributions with the same scale parameter were then fitted to the best-fit deterministic models of daily incidence.
These distributions, convolved with the incidence-mortality lag distribution, yielded daily mortality distributions for each of Scenarios 1 to 3. A common scale parameter was used in the above convolutions in order to obtain closed-form (gamma) probability density functions.

Best-fit incidence curves from the SEIR model for the three scenarios are shown in Figure 1. The corresponding best-fit β, ν and γ, and corresponding R[0], T[E] and T[I] values, are shown in Table 2. The R[0] values ranged between 1.3 and 3.1, and corresponding average latent period and infectious period estimates ranged between 0.7 and 1.3 days, and 0.2 and 0.3 days, respectively.

Figure 1. Observed and best-fit modelled incidence (ill cases per day) for Scenarios 1 to 3, and best-fit gamma distribution.

Table 2. Rate constants and epidemiological parameters corresponding to the best-fit models shown in Figure 1 (associated standard deviation or 95% confidence interval is given in brackets).

The gamma distribution of incidence-mortality lag time obtained by convolution is shown in Figure 2. The mean, median, mode and variance of this distribution are 6.9, 6.6, 6.0 and 6.3 days, respectively.

Figure 2. Incidence-mortality lag time distribution.

Observed mortality data, shown in Figure 3, indicate more variability around a best-fit gamma distribution than the observed incidence data (see Figure 1). Mortality curves for each of Scenarios 1 to 3, obtained by convolution, all agree well with the best-fit gamma distribution of the observed data.

Figure 3. Observed and best-fit modelled mortality (deaths per day) for Scenarios 1 to 3.
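The convolution step in the mortality model is also straightforward to sketch. In the illustration below (mine, not the authors' code), the gamma lag distribution is reconstructed from the reported moments (mean 6.9 days, variance 6.3) via shape = mean²/variance and scale = variance/mean, discretised to a daily kernel, and convolved with a daily incidence curve. The incidence curve here is only a placeholder with an arbitrary gamma shape, and the case-fatality fraction is taken as 163 deaths among 3220 hospitalised cases; both stand in for the fitted quantities used in the paper.

    import numpy as np
    from scipy.stats import gamma as gamma_dist

    # Lag distribution recovered from the reported moments (mean 6.9 d, variance 6.3):
    # shape = mean^2 / variance, scale = variance / mean.
    lag_shape = 6.9 ** 2 / 6.3
    lag_scale = 6.3 / 6.9

    days = np.arange(0, 40)
    lag_kernel = gamma_dist.pdf(days, a=lag_shape, scale=lag_scale)
    lag_kernel /= lag_kernel.sum()            # discretise to a daily probability mass

    # Placeholder incidence curve standing in for the best-fit modelled incidence
    # (e.g. the `incidence` array from the RK4 sketch above); 3220 cases in total.
    incidence = gamma_dist.pdf(days, a=4.0, scale=2.0)
    incidence *= 3220 / incidence.sum()

    case_fatality = 163 / 3220                # deaths / hospitalised cases, from the text
    daily_deaths = case_fatality * np.convolve(incidence, lag_kernel)[:len(days)]
    print(daily_deaths.sum())                 # approximately 163, shifted later by about a week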
In his report, the Principal Medical Officer commented that this storm was likely to have exacerbated the impact of the outbreak and this is certainly plausible [3]. In addition to data limitations, the parameters used for the SEIR model also involve uncertainties; for example, we have no good data on the proportion of the young male population who were likely to be susceptible to this strain in 1918 (e.g. based on the possible residual immunity from the first wave of the pandemic or from previous influenza epidemics and pandemics). Also, the SEIR model involves a number of simplifying assumptions, including a single index case, homogeneous mixing, exponentially distributed residence times in infectious status categories, and isolation of the military camp.

Estimating R[0]

The estimates for R[0] in the range from 1.3 to 3.1 are the first such estimates for the 1918 pandemic outside Europe, the United States and Brazil, so far as we are aware. However, given the unique aspects of the military camp (crowded conditions and a young population with low immunity) it is quite likely that the R[0] values estimated in our analysis might tend to over-estimate those for the general population. Nevertheless, this effect may have been partly offset by the camp policy of immediate hospitalisation upon symptoms, effectively reducing infective contacts. Our estimated range for R[0] is broadly consistent with estimates for this pandemic in the United States (a median R[0] of 2.9 for 45 cities) [12]. Other comparable figures for the 1918 pandemic are: 1.7 to 2.0 for the first wave for British city-level mortality data [13]; 2.0, 1.6 and 1.7 for the first, second and third waves in the UK respectively [14]; 1.5 and 3.8 in the first and second waves in Geneva respectively [15]; and 2.7 for Sao Paulo in Brazil [16]. The upper end of our estimated range (R[0] = 3.1) may reflect the differences between disease transmission in the general population (as per the above cited studies) and transmission in a crowded military camp with a predominance of young males.

Considered collectively, these R[0] estimates for pandemic influenza in various countries are not particularly high when compared to the R[0] estimates for various other infectious diseases [17]. This observation provides some reassurance that if a strain of influenza with similar virulence were to emerge, then there would be scope for successful control measures. Indeed, one model, using R[0] values in the 1.1 to 2.4 range, has suggested the possibility of successful influenza pandemic control [18]. This was also the case for a model using R[0] = 1.8 [19]. Nevertheless, at the upper end of the estimated range for R[0], control measures may be more difficult, especially if public health authorities are slow to respond and they have insufficient access to antivirals and pandemic strain vaccines.

The latent and infectious periods

The average latent and infectious periods were estimated to be in the ranges of 0.7 to 1.3 days, and 0.2 to 0.3 days, respectively. The infectious period is short compared to the period of peak virus shedding known to occur in the first 1 to 3 days of illness [20]. Other modelling work has used longer estimates, e.g. a mean of 4.1 days used by Longini et al. [18]. The fast onset and subsequent decline of the outbreak in the Featherston Military Camp, as compared to a national or city-wide outbreak, might possibly be due to relatively close habitation and a high level of mixing.
The average time for infection between a primary and secondary case (the serial interval) is greatly shortened in this case. This could explain a short apparent infectious period, and a relatively large proportion of the serial interval in the latent state. Another possible explanation of the relatively short apparent infectious period for this outbreak is that it may reflect the limited transmission that occurred once symptomatic individuals were hospitalised on diagnosis – which was the policy taken in this military camp for all cases.

The lag period from diagnosed illness to death

This analysis was able to estimate an approximate seven-day delay from reported symptomatic illness to the date of death at a population level. This result suggests that even in this relatively young population (largely of military recruits), an important cause of death was likely to have been secondary bacterial pneumonia – as opposed to primary influenza viral pneumonia or acute respiratory distress syndrome (for which death may have tended to occur more promptly). This finding is consistent with other evidence that a large proportion of deaths from the 1918 pandemic was attributable to bacterial respiratory infections [21]. This picture is also somewhat reassuring as it suggests that much of this mortality could be prevented (with antibiotics) if a novel strain with similar virulence emerged in the future.

The R[0] estimates in the 1.3 to 3.1 range are broadly consistent with others made for the 1918 influenza pandemic and are not particularly large relative to some other infectious diseases. This finding suggests that if a novel influenza strain of similar virulence emerged then it could potentially be controlled through the prompt use of major public health measures. These results also suggest that effective treatment of pneumonia could result in better outcomes (lower mortality) than was experienced in 1918.

Competing interests

The author(s) declare that they have no competing interests.

Authors' contributions

Three of the authors were involved in initial work in identifying the data and analysing it from a historical and epidemiological perspective (PN, NW and MB). The other two authors worked on developing and running the mathematical model (GS, MR). GS did most of the drafting of the first version of the manuscript with assistance from NW. All authors then contributed to further re-drafting of the manuscript and have given approval of the final version to be published.

We thank the following medical students for their work in gathering information on the outbreak in the Featherston military camp: Abdul Al Haidari, Abdullah Al Hazmi, Hassan Al Marzouq, Melinda Parnell, Diana Rangihuna, Yasotha Selvarajah. We also thank the journal's reviewers for their helpful comments. Some of the work on this article by two of the authors (NW & MB) was part of preparation for a Centers for Disease Control and Prevention (USA) grant (1 U01 CI000445-01).

• Maclean FS. Challenge for Health: A history of public health in New Zealand. Wellington, Government Printer; 1964.
• Rice GW. Black November: The 1918 influenza pandemic in New Zealand. Christchurch, Canterbury University Press; 2005.
• Carbery AD. The New Zealand Medical Service in the Great War 1914-1918. Whitcombe & Tombs Ltd; 1924. pp. 506-509.
• Henderson RSF. New Zealand Expeditionary Force: Health of the Troops in New Zealand for the year 1918. Journal of the House of Representatives, 1919. Wellington, Marcus F. Mark; 1919.
• Al Haidari A, Al Hazmi A, Al Marzouq H, Armstrong M, Colman A, Fancourt N, McSweeny K, Naidoo M, Nelson P, Parnell M, Rangihuna-Winekerei D, Selvarajah Y, Stantiall S. Death by numbers: New Zealand mortality rates in the 1918 influenza pandemic. Wellington, Wellington School of Medicine and Health Sciences; 2006.
• Anderson RM, May RM. Infectious diseases of humans: dynamics and control. Oxford, Oxford University Press; 1991.
• Diekmann O, Heesterbeek JAP. Mathematical epidemiology of infectious diseases: model building, analysis and interpretation. Chichester, Wiley; 2000.
• Zill DG. Differential Equations with Boundary-Value Problems. Boston, PWS-Kent Publishing Company; 1989.
• Mathsoft. Mathcad version 13. Cambridge, MA, Mathsoft Engineering & Education, Inc; 2005.
• Chowell G, Shim E, Brauer F, Diaz-Duenas P, Hyman JM, Castillo-Chavez C. Modelling the transmission dynamics of acute haemorrhagic conjunctivitis: application to the 2003 outbreak in Mexico. Stat Med. 2006;25:1840–1857. doi: 10.1002/sim.2352.
• Mood AM, Graybill FA, Boes DC. Introduction to the Theory of Statistics (3rd Ed). Chichester, McGraw-Hill; 1982.
• Mills CE, Robins JM, Lipsitch M. Transmissibility of 1918 pandemic influenza. Nature. 2004;432:904–906. doi: 10.1038/nature03063.
• Ferguson NM, Cummings DA, Fraser C, Cajka JC, Cooley PC, Burke DS. Strategies for mitigating an influenza pandemic. Nature. 2006;442:448–452. doi: 10.1038/nature04795.
• Gani R, Hughes H, Fleming D, Griffin T, Medlock J, Leach S. Potential Impact of Antiviral Drug Use during Influenza Pandemic. Emerg Infect Dis. 2005;11:1355–1362.
• Chowell G, Ammon CE, Hengartner NW, Hyman JM. Transmission dynamics of the great influenza pandemic of 1918 in Geneva, Switzerland: Assessing the effects of hypothetical interventions. J Theor Biol. 2006;241:193–204. doi: 10.1016/j.jtbi.2005.11.026.
• Massad E, Burattini MN, Coutinho FA, Lopez LF. The 1918 influenza A epidemic in the city of Sao Paulo, Brazil. Med Hypotheses. 2006; Sep 28 [Epub ahead of print].
• Fraser C, Riley S, Anderson RM, Ferguson NM. Factors that make an infectious disease outbreak controllable. Proc Natl Acad Sci U S A. 2004;101:6146–6151. doi: 10.1073/pnas.0307506101.
• Longini IM Jr, Nizam A, Xu S, Ungchusak K, Hanshaoworakul W, Cummings DA, Halloran ME. Containing pandemic influenza at the source. Science. 2005;309:1083–1087. doi: 10.1126/science.1115717.
• Ferguson NM, Cummings DA, Cauchemez S, Fraser C, Riley S, Meeyai A, Iamsirithaworn S, Burke DS. Strategies for containing an emerging influenza pandemic in Southeast Asia. Nature. 2005;437:209–214. doi: 10.1038/nature04017.
• WHO Writing Group. Non-pharmaceutical interventions for pandemic influenza, international measures. Emerg Infect Dis. 2006;12:81–87.
• Brundage JF. Interactions between influenza and bacterial respiratory pathogens: implications for pandemic preparedness. Lancet Infect Dis. 2006;6:303–312. doi: 10.1016/S1473-3099(06)70466-2.
{"url":"http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1693548/","timestamp":"2014-04-17T02:25:41Z","content_type":null,"content_length":"91654","record_id":"<urn:uuid:f3b88a3b-b1ac-4f0f-adca-4ba4a645a186>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00413-ip-10-147-4-33.ec2.internal.warc.gz"}
Majorana Ghosts: From topological superconductor to the origin of neutrino mass, three generations and their mass mixing

The existence of three generations of neutrinos and their mass mixing is a deep mystery of our universe. On the other hand, Majorana's elegant work on the real solution of the Dirac equation predicted the existence of Majorana particles in nature; unfortunately, these Majorana particles have never been observed. In this talk, I will begin with a simple 1D condensed matter model which realizes a T^2=-1 time reversal symmetry protected superconductor and then discuss the physical properties of its boundary Majorana zero modes. It is shown that these Majorana zero modes realize T^4=-1 time reversal doublets and carry 1/4 spin. Such a simple observation motivates us to revisit the CPT symmetry of those ghost particles--neutrinos--by assuming that they are Majorana zero modes. Interestingly, we find that a topological Majorana particle will realize a P^4=-1 parity symmetry as well. It even realizes a nontrivial C^4=-1 charge conjugation symmetry, which is a big surprise from the usual perspective that the charge conjugation symmetry for a Majorana particle is trivial. Indeed, such a C^4=-1 charge conjugation symmetry is a Z_2 gauge symmetry, and its spontaneous breaking leads to the origin of neutrino mass. We further attribute the origin of three generations of neutrinos to three distinguishable types of topological Majorana zero modes protected by CPT symmetry. Such an assumption leads to an S3 symmetry in the generation space and uniquely determines the mass mixing matrix with no adjustable parameters! In the absence of CP violation, we derive \theta_12 = 32 degrees, \theta_23 = 45 degrees and \theta_13 = 0 degrees, which is remarkably close to the current experimental results. We further predict an exact mass ratio of the three mass eigenstates, with m_1/m_3 ~ m_2/m_3 = 3/\sqrt{5}.
{"url":"https://perimeterinstitute.ca/videos/majorana-ghosts-topological-superconductor-origin-neutrino-mass-three-generations-and-their","timestamp":"2014-04-16T04:49:24Z","content_type":null,"content_length":"30524","record_id":"<urn:uuid:0b1c2056-2335-465c-98a6-0ff4b08133c2>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00442-ip-10-147-4-33.ec2.internal.warc.gz"}
Rebuilding the Tower of Hanoi

That second part of the aforementioned algorithm modification seems a bit overly elaborate. It is simple enough to write up an inorder traversal function, but it would be so much easier if we wrote out the moves in such a way that the final printing is done by running through the array in index order. Using that idea, the root of the Tower of Hanoi tree goes into the middle array element. The roots of child subtrees will be placed into the middle position of the elements to the left and right of the (parent) root. After this it becomes a bit trickier to figure out where the next level of child nodes must go. For example, the right child of the left child of the root needs to be assigned to the middle position of the subarray between the root and the root's left child. I was hoping there would be a simple formula to use in computing the index of the child nodes that could be done independently of the computation of index values for all other nodes in the tree. No such luck. However, when I sketched out a 2- and 3-level perfect binary tree and labeled each node with the index value that it must occupy, it was obvious how to compute the indices of child nodes given the index of the parent node and the level of the node in the tree. (See if you can figure it out for yourself before you look at the code.)

As with the variation using the heap structure on the array described above, we can add a parameter to the tower() function that holds the array index of the current node and change the level parameter to an offset value that can also determine when the recursion is no longer needed.

void tower(char src, char dest, char temp, int idx, int offset)
{
  if (offset > 0) {
    cilk_spawn tower(src, temp, dest, idx-offset, offset/2);  // left subtree
    plan[idx][0] = src; plan[idx][1] = dest;                  // this node's move
    cilk_spawn tower(temp, dest, src, idx+offset, offset/2);  // right subtree
  }
  else {
    plan[idx][0] = src; plan[idx][1] = dest;                  // leaf node's move
  }
}

The initial value for idx is set to 2**(numDisks – 1) and the starting value for offset is half of that. Obviously, plan is a shared 2-dimensional array that is used to hold the peg notations for each move that corresponds to the node of the Tower of Hanoi tree. Once the computation is complete, because we populated the plan array with an inorder traversal indexing scheme, we simply print out the moves starting from plan[1][*]. To me, this is a more intuitive way to save the moves of the Tower of Hanoi solution than to do the inorder traversal of the saved results.

Is the parallel version faster than the serial version? Is there some number of disks where the parallel version takes less time than the serial execution? I don't know. I didn't run timings on the two versions, but then relative performance wasn't the point of this exercise. The point was to show that there are some computations whose results may appear to require exclusively serial execution. However, sometimes those seemingly serial computations may be coded with parallel algorithms if you can figure out how to present the final results in the proper order. You may not need something as complex as a binary tree stuffed into an array. More likely, you will encounter situations where several linearly ordered tasks can be executed in parallel with the requirement that the results be "output" in the original linear order. A queue (with random access allowed for threads adding elements) can be used with a dedicated thread pulling out completed results in the required order.
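The same indexing arithmetic is easy to check in isolation. Below is a plain sequential Python sketch of it (my own illustration; the helper name build_plan is mine, and the parallel Cilk version above remains the author's). Reading the filled array in index order reproduces the familiar move sequence, which is the whole point of the scheme.

    # Sequential rendering of the inorder-indexing scheme: reading
    # plan[1 .. 2**n - 1] in index order gives the moves in the correct order.
    def build_plan(num_disks):
        size = 2 ** num_disks
        plan = [None] * size

        def tower(src, dest, temp, idx, offset):
            if offset > 0:
                tower(src, temp, dest, idx - offset, offset // 2)   # left subtree
                plan[idx] = (src, dest)                             # this node's move
                tower(temp, dest, src, idx + offset, offset // 2)   # right subtree
            else:
                plan[idx] = (src, dest)                             # leaf move

        tower('A', 'C', 'B', size // 2, size // 4)   # idx = 2**(n-1), offset = half of that
        return plan

    for src, dest in build_plan(3)[1:]:
        print(f"move disk from {src} to {dest}")

For three disks this prints the usual seven moves (A to C, A to B, C to B, A to C, B to A, B to C, A to C) straight out of index order, with no traversal of the saved results needed.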
I guess what I've been trying to say in all of this is to give some thought to how ordered output could be organized by multiple tasks before you dismiss a computation as being strictly sequential.
{"url":"http://www.drdobbs.com/parallel/rebuilding-the-tower-of-hanoi/232500363?pgno=3","timestamp":"2014-04-17T13:11:55Z","content_type":null,"content_length":"94365","record_id":"<urn:uuid:200ae0b5-dd59-437a-b1b8-71d12b6a35c2>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00239-ip-10-147-4-33.ec2.internal.warc.gz"}
Jinc function

This afternoon I ran across the jinc function for the first time. The sinc function is defined by sinc(t) = sin(t) / t. The jinc function is defined analogously by jinc(t) = J1(t) / t, where J1 is a Bessel function. Bessel functions are analogous to sines, so the jinc function is analogous to the sinc function. Here's what the sinc and jinc functions look like.

The jinc function is not as common as the sinc function. For example, both Mathematica and SciPy have built-in functions for sinc but not for jinc. [There are actually two definitions of sinc. Mathematica uses the definition above, but SciPy uses sin(πt)/πt. The SciPy convention is more common in digital signal processing.] As I write this, Wikipedia has an entry for sinc but not for jinc. Someone want to write one?

For small t, jinc(t) is approximately cos(t/2) / 2. This approximation has error O(t^4), so it's very good for small t, useless for large t. For large values of t, jinc(t) is like a damped, shifted cosine. Specifically, jinc(t) behaves like sqrt(2/(π t^3)) cos(t − 3π/4), with an error that decreases like O(|t|^-2).

Like the sinc function, the jinc function has a simple Fourier transform. Both transforms are zero outside the interval [-1, 1]. Inside this interval, the transform of sinc is a constant, √(π/8). On the same interval, the transform of jinc is √(2/π) √(1 − ω^2).

Update: How to compute jinc(x)

Related posts: How to visualize Bessel functions; Diagram of Bessel function relationships

Comments:

This brings to mind that the Fourier transform of a sinc function takes the value one for all frequencies below a cutoff, and zero for higher frequencies. Hence, sinc is used to construct an ideal low-pass filter (well, I think it's mostly used in conjunction with a window function, like the Blackman window, and therefore somewhat less than ideal). It may be obvious to others, coming from different backgrounds, but this post has made me curious about the jinc function as it relates to signal processing. I wonder if someone could whip up a Bode plot.

The jinc function is mainly used in 2D Fourier transforms of a circular aperture, like the sinc function for a square aperture. First time I see this function and I'm already in love with it.

Hi, thanks for this post (and the rest of the blog, and your math-related twitter accounts; that's a huge amount of work!). I found it interesting. In my mathematical methods for physics course, our professor told us that sine and cosine are functions defined by their properties, such as being solutions of a certain differential equation (the harmonic oscillator, for instance). The same applies to Bessel functions, since they are solutions to another differential equation (Bessel's equation, of course). He told us that the two differential equations are similar in a certain sense, and that this fact explains, in a certain sense, the similarity between Bessel functions and the circular trigonometric functions.
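Since the post notes that neither Mathematica nor SciPy ships a jinc, here is a small sketch in C of how one might compute it. It is only a sketch under stated assumptions: it leans on the Bessel routine j1() that POSIX systems expose in <math.h> (link with -lm), which is not guaranteed by the C standard itself, and it falls back to the leading terms of the Taylor series near the origin, where J1(t)/t is a 0/0 form.

    #include <math.h>
    #include <stdio.h>

    /* jinc(t) = J1(t)/t, with the limiting value 1/2 at t = 0. */
    static double jinc(double t)
    {
        if (fabs(t) < 1e-8)
            return 0.5 - t * t / 16.0;   /* leading terms of the series */
        return j1(t) / t;
    }

    int main(void)
    {
        for (double t = 0.0; t <= 10.0; t += 2.5)
            printf("jinc(%4.1f) = % .6f    cos(t/2)/2 = % .6f\n",
                   t, jinc(t), cos(t / 2.0) / 2.0);
        return 0;
    }

Printing cos(t/2)/2 alongside makes the point in the post concrete: the two columns agree closely for small t and drift apart once t reaches a few units.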
{"url":"http://www.johndcook.com/blog/2012/02/01/jinc-function/","timestamp":"2014-04-16T13:04:52Z","content_type":null,"content_length":"33957","record_id":"<urn:uuid:be7d3a01-bfb1-4a24-a6ea-280a5b10aed9>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00141-ip-10-147-4-33.ec2.internal.warc.gz"}
Please help me with this differential calculus question? (December 8th 2012)

Question: If the total cost (in £s) function is given by TC = 2Q^2 + 158Q - 12000, where Q is the quantity produced, (a) what Q would minimise total costs? Give your answer to 2 decimal places. (b) Use your value in (a) to find the minimum value for total costs, to the nearest £. Please show me how to get the answer. I would be extremely grateful. Thank you.

Reply: Set the derivative to zero to find critical points: $\frac{d(TC)}{dQ} = 4Q + 158 = 0$. This only makes sense if we can produce a negative amount. If we restrict Q to non-negative numbers, then Q = 0 minimises cost, which turns out to be negative £12,000 (which also doesn't make much sense).

Original poster: Thanks very much for the reply. It's not 4Q though, it's 2Q^2; I'm confused about what the answer is. Will you please help me?

Original poster: Is the answer 0 for part (a)? Will you please help me with part (b) as well?

Reply: Please do not post the same problem twice in different forums. Also, this is not a problem in differential equations.
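For the record, here is the calculus in the thread worked out in one place (my own restatement, assuming as the reply does that only Q ≥ 0 is meaningful):

    TC(Q) = 2Q^{2} + 158Q - 12000, \qquad
    \frac{d(TC)}{dQ} = 4Q + 158 = 0 \;\Longrightarrow\; Q = -39.5

The only critical point lies outside the feasible region, and d(TC)/dQ = 4Q + 158 > 0 for every Q ≥ 0, so TC is increasing on the whole feasible region. The constrained minimum is therefore at the boundary: Q = 0.00 for part (a), and TC(0) = -£12,000 for part (b). The negative "minimum cost" is a hint that the -12000 constant term is odd for a cost function, which is presumably why the thread found the answer unsatisfying.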
{"url":"http://mathhelpforum.com/differential-equations/209361-please-help-me-differential-calculus-question.html","timestamp":"2014-04-17T09:04:18Z","content_type":null,"content_length":"51005","record_id":"<urn:uuid:137d8a22-0300-4ca4-8855-7b7d98674990>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00259-ip-10-147-4-33.ec2.internal.warc.gz"}
Hexadecimal and decimal convert code

Qing Tian: Please, can someone teach me how to write Java code that converts hexadecimal to decimal and decimal to hexadecimal, without using the following built-in tools:

    int i = 29;
    String octal = Integer.toOctalString(i);
    String hex = Integer.toHexString(i);
    String binary = Integer.toBinaryString(i);

Reply: How would you convert the decimal 132 to hex by hand, on paper? How would you convert a7 to decimal without a PC?

Qing Tian: On paper, what I know is: to convert 132 to hex, 132/16 = 8 remainder 4, so the hex will be 84. To convert a7 to decimal, 10(a) * 16 + 7 * 1 = 167. I was confused by a number like 24032; by hand the steps will be: 24032/16 = 1502 r 0, 1502/16 = 93 r 14 (E), 93/16 = 5 r 13 (D), 5/16 = 0 r 5, so the hex will be 5DE0. In the Java code, how does the loop divide by 16 until the decimal value is 0, and how can the output be displayed in order, like 5DE0? (I started learning Java 3 weeks ago, so am I right in my calculation? Thanks for your help.) [August 18, 2004: Message edited by: Qing Tian]

Reply: Hi there! I was looking into the same problem myself not so long ago! Hard to find the solution, isn't it? Behind .toHexString and friends, all that is happening is a conversion between an int and its digit string in a given base: convert the int to hex, convert the hex back to an int, and similarly for a binary string. Hope this helps. By the way, I find it easier (less cumbersome) to convert to binary before converting to hex; when you get the binary number, you're then just converting every four bits to the hex equivalent. "We've heard that a million monkeys at a million keyboards would eventually reproduce the entire works of Shakespeare. Now, thanks to the Internet, we know this is not true." - Robert Wilensky

Reply: Qing, that's the right method to convert from hex to dec.

Qing Tian: Thank you all for the help, that will help me a lot!

Michael: Here's a 'manual' way; for args[] enter, e.g., 24032 10 16 or 5DE0 16 10.

Qing Tian: Thank you, Michael; that gives me more ideas about this code.
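The question actually being asked, how the repeated division by 16 becomes a loop and how the digits end up printed in the right order, can be answered in a few lines. The code posted in the thread is not reproduced above, so here is my own sketch in C rather than Java (the same logic transfers to Java almost line for line); the names to_base and from_base are mine.

    #include <stdio.h>

    /* Convert a non-negative value to its representation in the given base (2..16).
       Remainders come out least-significant first, so the buffer is built backwards. */
    static void to_base(unsigned int value, unsigned int base, char *out)
    {
        const char digits[] = "0123456789ABCDEF";
        char tmp[64];
        int n = 0;

        do {
            tmp[n++] = digits[value % base];   /* remainder = next digit */
            value /= base;                     /* keep dividing until 0  */
        } while (value > 0);

        for (int i = 0; i < n; i++)            /* reverse into the output */
            out[i] = tmp[n - 1 - i];
        out[n] = '\0';
    }

    /* Convert a digit string in the given base back to a decimal value. */
    static unsigned int from_base(const char *s, unsigned int base)
    {
        unsigned int value = 0;
        for (; *s; s++) {
            unsigned int d = (*s >= '0' && *s <= '9') ? (unsigned int)(*s - '0')
                           : (unsigned int)((*s | 32) - 'a' + 10);
            value = value * base + d;          /* Horner's rule */
        }
        return value;
    }

    int main(void)
    {
        char buf[64];
        to_base(24032, 16, buf);
        printf("24032 in hex  = %s\n", buf);                  /* 5DE0 */
        printf("a7 in decimal = %u\n", from_base("a7", 16));  /* 167  */
        return 0;
    }

The do/while loop is exactly the hand calculation in the thread (24032/16 = 1502 r 0, and so on); reversing the collected remainders is what makes the output read 5DE0 rather than 0ED5.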
{"url":"http://www.coderanch.com/t/396987/java/java/Hexadecimal-decimal-convert-code","timestamp":"2014-04-16T14:02:16Z","content_type":null,"content_length":"36930","record_id":"<urn:uuid:394ca974-ae7b-48de-afc4-e37355df54d2>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00132-ip-10-147-4-33.ec2.internal.warc.gz"}
Wolfram Demonstrations Project

Solitons from the Korteweg-de Vries Equation

John Scott Russell, a Scottish naval engineer, reported in 1834 on his observation of a remarkable solitary water wave moving a considerable distance down a narrow channel. Korteweg and de Vries (1895) developed a theory to describe weakly nonlinear wave propagation in shallow water. The standard form of the Korteweg-de Vries (KdV) equation is usually written as a third-order nonlinear partial differential equation for u(x, t) (in some references with a factor of 6 multiplying the nonlinear term). Kruskal and Zabusky (1965) discovered that the KdV equation admits analytic solutions representing what they called "solitons": propagating pulses or solitary waves that maintain their shape and can pass through one another. These are evidently waves that behave like particles! Several detailed analyses suggest that the coherence of solitons can be attributed to a compensation of nonlinear and dispersive effects.

A 1-soliton solution to the KdV equation can be written in closed form as a sech-squared pulse. This represents a wavepacket whose amplitude and wave velocity both depend on a single parameter, one of the constants of integration. In this Demonstration, the function is plotted as a function of position for parameter values that you can choose and vary. A simulation of the corresponding wave is also shown as a three-dimensional plot.

Multiple-soliton solutions of the KdV equation have also been discovered. These become increasingly complicated, and here only a 2-soliton generalization is considered. Detailed analysis shows that in the 2-soliton collision shown, the individual solitons actually exchange amplitudes, rather than passing through one another.

Solitons provide a fertile source of inspiration in several areas of fundamental physics, including elementary-particle theory, string theory, quantum optics and Bose-Einstein condensation. For nice animations of soliton propagation and collisions, use autorun with the time variable in slow motion.

Snapshot 1. The shape and speed of a soliton depend on the parameter.
Snapshot 2. Two solitons moving to the right approach one another.
Snapshot 3. And the larger one apparently overtakes and passes the smaller one. Actually, they are exchanging shapes and speeds.
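The formulas that the Demonstration page refers to did not survive as text, but the standard expressions are well known. In the common normalization u_t + 6 u u_x + u_xxx = 0 (the Demonstration itself may scale the nonlinear term differently), the 1-soliton solution is

    u(x,t) \;=\; \frac{c}{2}\,\operatorname{sech}^{2}\!\left[\frac{\sqrt{c}}{2}\,(x - c\,t - x_{0})\right], \qquad c > 0,

so the amplitude is c/2, the propagation speed is c, and x_0 is the constant of integration fixing the initial position. Taller solitons are narrower and travel faster, which is exactly what makes the "overtaking" 2-soliton collision described in the snapshots possible.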
{"url":"http://www.demonstrations.wolfram.com/SolitonsFromTheKortewegDeVriesEquation/","timestamp":"2014-04-20T21:01:52Z","content_type":null,"content_length":"45680","record_id":"<urn:uuid:b24a80cf-d5ea-4de5-8a59-2bb8b4bab5de>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00247-ip-10-147-4-33.ec2.internal.warc.gz"}
Black holes

Black holes are a general relativistic result. 'Hyperspace' is a result from Heim theory, which has little to no experimental validation in the area of gravity. Hence, asking what a black hole can do with hyperspace isn't a valid question; they are different theories.

What would a supermassive black hole collapse into? It's already a zero-sized point mass; it can't get any smaller!

A black hole will last as long as it absorbs the same amount of matter/energy as it emits, or more. Eventually a black hole, no matter its size, will become warmer than the universe (since the universe is slowly cooling) and so will begin radiating energy faster than it absorbs it. This is a runaway effect (since black holes get hotter as they shrink) and the black hole will evaporate in a flash of high-energy radiation, at least according to Hawking, Penrose and most relativity people. The time required for this to happen, though, is billions and billions of times longer than the universe has been around so far!
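To attach a number to that last claim, the usual Hawking-evaporation estimate (a rough figure that ignores whatever the hole keeps absorbing, including the cosmic microwave background) is

    t_{\mathrm{evap}} \;\approx\; \frac{5120\,\pi\,G^{2}M^{3}}{\hbar\,c^{4}}
    \;\approx\; 2\times 10^{67}\left(\frac{M}{M_{\odot}}\right)^{3}\ \text{years},

so even a one-solar-mass black hole outlives the current age of the universe (about 1.4 x 10^10 years) by more than fifty orders of magnitude, and a supermassive hole by far more still.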
{"url":"http://lofi.forum.physorg.com/Black-holes_10831.html","timestamp":"2014-04-20T08:22:16Z","content_type":null,"content_length":"153949","record_id":"<urn:uuid:34777cec-a4e3-4459-95dd-1f7737b8e021>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00026-ip-10-147-4-33.ec2.internal.warc.gz"}
New Dayton Audio Ultimax 15" Subwoofer at PE post #1 of 61 1/31/13 at 10:35am Thread Starter Does this driver show any promise? •Dual 2 ohm, 2-layer copper voice coils •Large vented pole piece and under-spider venting •Black anodized aluminum former and black pole improves heat dissipation •Thick, one-piece Nomex honeycomb covered cone with woven heavy-duty glass-fiber •Tall-boy rubber surround for extra-long linear excursion without reducing cone surface area •Dual linear stiffness spiders limit distortion and rocking modes •Copper shorting rings and cap reduce distortion due to inductance variations Cabinet recommendations: • Sealed 3.1 cubic ft. (net internal) with 1 lbs. of Acousta-Stuf polyfill, f3 of 35 Hz with a 0.707 Qtc alignment • Vented 6.0 cubic ft. (net internal, not including driver or port volume) with 6 lbs. of Acousta-Stuf tuned to 18 Hz with two 4" diameter by 26" long flared ports for an f3 of 22 Hz. Larger cabinets and lower tuning frequencies are possible. The largest recommended cabinet is 10.5 cubic feet tuned to 16 Hz using two 4" diameter ports that are 21" long for an f3 of 19 Hz. Note: All parameters derived with voice coils wired in series. Power Handling (RMS) 800 Watts Power Handling (max) 1,600 Watts Impedance 2+2 ohms Frequency Response 15 to 1,000 Hz Sensitivity 89.5 dB 2.83V/1m Voice Coil Diameter 2-1/2" Resonant Frequency (Fs) 19.5 Hz DC Resistance (Re) 3.4 ohms Voice Coil Inductance (Le) 1.31 mH Mechanical Q (Qms) 2.40 Electromagnetic Q (Qes) 0.59 Total Q (Qts) 0.47 Compliance Equivalent Volume (Vas) 7.92 ft.³ Maximum Linear Excursion (Xmax) 19 mm Cone Material Nomex Surround Material Rubber Overall Outside Diameter 15.28" Baffle Cutout Diameter 13.78" Depth 7.55" Dayton Audio UM15-22 15" Ultimax DVC Subwoofer 2 ohm Per Coi Brand Dayton Audio Model UM15-22 Part Number 295-514 UPC 844632099267 Product Category Subwoofers Product Rating Be the first to write a review Unit of Measure ea Weight 28.0000 Aaallllright! Now we're cooking with gas. Now then..... the 18". Dayton? The 18"? Wondering how this will model out in the f20, lilwrecker, housewrecker. I sent lilmike an email inquiring about it. I was wondering how long before this sucker showed up! I'm posting this as 'news' because it is! I guess I'll need some quality time with WinISD tonight. Getting closer to an 18" Works great in the F-20. Sadly, not nearly enough motor there for the big tapped horns. good to know, thanks! Looks like a a good option over the RS15HF, should be capable of about 1.5db greater output across the board in an equal sized sealed box. Needs a massive box to go ported. I'd like to see Ricci test this one out, Im sure that will happen sooner rather than later, but 19mm xmax on that seems conservative to me. That cut-out diameter in the spec doesn't quite compute. I wonder if it's a typo? Makes sense to me. Outside diameter is 1.5" bigger than the cutout. 0.75" on each side. This models very well. I might opt for this over the HF series 15" even. Yep, getting close. I wish these had a more compliant suspension/more motor. Considering this woofer needs an 18 sized box and an 18 would have more displacement. Edited by nograveconcern - 2/1/13 at 5:16am Can some of you folks who are wizards at using winISD model this compared to the Dayton 15" HO and 15" HF, as well as the 15" DVC385? These are all good 15" drivers, but how do they stack up against the really good Dayton HO18. 
In order at 20hz: B = SI in 3 cu ft G = Ult in 6.89 R = Ho in 3.5 Y = 385 in 3.7 In order at 20hz: G = Ult in 6.89 Y = 385 in 3.7 B = SI in 3 cu ft R = Ho in 3.5 To be fair, all in the same 3.5 cu ft box: In order at 20hz: B = SI G = Ult R = Ho Y = 385 Edited by nograveconcern - 2/1/13 at 6:50am The suspension is pretty good where it's at, imo. Any more and it would just want an even larger enclosure. It's optimal size looks to be 3.5-4cuft which isn't all that bad. C'mon. We're all just spoiled with 18's that can fit in small boxes. There are other good choices if you want a 15" in a smaller box. Originally Posted by Scott Simonian The suspension is pretty good where it's at, imo. Any more and it would just want an even larger enclosure. It's optimal size looks to be 3.5-4cuft which isn't all that bad. C'mon. We're all just spoiled with 18's that can fit in small boxes. There are other good choices if you want a 15" in a smaller box. huh? .707 alignment is 6.8 cu ft. It models poorly in 3.5 cu ft. Edit: I'm still playing with it in winisd. I would probably go for 4.5 cu ft with this. A little large but manageable. Edited by nograveconcern - 2/1/13 at 10:10am This looks awesome. If they had an 18" I'd... Yeah. In 3.5cuft I get a .813 Qtc which is pretty good for HT. Smaller and better excursion control. Shooting for .707 Qtc is just using up more space and less excursion control down low. thats not how i would model the subs i would enter the max wattage at xmax rather then using the max spl which i have found to be useless. and completely ignore pe since most scenes aren't rms but burst with subwoofers. since 2k watts @ 10ms isnt going to harm any of those subs Or it could be getting all the available low end without needing so much power you risk cooking the coil. I think we just have different goals. its rated for 800 watts rms Originally Posted by cookieattk thats not how i would model the subs i would enter the max wattage at xmax rather then using the max spl which i have found to be useless. and completely ignore pe since most scenes aren't rms but burst with subwoofers. since 2k watts @ 10ms isnt going to harm any of those subs Max SPL is the only meaningful graph for comparing the output of 2 subs and very helpful for designing ported enclosures. It's the most useful chart in WinISD. If you want to find the ideal enclosure then you start looking at minimum power on the max power chart and adjust enclosure size for the power you have available. The SPL chart is the least helpful. I'm aware of that. I'm also aware that program material is not a single frequency sine wave. Well... I get that but my rec for 3.5cuft needs only 600w (of a rated 800w) to fully utilize Xmax. This would be more effective for those using multiples... not just.... one sub. Edited by Scott Simonian - 2/1/13 at 11:06am Yeah it's 600w at 10hz, but like I said, program material is more than one frequency. You will have harmonic content at several frequencies in the octaves above 10hz that will surely have greater magnitude than 10hz. So, by the time you use up all that travel you are hitting it with 2kw. But, like you said you get more excursion control and that's cool if that's your design goal. I'm leaning toward efficiency (and judging it against another 15 that is more efficient in smaller boxes). If you want to use 16 of them then it's a moot point. Edited by nograveconcern - 2/1/13 at 11:13am Fair enough. Both perspectives are valid in their own right. 
Originally Posted by Scott Simonian The suspension is pretty good where it's at, imo. Any more and it would just want an even larger enclosure. It's optimal size looks to be 3.5-4cuft which isn't all that bad. C'mon. We're all just spoiled with 18's that can fit in small boxes. There are other good choices if you want a 15" in a smaller box. just load two per box opposing and you have ur high output 18 ;0) Erm, 21" actually. Nice to see you around, KG. The spider looks very similar to the Titanic MK3. Power rating is the same. I've tried to model it against the LMS-R 15 and Titanic MK3. Not sure why but can't get cone excursion to work but in SPL it appears to be well above both the LMS and MK3. I model with 2 drivers (series) in a 10cu ft (290 litres) with 750Watts - 98db @ 10Hz Qtc = 0.760 Dual MK3 Titanic in the same box and power - 87db @ 10hz Qtc = 0.704 Considering it sells for about $30 less than the Titanic its clearly gunning to take sales from MK3 and HF/HO markets. ** Reason for box model is already got amp and start glueing the box yesterday for my planned 15 dual opposed. Need this driver in wild getting tested to see what it can do. Strange that there is no mention of it on the Dayton website as yet. post #2 of 61 1/31/13 at 10:42am • 12,544 Posts. Joined 10/2001 • Location: Clovis, "Cullyfornia" • Thumbs Up: 450 post #3 of 61 1/31/13 at 11:00am post #4 of 61 1/31/13 at 11:15am • 3,786 Posts. Joined 12/2005 • Location: Philadelphia, PA • Thumbs Up: 664 post #5 of 61 1/31/13 at 11:21am • 2,403 Posts. Joined 11/2006 • Location: NY • Thumbs Up: 34 post #6 of 61 1/31/13 at 11:29am • 2,157 Posts. Joined 12/2009 • Location: Pacific Northwest • Thumbs Up: 67 post #7 of 61 1/31/13 at 11:52am post #8 of 61 1/31/13 at 7:48pm • 3,958 Posts. Joined 8/2002 • Location: AZ • Thumbs Up: 101 post #9 of 61 1/31/13 at 7:55pm • 6,388 Posts. Joined 12/2010 • Location: Western NC • Thumbs Up: 364 post #10 of 61 1/31/13 at 10:36pm • 173 Posts. Joined 12/2005 • Location: Tacoma, WA USA • Thumbs Up: 15 post #11 of 61 1/31/13 at 11:49pm • 407 Posts. Joined 4/2007 • Thumbs Up: 11 post #12 of 61 2/1/13 at 12:05am • 12,544 Posts. Joined 10/2001 • Location: Clovis, "Cullyfornia" • Thumbs Up: 450 post #13 of 61 2/1/13 at 5:10am post #14 of 61 2/1/13 at 5:59am • 2,865 Posts. Joined 6/2011 • Thumbs Up: 84 post #15 of 61 2/1/13 at 6:33am post #16 of 61 2/1/13 at 9:49am • 12,544 Posts. Joined 10/2001 • Location: Clovis, "Cullyfornia" • Thumbs Up: 450 post #17 of 61 2/1/13 at 9:58am post #18 of 61 2/1/13 at 10:03am • 747 Posts. Joined 12/2012 • Thumbs Up: 37 post #19 of 61 2/1/13 at 10:22am • 12,544 Posts. Joined 10/2001 • Location: Clovis, "Cullyfornia" • Thumbs Up: 450 post #20 of 61 2/1/13 at 10:46am post #21 of 61 2/1/13 at 10:46am post #22 of 61 2/1/13 at 10:53am post #23 of 61 2/1/13 at 10:55am post #24 of 61 2/1/13 at 10:56am post #25 of 61 2/1/13 at 11:01am • 12,544 Posts. Joined 10/2001 • Location: Clovis, "Cullyfornia" • Thumbs Up: 450 post #26 of 61 2/1/13 at 11:07am post #27 of 61 2/1/13 at 11:29am • 12,544 Posts. Joined 10/2001 • Location: Clovis, "Cullyfornia" • Thumbs Up: 450 post #28 of 61 2/1/13 at 11:43am • 5,573 Posts. Joined 4/2004 • Location: Upstate NY • Thumbs Up: 33 post #29 of 61 2/1/13 at 11:49am • 12,544 Posts. Joined 10/2001 • Location: Clovis, "Cullyfornia" • Thumbs Up: 450 post #30 of 61 2/1/13 at 2:21pm • 37 Posts. Joined 12/2009 • Thumbs Up: 10
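The Qtc and fc figures being traded in this thread follow from the textbook sealed-box relations Qtc = Qts * sqrt(1 + Vas/Vb) and fc = fs * sqrt(1 + Vas/Vb), so they are easy to sanity-check. The sketch below is my own small calculator using the published Ultimax parameters; it ignores stuffing, leakage and how the coils are wired, which is why it lands a bit above the roughly 0.81 that the modeling software reports for 3.5 cubic feet.

    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        /* Published driver parameters (voice coils in series). */
        const double fs  = 19.5;   /* free-air resonance, Hz  */
        const double qts = 0.47;   /* total Q                 */
        const double vas = 7.92;   /* compliance volume, ft^3 */

        /* Sealed enclosure volumes mentioned in the thread, in ft^3. */
        const double boxes[] = { 3.5, 4.5, 6.8 };

        for (unsigned i = 0; i < sizeof boxes / sizeof boxes[0]; i++) {
            double alpha = vas / boxes[i];        /* compliance ratio       */
            double k     = sqrt(1.0 + alpha);     /* resonance/Q multiplier */
            printf("Vb = %.1f ft^3  ->  fc = %.1f Hz, Qtc = %.3f\n",
                   boxes[i], fs * k, qts * k);
        }
        return 0;
    }

With these inputs the 6.8 cubic foot box comes out close to the 0.707 alignment cited above, while 3.5 cubic feet sits in the mid-0.8s, which matches the flavor of the disagreement in the thread.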
{"url":"http://www.avsforum.com/t/1455549/new-dayton-audio-ultimax-15-subwoofer-at-pe","timestamp":"2014-04-18T04:08:58Z","content_type":null,"content_length":"207222","record_id":"<urn:uuid:ef868f7f-9296-49a2-83d5-4b694ccbcc02>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00382-ip-10-147-4-33.ec2.internal.warc.gz"}
Moving the camera with a constant velocity [Archive] - OpenGL Discussion and Help Forums

Ehsan Kamrani, 02-15-2006, 03:49 AM: I want to move the camera along a curve with a constant velocity. If I simply advance X at a constant rate, the speed along the curve is not constant, because the slope dy/dx changes from point to point. For example, on the following curve: Y = 4 * X^2 + sin( 2 * X ). How can I solve this problem?
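One standard way to handle this (a sketch of the usual approach, not a reply from the archived thread) is to reparameterize by arc length: if the horizontal coordinate advances at dx/dt = v / sqrt(1 + (dy/dx)^2), then the speed measured along the curve stays approximately equal to v regardless of the slope. A minimal time-stepping version:

    #include <math.h>
    #include <stdio.h>

    static double f(double x)      { return 4.0 * x * x + sin(2.0 * x); }
    static double fprime(double x) { return 8.0 * x + 2.0 * cos(2.0 * x); }

    int main(void)
    {
        const double v  = 1.0;    /* desired speed along the curve */
        const double dt = 0.01;   /* frame time step               */
        double x = 0.0;

        for (int step = 0; step <= 200; step++) {
            if (step % 50 == 0)
                printf("t = %.2f  camera at (%.4f, %.4f)\n", step * dt, x, f(x));

            /* Advance x so that the arc-length step is approximately v*dt. */
            double slope = fprime(x);
            x += v * dt / sqrt(1.0 + slope * slope);
        }
        return 0;
    }

For higher accuracy one would tabulate the arc length s(x) once and invert it (or simply use a smaller step), but the idea is the same: constant speed means constant arc length per frame, not constant change in x.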
{"url":"http://www.opengl.org/discussion_boards/archive/index.php/t-140745.html","timestamp":"2014-04-16T13:35:49Z","content_type":null,"content_length":"13164","record_id":"<urn:uuid:48e44f3e-3b73-4fc3-9d38-b25883ba268e>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00208-ip-10-147-4-33.ec2.internal.warc.gz"}
The Interesting Test What is The Interesting Test? The Interesting Test is a collection of mathematical problems, puzzles and brainteasers. The material is designed to be unfamiliar to almost all high school students; this is simply our way of making sure everyone is on an equal footing. Despite initial appearances, each problem has a simple, short, correct answer. At the end of this document, we include some questions and answers from past Students are permitted to bring/use calculators during the test. What if I don't think I did so well? If you had fun trying, please try again. Many of our students were not admitted on the first try. Interesting Test Dates There are multiple opportunities for you to take your hand (and mind) at the Interesting Test. Dates will be on weekends and even a few weeknights, as we hope to accommodate all your very busy schedules. Dates Example Interesting Test Questions Example 1 Name a body part that almost everyone on earth has an above average number of! Justify your answer. Answer (no peeking until after you try!) Example 2 A woman and her husband attended a party with four other couples. As is normal at parties, handshaking took place. Of course, no one shook their own hand or the hand of the person they came with. And not everyone shook everyone else's hand. But when the woman asked the other (9) people present how many different people's hands they had shaken they all gave a different answer. Question (this is NOT a trick!): How many different people's hands did the woman's husband shake? Answer (hey! we said no peeking until after you try!) Example 3 A man starts walking up a narrow mountain path at 6:00 a.m. He walks at different speeds, stopping to eat, sometimes going back a few steps to look at a flower, but never leaving the path. He eventually arrives at the top at 10:00 p.m. the same day. He camps out over night and starts down the same path at 6:00 a.m. the next day. After stopping to eat and so forth he arrives at the bottom at 10:00 p.m. True or False? There is some point on the path that the man occupied at exactly the same time on the two different days. Explain your answer. Answer (did you peek?) We bet your brain is getting warmed up now....
{"url":"http://www.cs.cmu.edu/~leap/interestingtest.html","timestamp":"2014-04-16T17:10:44Z","content_type":null,"content_length":"3835","record_id":"<urn:uuid:80ab8edb-9b93-4827-a520-fdb28bec3e22>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00082-ip-10-147-4-33.ec2.internal.warc.gz"}
Analysis of a local search heuristic for facility location problems Results 1 - 10 of 106 - Journal of the ACM , 1999 "... ..." - Proceedings of the ACM SIGMOD International Conference on Management of data, Philadelphia , 2000 "... We consider the following two instances of the projective clustering problem: Given a set S of n points in R d and an integer k ? 0; cover S by k hyper-strips (resp. hyper-cylinders) so that the maximum width of a hyper-strip (resp., the maximum diameter of a hyper-cylinder) is minimized. Let w ..." Cited by 246 (21 self) Add to MetaCart We consider the following two instances of the projective clustering problem: Given a set S of n points in R d and an integer k ? 0; cover S by k hyper-strips (resp. hyper-cylinders) so that the maximum width of a hyper-strip (resp., the maximum diameter of a hyper-cylinder) is minimized. Let w be the smallest value so that S can be covered by k hyper-strips (resp. hyper-cylinders), each of width (resp. diameter) at most w : In the plane, the two problems are equivalent. It is NP-Hard to compute k planar strips of width even at most Cw ; for any constant C ? 0 [50]. This paper contains four main results related to projective clustering: (i) For d = 2, we present a randomized algorithm that computes O(k log k) strips of width at most 6w that cover S. Its expected running time is O (nk 2 log 4 n) if k 2 log k n; it also works for larger values of k, but then the expected running time is O(n 2=3 k 8=3 log 4 n). We also propose another algorithm that computes a c... , 2001 "... ÔÖÓ��ÙÖ�ØÓØ���ÐÓ��ÐÓÔØ�ÑÙÑ�ÓÖ�Ñ����ÒÛ � Ö�Ø�ÓÓ��ÐÓ�ÐÐÝÓÔØ�ÑÙÑ×ÓÐÙØ�ÓÒÓ�Ø��Ò��Ù×�Ò�Ø�� × ÐÓ�Ð�ØÝ��ÔÓ��ÐÓ�Ð×��Ö�ÔÖÓ��ÙÖ��ר��Ñ�Ü�ÑÙÑ �Ñ����Ò�Ò����Ð�ØÝÐÓ�Ø�ÓÒÔÖÓ�Ð�Ñ×Ï���¬Ò�Ø� � ÁÒØ��×Ô�Ô�ÖÛ��Ò�ÐÝÞ�ÐÓ�Ð×��Ö���ÙÖ�ר�×�ÓÖØ�� ×�ÓÛØ��ØÐÓ�Ð×��Ö�Û�Ø�×Û�Ô×��×�ÐÓ�Ð�ØÝ��ÔÓ � ×�ÑÙÐØ�Ò�ÓÙ×ÐÝØ��ÒØ��ÐÓ�Ð�ØÝ��ÔÓ�Ø�� ..." Cited by 234 (10 self) Add to MetaCart ÔÖÓ��ÙÖ�ØÓØ���ÐÓ��ÐÓÔØ�ÑÙÑ�ÓÖ�Ñ����ÒÛ � Ö�Ø�ÓÓ��ÐÓ�ÐÐÝÓÔØ�ÑÙÑ×ÓÐÙØ�ÓÒÓ�Ø��Ò��Ù×�Ò�Ø�� × ÐÓ�Ð�ØÝ��ÔÓ��ÐÓ�Ð×��Ö�ÔÖÓ��ÙÖ��ר��Ñ�Ü�ÑÙÑ �Ñ����Ò�Ò����Ð�ØÝÐÓ�Ø�ÓÒÔÖÓ�Ð�Ñ×Ï���¬Ò�Ø� � ÁÒØ��×Ô�Ô�ÖÛ��Ò�ÐÝÞ�ÐÓ�Ð×��Ö���ÙÖ�ר�×�ÓÖØ�� ×�ÓÛØ��ØÐÓ�Ð×��Ö�Û�Ø�×Û�Ô×��×�ÐÓ�Ð�ØÝ��ÔÓ � ×�ÑÙÐØ�Ò�ÓÙ×ÐÝØ��ÒØ��ÐÓ�Ð�ØÝ��ÔÓ�Ø��ÐÓ�Ð×��Ö � �Ü�ØÐÝ�Ï��ÒÛ�Ô�ÖÑ�ØÔ���Ð�Ø��רÓ��×Û�ÔÔ�� �ÑÔÖÓÚ�ר��ÔÖ�Ú�ÓÙ×�ÒÓÛÒ��ÔÔÖÓÜ�Ñ�Ø�ÓÒ�ÓÖØ�� × ÔÖÓ�Ð�Ñ�ÓÖÍÒ�Ô��Ø�Ø�����Ð�ØÝÐÓ�Ø�ÓÒÛ�×�ÓÛ ÔÖÓ��ÙÖ��×�Ü�ØÐÝ Ó�ÐÓ�Ð×��Ö��ÓÖ�Ñ����ÒØ��ØÔÖÓÚ���×��ÓÙÒ�� � Ô�Ö�ÓÖÑ�Ò��Ù�Ö�ÒØ��Û�Ø�ÓÒÐÝ�Ñ����Ò×Ì��×�Ð×Ó �ÔÌ��×�ר��¬Öר�Ò�ÐÝ×�× ×Û�ÔÔ�Ò�����Ð�ØÝ��×�ÐÓ�Ð�ØÝ��ÔÓ��Ü�ØÐÝÌ�� × �ÑÔÖÓÚ�ר����ÓÙÒ�Ó�ÃÓÖÙÔÓÐÙ�Ø�ÐÏ��Ð×ÓÓÒ ×���Ö��Ô��Ø�Ø�����Ð�ØÝÐÓ�Ø�ÓÒÔÖÓ�Ð�ÑÛ��Ö��� � Ø��ØÐÓ�Ð×��Ö�Û���Ô�ÖÑ�Ø×����Ò��ÖÓÔÔ�Ò��Ò� Ø�ÔÐ�ÓÔ��×Ó�����Ð�ØÝ�ÓÖØ��×ÔÖÓ�Ð�ÑÛ��ÒØÖÓ�Ù � ���Ð�ØÝ��×��Ô��ØÝ�Ò�Û��Ö��ÐÐÓÛ��ØÓÓÔ�ÒÑÙÐ ÐÓ�Ð×��Ö�Û���Ô�ÖÑ�Ø×Ø��×Ò�ÛÓÔ�Ö�Ø�ÓÒ��×�ÐÓ ���Ð�ØÝ�Ò��ÖÓÔ×Þ�ÖÓÓÖÑÓÖ����Ð�Ø��×Ï�ÔÖÓÚ�Ø��Ø �Ò�ÛÓÔ�Ö�Ø�ÓÒÛ���ÓÔ�Ò×ÓÒ�ÓÖÑÓÖ�ÓÔ��×Ó� � �Ð�ØÝ��Ô��ØÛ��Ò�Ò�� ÝÈ�ÖØ��ÐÐÝ×ÙÔÔÓÖØ���Ý���ÐÐÓÛ×��Ô�ÖÓÑÁÒ�Ó×Ý×Ì� � Ê�×��Ö�Ä� � ÒÓÐÓ���×ÄØ���Ò��ÐÓÖ � ÞËÙÔÔÓÖØ���Ý�ÊÇ������� � - In Proceedings of the 31st Annual ACM Symposium on Theory of Computing , 1999 "... We present the first constant-factor approximation algorithm for the metric k-median problem. The k-median problem is one of the most well-studied clustering problems, i.e., those problems in which the aim is to partition a given set of points into clusters so that the points within a cluster are re ..." Cited by 215 (14 self) Add to MetaCart We present the first constant-factor approximation algorithm for the metric k-median problem. 
The k-median problem is one of the most well-studied clustering problems, i.e., those problems in which the aim is to partition a given set of points into clusters so that the points within a cluster are relatively close with respect to some measure. For the metric k-median problem, we are given n points in a metric space. We select k of these to be cluster centers, and then assign each point to its closest selected center. If point j is assigned to a center i, the cost incurred is proportional to the distance between i and j. The goal is to select the k centers that minimize the sum of the assignment costs. We give a 6 2 3-approximation algorithm for this problem. This improves upon the best previously known result of O(log k log log k), which was obtained by refining and derandomizing a randomized O(log n log log n)-approximation algorithm of Bartal. 1 - In Proceedings of the 40th Annual IEEE Symposium on Foundations of Computer Science , 1999 "... We present improved combinatorial approximation algorithms for the uncapacitated facility location and k-median problems. Two central ideas in most of our results are cost scaling and greedy improvement. We present a simple greedy local search algorithm which achieves an approximation ratio of 2:414 ..." Cited by 209 (14 self) Add to MetaCart We present improved combinatorial approximation algorithms for the uncapacitated facility location and k-median problems. Two central ideas in most of our results are cost scaling and greedy improvement. We present a simple greedy local search algorithm which achieves an approximation ratio of 2:414 + in ~ O(n 2 =) time. This also yields a bicriteria approximation tradeoff of (1 +; 1+ 2 =) for facility cost versus service cost which is better than previously known tradeoffs and close to the best possible. Combining greedy improvement and cost scaling with a recent primal dual algorithm for facility location due to Jain and Vazirani, we get an approximation ratio of 1.853 in ~ O(n 3 ) time. This is already very close to the approximation guarantee of the best known algorithm which is LP-based. Further, combined with the best known LP-based algorithm for facility location, we get a very slight improvement in the approximation factor for facility location, achieving 1.728.... - Journal of Algorithms , 1999 "... A fundamental facility location problem is to choose the location of facilities, such as industrial plants and warehouses, to minimize the cost of satisfying the demand for some commodity. There are associated costs for locating the facilities, as well as transportation costs for distributing the co ..." Cited by 182 (12 self) Add to MetaCart A fundamental facility location problem is to choose the location of facilities, such as industrial plants and warehouses, to minimize the cost of satisfying the demand for some commodity. There are associated costs for locating the facilities, as well as transportation costs for distributing the commodities. We assume that the transportation costs form a metric. This problem is commonly referred to as the uncapacitated facility location (UFL) problem. Applications to bank account location and clustering, as well as many related pieces of work, are discussed by Cornuejols, Nemhauser and Wolsey [2]. Recently, the first constant factor approximation algorithm for this problem was obtained by Shmoys, Tardos and Aardal [16]. 
We show that a simple greedy heuristic combined with the algorithm by Shmoys, Tardos and Aardal, can be used to obtain an approximation guarantee of 2.408. We discuss a few variants of the problem, demonstrating better approximation factors for restricted versions of the problem. We also show that the problem is Max SNP-hard. However, the inapproximability constants derived from the Max SNP hardness are very close to one. By relating this problem to Set Cover, we prove a lower bound of 1.463 on the best possible approximation ratio assuming NP / ∈ DT IME[n O(log log n)]. 1 "... We present a simple and natural greedy algorithm for the metric uncapacitated facility location problem achieving an approximation guarantee of 1.61 whereas the best previously known was 1.73. Furthermore, we will show that our algorithm has a property which allows us to apply the technique of Lagra ..." Cited by 116 (9 self) Add to MetaCart We present a simple and natural greedy algorithm for the metric uncapacitated facility location problem achieving an approximation guarantee of 1.61 whereas the best previously known was 1.73. Furthermore, we will show that our algorithm has a property which allows us to apply the technique of Lagrangian relaxation. Using this property, we can nd better approximation algorithms for many variants of the facility location problem, such as the capacitated facility location problem with soft capacities and a common generalization of the k-median and facility location problem. We will also prove a lower bound on the approximability of the k-median problem. - In Proceedings of the 5th International Workshop on Approximation Algorithms for Combinatorial Optimization , 2002 "... In this paper we present a 1.52-approximation algorithm for the metric uncapacitated facility location problem, and a 2-approximation algorithm for the metric capacitated facility location problem with soft capacities. Both these algorithms improve the best previously known approximation factor for ..." Cited by 112 (11 self) Add to MetaCart In this paper we present a 1.52-approximation algorithm for the metric uncapacitated facility location problem, and a 2-approximation algorithm for the metric capacitated facility location problem with soft capacities. Both these algorithms improve the best previously known approximation factor for the corresponding problem, and our soft-capacitated facility location algorithm achieves the integrality gap of the standard LP relaxation of the problem. Furthermore, we will show, using a result of Thorup, that our algorithms can be implemented in quasi-linear time. , 1998 "... Semistructured data is characterized by the lack of any fixed and rigid schema, although typically the data has some implicit structure. While the lack of fixed schema makes extracting semistructured data fairly easy and an attractive goal, presenting and querying such data is greatly impaired. Thus ..." Cited by 112 (5 self) Add to MetaCart Semistructured data is characterized by the lack of any fixed and rigid schema, although typically the data has some implicit structure. While the lack of fixed schema makes extracting semistructured data fairly easy and an attractive goal, presenting and querying such data is greatly impaired. Thus, a critical problem is the discovery of the structure implicit in semistructured data and, subsequently, the recasting of the raw data in terms of this structure. In this paper, we consider a very general form of semistructured data based on labeled, directed graphs. 
We show that such data can be typed using the greatest fixpoint semantics of monadic datalog programs. We present an algorithm for approximate typing of semistructured data. We establish that the general problem of finding an optimal such typing is NP-hard, but present some heuristics and techniques based on clustering that allow efficient and near-optimal treatment of the problem. We also present some preliminary experimental results.
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=182865","timestamp":"2014-04-21T06:38:56Z","content_type":null,"content_length":"40046","record_id":"<urn:uuid:a6fced1c-c2a2-468e-bca9-edd6c9aa0aa4>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00067-ip-10-147-4-33.ec2.internal.warc.gz"}
Re: st: speed question: -collapse- vs -egen- [Date Prev][Date Next][Thread Prev][Thread Next][Date index][Thread index] Re: st: speed question: -collapse- vs -egen- From "Sergiy Radyakin" <serjradyakin@gmail.com> To statalist@hsphsun2.harvard.edu Subject Re: st: speed question: -collapse- vs -egen- Date Fri, 25 Apr 2008 21:12:55 -0400 Hello All! Jeph has asked about an efficient way of creating a dataset with means of one variable over the categories of another variable. He suggested two possible solutions and Stas added a third one. Below I report performance of each of these methods and compare it with the fourth: a plugin. I use an expanded version of auto.dta and tabulate mean {price} by different levels of {rep78}. 1. All methods resulted in the following table of results* meanprice rep78 4564.5 1 5967.625 2 6429.233 3 6071.5 4 2. The timing is as follows (Stata SE, Windows Server 2003, 32-bit) 1: 33.80 / 1 = 33.7960 2: 31.22 / 1 = 31.2190 3: 21.33 / 1 = 21.3280 4: 5.58 / 1 = 5.5780 3. Since the plugin was intended for similar but not exactly the same purposes, it does some extra work (simultaneously computing frequencies, etc), which means that this is not the ultimate record. 4. The plugin must be "plugged-in" before use. To achieve this, I first call the plugin without timing, so that Stata loads the DLL and becomes aware of it. This process takes about 3 seconds on this particular machine, because each DLL loaded into memory is scanned by an antivirus on-the-fly. This time-loss is a one-time loss, and if Jeph calls this program routinely, he should not be concerned about this fixed cost. Even with this overhead, the plugin still easily beats all of the competition above. 5. The benchmark program is listed below. -ftabstat- is an ado-wrapper for the plugin. So if anyone wants to reproduce these results, must first obtain this plugin from me (please write "ftabstat" in the email subject line). 6. This particular plugin has a limitation of the matrix size for groups (which is 11000 in most versions of Stata). It also does not properly handle the missing values in categorical variable (that's why I discard observations with missing {rep78}), but this can all be done without a penalty in terms of execution time. Another limitation is that the plugin is platform-specific, and this one is for Win32 only (it will run with Stata 32-bit on Windows 64-bit, though). 7. If Jeph is concerned about speed - plugins are the way to go. 8. I would be glad to see a solution in Mata posted here to see how efficient would that be. 9. the name "ftabstat" comes from "fast tabstat" as originally the plugin was "tabstat on steroids". But then a couple of other features were added. tabstat price, by(rep78) save takes ~45 seconds to just create a matrix, so it does not even qualify. 
Have a good weekend, Sergiy Radyakin // Benchmark program speed.do set more off set mem 500m log using speed.txt, text replace // get data sysuse auto, clear keep rep78 price keep if !missing(rep78) expand 100000 timer clear timer on 1 ftabstat price if _n==1 // kludge to load the plugin timer off 1 timer list 1 timer clear 1 preserve // start Jeph Herrin 1 timer on 1 bys rep78: egen meanprice=mean(price) bys rep78: keep if _n==1 keep rep78 meanprice timer off 1 list rep78 meanprice, clean noobs abb(10) preserve // start Jeph Herrin 2 timer on 2 collapse (mean) meanprice=price,by(rep78) timer off 2 list rep78 meanprice, clean noobs abb(10) preserve // start Stas Kolenikov timer on 3 gen byte one=1 if !missing(price) bys rep78: gen meanprice = sum(price)/sum(one) by rep78: keep if _n==_N keep rep78 meanprice timer off 3 list rep78 meanprice, clean noobs abb(10) preserve // start Sergiy Radyakin timer on 4 ftabstat price, by(rep78) drop _all matrix A=e(b)' svmat A,names(col) rename y1 meanprice matrix A=e(Row)' svmat A, names(col) rename r1 rep78 timer off 4 list rep78 meanprice, clean noobs abb(10) timer list log close // end of benchmark program speed.do *PS: The solution suggested by Stas will yield incorrect results, since the sum of {one} will also count observations for which the variable of interest is missing. This is definitely a typo and I have corrected it before doing any comparisons by changing the following gen byte one = 1 gen byte one = 1 if !missing(price) On 4/25/08, Stas Kolenikov <skolenik@gmail.com> wrote: > NJC can offer a precise answer, but my take would be > gen byte one = 1 > bys group: gen varmean = sum(mean)/sum(one) > by group: keep if _n==_N > keep whatever > Topics like those should've been covered somewhere in Nick's column in > Stata Journal, or in Stata tips. -egen- is slow as it does a lot of > checks and parsing and stuff -- for big processing jobs, single-liners > like above are always notably faster. -collapse- should be at least a > tad faster than -egen-, but again I would expect it to lose to the > above code. > On Fri, Apr 25, 2008 at 2:37 PM, Jeph Herrin <junk@spandrel.net> wrote: > > > > I'm optimizing some code that needs to run often > > for a simulation, and am wondering if I should > > expect any difference in processing time between > > > > bys group: egen varmean=mean(myvar) > > bys group: keep if _n==1 > > keep group varmean > > > > and > > > > collapse (mean) varmean=myvar, by(group) > > > > and if so, which would be faster? > > > > I know I could run some tests myself, but figured > > that others had either already done so or at least > > would have some insight. > > > -- > Stas Kolenikov, also found at http://stas.kolenikov.name > Small print: Please do not reply to my Gmail address as I don't check > it regularly. > * > * For searches and help try: > * http://www.stata.com/support/faqs/res/findit.html > * http://www.stata.com/support/statalist/faq > * http://www.ats.ucla.edu/stat/stata/ * For searches and help try: * http://www.stata.com/support/faqs/res/findit.html * http://www.stata.com/support/statalist/faq * http://www.ats.ucla.edu/stat/stata/
{"url":"http://www.stata.com/statalist/archive/2008-04/msg01120.html","timestamp":"2014-04-16T13:18:28Z","content_type":null,"content_length":"12086","record_id":"<urn:uuid:7a9097f4-16fc-4dfe-b494-1bf33fbdbb01>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00072-ip-10-147-4-33.ec2.internal.warc.gz"}
Post a reply bobbym wrote: When I divide the first term by it, it becomes simply (Pi/P). When I divide the second term by it, it becomes: -((Pi-w)n)/P You mean multiply instead of divide, and you are correct up to there. Yes, sorry again. I do mean multiply. What have I done wrong after? I really do not know what is going on, I have spent hours here. Alternatively I try multiplying the P across from this step: So (Pi/P) -((Pi-w)n)/P = 0 To get Pi - (Pi-w)/n = 0. Then it leads to the same thing, as I divide by P later to try to get the Pi/P expression. These two ways seem equivalent.
{"url":"http://www.mathisfunforum.com/post.php?tid=19461&qid=269207","timestamp":"2014-04-17T04:06:45Z","content_type":null,"content_length":"28634","record_id":"<urn:uuid:9c8b8c2f-2d95-417c-97ac-8eff459694b3>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00039-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Tools Discussion: Traffic Jam Applet tool
Topic: Teaching Mathematics as a Science
Related Item: http://mathforum.org/mathtools/tool/10/
Subject: RE: Teaching Mathematics as a Science
Author: tackweed
Date: Aug 18 2004

Now that this topic has calmed down a bit, it seems that the replies essentially raised the main question for any study of math: who needs how much of what kind of math, and where and when should they get it?

In this context math is a science (one that deals with quantities, magnitudes, forms and their relationships, according to one dictionary) and a form of communication. After all, if we did not need to communicate math, most of the structure would be meaningless. If math is to be this study of patterns and the way to communicate them, predicating math programs on the perceived needs and desires of students could have serious consequences. Students are no different than other people when it comes time to work: we always try to follow the path of least resistance.

What needs to be remembered is that the popular student question "When will I ever need this?" is not posited from a position of knowledge. It may be a cover for the much more important question to the student: "How will your skills and talent benefit mankind?" I have yet to find the situation where I had too much experience and too much knowledge. Knowledge, like experience, is not always acquired in situations we desire or understand. It's cumulative. The sooner you start, the more you accumulate.
{"url":"http://mathforum.org/mathtools/discuss.html?id=10&context=tool&do=r&msg=12954","timestamp":"2014-04-18T23:47:03Z","content_type":null,"content_length":"17015","record_id":"<urn:uuid:e9f2d1db-ed0a-4a3e-8d91-643776b0f8b4>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00279-ip-10-147-4-33.ec2.internal.warc.gz"}
Buoyant Force

Welcome to PF! First, the definition is that the net buoyancy force equals the weight of the object (the sand core) minus the weight of the displaced fluid (the steel), that is, the negative of what you wrote. Secondly, from that definition you can see that you also need to use the fact that the weight of the displaced fluid is calculated from the volume of the object; that is, with your terms you should use W_c = M_c g = V D_c g and W_m = V D_m g. It should now be possible for you to relate the buoyancy force to the volume V, the difference in density D_m - D_c and the acceleration of gravity g, and isolate for V.
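Putting that hint into symbols (with the assumption, since the original problem statement is not quoted here, that the core of density D_c is fully submerged in liquid metal of density D_m > D_c, and that F denotes the magnitude of the net force on it):

    F \;=\; W_m - W_c \;=\; V D_m g - V D_c g \;=\; V\,(D_m - D_c)\,g
    \quad\Longrightarrow\quad
    V \;=\; \frac{F}{(D_m - D_c)\,g}.

Which of the two weights is subtracted from the other only flips the sign convention; the volume comes out of the magnitude either way.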
{"url":"http://www.physicsforums.com/showthread.php?t=571608","timestamp":"2014-04-19T02:12:10Z","content_type":null,"content_length":"32695","record_id":"<urn:uuid:fc57f149-0208-4f2a-9249-955388da504f>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00299-ip-10-147-4-33.ec2.internal.warc.gz"}
Numerical Approximation of Hyperbolic Systems of Conservation Results 1 - 10 of 166 - J. Comput. Phys , 1998 "... . Conservation laws with source terms often have steady states in which the flux gradients are nonzero but exactly balanced by source terms. Many numerical methods (e.g., fractional step methods) have difficulty preserving such steady states and cannot accurately calculate small perturbations of suc ..." Cited by 54 (5 self) Add to MetaCart . Conservation laws with source terms often have steady states in which the flux gradients are nonzero but exactly balanced by source terms. Many numerical methods (e.g., fractional step methods) have difficulty preserving such steady states and cannot accurately calculate small perturbations of such states. Here a variant of the wave-propagation algorithm is developed which addresses this problem by introducing a Riemann problem in the center of each grid cell whose flux difference exactly cancels the source term. This leads to modified Riemann problems at the cell edges in which the jump now corresponds to perturbations from the steady state. Computing waves and limiters based on the solution to these Riemann problems gives high-resolution results. The 1D and 2D shallow water equations for flow over arbitrary bottom topography are use as an example, though the ideas apply to many other systems. The method is easily implemented in the software package clawpack. Keywords: Godunov meth... - SIAM J. Sci. Comput "... Abstract. We consider the Saint-Venant system for shallow water flows, with nonflat bottom. It is a hyperbolic system of conservation laws that approximately describes various geophysical flows, such as rivers, coastal areas, and oceans when completed with a Coriolis term, or granular flows when com ..." Cited by 42 (4 self) Add to MetaCart Abstract. We consider the Saint-Venant system for shallow water flows, with nonflat bottom. It is a hyperbolic system of conservation laws that approximately describes various geophysical flows, such as rivers, coastal areas, and oceans when completed with a Coriolis term, or granular flows when completed with friction. Numerical approximate solutions to this system may be generated using conservative finite volume methods, which are known to properly handle shocks and contact discontinuities. However, in general these schemes are known to be quite inaccurate for near steady states, as the structure of their numerical truncation errors is generally not compatible with exact physical steady state conditions. This difficulty can be overcome by using the so-called well-balanced schemes. We describe a general strategy, based on a local hydrostatic reconstruction, that allows us to derive a well-balanced scheme from any given numerical flux for the homogeneous problem. Whenever the initial solver satisfies some classical stability properties, it yields a simple and fast well-balanced scheme that preserves the nonnegativity of the water height and satisfies a semidiscrete entropy inequality. , 2003 "... WestEC here te comput tmp of shallow-wath equat ons wi ttN:LE&BN y byFinit Volume metmeN , in a one-dimensional framework(tNL&3 allmetB: sint oduced may benatEfiLLN extEfiLL in t o dimensions) . AllmetA:3 are based on a discretcrNB on of tN tBCEAEN(B by a piecewisefunctE n constEC on each cell of tN ..." 
Cited by 37 (4 self) Add to MetaCart WestEC here te comput tmp of shallow-wath equat ons wi ttN:LE&BN y byFinit Volume metmeN , in a one-dimensional framework(tNL&3 allmetB: sint oduced may benatEfiLLN extEfiLL in t o dimensions) . AllmetA:3 are based on a discretcrNB on of tN tBCEAEN(B by a piecewisefunctE n constEC on each cell of tN mesh, from an original idea of Le Rouxet al. Whereaste Well-Balanced scheme of Le Roux is based on tN exact resol utol of each Riemann problem, we consider here approximat Riemann solvers. Several singlestg metleN are derived from tom formalism, and numerical result are comparedt a fract ionalsta metN4 . Some tme cases arepresent& : convergencetn ardsstsN: stsN: in subcritECC and supercriterc configuratfiE s, occurrence of dry area by a drain over a bump and occurrence of vacuum by a double rarefactNL wave over astAA Numerical schemes, combined wi t an appropriat high-order extNfi ion, provideaccurat e and convergent approxim atximN # 2003 Elsevier ScienceLtc Allright s reserved. 1. I5964V139 We stE4 intNL paper some approximat Godunov schemest comput shallow-watN equatlow wit a sourcetur oftBAL4fiN(LA in a one-dimensional framework. Allmetfi3L presentL may beextB3L4 natB3L4N t tt tB3L4N(&CL4 al model. , 1997 "... During the recent decades there was an enormous amount of activity related to the construction and analysis of modern algorithms for the approximate solution of nonlinear hyperbolic conservation laws and related problems. To present some aspects of this successful activity, we discuss the analytical ..." Cited by 34 (11 self) Add to MetaCart During the recent decades there was an enormous amount of activity related to the construction and analysis of modern algorithms for the approximate solution of nonlinear hyperbolic conservation laws and related problems. To present some aspects of this successful activity, we discuss the analytical tools which are used in the development of convergence theories for these algorithms. These include classical compactness arguments (based on BV a priori estimates), the use of compensated compactness arguments (based on H^-1-compact entropy production), measure valued solutions (measured by their negative entropy production), and finally, we highlight the most recent addition to this bag of analytical tools -- the use of averaging lemmas which yield new compactness and regularity results for nonlinear conservation laws and related equations. We demonstrate how these analytical tools are used in the convergence analysis of approximate solutions for hyperbolic conservation laws and related equations. Our discussion includes examples of Total Variation Diminishing (TVD) finite-difference schemes; error estimates derived from the one-sided stability of Godunov-type methods for convex conservation laws (and their multidimensional analogue -- viscosity solutions of demi-concave Hamilton-Jacobi equations); we outline, in the one-dimensional case, the convergence proof of finite-element streamline-diffusion and spectral viscosity schemes based on the div-curl lemma; we also address the questions of convergence and error estimates for multidimensional finite-volume schemes on non-rectangular grids; and finally, we indicate the convergence of approximate solutions with underlying kinetic formulation, e.g., finite-volume and relaxation schemes, once their regularizing effect is quantified in terms of the averaging lemma. - SIAM J. Numer. Anal , 2000 "... 
We present here some numerical schemes for general multidimensional systems of conservation laws based on a class of discrete kinetic approximations, which includes the relaxation schemes by S. Jin and Z. Xin. These schemes have a simple formulation even in the multidimensional case and do not need ..." Cited by 33 (11 self) Add to MetaCart We present here some numerical schemes for general multidimensional systems of conservation laws based on a class of discrete kinetic approximations, which includes the relaxation schemes by S. Jin and Z. Xin. These schemes have a simple formulation even in the multidimensional case and do not need the solution of the local Riemann problems. For these approximations we give a suitable multidimensional generalization of the Whitham's stability subcharacteristic condition. In the scalar multidimensional case we establish the rigorous convergence of the approximated solutions to the unique entropy solution of the equilibrium Cauchy problem. - MATH. MODEL. NUMER. ANAL , 2001 "... We present a family of high-order, essentially non-oscillatory, central schemes for approximating solutions of hyperbolic systems of conservation laws. These schemes are based on a new centered version of the Weighed Essentially NonOscillatory (WENO) reconstruction of point-values from cell-averages ..." Cited by 31 (12 self) Add to MetaCart We present a family of high-order, essentially non-oscillatory, central schemes for approximating solutions of hyperbolic systems of conservation laws. These schemes are based on a new centered version of the Weighed Essentially NonOscillatory (WENO) reconstruction of point-values from cell-averages, which is then followed by an accurate approximation of the fluxes via a natural continuous extension of Runge-Kutta solvers. We explicitly construct the third and fourthorder scheme and demonstrate their high-resolution properties in several numerical tests. - M2AN, Math. Model. Numer. Anal "... The model order reduction methodology of reduced basis (RB) techniques offers efficient treatment of parametrized partial differential equations (P 2 DEs) by providing both approximate solution procedures and efficient error estimates. RB-methods have so far mainly been applied to finite element sch ..." Cited by 28 (15 self) Add to MetaCart The model order reduction methodology of reduced basis (RB) techniques offers efficient treatment of parametrized partial differential equations (P 2 DEs) by providing both approximate solution procedures and efficient error estimates. RB-methods have so far mainly been applied to finite element schemes for elliptic and parabolic problems. In the current study we extend the methodology to general evolution schemes such as finite volume schemes for parabolic and hyperbolic evolution equations. The new theoretic contributions are the formulation of a reduced basis approximation scheme for general evolution problems and the derivation of rigorous a-posteriori error estimates in various norms. Algorithmically, an offline/online decomposition of the scheme and the error estimators is realized. This is the basis for a rapid online computation in case of multiple-simulation requests. We introduce a new offline basis-generation algorithm based on our a posteriori error estimator which combines ideas from existing approaches. Numerical experiments for an instationary convection-diffusion problem demonstrate the efficient applicability of the approach. 1 - Math. Comp , 2003 "... Abstract. 
The use of multiresolution decompositions in the context of finite volume schemes for conservation laws was first proposed by A. Harten for the purpose of accelerating the evaluation of numerical fluxes through an adaptive computation. In this approach the solution is still represented at ..." Cited by 26 (13 self) Add to MetaCart Abstract. The use of multiresolution decompositions in the context of finite volume schemes for conservation laws was first proposed by A. Harten for the purpose of accelerating the evaluation of numerical fluxes through an adaptive computation. In this approach the solution is still represented at each time step on the finest grid, resulting in an inherent limitation of the potential gain in memory space and computational time. The present paper is concerned with the development and the numerical analysis of fully adaptive multiresolution schemes, in which the solution is represented and computed in a dynamically evolved adaptive grid. A crucial problem is then the accurate computation of the flux without the full knowledge of fine grid cell averages. Several solutions to this problem are proposed, analyzed, and compared in terms of accuracy and complexity. 1. - SIAM J. Sci. Comput , 2000 "... We present a new third-order, semi-discrete, central method for approximating solutions to multi-dimensional systems of hyperbolic conservation laws, convectiondiffusion equations, and related problems. Our method is a high-order extension of the recently proposed second-order, semi-discrete method ..." Cited by 26 (2 self) Add to MetaCart We present a new third-order, semi-discrete, central method for approximating solutions to multi-dimensional systems of hyperbolic conservation laws, convectiondiffusion equations, and related problems. Our method is a high-order extension of the recently proposed second-order, semi-discrete method in [16]. The method is derived independently of the specific piecewise polynomial reconstruction which is based on the previously computed cell-averages. We demonstrate our results, by focusing on the new third-order CWENO reconstruction presented in [21]. The numerical results we present, show the desired accuracy, high resolution and robustness of our method. Key words. Hyperbolic systems, convection-diffusion equations, central difference schemes, high-order accuracy, non-oscillatory schemes, WENO reconstruction. AMS(MOS) subject classification. Primary 65M10; secondary 65M05. - FILTRATION IN POROUS MEDIA AND INDUSTRIAL APPLICATIONS, LECTURE NOTES IN MATHEMATICS , 1999 "... During recent years the authors and collaborators have been involved in an activity related to the construction and analysis of large time step operator splitting algorithms for the numerical simulation of multi-phase flow in heterogeneous porous media. The purpose of these lecture notes is to revie ..." Cited by 25 (14 self) Add to MetaCart During recent years the authors and collaborators have been involved in an activity related to the construction and analysis of large time step operator splitting algorithms for the numerical simulation of multi-phase flow in heterogeneous porous media. The purpose of these lecture notes is to review some of this activity. We illustrate the main ideas behind these novel operator splitting algorithms for a basic two-phase flow model. Special focus is posed on the numerical solution algorithms for the saturation equation, which is a convection dominated, degenerate convection-diffusion equation. 
Both theory and applications are discussed. The general background for the reservoir flow model is reviewed, and the main features of the numerical algorithms are presented. The basic mathematical results supporting the numerical algorithms are also given. In addition, we present some results from the BV solution theory for quasilinear degenerate parabolic equations, which provides the correct ...
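All of the entries above concern finite-volume discretizations of hyperbolic conservation laws. For orientation only (this notation is generic and not taken from any one of the cited papers), the conservative update of cell averages that these schemes share can be written as
$$ \bar u_i^{\,n+1} \;=\; \bar u_i^{\,n} \;-\; \frac{\Delta t}{\Delta x}\Bigl(F_{i+1/2}^{\,n} - F_{i-1/2}^{\,n}\Bigr), $$
where $\bar u_i^{\,n}$ is the average of the solution over cell $i$ at time level $n$ and $F_{i\pm 1/2}$ are numerical fluxes at the cell interfaces. The papers differ mainly in how $F_{i+1/2}$ is built from neighboring averages (exact or approximate Riemann solvers, central/WENO reconstructions, relaxation or kinetic approximations) and in how source terms such as topography are discretized so that steady states are preserved.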
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=75157","timestamp":"2014-04-19T23:48:25Z","content_type":null,"content_length":"41653","record_id":"<urn:uuid:d3660045-cac6-4a75-bad8-33627788a6d7>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00030-ip-10-147-4-33.ec2.internal.warc.gz"}
Can someone tell me what button the professor is hitting...
{"url":"http://openstudy.com/updates/511f25f8e4b06821731c5813","timestamp":"2014-04-19T02:01:26Z","content_type":null,"content_length":"205798","record_id":"<urn:uuid:5a2747b7-4e94-4d55-b588-41dcc8f72c30>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00610-ip-10-147-4-33.ec2.internal.warc.gz"}
Academia and scientific integrity The following story is a typical example of the things that I viscerally dislike about the current Academia and one of the reasons why I am so looking forward to be gone. The story will show that a portion of the Academia is literally built on • corruption • parasitism • superficial people • hypocrisy • people who're not able to recognize that the behavior of others is just a matter of politeness The person who will prove my point is Scott Aaronson but be sure that his example is far from an isolated anomaly. First, it is really not difficult to show that he doesn't have an infinitesimal remnant of scientific integrity. How is he deciding about the validity of particular statements in theoretical physics? Please don't read the text below if you just had your dinner. • I have therefore reached a decision. From this day forward, my allegiances in the String Wars will be open for sale to the highest bidder. Like a cynical arms merchant, I will offer my computational-complexity and humor services to both sides, and publicly espouse the views of whichever side seems more interested in buying them at the moment. Fly me to an exotic enough location, put me up in a swank enough hotel, and the number of spacetime dimensions can be anything you want it to be: 4, 10, 11, or even 172.9+3πi. ... I might have opinions on these topics, but they’re nothing that a cushy job offer or a suitcase full of “reimbursements” couldn’t change. ... Until then, I shall answer to no quantum-gravity research program, but rather seek to profit from them all. It is absolutely impossible for me to hide how intensely I despise people like Scott Aaronson because this fact must be easily detectable by looking at my skin color and other quantities and observables. ;-) He's the ultimate example of a complete moral breakdown of a scientist. It is astonishing that the situation became so bad that the people are not only corrupt and dishonest but they proudly announce this fact on their blogs. In fact, I have learned that the situation is so bad that when I simply state that Aaronson's attitude is flagrantly incompatible with the ethical standards of a scholar as they have been understood for centuries, there could be some parts of the official establishment that would support him against me. There doesn't seem to be a single blog article besides mine that denounces Aaronson's Corruption has become the holy standard and some fields completely depend on it. Feminist career scholars who belong to the diversity industry financially depend on their pseudoscience about the absence of differences between the sexes much like a large fraction of the climate scientists' funding depends on spreading unsubstantiated fears and much like the loop quantum gravity research depends on spreading myths about the existence of "alternatives" to string theory even though these "alternatives" are nothing else than artifacts of confused, superficial, and sloppy thinking. And some of these people will openly tell you that the reason why they say what they say is their financial well-being. The Academia is simply contaminated beyond imagination. You can guess that Aaronson doesn't say anything nice about me either. The difference between two of us is like the difference between a superman from the action movies who fights for the universal justice on one side and the most dirty corrupt villain on the other side. 
It's like the Heaven and the Hell, freedom and feminism, careful evaluation of the climate and the alarmist hysteria, or string theory and loop quantum gravity. ;-) In order to avoid misunderstandings, let me emphasize that Aaronson's comment is no joke. First of all, it is not too nice to be joking about these serious matters. But there is a simpler way to see why it's no joke. He has not only written what he wrote: has has also acted like that. He has advocated the crackpots and when he was paid a visit to California, he started to write different things. He's a corrupt piece of moral trash. My anger may be quiet but it is unyielding. It's not just him: the Academia is literally flooded by intellectual prostitutes. Stanford and weird discussions Scott Aaronson was invited to Stanford to speak about some alleged links between the anthropic principle and NP-complete problems in computer science. He mentioned some elementary - for him - things about computational complexity plus its hypothetical relations to the landscape. He referred to the paper by Douglas and Denef. I don't think that their paper holds too much water - they assume that the configuration space of string theory is an uncontrollable, chaotic pile of numbers, and they use this assumption to derive that the configuration space of string theory is an an uncontrollable, chaotic pile of numbers that can't be used to find the right place where we live. ;-) From the viewpoint of conventional physics, this statement is kind of manifestly wrong. If we made the appropriate measurements of physics at various scales up to the Planck scale, especially at the compactification scale, we could simply *measure* various parameters of the compactification. We could *measure* the fluxes and the numbers of branes wrapped on various cycles and simply reconstruct what our compactification looks like: here I assume that we live in a standard flux compactification but similar procedures could work if we live in another kind of vacuum, too. We could determine various currently unknown features of Nature just like the physicists typically did thousands of times in the past: using experiments. Even without experiments, we can find - and we are frequently finding - new theoretical organizing principles that increase our understanding of the theory. Every insight makes our picture of reality less confusing and more comprehensible while it makes the argument of Denef and Douglas weaker. Even if the task of finding the exact vacuum were difficult in practice, it is pretty obvious that the additional measurements would simplify our search for the correct vacuum. Denef and Douglas have to assume that we won't find any new organizing principle which is a pretty pessimistic - and unlikely - assumption. What a surprise that they can derive pessimistic Fine. I don't want to argue about Denef and Douglas because these physicists are extremely smart and experienced and they know what they're doing and how the actual terms in various quantities such as vacuum energy look like. But now we discuss another piece of work which seems to have nothing to do with the vacuum selection problem: correct me if I am wrong. The author just argues that he did something in computer science because of his thinking about the vacuum selection problem but this opinion seems to be a consequence of his not knowing what the vacuum selection problem is actually all about. 
Unconventional seminar speakers You know, high-energy physics groups often invite speakers who are not exactly in the mainstream but who can be interesting or inspiring because of other reasons. The more pedantically these speakers focus on our field that they don't understand, the more nonsensical the resulting talk typically is. Needless to say, when you're the organizer and when you're a hospitable person, it is obvious that you will try to make the visit as pleasant for the speaker as you can even if his talk makes no sense. This is how good organizers normally behave. After all, we don't want to create a hell out of our environment that couldn't be cured by turning the monitor off. Many of us have organized many kinds of seminars and we could share our stories. The string theorists are undoubtedly among the most hospitable people in the world. But the speaker must be very silly if he misinterprets the hospitality and politeness as a confirmation of his ideas about the Universe. Be sure that virtually all well-known senior string theorists realize that all the comments about the "alternatives" of string theory are just mathematically unconvincing conglomerates of half-baked ideas that can often be easily disproved within a minute and that are considered to be worth a talk only because of the unusually poor standards of their proponents. At the same moment, the string theory community contains a huge number of very generous and receptive hosts. These two features are not necessarily in contradiction with each other because they refer to different types of the string theorists' mind. The fact that a string theorist understands that loop quantum gravity is nonsense is about her familiarity with some general facts, arguments, and theorems about theoretical physics. The fact that she is a great host talks about her social qualities. But one shouldn't forget that these are two very different characteristics that manifest themselves in different situations. They shouldn't be mixed with each other. And if there is a risk that there could be a misunderstanding, I think it is a good policy to try to avoid such a misunderstanding. Local description of quantum gravity Scott Aaronson refers to some comments of Steve Shenker and especially Lenny Susskind about the local description of gravity. Although it would be nice to have a local description of string theory, one that resembles the old-fashioned field theories as closely as possible, it seems clear to me that there exists a lot of unnecessary fog about this question. First of all, we do have such a description in many situations. It is called string field theory. String field theory has its issues - especially its awkward treatment of closed strings that doesn't seem to tell us anything about non-perturbative physics. Nevertheless, it is a correct local off-shell description of perturbative string theory. There are indeed many "local fields" at weak coupling of string theory. If you suggest that there is another local off-shell description of weakly-coupled string theory, you're pretty close to a contradiction. If you choose different degrees of freedom than the elementary string fields, they will have to be strongly coupled in order to describe the same physics. No one - not even Lenny Susskind - has ever told me in what sense this description should be better than string field theory as we know it so I consider this line of reasoning to be pretty much closed. 
At weak coupling, we know more or less everything we want to know about string theory: there could perhaps exist some interesting things we should know but we don't yet know what these future questions are. ;-) It would be really great to extend our tools - such as the worldsheet conformal field theory - to all values of the couplings. But the perturbative part of the resulting formalism is physically understood. Manifest Hawking-Bekenstein entropy Scott Aaronson says that the loop quantum gravity proponents agree with conventional quantum gravity - and string theory - that the black hole entropy is proportional to the horizon area. That's a very bizarre statement. The proportionality law was, first of all, found by Bekenstein who has made some visionary observations about the laws of thermodynamics and their links with physics of black holes and by Hawking who has quantitatively incorporated these visions into semiclassical quantum gravity. Second of all, the entropy formula was independently derived in string theory for rather large classes of black holes. These derivations look completely different and inequivalent according to the present understanding of physics of string theory. It is likely that a more complete and universal description of string theory will make this conclusion seem less mysterious but right now, the confirmation is something that no reasonable physicist can dismiss because it is a highly non-trivial argument supporting the formula as well as the statement that string theory is the only known consistent theory of gravity. And yes, it is probably also the only mathematically possible theory of quantum gravity but right now, we can't prove this assertion directly. Loop quantum gravity cannot derive that the entropy is proportional to the area and it is likely that if loop quantum gravity is treated properly according to its rules, the proportionality law doesn't hold. The law only holds if we assume that all the degrees of freedom in the black hole interior can be ignored and should be ignored: the whole region must be removed from space by hand and a new kind of physics must be attached at the end-of-the-world domain wall. These degrees of freedom are ignored because we want to get a certain result. No one has any other justification of this step that would follow from loop quantum gravity itself. Moreover, the calculation of the proportionality coefficient leads to a completely wrong result instead of the correct factor of one quarter. Some loop quantum gravity proponents can perhaps agree that the proportionality law is true - because the evidence supporting this law is overwhelming, exact, and based on many different and inequivalent formalisms. But it is dishonest to pretend that the proportionality law is equally close to the equations of loop quantum gravity as it is to string theory. The real situation is extremely far from being a symmetric one. Universality of the area law Another thing I want to say is the following: the proportionality law holds for all kinds of large black holes. It is a universal law. But the reason why the law is universal is explained by arguments based on general relativity itself: imagine something like Wald's derivation of the entropy or one of the procedures by Steve Carlip. I think it is a fundamentally flawed idea to expect another universal proof of the formula that is not based on general relativity. After all, the usual general relativistic description is the only one that can easily talk about the "horizon area". 
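For reference, the proportionality law under discussion is the Bekenstein-Hawking relation (standard notation), which also fixes the coefficient referred to above as the factor of one quarter:
$$ S_{BH} \;=\; \frac{k_B c^3 A}{4 G \hbar} \;=\; k_B\,\frac{A}{4\,\ell_P^{2}}, \qquad \ell_P^{2} = \frac{G\hbar}{c^{3}}, $$
so a microscopic derivation has to reproduce not just $S \propto A$ but also the 1/4 in Planck units.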
If you have a different description that doesn't directly use the metric tensor as a degree of freedom, it won't provide us with any universal method to define the "horizon area". The only thing that all these black holes in different vacua of string theory share is the horizon area and the Einstein-Hilbert action for some of the relevant low-energy degrees of freedom. There is nothing else that they share, and there can thus be no other universal derivation of the law. The people who believe otherwise are probably making the same mistake as the people who try to derive two inequivalent descriptions of a weakly-coupled regime of a physical theory. The alternative, microscopic calculations of the black hole entropy look different. Most of them are eventually based on Cardy's formula but the exact identity of the CFT whose density of states is calculated by this asymptotic formula differs from one black hole to the next. The macroscopic classical description based on general relativity is the only thing that all black holes really share. Merry Christmas.
{"url":"http://motls.blogspot.com/2006/12/academia-and-scientific-integrity.html","timestamp":"2014-04-16T13:03:01Z","content_type":null,"content_length":"202821","record_id":"<urn:uuid:8f89467b-5b06-4f35-a2ff-adb23875e45e>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00319-ip-10-147-4-33.ec2.internal.warc.gz"}
Constructing a metric over a lattice

Consider a lattice $({\cal L}, \wedge, \vee)$ with an antimonotonic function $f: {\cal L} \rightarrow {\mathbb R}$ defined on it (i.e. $x \preceq y \implies f(x) \ge f(y)$). $f$ is said to be submodular if for all $x,y \in {\cal L}$, $$f(x) + f(y) \ge f(x \wedge y) + f(x \vee y)$$ and supermodular if the inequality is flipped (again for all $x,y$). It's generally known (there's an easy proof) that a submodular $f$ induces a metric on ${\cal L}$ via the definition $$ d_s(x,y) = 2f(x \wedge y) - f(x) - f(y).$$ If $f$ is supermodular, then the construction $$d^s(x,y) = f(x) + f(y) - 2f(x \vee y)$$ yields a metric.

Question: I'm dealing with an $f$ that is neither sub- nor supermodular. I can define the "distance" $$ d(x,y) = \min ( d^s(x,y), d_s(x,y)).$$ Conjecture: $d(x,y)$ is a metric. I have very little sound mathematical intuition for why this conjecture should be true, and bucketloads of empirical evidence (from a lattice I'm actually working with). This seems like the kind of thing that, if true, would be reasonably well known to experts, and if false, might have a clear counterexample. So this is a plea for help. Since it might make a difference, I should mention that the lattice I'm working with is nondistributive in general, but it has distributive sublattices where I'm still unable to prove the conjecture.

Suresh, it will be very helpful if you explain in your question what your notations $L,\wedge,\vee$ mean. L stands for lattice, but what are $\wedge$ and $\vee$? I guess $x \wedge y$ does not mean wedge product of x and y? – Dmitri Feb 21 '10 at 11:04
In a lattice $x \vee y$ denotes the greatest lower bound of $\{x,y\}$ and $x \wedge y$ the least upper bound. – Gerald Edgar Feb 21 '10 at 11:31
Ok, I think I got it, here by lattice we mean this: en.wikipedia.org/wiki/Partially_ordered_set , but not this: en.wikipedia.org/wiki/Lattice_(group) I was not aware of the first definition of Lattice, wiki is helpful here en.wikipedia.org/wiki/Lattice_(mathematics) – Dmitri Feb 21 '10 at 12:47
By the way, I seem to have reversed $\wedge, \vee$ in my comment. – Gerald Edgar Feb 21 '10 at 13:28
@Dmitri: Look at the tag "lattice" ... it is interesting that there are posts on both types of lattice that use the label. – Gerald Edgar Feb 21 '10 at 17:45

3 Answers

First failed attempt (this poset is not a lattice): triangle inequality fails for the three .5 nodes in the middle. Second attempt: Our lattice consists of sets, with intersection and union. I show them by Venn diagrams here... The same three middle sets fail the triangle inequality for the same accepted reasons as before. But now it is surely a lattice, right?
So I originally had a comment saying that this was a good example (and I also accepted the answer), but now I'm not so sure. In your example, let's label the three .5 elements A, B, C from left to right. Consider the "middle" .5 element B, and the .4 element "above" it in the drawing. These two elements have two meets in the lattice (the .9 and the .6). That violates the unique meet (and join) property of a lattice. I'm not certain there's an easy way to fix this. – Suresh Venkat Feb 21 '10 at 17:37
Edited to add a second attempt. – Gerald Edgar Feb 21 '10 at 18:53
I think this example works, but I'm going to let it stew in my head for a few hours before I accept it :). Thanks !
– Suresh Venkat Feb 21 '10 at 20:23

Don't you need some strict inequality somewhere in your definitions? For example, a constant function meets your definitions of antimonotonic, submodular, and supermodular, but does not induce a metric (assuming your lattice has more than one element) since $d^s$ and $d_s$ would then always evaluate to zero.
For this I think "antimonotonic" should be: $x < y \Longrightarrow f(x) > f(y)$. – Gerald Edgar Feb 22 '10 at 13:02
yes. I was being sloppy with the definitions – Suresh Venkat Feb 22 '10 at 18:01

Probably to define a distance starting with a general antimonotonic $f$ we should do this: if $a\lt b$ then $d(a,b) = d(b,a) = f(a)-f(b)$, and for general $a,b$ let $d(a,b)$ be $$\inf \sum_{i=1}^{n} d(x_i,x_{i-1}),$$ where the infimum is over all sequences $a=x_0, x_1, \dots, x_n=b$ such that adjacent terms are comparable (call such sequences paths from $a$ to $b$). In your original proposal we used only two-step paths from $a$ to $b$, but to get the triangle inequality we need to allow longer paths as well. Presumably submodular implies that a certain two-step path is shortest, and supermodular that a different two-step path is shortest. Plus, this will apply to a poset that is not a lattice.
That's exactly right. That's how we did define a metric in fact. the original hope was that it could be computed easily using only two-step paths, and your example shows that this is not possible. – Suresh Venkat Feb 22 '10 at 18:01
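To make the question's remark that "there's an easy proof" concrete, here is a sketch (added here, not part of the original thread) of why an antimonotonic, submodular $f$ gives $d_s$ the triangle inequality; as noted in the answer above, without strict antimonotonicity one only gets a pseudometric, since $d_s(x,y)=0$ need not force $x=y$.
$$ d_s(x,z) \le d_s(x,y) + d_s(y,z) \;\Longleftrightarrow\; f(x\wedge z) + f(y) \le f(x\wedge y) + f(y\wedge z). $$
Submodularity applied to the pair $x\wedge y$ and $y\wedge z$ gives
$$ f(x\wedge y) + f(y\wedge z) \;\ge\; f(x\wedge y\wedge z) + f\bigl((x\wedge y)\vee(y\wedge z)\bigr), $$
and since $x\wedge y\wedge z \preceq x\wedge z$ while $(x\wedge y)\vee(y\wedge z) \preceq y$, antimonotonicity bounds the right-hand side below by $f(x\wedge z) + f(y)$, which is exactly what is needed. Nonnegativity and symmetry are immediate from $x\wedge y \preceq x$ and $x\wedge y \preceq y$. The argument uses no distributivity, so it applies to the nondistributive lattices mentioned in the question.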
{"url":"http://mathoverflow.net/questions/15964/constructing-a-metric-over-a-lattice?sort=oldest","timestamp":"2014-04-18T21:21:57Z","content_type":null,"content_length":"73195","record_id":"<urn:uuid:b23a10cc-0cd2-45ef-b3d0-0f58f061796a>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00626-ip-10-147-4-33.ec2.internal.warc.gz"}
5 replies Case Assigner I just read the "Where ist the fish?" topic? This reminds me of another one named "Where is the father"? Maybe some of you already know this one. I guess it is quite famous, at least in Germany it is. You can try to solve it. And it definetely works - you can give an adequate solution to this problem. If no one is able to solve it, I will give the answer later on. This is one of those questions I like most...... There is a mother which is 21 years older than her son. In 6 years, the son will be 5 times younger than the mother. ( <-- btw., is this correctly translated into English? ) Question: Where is the father? Hi Case Assigner, well, where is the father? What about the son? And the mother? And what are they doing? Heh... hehehe, let's see: a[m] = a[s] + 21 (age of mother = age of son plus 21) a[m6] = 5a[s6] (age of mother after 6 years = 5 times the age of the son after 6 years) a[m6] = 5a[s6 ]a[m ]+ 6 = 5(a[s] + 6) a[s] + 21 + 6 = 5(a[s] + 6) a[s ]= -3/4 (son's age) a[m] = a[s] + 21 = 20+1/4 (mother's age) So the son is -3/4 years old, that is, minus nine month old. He will... be born? What's the future of "was born"? Umm, let's then say he'll "come out" in nine months. You asked, where is the father... how should I know? I don't even know exactly where the son is! Well, since it takes 9 months to carry a baby, I would say the father is, at that moment, impregnating the mother. Although babies do come at their own pace, so it seems a little silly to predict exactly when he'll be born. Grammar GeekWell, since it takes 9 months to carry a baby, I would say the father is, at that moment, impregnating the mother. Although babies do come at their own pace, so it seems a little silly to predict exactly when he'll be born.Yeah, it doesn't take exactly nine months, so you can't tell exactly where the son is... maybe he's still in the testicles, who knows... or maybe he's going to leave, and he's like "Ok dude, I'm outta here, see ya in nine months". That's why I said there's no exact answer actually. Case Assigner Yes, you are right..... It´s not possible to give a very accurate solution to this problem. I just like this one because you explain a problem which is to be solved and everyone thinks about age and all that stuff.... and then the question is where the father is at that moment...... I really like it, because you can approximately give the answer..... In German it is a little bit nicer, because the answer is much ruder, but I didn´t want to post this one here... Grammar Geek has formulated it very nicely. Interesting stuff Live chat Registered users can join here
{"url":"http://www.englishforums.com/English/WhereIsTheFather/vrzhw/post.htm","timestamp":"2014-04-20T18:27:52Z","content_type":null,"content_length":"32411","record_id":"<urn:uuid:337ce228-fe97-4338-95d4-39717a83ef41>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00183-ip-10-147-4-33.ec2.internal.warc.gz"}
Please Help on Inverse Operators; Differential Equation !!!

April 12th 2013, 05:59 PM #1
Hi guys, I have an exam tomorrow, for which I am prepared except for the topic of inverse operators. I've been trying to learn this topic for the last few days, but I can't seem to. Can someone please explain how inverse operators work? Also, could someone solve and explain to me in detail how to solve the following using the inverse operator method: 4y'' - 3y' + 9y = 5x^2 Thank you for all the help in advance!

April 12th 2013, 06:02 PM #2
Re: Please Help on Inverse Operators; Differential Equation !!!
I'm not sure what you mean by Inverse Operators, but that DE is second order linear constant coefficient, so solve the homogeneous equation using the Characteristic Equation, and guess a Particular Solution of the form Ax^2 + Bx + C and solve using variation of parameters.

April 12th 2013, 06:08 PM #3
Re: Please Help on Inverse Operators; Differential Equation !!!
Sorry, I'll be more specific. Inverse operator as in Inverse Differential Operator. Such as: Inverse operators. I tried figuring out that thread, but no luck. Last edited by math786; April 12th 2013 at 06:10 PM.

April 12th 2013, 06:09 PM #4
Re: Please Help on Inverse Operators; Differential Equation !!!
I understand and can do all other methods for 2nd order ODEs. It's just the method of the inverse differential operator.
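Since the thread never reaches the inverse-operator calculation itself, here is one way it might go for this particular equation (an illustrative sketch, not a quote from the forum and not necessarily how the course presents it). Writing $D = d/dx$, expand the inverse operator as a formal geometric series; the series terminates because derivatives of $x^2$ beyond the second vanish:
$$ y_p = \frac{1}{4D^2 - 3D + 9}\,\bigl(5x^2\bigr) = \frac{5}{9}\left[1 - \frac{4D^2 - 3D}{9} + \left(\frac{4D^2 - 3D}{9}\right)^{\!2} - \cdots\right] x^2 $$
$$ = \frac{5}{9}\left[x^2 - \frac{8 - 6x}{9} + \frac{18}{81}\right] = \frac{5}{9}x^2 + \frac{10}{27}x - \frac{10}{27}. $$
Substituting back into $4y'' - 3y' + 9y$ returns $5x^2$, and the general solution adds the homogeneous part obtained from the characteristic equation $4m^2 - 3m + 9 = 0$.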
{"url":"http://mathhelpforum.com/differential-equations/217354-please-help-inverse-operators-differential-equation.html","timestamp":"2014-04-17T21:59:06Z","content_type":null,"content_length":"39019","record_id":"<urn:uuid:f62a3a4b-fa68-4a5b-9de5-5039d74c1454>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00605-ip-10-147-4-33.ec2.internal.warc.gz"}
Population doubling time of keratinocyte,etc. - Population doubling time of keratinocyte,etc. - (Mar/01/2005 ) again thanks!! i've done some preliminary tests, what i suspected was the period of keratinocyte adherent might be due to some factors. around the results i have got was the seeding density. if i seed too less cells, cell could adhere but need long period to get 60% of confluency and of course some cells were proliferated and DIFFERENTIATED. this is worse! if too much cells, medium will need to change everyday, but 80% confluency can be achieved within 4 days. donor variability also a factor in this aspect as well. what do u think about the seeding density?? thanks for comments!! you must pay attention to the time cells need to differenciate. if your cells need 5 days to differenciate, i would say that three days is the max time for your experiment. But if time of differenciation is greater no problem fo doing your experiment on 4 days! I think that 60% confluency is good. For two reasons. 1 cells are relatively close to make good divisions (but not too close) and 2 if you seed at more than 60% and due to the fact your treatment is an increaser of division frequency (PDT decreases) more than 60% may result in too much confluency in treated cells. Enjoy your experiment. Good luck. u are right. i'll try through out the passage 0 to 2 with treatment, to observe the differences of PDT among passage 0,1 and 2. what i guess was the passage 2 will be cleaner than the other two. thus, will be giving out the more accurate answer. what do u think?? I am stimating PDT for a normal and transformed cell line but I cannot find the formula to calculate it. Does any one has this formula? I am plating 50,000 or 100,000 cells in 35mm dishes and counting cells 24, 48, and 72 hours later. I will appreciate your help. Thanks. -David Chiluiza- here you are... negative sign may occur but you don't care of it... Thanks for the formula for PDT. As I can see, I can use only two time points at a time, is that correct? I have data collected at times 0, 24, 48, and 72 hours. Should I compared PDT between 0-24, 0-48, and 0-72, using t=24, 48, and 72 respectively, then average? or should I compara 0-24, 24-48, and 48-72 using t=24 for all, then average? How would you use the formula to obtain your cell line's PTD? -David Chiluiza- Can anyone send me keratinocyte isolation protocol? Thanx U
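For reference, the doubling-time formula being asked about is, in its standard exponential-growth form (stated here for completeness rather than quoted from the thread; the example numbers below are made up), computed from two cell counts $N_1$ and $N_2$ taken at times $t_1$ and $t_2$:
$$ \mathrm{PDT} = \frac{(t_2 - t_1)\,\ln 2}{\ln\!\left(N_2 / N_1\right)}. $$
For instance, a plate seeded with 50,000 cells that reaches 200,000 cells 48 h later gives $\mathrm{PDT} = 48\,\ln 2/\ln 4 = 24$ h. Applied to counts at 0, 24, 48 and 72 h, each pair of time points yields its own estimate; averaging the pairwise estimates, or fitting $\ln N$ against $t$ by least squares, is a common way to combine them.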
{"url":"http://www.protocol-online.org/biology-forums/posts/5398more1.html","timestamp":"2014-04-18T18:24:15Z","content_type":null,"content_length":"11528","record_id":"<urn:uuid:98f64046-5351-44d4-9efe-aec66c57130a>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00378-ip-10-147-4-33.ec2.internal.warc.gz"}
Coding help. Coding help. I need to code y= (-b + (b^2 – 4ac)^1/2 )/2a Variables a,b,c,and y are of type float. And for this code the square root is exactly the same as it is IRL, like Sqrt 16=4 Thanks alot :) So, what have you tried? idk how to code it, so i haven't tried anything!! Start with showing us how you would write the expression (b^2 – 4ac) as C++ code. If you cannot do that, then you need to re-read your learning material. fairly simple...break it up into parts. like for instance code the parenthesis first then move outwards till you've finished it.
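Since the thread stops short of actual code, here is one way the expression might be written in C++ (a sketch with made-up example coefficients, not the course's expected answer). The main points are pulling in sqrt from <cmath>, parenthesizing the denominator 2*a, and guarding against a negative value under the square root:

#include <cmath>     // std::sqrt
#include <iostream>

int main()
{
    float a = 1.0f, b = -3.0f, c = 2.0f;    // example coefficients (made up)
    float disc = b * b - 4.0f * a * c;      // the piece under the square root: b^2 - 4ac
    if (disc < 0.0f) {
        std::cout << "no real root\n";      // sqrt of a negative number is not real
        return 0;
    }
    // parentheses around (2.0f * a) matter: without them, / and * associate
    // left to right, so the expression would divide by 2 and then multiply by a
    float y = (-b + std::sqrt(disc)) / (2.0f * a);
    std::cout << "y = " << y << '\n';       // prints 2 for these coefficients
    return 0;
}

This is the same break-it-into-parts approach the last reply suggests: compute the discriminant first, then assemble the outer expression.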
{"url":"http://cboard.cprogramming.com/cplusplus-programming/134805-coding-help-printable-thread.html","timestamp":"2014-04-17T08:51:37Z","content_type":null,"content_length":"7237","record_id":"<urn:uuid:4b26d213-d1f0-44ab-a41c-16e70da3bb26>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00102-ip-10-147-4-33.ec2.internal.warc.gz"}
Richland Hills, TX Trigonometry Tutor
Find a Richland Hills, TX Trigonometry Tutor

...I use OSHA, National Cancer Institute (NCI), EPA, and other health and safety systems to analyze statistics for the DFW region, but also for National or International trends. Some Key terms from the NCI age-adjusted rate An age-adjusted incidence or mortality rate is a weighted average of the a... 93 Subjects: including trigonometry, chemistry, English, reading

...While a student at Trinity, I tutored freelance between 7-10 students regularly. I also worked for Huntington Learning Center in San Antonio during my last year in college. I have experience tutoring most math classes taught in Texas middle and high schools as well as teaching test preparation for the math portions of the ASVAB, SAT and ACT. 14 Subjects: including trigonometry, chemistry, geometry, SAT math

I have undergraduate degrees in Mechanical and Aerospace Engineering. I completed a Master's in Industrial Engineering and have four years of industry experience. I have been tutoring and mentoring since high school, all the way through college. 21 Subjects: including trigonometry, chemistry, English, accounting

...I have a patient and clear way to explain things that most tutors can't do, even if they know the material. I always try to point out the common mistakes and review the important points. I can take you through the short-cut way of doing things, make it simple, and most important - you remember it. 15 Subjects: including trigonometry, chemistry, physics, statistics

...I majored in math in college and have a Master's degree in Mathematics. My students have consistently improved their SAT scores by 200 points overall and raised their class grade by one letter grade. I try to make math fun for my students and work at a pace that the student can maintain. 15 Subjects: including trigonometry, chemistry, calculus, geometry
{"url":"http://www.purplemath.com/Richland_Hills_TX_trigonometry_tutors.php","timestamp":"2014-04-20T01:49:58Z","content_type":null,"content_length":"24670","record_id":"<urn:uuid:77c44605-d9a9-4315-a034-3c7ff7f1cfc1>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00209-ip-10-147-4-33.ec2.internal.warc.gz"}
The Napoleon Point and More Date: 09/04/98 at 03:10:25 From: Adi Schulthess Subject: Fermat On each side of a triangle you put on triangles having three sides of the same length. You take the centers of them and draw a line to the opposite vertex in the original triangle. Does anybody know the proof why the three lines all go to the same point? Date: 09/05/98 at 12:46:58 From: Doctor Floor Subject: Re: Fermat Hi Adi, Thank you for sending your question to Dr. Math. I wonder why you write "Fermat" in your subject line, because the point of concurrence you are describing is the "Napoleon point" of the triangle. The "Fermat point" is constructed by taking the 'new' vertices of the triangles constructed at the sides, not their centers. For the Napoleon point, visit Clark Kimberling's page: From this page you can look for other triangle centers too. Both points are special cases of the following theorem: Let ABC be a triangle, and let A', B' and C' be points such that: angle(ABC') = angle(CBA') = angle(BCA') = angle(ACB') = angle(CAB') = angle(BAC') = t (where the angles are only equal when they are all outside or all inside the triangle). Then the lines AA', BB' and CC' concur in one To prove this, let's for simplicity reasons take all angles outside triangle ABC. Let's name the sides of triangle ABC in the standard way, i.e. AB = c, BC = a and CA = b. Let's name the angles angle(CAB) = angle(A), angle(ABC) = angle(B) and angle(BCA) = angle(C). Here is a picture for your reference:
{"url":"http://mathforum.org/library/drmath/view/55042.html","timestamp":"2014-04-17T00:56:04Z","content_type":null,"content_length":"8508","record_id":"<urn:uuid:230e1bc8-0f12-4a0e-a84e-b7d89af8f213>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00657-ip-10-147-4-33.ec2.internal.warc.gz"}
Homework Help Posted by Adam-- Dr Bob, Help! on Wednesday, September 26, 2012 at 5:34pm. I Posted this question the other day: A 1.00 g sample of enriched water, a mixture of H2O and D2O, reacted completely with Cl2 to give a mixture of HCl and DCl. The HCl and DCl were then dissolved in pure H2O to make a 1.00 L solution. A 25.00 mL sample of the 1.00 L solution was reacted with excess AgNO3 and 0.3800 g of an AgCl precipitate formed. What was the mass % of D2O in the original sample of enriched water? your reply was : I have deleted the original response and replaced it with the following. Check my thinking. Cl2 + H2O ==> HOCl + HCl Cl2 + D2O ==> DOCl + DCl let x = mass H2O and y = mass D2O x + y = 1.00 [x*(molar mass AgCl/molar mass H2O)] + [y*(molar mass AgCl/molar mass D2O)] = 0.3800 x 1000/25 Two equation in two unknowns. Solve for x and y and convert to percent. Check my thinking. I understand your reasoning and I feel that this should work, but when I try to solve it I am confused. My two equations would be : x + y = 1.00 15.2=7.96x + 7.16y when I solve x= 10.05 and y = -9.05 I'm not sure if this is wrong or if I just don't understand the concept. When I would calculate the % D2O it doesn't really make sense... • Chemistry - DrBob222, Wednesday, September 26, 2012 at 6:56pm Thanks for posting the original question and my response. I don't get your numbers exactly but I agree something is wrong because, I, too, obtained a negative number for D2O and that can't be. I have looked at my solution and I don't see anything that stares out at me that I can correct. Does the problem give any additional information? Do you have an answer? Is there an equation for the reaction with Cl2? Have you checked that the problem is posted correctly with all of the correct numbers? There is something very basic that's wrong for the following: If I take the 0.3800 g AgCl, convert to moles and back to mols H2O and convert that to grams I get 1.9g if it's 100% H2O or 2.17 g if it's 100% D2O. And neither of those can be right since we had only 1.00 g total initially. • Chemistry - Adam-- Dr Bob, Help!, Wednesday, September 26, 2012 at 7:24pm Deuterium Oxide (D2O) is a commonly solvent for NMR spectroscopy. However, it is often contaminated with H2O. To test for residual H2O in D2O , a student took 1.00g of D2O + H2O sample and reacted with Cl2 to give a mixture of HCl and DCl. The HCl and DCl mixture was then dissolved to make a 1.00L solution. A 25mL sample of the 1.00L solution was reacted with excess AgNO3 and 0.3800g of AgCl formed. What was the mass % of D2O in the original sample of deuterium enriched water? that is the exact question, I don't think I missed anything, but maybe you will see something different • Chemistry - DrBob222, Wednesday, September 26, 2012 at 10:35pm I have turned this thing around in my head and I don't see anything wrong but it is obvious that SOMETHING is wrong. Either I've done something that isn't cricket or one of the numbers in the problem is not right. I will ask my friend, Bob Pursley, to take a look at it. If you will post each day I will have a way to get back to you. • Chemistry - bobpursley, Thursday, September 27, 2012 at 4:04am I dont see anything wrong with the concept in the solution. Obviously, with a negative solution, it means probably that one of the data must be wrong • Chemistry - Adam-- Dr Bob, Help!, Thursday, September 27, 2012 at 9:44am Thank you both so much for your help , it is really appreciated. 
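To spell out the arithmetic behind DrBob222's check (using rounded molar masses of 143.3, 18.02 and 20.03 g/mol for AgCl, H2O and D2O, and the one-to-one mole ratio of water to AgCl that follows from the Cl2 + H2O equations above):
$$ x + y = 1.00, \qquad \frac{143.3}{18.02}\,x + \frac{143.3}{20.03}\,y = 0.3800 \times \frac{1000}{25} = 15.2, $$
$$ 7.95\,x + 7.15\,y = 15.2 \;\Rightarrow\; 7.15 + 0.80\,x = 15.2 \;\Rightarrow\; x \approx 10.1,\; y \approx -9.1, $$
which reproduces the student's numbers and makes the inconsistency explicit: 15.2 g of AgCl per litre is about 0.106 mol of chloride, which would require roughly 1.9 g of water even if the sample were pure H2O, yet the sample weighed only 1.00 g.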
{"url":"http://www.jiskha.com/display.cgi?id=1348695241","timestamp":"2014-04-21T15:19:27Z","content_type":null,"content_length":"12197","record_id":"<urn:uuid:68452318-9068-4411-82c4-eace4806c743>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00603-ip-10-147-4-33.ec2.internal.warc.gz"}
Re: st: Log transform of skewed data [Date Prev][Date Next][Thread Prev][Thread Next][Date index][Thread index] Re: st: Log transform of skewed data From Roger Newson <roger.newson@kcl.ac.uk> To statalist@hsphsun2.harvard.edu Subject Re: st: Log transform of skewed data Date Wed, 02 Jun 2004 21:56:18 +0100 At 14:53 02/06/04 -0400, you wrote: I have data on the "cost" (actually tranformed hours) of various types of caretaking for Alzheimers patients. I'm interested in a regression model to test treatment effects in a multisite study. As is usual for cost data, it is positively skewed. So, I contemplated a log transform, either through a direct transformation of the response, or through a log link in a glm, gee, or something similar. I actually am using "xt" commands to allow for nonindependence among caretakers treated at the same site. the problem is that the mode cost is $0, so that the distribution is bimodal. This, of course, remains true if I do a lof transform. Any ideas on how to analyze such data would be apreciated. Log-transformed data can often be understood in terms of geometric means and their ratios. If in Stata you type findit gmratio then you should be taken to my website, where you can download my Stata Tip on the -eform- option of -regress- (Newson, 2003), which shows how to use this to calculate confidence intervals for geometric means and their ratios. If there are zeros, however, then there is a problem, because the log of zero is not defined. In this case, you either have to transform the zeros to something else, or use arithmetic means instead of geometric means, with a log link function, in a glm or gee, usually using the -eform- option. The parameters will then be arithmetic means and their ratios, instead of geometric means and their ratios. Arithmetic means are still defined if the outcome is possibly zero, as is the case with loglinear modelling of count data, and the principle is the same with non-count data such as your caretaker-hours. The trick with the -noconst- option, mentioned in Newson (2003) may still be useful if you want a baseline arithmetic mean for a baseline patient. Hope this helps. Newson R. Stata tip 1: The eform() option of regress. The Stata Journal 2003; 3(4): 445. Roger Newson Lecturer in Medical Statistics Department of Public Health Sciences King's College London 5th Floor, Capital House 42 Weston Street London SE1 3QD United Kingdom Tel: 020 7848 6648 International +44 20 7848 6648 Fax: 020 7848 6620 International +44 20 7848 6620 or 020 7848 6605 International +44 20 7848 6605 Email: roger.newson@kcl.ac.uk Website: http://www.kcl-phs.org.uk/rogernewson Opinions expressed are those of the author, not the institution. * For searches and help try: * http://www.stata.com/support/faqs/res/findit.html * http://www.stata.com/support/statalist/faq * http://www.ats.ucla.edu/stat/stata/
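The distinction Newson draws can be summarized in symbols (a restatement of standard GLM facts, not a quote from the post; $\alpha$ and $\beta$ are generic coefficients): a linear model for the log-transformed outcome targets geometric means, while a log-link model for the untransformed outcome targets arithmetic means, and only the latter is defined when some outcomes are zero.
$$ \text{log transform:}\quad E[\ln Y \mid X] = \alpha + \beta X \;\Rightarrow\; e^{\beta} = \text{ratio of geometric means per unit of } X, $$
$$ \text{log link:}\quad \ln E[Y \mid X] = \alpha + \beta X \;\Rightarrow\; e^{\beta} = \text{ratio of arithmetic means per unit of } X. $$
With caretaking hours that are often exactly zero, the second formulation (for example a GLM or GEE fit with a log link, reported with the eform option) keeps the zeros in the data while still giving ratio-type effect estimates.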
{"url":"http://www.stata.com/statalist/archive/2004-06/msg00083.html","timestamp":"2014-04-19T14:38:08Z","content_type":null,"content_length":"7760","record_id":"<urn:uuid:7f7cd487-3d3c-42e0-ae9a-f3e229738d89>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00537-ip-10-147-4-33.ec2.internal.warc.gz"}
Complex numbers with a polenom! March 19th 2011, 01:56 AM #1 Mar 2011 So here is this combined polynomial & complex numbers; I couldn't solve it! F(z)= z^7 + a(6)*z^6 + a(5)*z^5 + a(4)*z^4 + a(3)*z^3 + a(2)*z^2 + a(1)*z^1+a(0) It is given that z=-4-2i is a root of F(z),F'(z),F''(z) (I guess it's correct to say that z=-4+2i is also root for the three functions!) a(0),,,,,,a(6) are all real numbers What is the value of a(6) ? **a(Number) indicates for index! Any thoughts? Here's one thought: if the coefficients of the polynomial are all real, then any complex roots come in complex-conjugate pairs. Hence, -4+2i is also a root of F, F', and F''. That means you technically have 7 equations and 7 unknowns (the unknowns being the coefficients). I would just start plugging in numbers into the functions you have, and start cranking away. It's a fair bit of algebra, looks like. What do you get? Here's one thought: if the coefficients of the polynomial are all real, then any complex roots come in complex-conjugate pairs. Hence, -4+2i is also a root of F, F', and F''. That means you technically have 7 equations and 7 unknowns (the unknowns being the coefficients). I would just start plugging in numbers into the functions you have, and start cranking away. It's a fair bit of algebra, looks like. What do you get? Are you sure this is a good was to solve this it mayet take me hours Are you allowed to use a Computer Algebra System like Mathematica? Since -4-2i is a zero of F, F', and F'', (z + 4 + 2i)^3 must be a factor of F. The same is true of -4+2i, so (z + 4 -2i)^3 is also a factor. Let's say the remaining zero of F is a. Then F(z) = (z-a) (z + 4 + 2i)^3 (z + 4 -2i)^3 = (z-a) [(z + 4 + 2i) (z + 4 -2i)]^3 = (z-a) (z^2 + 8z + 20)^3 You can use F(0) = -6 to determine a. So... Since -4-2i is a zero of F, F', and F'', (z + 4 + 2i)^3 must be a factor of F. The same is true of -4+2i, so (z + 4 -2i)^3 is also a factor. Let's say the remaining zero of F is a. Then F(z) = (z-a) (z + 4 + 2i)^3 (z + 4 -2i)^3 = (z-a) [(z + 4 + 2i) (z + 4 -2i)]^3 = (z-a) (z^2 + 8z + 20)^3 You can use F(0) = -6 to determine a. So... Very nice! Much more elegant than my brute force solution. There's your answer, kadmany. Hello,first thank u all, second off "Computer Algebra System" please if u got a link to such a calculator then post it, second the method with F(z) = (z-a) (z + 4 + 2i)^3 (z + 4 -2i)^3 = (z-a) [(z + 4 + 2i) (z + 4 -2i)]^3 = (z-a) (z^2 + 8z + 20)^3 You can use F(0) = -6 to determine a. So... I put z=0 then I get (-a)[(4+2i)(4-2i)]^3 = (-a)(20)^3 this equals to -6 => a*20^3=6 => a=6/(20^3) then what?, I gotta find a(6) to open operators is bit hard! Guys I got this expression : Is it correct? March 19th 2011, 03:04 AM #2 March 19th 2011, 03:27 AM #3 Mar 2011 March 19th 2011, 06:33 AM #4 March 19th 2011, 07:01 AM #5 March 19th 2011, 07:20 AM #6 March 19th 2011, 08:22 AM #7 Mar 2011 March 19th 2011, 10:23 AM #8 Mar 2011
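For what it is worth, once the factorization suggested in the thread is accepted (and using the F(0) = -6 condition quoted there), the coefficient being asked for can be read off without expanding everything; the sketch below is added here and is not a post from the original forum.
$$ F(z) = (z-a)\bigl(z^2+8z+20\bigr)^{3} = (z-a)\bigl(z^{6} + 24z^{5} + \cdots\bigr) = z^{7} + (24 - a)\,z^{6} + \cdots, $$
$$ \Rightarrow\; a_6 = 24 - a = 24 - \frac{6}{20^{3}} = 24 - \frac{3}{4000}, $$
where the 24 comes from choosing the $8z$ term from one of the three quadratic factors ($3 \times 8 = 24$) and $a = 6/20^{3}$ is the value found above from $F(0) = -6$.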
{"url":"http://mathhelpforum.com/calculus/175027-complex-numbers-polenom.html","timestamp":"2014-04-19T14:07:59Z","content_type":null,"content_length":"53340","record_id":"<urn:uuid:ec7ef317-fc5e-47fc-b995-f3829cbfae79>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00452-ip-10-147-4-33.ec2.internal.warc.gz"}
Phase Diagrams Mixtures of Phases of Matter Phase diagrams are used to describe equilibrium situations in which two or more phases of matter exist together in pure substances or in solutions. They are widely used in the physical sciences, especially in the fields of metallurgy, materials science, geology, and physical chemistry. In these fields, substances are often formed at high temperatures and then subsequently cooled to the solid state. The manner in which they are cooled determines the mixture of phases that exists when they become solid. This can have an enormous impact on the physical properties of the solid material due to internal stresses (e.g. tempered steel). Phase diagrams have seen very little use in biology, however they have been widely appreciated in cryobiology since Cocks and Brower published an article showing their utility (Cryobiology11: 340-358, 1974). In biological systems, the primary component is water; the entire system is a collection of compartments filled with an aqueous solution. As aqueous solutions are cooled, the water forms a crystalline solid (ice) which has almost no solubility for the solutes that were in the aqueous solution. As ice forms, then, the solutes will be confined to the remaining liquid phase, becoming more concentrated. Since this lowers the freezing point of the aqueous liquid, the system can remain in equilibrium with a substantial unfrozen fraction. As cooling continues, the solubility limit of the solution will also be reached, leading to the precipitation of solutes. These events are succinctly described by a phase diagram. Binary Phase Diagrams The simplest type of phase diagram is for binary systems; systems in which there are only two phases present. The following diagram shows the phase diagram for sodium chloride and water, the most important solution for physiological systems. Fig. 6.1.1 Starting at the left hand side of the diagram, if the temperature of a solution with 0% salt is lowered, the freezing point occurs at 0ºC. If the solution has salt dissolved in it (i.e. the concentration of salt is below the solubility limit), then the mixture will exist in the brine compartment. As the temperature is lowered, the weight percent of NaCl doesn't change until the thick line is reached. This line defines the freezing point of the solution. Further cooling will take the solution along the curve defined by the thick line until the eutectic point is reaches at -21.2ºC. At this point, the unfrozen compartment of the mixture is saturated with NaCl; any further cooling will cause salt to precipitate out of the mixture. For freezing biological systems, this left side of the phase diagram is the most important as it describes the osmolality of the solution in which the cells exist. For convenience, this curve can be described by a simple quadratic equation:
{"url":"http://people.ucalgary.ca/~kmuldrew/cryo_course/cryo_chap6_1.html","timestamp":"2014-04-20T01:03:00Z","content_type":null,"content_length":"7709","record_id":"<urn:uuid:0267c6b6-520b-4074-af9d-6c08bbe474b1>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00626-ip-10-147-4-33.ec2.internal.warc.gz"}
More S&P 500 correlation July 28, 2011 By Pat Here are some additions to the previous post on S&P 500 correlation. Correlation distribution Before we only looked at mean correlations. However, it is possible to see more of the distribution than just the mean. Figures 1 and 2 show several quantiles: 10%, 25%, 50%, 75%, 90%. Figure 1: Quantiles of 50-day rolling correlation of S&P 500 constituents to the index. Figure 2: Quantiles of intra-constituent 50-day correlations. I don’t see anything special going on recently (this is still with data that ends July 15). Mid 2008 is interesting though: the small correlations got very small until Lehman came along. Correlation uncertainty Last time we used the statistical bootstrap to investigate variability. That was fine for thinking about the variability from constituents. However, it is reasonably unintuitive when looking at the variability due to different days. Another approach is to look at what happens when we leave out one observation at a time. In statistical terminology this is the “jackknife” — it was used before the bootstrap came along. Figure 3 shows the range of the 50 jackknifed results at each time point for the mean correlation among the constituents. We’re making a very small change here — dropping one day out of 50 — yet the result can move a non-trivial amount. Figure 3: Leave-one-out range of mean intra-constituent 50-day correlations. Figure 4 shows how the width of the jackknife range changes through time. Figure 4: Width of the jackknife range for mean 50-day intra-constituent correlation. Appendix R Two additional functions were written for this post. They are: > pp.quan.const.cor function (x, probs, window=50, index=1) x <- x[, -index] ans <- x[, 1:length(probs)] colnames(ans) <- probs ans[] <- NA wseq <- (1-window):0 lt <- lower.tri(diag(ncol(x)), diag=FALSE) for(i in window:nrow(x)) { ans[i,] <- quantile(cor(x[wseq+i,])[lt], probs=probs) > pp.jackknife.const.cor function (x, window=50, index=1) x <- x[, -index] ans <- x[, 1:2] ans[] <- NA wseq <- (1-window):0 lt <- lower.tri(diag(ncol(x)), diag=FALSE) jknife <- numeric(window) jseq <- 1:window for(i in window:nrow(x)) { xw <- x[wseq+i,] jknife[] <- NA for(j in jseq) { jknife[j] <- mean(cor(xw[-j,])[lt]) ans[i,] <- range(jknife) Each of these is assuming that the first column of the data holds the index returns. Subscribe to the Portfolio Probe blog by Email daily e-mail updates news and on topics such as: visualization ( ), programming ( Web Scraping ) statistics ( time series ) and more... If you got this far, why not subscribe for updates from the site? Choose your flavor: , or
{"url":"http://www.r-bloggers.com/more-sp-500-correlation/","timestamp":"2014-04-21T04:43:53Z","content_type":null,"content_length":"39386","record_id":"<urn:uuid:383669e5-1300-4ba7-9324-173e1b660813>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00109-ip-10-147-4-33.ec2.internal.warc.gz"}
Posts by jbennet

Built it :). All seems good :) The Celeron is much nippier than the old P4, really impressed with it. I've got an i3 on my work laptop. While not as fast as that, it is still very good and obviously much cheaper. That case was very nice; used another PSU in the end as it was a tad noisy (did work, and not unbearable though). It is very, very roomy. Almost makes me wish I got a full size ATX motherboard now, to make use of the space :) - Room to expand, at least, and probably needed (the sound card in particular is physically quite "long"). The cheap RAM seemed not too bad, passed memtest86. The motherboard was not so great. While it does work, the documentation was poor. But at that price who can complain...
{"url":"http://www.daniweb.com/members/33110/jbennet/posts","timestamp":"2014-04-20T00:48:50Z","content_type":null,"content_length":"78911","record_id":"<urn:uuid:315e77a2-c40d-4c88-8f2c-f49978ba5485>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00051-ip-10-147-4-33.ec2.internal.warc.gz"}
General simple closed curve equation

I have a list of points on a 2D simple closed curve and I'd like to approximate that curve using a polynomial such that the approximation will be given by:

Ʃ a[i,j] x^i y^j = 0

However, I still need to limit the a[i,j] to make sure the approximation is also a simple closed curve, while still keeping the equation as general as possible (so it can be used for all sorts of simple closed curves). I haven't found a general equation such as this, and I was hoping you guys could point me in the right direction (or just tell me if such a thing doesn't exist).
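No reply is recorded here. Purely as an illustration of one standard approach (this is not taken from the thread), an implicit polynomial Ʃ a[i,j] x^i y^j = 0 can be fitted to the sample points by least squares: form the matrix of monomial values at each data point and take the right singular vector belonging to the smallest singular value as the coefficient vector, which minimizes the algebraic residual under a unit-norm constraint on the coefficients. Note that this alone does not force the zero set to be a simple closed curve, which is exactly the extra constraint the poster is asking about.

    # Hypothetical illustration: least-squares fit of an implicit polynomial
    # sum over i+j <= d of a[i,j] * x**i * y**j = 0 to a set of 2D points.
    # The unit-norm constraint on the coefficients avoids the trivial all-zero
    # solution; it does NOT guarantee a simple closed zero set.
    import numpy as np

    def fit_implicit_polynomial(points, degree=2):
        pts = np.asarray(points, dtype=float)
        x, y = pts[:, 0], pts[:, 1]
        exponents = [(i, j) for i in range(degree + 1)
                     for j in range(degree + 1 - i)]
        # Design matrix: one column per monomial x**i * y**j.
        M = np.column_stack([x**i * y**j for (i, j) in exponents])
        # Smallest right singular vector = unit coefficient vector with the
        # least sum of squared residuals.
        _, _, vt = np.linalg.svd(M)
        return exponents, vt[-1]

    # Points on the unit circle: the degree-2 fit recovers x^2 + y^2 - 1 = 0
    # up to an overall scale.
    t = np.linspace(0.0, 2.0 * np.pi, 100, endpoint=False)
    circle = np.column_stack([np.cos(t), np.sin(t)])
    exps, coeffs = fit_implicit_polynomial(circle, degree=2)
    for (i, j), a in zip(exps, coeffs):
        print(i, j, round(float(a), 3))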
{"url":"http://www.physicsforums.com/showthread.php?p=4176884","timestamp":"2014-04-20T18:36:24Z","content_type":null,"content_length":"19767","record_id":"<urn:uuid:33052ab9-7422-4d82-8ac5-51e5e827a922>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00144-ip-10-147-4-33.ec2.internal.warc.gz"}
Abstract: An accurate computational method is presented for determining the mass distribution in a mature spiral galaxy from a given rotation curve by applying Newtonian dynamics for an axisymmetrically rotating thin disk of finite size with or without a central spherical bulge. The governing integral equation for mass distribution is transformed via a boundary-element method into a linear algebra matrix equation that can be solved numerically for rotation curves with a wide range of shapes. To illustrate the effectiveness of this computational method, mass distributions in several mature spiral galaxies are determined from their measured rotation curves. All the surface mass density profiles predicted by our model exhibit approximately a common exponential law of decay, quantitatively consistent with the observed surface brightness distributions. When a central spherical bulge is present, the mass distribution in the galaxy is altered in such a way that the periphery mass density is reduced, while more mass appears toward the galactic center. By extending the computational domain beyond the galactic edge, we can determine the rotation velocity outside the cut-off radius, which appears to continuously decrease and to gradually approach the Keplerian rotation velocity out over twice the cut-off radius. An examination of circular orbit stability suggests that galaxies with flat or rising rotation velocities are more stable than those with declining rotation velocities especially in the region near the galactic edge. Our results demonstrate the fact that Newtonian dynamics can be adequate for describing the observed rotation behavior of mature spiral galaxies.

Abstract: The past year has seen an explosion of new and old ideas about black hole physics. Prior to the firewall paper, the dominant picture was the thermofield model apparently implied by anti-de Sitter conformal field theory duality. While some seek a narrow response to Almheiri, Marolf, Polchinski, and Sully (AMPS), there are a number of competing models. One problem in the field is the ambiguity of the competing proposals. Some are equivalent while others are incompatible. This paper will attempt to define and classify a few models representative of the current discussions.

Abstract: We discuss the production of massive relic coherent gravitons in a particular class of ƒ(R) gravity, which arises from string theory, and their possible imprint in the Cosmic Microwave Background. In fact, in the very early Universe, these relic gravitons could have acted as slow gravity waves. They may have then acted to focus the geodesics of radiation and matter. Therefore, their imprint on the later evolution of the Universe could appear as filaments and a domain wall in the Universe today. In that case, the effect on the Cosmic Microwave Background should be analogous to the effect of water waves, which, in focusing light, create optical caustics, which are commonly seen on the bottom of swimming pools. We analyze this important issue by showing how relic massive gravity waves (GWs) perturb the trajectories of the Cosmic Microwave Background photons (gravitational lensing by relic GWs). The consequence of the type of physics discussed is outlined by illustrating an amplification of what might be called optical chaos.

Abstract: In this review we summarize, expand, and set in context recent developments on the thermodynamics of black holes in extended phase space, where the cosmological constant is interpreted as thermodynamic pressure and treated as a thermodynamic variable in its own right. We specifically consider the thermodynamics of higher-dimensional rotating asymptotically flat and AdS black holes and black rings in a canonical (fixed angular momentum) ensemble. We plot the associated thermodynamic potential—the Gibbs free energy—and study its behavior to uncover possible thermodynamic phase transitions in these black hole spacetimes. We show that the multiply-rotating Kerr-AdS black holes exhibit a rich set of interesting thermodynamic phenomena analogous to the “every day thermodynamics” of simple substances, such as reentrant phase transitions of multicomponent liquids, multiple first-order solid/liquid/gas phase transitions, and liquid/gas phase transitions of the van der Waals type. Furthermore, the reentrant phase transitions also occur for multiply-spinning asymptotically flat Myers–Perry black holes. These phenomena do not require a variable cosmological constant, though they are more naturally understood in the context of the extended phase space. The thermodynamic volume, a quantity conjugate to the thermodynamic pressure, is studied for AdS black rings and demonstrated to satisfy the reverse isoperimetric inequality; this provides a first example of calculation confirming the validity of the isoperimetric inequality conjecture for a black hole with non-spherical horizon topology. The equation of state P = P(V,T) is studied for various black holes both numerically and analytically—in the ultraspinning and slow rotation regimes.

Abstract: The visible mass of the observable universe agrees with that needed for a flat cosmos, and the reason for this is not known. It is shown that this can be explained by modelling the Hubble volume as a black hole that emits Hawking radiation inwards, disallowing wavelengths that do not fit exactly into the Hubble diameter, since partial waves would allow an inference of what lies outside the horizon. This model of “horizon wave censorship” is equivalent to a Hubble-scale Casimir effect. This incomplete toy model is presented to stimulate discussion. It predicts a minimum mass and acceleration for the observable universe which are in agreement with the observed mass and acceleration, and predicts that the observable universe gains mass as it expands and was hotter in the past. It also predicts a suppression of variation on the largest cosmic scales that agrees with the low-l cosmic microwave background anomaly seen by the Planck satellite.

Abstract: Dark energy with negative pressure and positive energy density is believed to be responsible for the accelerated expansion of the universe. Quite a few theoretical models of dark energy are based on tachyonic fields interacting with themselves and normal (bradyonic) matter. Here, we propose an experimental model of tachyonic dark energy based on hyperbolic metamaterials. The wave equation describing propagation of extraordinary light inside hyperbolic metamaterials exhibits 2 + 1 dimensional Lorentz symmetry. The role of time in the corresponding effective 3D Minkowski spacetime is played by the spatial coordinate aligned with the optical axis of the metamaterial. The nonlinear optical Kerr effect bends this spacetime, resulting in effective gravitational force between extraordinary photons. We demonstrate that this model has a self-interacting tachyonic sector having negative effective pressure and positive effective energy density. Moreover, a composite multilayer SiC-Si hyperbolic metamaterial exhibits closely separated tachyonic and bradyonic sectors in the long wavelength infrared range. This system may be used as a laboratory model of inflation and late time acceleration of the universe.
{"url":"https://www.mdpi.com/journal/ajax/latest_articles/galaxies","timestamp":"2014-04-20T09:13:03Z","content_type":null,"content_length":"13500","record_id":"<urn:uuid:31cdef2d-ff3d-4e2c-8598-d8308e4801a8>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00234-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Forum Discussions - User Profile for: jh992

UserID: 862871
Name: He, Jing Yun
Registered: 10/6/12
Total Posts: 8
{"url":"http://mathforum.org/kb/profile.jspa?userID=862871","timestamp":"2014-04-19T09:50:06Z","content_type":null,"content_length":"11924","record_id":"<urn:uuid:1040ebc5-e6d3-4d0f-b791-da26d47e5c61>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00084-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Forum Discussions

Topic: Even/Odd Cardinality, Generalized
Replies: 3    Last Post: Jan 14, 2011 2:31 PM

Re: Even/Odd Cardinality, Generalized
Posted: Jan 13, 2011 9:54 PM

On Wed, 12 Jan 2011 20:11:50 -0500 (EST), ksoileau <kmsoileau@gmail.com> wrote:

>We say that a set X is EVEN if there exist disjoint A and B such that
>A union B equals X and A and B have the same cardinality, otherwise
>say that X is ODD. Are all infinite sets even? prove or disprove,
>assuming the Axiom of Choice. The countably infinite case is trivial,
>so assume X is uncountable.

Use Zorn's Lemma: Consider the set of all pairs (A,f) where A is a subset of X and f is a 1-1 map from A into X\A. Define the order, <, by

(A,f) < (B,g) iff A is a subset of B and f is the restriction of g to A.

This is a partial order and I have convinced myself that it satisfies the hypotheses of Zorn's Lemma (for any chain of such pairs, consider the union of the sets in each of these pairs), so there is a maximal element: (M,h). The range of h is contained in X\M, and if there are 2 elements x,y of X\M not in h(M), then add x to M to get M' and extend h by mapping x to y to get h'. Then (M,h) < (M',h'), contradicting the maximality of (M,h). Therefore X\M differs from h(M) by at most one element, and so they have the same cardinality. Since M and h(M) also have the same cardinality, it follows that M and X\M have the same cardinality.

To reply by email, change LookInSig to luecking

Date      Subject                                   Author
1/12/11   Even/Odd Cardinality, Generalized         ksoileau
1/13/11   Re: Even/Odd Cardinality, Generalized     David Hobby
1/13/11   Re: Even/Odd Cardinality, Generalized     Dan Luecking
1/14/11   Re: Even/Odd Cardinality, Generalized     David Madore
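Restating Luecking's construction in symbols (nothing new is added here, this is the same argument written compactly in LaTeX):

    \[
    \mathcal{P} \;=\; \{\,(A,f) : A \subseteq X,\ f\colon A \to X\setminus A \text{ injective}\,\},
    \qquad
    (A,f) \le (B,g) \iff A \subseteq B \text{ and } g|_{A} = f .
    \]
    Every chain $\{(A_i,f_i)\}$ has the upper bound $\bigl(\bigcup_i A_i,\ \bigcup_i f_i\bigr)$,
    so Zorn's Lemma yields a maximal element $(M,h)$. Maximality forces
    $\lvert (X\setminus M)\setminus h(M)\rvert \le 1$, hence
    $\lvert X\setminus M\rvert = \lvert h(M)\rvert = \lvert M\rvert$ (using that $M$ is infinite),
    and $X = M \cup (X\setminus M)$ is the required partition into two sets of equal cardinality.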
{"url":"http://mathforum.org/kb/thread.jspa?threadID=2227534&messageID=7357947","timestamp":"2014-04-20T19:15:10Z","content_type":null,"content_length":"20832","record_id":"<urn:uuid:c6ebb819-ec28-4ed6-beef-49204f3fa112>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00582-ip-10-147-4-33.ec2.internal.warc.gz"}
On 27 Aug 2004 03:24:25 -0700, http://www.velocityreviews.com/forums/(E-Mail Removed) (ALuPin) wrote:

>What library do I need to declare the following signal:
>constant L: integer:=log2(N); --ceiling log2(N)

Sadly, that one is missing... but this will work in both synthesis and simulation:

package usefuls is
  --- find minimum number of bits required to
  --- represent N as an unsigned binary number
  function log2_ceil(N: natural) return positive;
end usefuls;

package body usefuls is
  --- find minimum number of bits required to
  --- represent N as an unsigned binary number
  function log2_ceil(N: natural) return positive is
  begin
    if N < 2 then
      return 1;
    else
      return 1 + log2_ceil(N/2);
    end if;
  end log2_ceil;
end usefuls;

Converting my tail-recursive function into an iterative implementation is left as an exercise for the student

Jonathan Bromley, Consultant
DOULOS - Developing Design Know-how
VHDL, Verilog, SystemC, Perl, Tcl/Tk, Verification, Project Services
Doulos Ltd. Church Hatch, 22 Market Place, Ringwood, BH24 1AW, UK
Tel: +44 (0)1425 471223    mail:(E-Mail Removed)
Fax: +44 (0)1425 471573    Web:
The contents of this message may contain personal views which are not the views of Doulos Ltd., unless specifically stated.
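The iterative version left as an exercise is just a loop. Here it is sketched in Python rather than VHDL, purely as an illustration of the loop form (this is not code from the thread):

    # Iterative equivalent of the recursive log2_ceil above: the minimum number
    # of bits needed to represent n as an unsigned binary number.
    def log2_ceil(n: int) -> int:
        bits = 1
        while n >= 2:
            n //= 2      # mirrors the recursive step log2_ceil(N/2)
            bits += 1    # mirrors the "1 +" in the recursive call
        return bits

    assert [log2_ceil(n) for n in (0, 1, 2, 3, 4, 255, 256)] == [1, 1, 2, 2, 3, 8, 9]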
{"url":"http://www.velocityreviews.com/forums/t22799-log2-n.html","timestamp":"2014-04-19T00:38:21Z","content_type":null,"content_length":"38142","record_id":"<urn:uuid:11c14efe-5ed7-48d7-a94f-7809935827d3>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00351-ip-10-147-4-33.ec2.internal.warc.gz"}
Geometry Tutors
Kensington, MD 20895

Successful Math Tutor -- Recently retired high school math teacher

I recently retired from a local private high school. I have taught all math subjects including Algebra 1, Algebra 2, Geometry, Trigonometry, Pre-Calculus, and AP Calculus AB. I have taught math for 44 years. I have tutored students privately over the last 30 years...

Offering 10+ subjects including geometry
{"url":"http://www.wyzant.com/Mc_Lean_VA_geometry_tutors.aspx","timestamp":"2014-04-21T06:09:53Z","content_type":null,"content_length":"61034","record_id":"<urn:uuid:8a1c20bb-3595-4c94-8a29-a8ae576faae4>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00589-ip-10-147-4-33.ec2.internal.warc.gz"}
A. The multiconfiguration time-dependent Hartree method
B. Coordinates and Hamiltonian
   1. Kinetic energy operator
   2. Potential energy operator
C. Initial wave packet
D. Final state analysis
   1. Probabilities
   2. Cross sections
A. Numerical details
B. Probabilities and cross sections
{"url":"http://scitation.aip.org/content/aip/journal/jcp/127/11/10.1063/1.2776266","timestamp":"2014-04-18T14:04:24Z","content_type":null,"content_length":"96863","record_id":"<urn:uuid:95a1b162-208d-43f8-ba03-f67617176421>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00042-ip-10-147-4-33.ec2.internal.warc.gz"}
Fulshear Trigonometry Tutor ...I have seen the look regarding subjects from an elementary level to a collegiate level. Regardless of what subject I am working with someone on, I will strive to make sure the student understands. Here is a list of the subjects I've have taught or am capable of teaching: Math- Pre-Algebra ... 38 Subjects: including trigonometry, reading, calculus, chemistry ...My success stories: * A 6th grader struggling with grammar and math improved her 'C' to 'A-' * A high school student who was scared of even attempting her math quizzes and exam conquered her lifelong fear of math within three months and did well on her final exam * An MBA student scored well o... 20 Subjects: including trigonometry, reading, writing, geometry ...My specialty is tutoring high school mathematics courses (algebra I/II, geometry, precalculus, calculus), but I am also capable and willing to help elementary and junior high students as well. I am a resident of Katy and would prefer to tutor in the West Houston area. Please contact me if you are interested! 9 Subjects: including trigonometry, geometry, algebra 1, algebra 2 ...If you are willing to try and are open to new approaches, I can help you develop skills that can last a lifetime. I have a Ph.D. in Analytical Chemistry and have many years of industry experience using both analytical and organic chemistry. In high school I had some great math and science teach... 6 Subjects: including trigonometry, chemistry, algebra 2, geometry ...Calculus is probably one of my most favorite subjects to tutor. I have helped many students in calculus I and calculus II. We begin with the study of limits and go into graphing, differentiation, and the many types of integration. 24 Subjects: including trigonometry, chemistry, calculus, physics
{"url":"http://www.purplemath.com/fulshear_tx_trigonometry_tutors.php","timestamp":"2014-04-19T14:51:49Z","content_type":null,"content_length":"23979","record_id":"<urn:uuid:303eb668-dcab-4c66-abb7-4a5ed0548024>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00511-ip-10-147-4-33.ec2.internal.warc.gz"}
digitalmars.D - double trouble

"John C" <johnch_atms hotmail.com>

I'm writing routines for numeric formatting and this involves round-tripping of floating-point numbers. I'm storing the integral and fractional parts in a string buffer, and in a separate variable storing the decimal point position, and these are converted back into a double when needed. Unfortunately the algorithm I'm using is too lossy and as a result a number like 2.78 (stored in the buffer as "278", with the decimal pos being 1) becomes 2.7799999999998. I believe the problem is with my scaling code. Any ideas to increase the

    // v is a numeric representation of the digits, eg 278.
    // exponent is either the exponent for scientific numbers, or the number of fractional digits, eg 2.
    double p10 = 10.0;
    int n = exponent;
    if (n < 0) n = -n;
    while (n) {
        if (n & 1) {
            if (exponent < 0)
                v /= p10;
            else
                v *= p10;
        }
        n >>= 1;
        p10 *= p10;
    }

Feb 25 2006

> I believe the problem is with my scaling code. Any ideas to increase the

Scratch that. It must be the way I'm converting the digits to a double that introduces the inaccuracy.

    // pDigits is a pointer into a zero-terminated buffer (with the digits "278").
    double v = 0.0;
    while (*pDigits) {
        v = v * 10 + (*pDigits - '0');
        pDigits++;
    }

v is now 277999999999999.999990. Then it's scaled using the code previously posted to get 2.7799999999999998. How would I round this up to the original 2.78?

Feb 25 2006

"Lionello Lunesu" <lio remove.lunesu.com>

I would get rid of the while, or at least the /= and *= therein, these will only add to the inaccuracy. Better keep track of integers (++n, --n) and use pow when you need the 10^n or so. Remember that the 10.0 is already inaccurate (imagine it's 9.999 or so), using it multiple times will definitely add to the inaccuracy.

Mar 01 2006

Lionello Lunesu wrote:
> I would get rid of the while, or at least the /= and *= therein, these
> will only add to the inaccuracy. Better keep track of integers (++n, --n)
> and use pow when you need the 10^n or so. Remember that the 10.0 is
> already inaccurate (imagine it's 9.999 or so), using it multiple times
> will definitely add to the inaccuracy.

Actually, 10.0 is accurately representable, but 0.1 isn't.. Perhaps the most accurate way to calculate decimal digits would be digit/pow(10.0, -exp) instead of digit*pow(0.1, exp), avoiding exponentiating an inaccurate number.. Well, I went ahead and tested it (in Java, but the results should match D's double). Using digit*constant (0.000...0001) and digit/pow(10, exp) produce about the same results, while repeatedly multiplying with 0.1 and exponentiation of 0.1 also produce about the same results, except the latter two are slightly more off. http://www.xs0.com/prec/

xs0

Mar 01 2006

Don Clugston <dac nospam.com.au>

John C wrote:
> I'm writing routines for numeric formatting and this involves round-tripping
> of floating-point numbers. I'm storing the integral and fractional parts in
> a string buffer, and in a separate variable storing the decimal point
> position, and these are converted back into a double when needed.
> Unfortunately the algorithm I'm using is too lossy and as a result a number
> like 2.78 (stored in the buffer as "278", with the decimal pos being 1)
> becomes 2.7799999999998.

The trick to maximal efficiency in these conversions is to make sure that you only do ONE division (because that's the point at which the rounding error occurs). Don't divide by 10 every time, and definitely don't use pow. Instead, keep track of a denominator, and every time you'd do a v/=10, do denominator*=10 instead. Then, right at the end, divide v by denominator. The reason this works is that integers can be exactly represented in reals, so the multiplies aren't introducing any error. But every time you divide by something that isn't a multiple of 2, a tiny roundoff error creeps in. You'll also reduce the error if you use real v=0.0; instead of double. Even if you ultimately want a double.

> I believe the problem is with my scaling code. Any ideas to increase the
>
>     // v is a numeric representation of the digits, eg 278.
>     // exponent is either the exponent for scientific numbers, or the number of fractional digits, eg 2.
>     double p10 = 10.0;
>     int n = exponent;
>     if (n < 0) n = -n;
>     while (n) {
>         if (n & 1) {
>             if (exponent < 0)
>                 v /= p10;
>             else
>                 v *= p10;
>         }
>         n >>= 1;
>         p10 *= p10;
>     }

Mar 01 2006

"Don Clugston" <dac nospam.com.au> wrote in message news:du48kq$i5i$1 digitaldaemon.com...
> The trick to maximal efficiency in these conversions is to make sure that
> you only do ONE division (because that's the point at which the rounding
> error occurs). Don't divide by 10 every time, and definitely don't use pow.
> Instead, keep track of a denominator, and every time you'd do a v/=10, do
> denominator*=10 instead. Then, right at the end, divide v by denominator.
> The reason this works is that integers can be exactly represented in reals,
> so the multiplies aren't introducing any error. But every time you divide
> by something that isn't a multiple of 2, a tiny roundoff error creeps in.
> You'll also reduce the error if you use real v=0.0; instead of double.
> Even if you ultimately want a double.

Thanks. Working on the raw bits (actually a 64-bit integer) eventually proved easier (relatively speaking).

Mar 01 2006
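Clugston's suggestion can be made concrete with a short sketch. The following illustration is in Python rather than D and is not code from this thread: accumulate the digits exactly in an integer, build the power of ten exactly as an integer too, and divide only once at the very end, so the single rounding step happens in that final division.

    # Illustration of the single-division technique described above (Python, not D).
    def digits_to_double(digits: str, frac_digits: int) -> float:
        value = 0
        for ch in digits:                  # exact integer accumulation, no rounding
            value = value * 10 + (ord(ch) - ord('0'))
        denominator = 10 ** frac_digits    # exact integer power of ten
        return value / denominator         # the only inexact (rounding) step

    print(digits_to_double("278", 2))           # 2.78
    print(digits_to_double("278", 2) == 2.78)   # True: same nearest double as the literal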
{"url":"http://www.digitalmars.com/d/archives/digitalmars/D/34260.html","timestamp":"2014-04-20T01:37:45Z","content_type":null,"content_length":"18018","record_id":"<urn:uuid:91dd7709-c1ff-4df6-94db-da12574f1f11>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00055-ip-10-147-4-33.ec2.internal.warc.gz"}
An expression with square roots

How would you calculate $(\sqrt {2} - \sqrt {5}) \cdot \sqrt{ 7 + 2 \sqrt {10}}$ ?

Hello, p.numminen!

How would you calculate: $(\sqrt {2} - \sqrt {5}) \cdot \sqrt{ 7 + 2 \sqrt {10}}$ ?

There is a very sneaky trick we can pull . . .

First, we notice that $7 + 2\sqrt{10}$ just happens to equal $\left(\sqrt{2} + \sqrt{5}\right)^2$. Check it out!

Then: $\sqrt{7 + 2\sqrt{10}} \:=\:\sqrt{\left(\sqrt{2} + \sqrt{5}\right)^2} \:=\:\sqrt{2} + \sqrt{5}$

So the problem becomes: $\left(\sqrt{2} - \sqrt{5}\right)\left(\sqrt{2} + \sqrt{5}\right) \;=\;\left(\sqrt{2}\right)^2 - \left(\sqrt{5}\right)^2 \;\;=\;2 - 5 \;\;=\;\;\boxed{-3}$

We can simplify $\sqrt{7+2\sqrt{10}}$ by simultaneous equations as well.

Let $\sqrt{7+2\sqrt{10}}=A+B\sqrt{10}$

Squaring, we get $7+2\sqrt{10}=A^2+10B^2+2AB\sqrt{10}$

Now by matching up coefficients, we get two equations:

$A^2+10B^2=7$ ...[1]
$2AB=2 \implies AB=1$ ...[2]
$B=\frac{1}{A}$ ...[2']

Now substituting into [1]:

$A^2+10\left(\frac{1}{A} \right)^2=7$
$A^4+10=7A^2$
$A^4-7A^2+10=0$

Now we make the substitution $u=A^2$, then it becomes:

$u^2-7u+10=0$
$(u-2)(u-5)=0$
$u=5$ or $u=2$

Then $A^2=5$ or $A^2=2$, so $A=\pm\sqrt{5}$ or $A=\pm \sqrt{2}$.

So the solutions are $\{(A,B)\} = \{(\sqrt{5},\frac{1}{\sqrt{5}}), (-\sqrt{5},-\frac{1}{\sqrt{5}}), (\sqrt{2},\frac{1}{\sqrt{2}}), (-\sqrt{2},-\frac{1}{\sqrt{2}})\}$

Let's try the first one; then we have: $\sqrt{7+2\sqrt{10}}=\sqrt{5}+\frac{1}{\sqrt{5}} \cdot \sqrt{10}=\sqrt{5}+\sqrt{2}$

You can check this by equating both sides as Soroban did. Note that $A=\sqrt{2}$, $B=\frac{1}{\sqrt{2}}$ also works, but the other two solutions Do Not. They are the negative solutions. That's why you have to check the final result to see if it fits!

Of course it then follows that $(\sqrt{2}-\sqrt{5})(\sqrt{2}+\sqrt{5})=2-5=-3$

Sometimes, one tries to guess which expression, when squared, gives the result. Note that $\sqrt{7+2\sqrt{10}}=\sqrt{\left(2+2\sqrt{10}+5\right)}$

Now, you can see the expression $(a+b)^2$ which is hidden there. Of course, always remember that $\sqrt{x^2}=|x|$
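A quick numerical check of the boxed result, added here for convenience (it is not part of the original thread):

    # Check that (sqrt(2) - sqrt(5)) * sqrt(7 + 2*sqrt(10)) equals -3 numerically.
    import math

    lhs = (math.sqrt(2) - math.sqrt(5)) * math.sqrt(7 + 2 * math.sqrt(10))
    print(lhs)                       # approximately -3, up to floating-point rounding
    print(math.isclose(lhs, -3.0))   # True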
{"url":"http://mathhelpforum.com/algebra/20632-expression-square-roots.html","timestamp":"2014-04-20T11:09:28Z","content_type":null,"content_length":"47540","record_id":"<urn:uuid:e0b6dd56-76f7-4bc3-b453-b5a0d3ef5c9d>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00077-ip-10-147-4-33.ec2.internal.warc.gz"}
[Maxima] Assume behavior

Karl-Dieter Crisman kcrisman at gmail.com
Mon Sep 28 12:49:09 CDT 2009

Dear Maxima list,

Thanks for taking a newbie question. I have used Maxima through Sage a fair amount, but only recently started using it stand-alone as well (some of you know this already). I have a question about the use of http://maxima.sourceforge.net/docs/manual/en/maxima_11.html says that the deduction mechanism is "not very strong".

Maxima 5.19.1 http://maxima.sourceforge.net
Using Lisp SBCL 1.0.30
Distributed under the GNU Public License. See the file COPYING.
Dedicated to the memory of William Schelter.
The function bug_report() provides bug reporting information.
(%i1) assume(x>=y,y>=z,z>=x);
(%o1) [x >= y, y >= z, z >= x]
(%i2) is(x=z);
(%o2) false
(%i3) maybe(x=z);
(%o3) false

Maxima 5.19.1 http://maxima.sourceforge.net
Using Lisp SBCL 1.0.30
Distributed under the GNU Public License. See the file COPYING.
Dedicated to the memory of William Schelter.
The function bug_report() provides bug reporting information.
(%i1) assume(x<=1);
(%o1) [x <= 1]
(%i2) assume(x>=1);
(%o2) [x >= 1]
(%i3) is(x=1);
(%o3) false
(%i4) maybe(x=1);
(%o4) false

Maxima 5.19.1 http://maxima.sourceforge.net
Using Lisp SBCL 1.0.30
Distributed under the GNU Public License. See the file COPYING.
Dedicated to the memory of William Schelter.
The function bug_report() provides bug reporting information.
(%i1) assume(x>0);
(%o1) [x > 0]
(%i2) is(x#1);
(%o2) true
(%i3) maybe(x#1);
(%o3) true

My question is whether there is more explicit documentation on the behavior of is() than at It mentions that it essentially calls ev(), but the documentation for that function only says, "pred causes predicates (expressions which evaluate to true or false) to be evaluated," but doesn't seem to indicate if that occurs in some other Maxima module or elsewhere.

Thank you for any help; I greatly appreciate it.

More information about the Maxima mailing list
{"url":"https://www.ma.utexas.edu/pipermail/maxima/2009/018811.html","timestamp":"2014-04-18T08:03:24Z","content_type":null,"content_length":"4878","record_id":"<urn:uuid:f349f327-123e-4e99-93b5-6db30ff268cd>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00378-ip-10-147-4-33.ec2.internal.warc.gz"}
Physics Forums - View Single Post - Numerical methods

Can anyone here give me examples of typical mid-semester and end-of-semester test problems for an undergraduate Numerical Methods course? Which Numerical Methods problems are most likely to come up on such tests?

I am taking Numerical Methods and Finite Element Method courses this semester. Since I read here that Numerical Methods for Partial Differential Equations is the foundation of the Finite Element Method, I guess I should start by learning numerical methods first.

By the way, there are Numerical METHODS & Numerical ANALYSIS, and Finite Element METHODS & Finite Element ANALYSIS. What is the difference between ANALYSIS and METHOD?
{"url":"http://www.physicsforums.com/showpost.php?p=1425942&postcount=1","timestamp":"2014-04-20T18:37:23Z","content_type":null,"content_length":"9067","record_id":"<urn:uuid:7920a552-86ef-44fc-882a-686be60102d1>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00556-ip-10-147-4-33.ec2.internal.warc.gz"}
[R-sig-DCM] number of iterations Wirth, Ralph (GfK SE) ralph.wirth at gfk.com Wed Feb 23 18:56:24 CET 2011 I think Dimitris problem is due to the huge amount of the respondents. Their draws are generated in the inner (i.e. the second) loop. Loops are slow in R, and apply doesn't help much. I tried to vetorize the inner loop one day, but then I got problems with RAM. If you know the C programming language, then you could write the loops in C and use this C code within R. This should speed up your calculations A LOT (peope told me that estimations can easily become 10 times faster). As to iterations: I usually let my R function do more than 100000 iterations, then look at the matplots and either use the draws after convergence for calculating the point estimates or (if convergence hasn't occured yet) continue the MCMC algorithm. ----- Originalnachricht ----- Von: r-sig-dcm-bounces at r-project.org <r-sig-dcm-bounces at r-project.org> An: Dimitri Liakhovitski <dimitri.dcm at gmail.com> Cc: R DCM List <r-sig-dcm at r-project.org> Gesendet: Wed Feb 23 18:47:03 2011 Betreff: Re: [R-sig-DCM] number of iterations Wow -- your problem is large. I should have noted that my runs were with binary logit and much smaller samples (usually only a dozen or so choices for N~200). Still, I run on a 64-bit Xeon machine with 16GB RAM. Runs of many hours are usual -- I think the max was about 60 hours (but I didn't time it and it was over a weekend). I haven't done direct comparison, but would *guess* that bayesm is 25-100x slower than Sawtooth's CBC/HB. I know some academics who say that they code real problems in Fortran ... One thing I wonder about bayesm is whether there are obvious optimizations that would be easy to do. (For instance, in some of my own MNL code, I got a speedup of 7x (!!) simply by replacing two calls of "apply(x,1,sum)" with "rowSums(x)") Might be worth a quick look ... From: Dimitri Liakhovitski Sent: Wednesday, February 23, 2011 4:17 PM To: Chris Chapman Cc: R DCM List Subject: Re: [R-sig-DCM] number of iterations Yes, I've taken Greg's tutorial - but I would not say it was of much use to practitioners, and especially it had nothing to do with DCM... Wow, 50k! 100k! I've just done a DCM in bayesm (rhierMnlRwMixture) - 4 attributes, total # of levels 17, 7 tasks but a very large sample size (~4,200). I also had a categorical covariate with 8 levels, i.e., I had 7 dummy-coded centered covariates. It refused to run on my laptop (ran out of memory), but it did run on my powerful 64-bit Windows 7 PC (R 12.2 - for 64 bits). It did not run out of memory, but I've done 21K iterations in total and it took me 9.5 hours (!). On Wed, Feb 23, 2011 at 11:05 AM, Chris Chapman <cnchapman at msn.com> wrote: Hi Dimitri -- For my part, yes, I think it all depends :-) The usual recommendation is to run it "quite a while" (50k+ iterations) and inspect the convergence of the estimates (i.e., plot the draws and see if there are approximately horizontal lines after a certain number of iterations, with no "blow ups" of individual lines or major crossovers among them). Personally, I tend to start with 100k iterations (only because I like round numbers) and take beta draws every 10 of the final 20k. If it doesn't converge but looks plausible (not all over the place), then I try 200k. If it still doesn't converge, I'll decide what to do based on how bad the convergence plots look. That's assuming something like 6-8 attributes and 30-40 total levels in a CBC model. (BTW, for a better answer ... 
Greg Allenby [one of the authors of bayesm] offers tutorials at ART Forum most years that go into the general bayesm approach in substantial depth. I'd bet you've taken that already, though :-) -- Chris From: "Dimitri Liakhovitski" <dimitri.dcm at gmail.com> Sent: Wednesday, February 23, 2011 3:37 PM To: "R DCM List" <r-sig-dcm at r-project.org> Subject: [R-sig-DCM] number of iterations Question for those who have done HB to assess DCM utilities in bayesm: I know, this question is too general and it all depends on the nature of the DCM at hand, # of attributes, # of levels, etc. But in general: when you run HB in bayesm, how many iterations do you run in total and how many do you use to grab your beta draws from? Thank you! [[alternative HTML version deleted]] R-SIG-DCM mailing list R-SIG-DCM at r-project.org [[alternative HTML version deleted]] R-SIG-DCM mailing list R-SIG-DCM at r-project.org GfK SE, Nuremberg, Germany, commercial register Nuremberg HRB 25014; Management Board: Professor Dr. Klaus L. Wübbenhorst (CEO), Pamela Knapp (CFO), Dr. Gerhard Hausruckinger, Petra Heinlein, Debra A. Pruent, Wilhelm R. Wessels; Chairman of the Supervisory Board: Dr. Arno Mahlert This email and any attachments may contain confidential or privileged information. Please note that unauthorized copying, disclosure or distribution of the material in this email is not permitted. More information about the R-SIG-DCM mailing list
{"url":"https://stat.ethz.ch/pipermail/r-sig-dcm/2011-February/000023.html","timestamp":"2014-04-20T17:17:07Z","content_type":null,"content_length":"8637","record_id":"<urn:uuid:2ade8eb4-3f92-4529-8170-3ca5c9ce72bb>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00495-ip-10-147-4-33.ec2.internal.warc.gz"}
Card Counting BlackJack « « The American mathematician, Dr Edward O. Thorpe, is generally considered the father of card counting with his book, “Beat the Dealer”, first published in 1962. Even before this, a small number of professional card counters were already in operation across America. The 1957 book, “Playing Blackjack to Win”, was the first to publish an accurate Basic Strategy and included a rudimentary card counting system. Famous gamblers like Jess Marcum and Joe Bernstein have also been credited for inventing card counting systems for Blackjack. Blackjack is played in a variety of formats, and to familiarize yourself with these visit Casino.com NZ. Regardless of the form, the basic principles of card counting remain the same. As each card is dealt in Blackjack, the deck changes in composition, and so the odds lilt back and forth between Player and Dealer. The objective of the card counter is to identify those points along the deal where the odds favour the Player, and therefore he can bet big and win more, and those points along the deal where the odds favour the Dealer, and therefore he can bet low and lose less. It should be noted that Card Counting systems are of little use unless you are going to play Basic Strategy. You can practice your basic strategy on demo/free play at any number of the many good online casinos that feature blackjack. Casinokiwi has good resources on casinos offering Blackjack as well as articles and guides to get people started playing online. All we are doing in card counting is counting off the High Cards against the Low Cards. The Low Cards are 2-6 and the High Cards are 10-Ace. For every Low Card you count "+1" and for every High Card you count "-1". As you can see, we have 5 low cards and 5 high cards. So if you start on zero and count off a deck, you should finish on zero. What we do know is that if we receive more Low Cards than High Cards, that further on along the deal we’re going to have to receive more High Cards than Low Cards. It’s simple – if there are only 5 of each. Depleting one incrementally increases the chances of receiving the other. If you run out of low cards, then you only have high cards to receive. In card counting, this is exactly what we are keeping count of – the Proportion of High Cards to Low Cards yet to be dealt. The Simple High-Low Strategy Since the days of Dr. Thorpe, ever more elaborate systems have been devised, from simple ‘Plus-Minus Systems’ to complex ‘Point Systems’ and ‘Side Counts’. As compelling as their proponents make them, reliability comes from being able to use a system flawlessly over hours and hours of play. The system we will examine here is called the Simple Hi-Lo Strategy. For every Low Card we count ‘+1’ and for every High Card we count ‘-1’. Seven, eight and nine receive no value at all. We only count the Highs, 10 – Ace, against the Lows, 2 – 6. │ The Simple Hi-Lo Strategy │ │ ♥ ♠ ♦ ♣ │ Low Cards │ No Value │ High Cards │ │ Card Face │ 2 │ 3 │ 4 │ 5 │ 6 │ 7 │ 8 │ 9 │ 10 │ J │ Q │ K │ A │ │ Count Value │ +1 │ +1 │ +1 │ +1 │ +1 │ - │ - │ - │ -1 │ -1 │ -1 │ -1 │ -1 │ │ The Count: 0 │ 1 │ 2 │ 3 │ 4 │ 5 │ 5 │ 5 │ 5 │ 4 │ 3 │ 2 │ 1 │ 0 │ There is nothing particularly mysterious or complex about card counting. Proficiency comes in mastering the technique – being able to count the cards reliably at a rate that keeps you in the game. You should aim at being able to reliably count off a deck in 30 seconds or less. Then try counting multiple decks. 
Playing the Count When the count is even, and the deck favours neither High Cards or Low Cards, there is no particular advantage to either Player or Dealer (except, of course, the House advantage). But when the count goes high, indicating that the deck has proportionally more 10’s and Aces than it does 2’s to 6’s, now Play favours the Player and he will win more often than he loses. If he bets high, he will win more while he is winning more often. However, when the count is low, indicating that the deck has proportionally fewer Aces and 10's then it does 2's-6's, then play will favour the Dealer and the Player will lose more often. If he bets low, he will lose less while he is losing more often. But why should this be? Why does the Player win more often when the deck is rich in Tens and Aces? Firstly, because these are the two cards which make up a “Blackjack” – an Ace and a 10. And BlackJack pays 1 ½. Secondly, the Player wins more often because the high cards don't help the Dealer's "stiffs". The Dealer doesn't have a choice. He must draw on 16 and stand on 17. A deck rich in high cards sees him bust more often. The Running Count and the True Count The game of Blackjack as it is played in casinos around the world is a multi-deck game. That means that the significance of the count is reduced. In a single deck game, a count of +5 means there are actually 5 more High cards than there are Low Cards left. When there are only 20 of each that go into the deck in the first place, that’s a significant difference. But when there is a difference of just 5 among 312 cards (six decks), that fluctuation is barely a blip on the radar. Therefore, players have devised a distinction between the “Running Count” and the “True Count”. The Running Count is the count we have as the cards come out of the shoe. This is what we see. This is what we count. The True Count, on the other hand is calculated against the number of ½ decks still remaining in the shoe. In this way, we weigh the significance of the running count, against the backdrop of the number of cards yet to come. A count of +10 when there are 300 cards to come is a difference so insignificant our chances of receiving either a High card or a Low card remain even. But a count of +10 when there are just 20 cards to come means that there are more than twice the number of High cards to Low cards left to come, and our odds are very good. In order to calculate the True Count you divide the Running Count by the number of ½ decks remaining in the shoe. True Count = Running Count ÷ No. of ½ Decks remaining. So a Running Count of +10 with 2 ½ decks remaining, we calculate the True count by dividing 10 by 5 – the number of ½ decks – and arrive at a True Count of 2. The Conversion Factors tabled below are for six deck games. │ Six Deck Conversion Factors │ │ No. of Decks Discarded │ No. of Decks to be Dealt │ Conversion Factor │ │ │ │ (No. ½ Decks left in the Shoe) │ │ ½ │ 5½ │ 11 │ │ 1 │ 5 │ 10 │ │ 1½ │ 4½ │ 9 │ │ 2 │ 4 │ 8 │ │ 2½ │ 3½ │ 7 │ │ 3 │ 3 │ 6 │ │ 3 ½ │ 2 ½ │ 5 │ │ Source: Ken Uston; Million Dollar Blackjack, Carol Publishing Group Edition – 1994, page 119 │
{"url":"http://whiteknucklecards.com/blackjack/cardcounting.html","timestamp":"2014-04-18T06:34:30Z","content_type":null,"content_length":"19527","record_id":"<urn:uuid:c41b3a5d-2605-472a-896c-c3170c16e64c>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00089-ip-10-147-4-33.ec2.internal.warc.gz"}
[Scipy-tickets] [SciPy] #1588: scipy.optimize.cobyla not consistant in Windows [Scipy-tickets] [SciPy] #1588: scipy.optimize.cobyla not consistant in Windows SciPy Trac scipy-tickets@scipy.... Mon Jan 23 18:35:26 CST 2012 #1588: scipy.optimize.cobyla not consistant in Windows Reporter: casperskovby | Owner: somebody Type: defect | Status: needs_info Priority: normal | Milestone: Unscheduled Component: Other | Version: 0.10.0 Keywords: | Changes (by pv): * status: new => needs_info Bugs in f2py sound unlikely, so it's best to rule out other issues first: The important question is what are the values of the objective function `f(pvec)` -- does this change significantly between runs? If it does not change much, then the optimization problem is probably ill- defined, and the result is sensitive to rounding error. In that case, this is not a bug. Now, naively one would expect that the rounding error would be the same from one run to another, but I think this depends on the compiler --- alignment of data in memory can change, and this can trigger different compiler-optimized branches in the code. (See http://www.nccs.nasa.gov/images/FloatingPoint_consistency.pdf) If it's alignment, then one would expect that the program produces only a few different answers. Is this so? Ticket URL: <http://projects.scipy.org/scipy/ticket/1588#comment:1> SciPy <http://www.scipy.org> SciPy is open-source software for mathematics, science, and engineering. More information about the Scipy-tickets mailing list
{"url":"http://mail.scipy.org/pipermail/scipy-tickets/2012-January/004912.html","timestamp":"2014-04-16T07:46:26Z","content_type":null,"content_length":"4562","record_id":"<urn:uuid:23eed19e-eb50-4284-84b2-039951e319ec>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00468-ip-10-147-4-33.ec2.internal.warc.gz"}
Integration and radiality: measuring the extent of an individual's connectedness and reachability in a network. Social Networks 20(1): 89–105 Results 1 - 10 of 15 - Journal of Mathematical Sociology , 2001 "... The betweenness centrality index is essential in the analysis of social networks, but costly to compute. Currently, the fastest known algorithms require #(n ) time and #(n ) space, where n is the number of actors in the network. ..." Cited by 295 (5 self) Add to MetaCart The betweenness centrality index is essential in the analysis of social networks, but costly to compute. Currently, the fastest known algorithms require #(n ) time and #(n ) space, where n is the number of actors in the network. "... Viral marketing refers to marketing techniques that use social networks to produce increases in brand awareness through self-replicating viral diffusion of messages, analogous to the spread of pathological and computer viruses. The idea has successfully been used by marketers to reach a large number ..." Cited by 4 (0 self) Add to MetaCart Viral marketing refers to marketing techniques that use social networks to produce increases in brand awareness through self-replicating viral diffusion of messages, analogous to the spread of pathological and computer viruses. The idea has successfully been used by marketers to reach a large number of customers rapidly. In case data about the customer network is available, centrality measures can be used in decision support systems to select influencers and spread viral marketing campaigns in a customer network. The literature on network theory describes a large number of such centrality measures. A critical question is which of these measures is best to select customers for a marketing campaign, an issue that little prior research has addressed. In this paper, we present the results of computational experiments based on real network data to compare different centrality measures for the diffusion of marketing messages. We found a significant lift when using central customers in message diffusion, but also found differences in the various centrality measures depending on the underlying network topology and diffusion process. More importantly, we found that in most cases the simple out-degree centrality outperforms almost all other measures. Only the SenderRank, a computationally much more complex measure that we introduce in this paper, achieved a comparable performance. Key words: customer relationship management, viral marketing, centrality, network theory 1. "... This paper is available online at www.jtaer.com DOI: 10.4067/S0718-18762010000200006 ..." "... We propose a novel disk-based index for processing single-source shortest path or distance queries. The index is useful in a wide range of important applications (e.g., network analysis, routing planning, etc.). Our index is a tree-structured index constructed based on the concept of vertex cover. W ..." Cited by 3 (3 self) Add to MetaCart We propose a novel disk-based index for processing single-source shortest path or distance queries. The index is useful in a wide range of important applications (e.g., network analysis, routing planning, etc.). Our index is a tree-structured index constructed based on the concept of vertex cover. We propose an I/O-efficient algorithm to construct the index when the input graph is too large to fit in main memory. 
We give detailed analysis of I/O and CPU complexity for both index construction and query processing, and verify the efficiency of our index for query processing in massive real-world graphs. , 2008 "... In this paper we extend a popular non-cooperative network creation game (NCG) [11] to allow for disconnected equilibrium networks. There are n players, each is a vertex in a graph, and a strategy is a subset of players to build edges to. For each edge a player must pay a cost α, and the individual ..." Cited by 3 (2 self) Add to MetaCart In this paper we extend a popular non-cooperative network creation game (NCG) [11] to allow for disconnected equilibrium networks. There are n players, each is a vertex in a graph, and a strategy is a subset of players to build edges to. For each edge a player must pay a cost α, and the individual cost for a player represents a trade-off between edge costs and shortest path lengths to all other players. We extend the model to a penalized game (PCG), for which we reduce the penalty for a pair of disconnected players to a finite value β. We prove that the PCG is not a potential game, but pure Nash equilibria always exist, and pure strong equilibria exist in many cases. We provide tight conditions under which disconnected (strong) Nash equilibria can evolve. Components of these equilibria must be (strong) Nash equilibria of a smaller NCG. But in contrast to the NCG, for the vast majority of parameter values no tree is a stable component. Finally, we show that the price of anarchy is Θ (n), several orders of magnitude larger than in the NCG. Perhaps surprisingly, the price of anarchy for strong equilibria increases only to at most 4. "... Abstract: The structural analysis of biological networks includes the ranking of the vertices based on the connection structure of a network. To support this analysis we discuss centrality measures which indicate the importance of vertices, and demonstrate their applicability on a gene regulatory ne ..." Cited by 3 (0 self) Add to MetaCart Abstract: The structural analysis of biological networks includes the ranking of the vertices based on the connection structure of a network. To support this analysis we discuss centrality measures which indicate the importance of vertices, and demonstrate their applicability on a gene regulatory network. We show that common centrality measures result in different valuations of the vertices and that novel measures tailored to specific biological investigations are useful for the analysis of biological networks, in particular gene regulatory networks. , 2000 "... Centrality indices are an important tool in network analysis, and many of them are derived from the set of all shortest paths of the underlying graph. The so-called betweenness centrality index is essential for the analysis of social networks, but most costly to compute. Currently, the fastest known ..." Cited by 1 (0 self) Add to MetaCart Centrality indices are an important tool in network analysis, and many of them are derived from the set of all shortest paths of the underlying graph. The so-called betweenness centrality index is essential for the analysis of social networks, but most costly to compute. Currently, the fastest known algorithms require Theta(n³) time and Theta(n²) space, where n is the number of vertices. Motivated by the fast-growing need to compute centrality indices on large, yet very sparse, networks, new algorithms for betweenness are introduced in this paper. 
They require O(n + m) space and run in O(n(m + n)) or O(n(m + n log n)) time on unweighted or weighted graphs, respectively, where m is the number of edges. Since these algorithms simply augment single-source shortest-paths computations, all standard centrality indices based on shortest paths can now be computed uniformly in one framework. Experimental evidence is provided that this substantially increases the range of "... Abstract. Trust models have been touted to facilitate cooperation among unknown entities. Existing behavior-based trust models typically include a fixed evaluation scheme to derive the trustworthiness of an entity from knowledge about its behavior in previous interactions. This paper in turn propose ..." Cited by 1 (0 self) Add to MetaCart Abstract. Trust models have been touted to facilitate cooperation among unknown entities. Existing behavior-based trust models typically include a fixed evaluation scheme to derive the trustworthiness of an entity from knowledge about its behavior in previous interactions. This paper in turn proposes a framework for behavior-based trust models for open environments with the following distinctive characteristic. Based on a relational representation of behavior-specific knowledge, we propose a trust-policy algebra allowing for the specification of a wide range of trust-evaluation schemes. A key observation is that the evaluation of the standing of an entity in the network of peers requires centrality indices, and we propose a first-class operator of our algebra for computation of centrality measures. This paper concludes with some preliminary performance experiments that confirm the viability of our approach. 1 "... The cognitive associative structure of two populations was studied using network analysis of free-word associations. Structural differences in the associative networks were compared using measures of network centralization, size, density and path length. These measures are closely aligned with cogni ..." Cited by 1 (0 self) Add to MetaCart The cognitive associative structure of two populations was studied using network analysis of free-word associations. Structural differences in the associative networks were compared using measures of network centralization, size, density and path length. These measures are closely aligned with cognitive theories describing the organization of knowledge and retrieval of concepts from memory. Size and centralization of semantic structures were larger for college students than for seventh graders, while density, clustering and average path-length were similar. Findings presented reveal that subpopulations might have very different cognitive associative networks. This study suggests that graph theory and network analysis methods are useful in mapping differences in associative structures across groups. , 2006 "... contained in this dissertation are those of the author and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of the ..." Cited by 1 (0 self) Add to MetaCart contained in this dissertation are those of the author and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of the
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=17596","timestamp":"2014-04-16T14:10:51Z","content_type":null,"content_length":"36380","record_id":"<urn:uuid:d237bf21-6438-432b-9eae-935b39f322e7>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00453-ip-10-147-4-33.ec2.internal.warc.gz"}
A GAME FOR 2, 3 OR 4 PLAYERS (Ages: 8 to Adult) Cribbage is basically a two player game, but can be played with 4 players as teams of 2 players each. E. S. Lowe (A Milton Bradley Company) Derby Cribbage board has three tracks and allows the play of 3 players. This cut throat game for three is explained below. OBJECT OF THE GAME: To be the first to score 121 points by counting combinations of cards during "play," "hands," and "crib." In cutting for deal, the low card wins; and shuffles. After shuffling, the non-dealer cuts; the dealer unites the cards and deals one card at a time first to his opponent, then to himself alternately till each, has six cards. Each player studies his hand and then discards two of his six face down without his opponent seeing them. These four face down discards are placed together, forming an extra hand known as the ''CRIB." 1. The opponent's hand 2. The dealer's hand 3. The Crib The crib is counted by the dealer after each players hand has been played and counted. After discarding to the Crib the opponent cuts the pack and the dealer turns the top card of the lower packet face up on top of the whole pack. This card is called the "Start." It is not used in the play. It is counted with each hand and the crib after play is completed. If the "Start" is a Jack. the dealer pegs two holes on the board at once. and this must be done before the dealer plays his first card in order to be counted "Two for his heels." After cutting the "Start," the non-dealer plays any card from his hand, face up on the table, in front of himself, calling out the value of the card he plays. sum of his opponent's card and the card he plays. The non-dealer then plays another card, calling out the sum of all cards that have been played and then the dealer plays in the same manner. Playing alternates until the sum of the cards played is 31 or until neither player can play without exceeding 31. Either player unable to play a card making the sum less than 31, says ''Go" and the other player must go on playing till he reaches 31 or until he cannot play a card making the sum less than 31. The player coming nearest 31 scores a "go" and may peg one hole on the board. If either player makes exactly 31 he pegs two holes on the board. After a "go" or after 31 has been reached, each player turns the cards he has played face down on the table in front of himself. NUMBERS REFER TO TABLE BELOW (28 k .GIF image) (2) Either player pegs two for 31 or (3) one for "go" or "last card." All cards must be played. If one player has played four cards and the other player has two cards left - for example, 8 and 7 - the latter must play them, calling "fifteen two" and ''last card." He then pegs 3. THE "GO": Two "ten cards" and a 4 are played making 24, and the dealer, having no card under 8, calls "go.'' If the non-dealer has a 7, he may play it, calling "31 two" and pegs 2. If the non-dealer has a 4, he may play it, making a "pair" and a "go" and pegging 3. If the non-dealer has 3 and 2, he plays them, making a "run of three" and a "go" and pegging 4. Dealer then plays remaining cards. No player may call a "go" unless he has a card or cards that will not come in under 31. The Cribbage Board is placed horizontally between the two players. Each player uses two pegs moving one peg ahead of the other (like footsteps) as he counts his points. Each hole counts one point. 
Both players start at the same end of the board, moving their pegs up the outside row of 30 holes and then down the inside row of 30 holes. The first player to do this twice, plus at least one extra point (121 or more), WINS THE GAME.

THREE PLAYER GAME

The game is played in two partnerships. Partners sit opposite one another. Each player is dealt 5 cards and discards one to the "Crib." The player to the left of the dealer cuts the deck for the "starter" and plays first. When one player calls "Go" the others must play in turn. The dealer's partnership counts their hands last, and they also count the "Crib."

©1975 By Milton Bradley Co. under Berne & Universal Copyright Conventions. Made in U.S.A.
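To make the pegging during play concrete, here is a small illustrative sketch in Python. It is not part of the original rules sheet: the card values and the scores for fifteen, thirty-one and pairs follow the rules above, the function and variable names are invented for the example, and run scoring during play is left out for brevity.

# Illustrative sketch of pegging during the play phase.
CARD_VALUES = {"A": 1, "2": 2, "3": 3, "4": 4, "5": 5, "6": 6, "7": 7,
               "8": 8, "9": 9, "10": 10, "J": 10, "Q": 10, "K": 10}

def peg_points(cards_played):
    # Points pegged by the player of the last card in cards_played:
    # two for reaching 15, two for reaching exactly 31, and 2/6/12 for a
    # pair, triple or four of a kind at the end of the sequence.
    total = sum(CARD_VALUES[c] for c in cards_played)
    points = 0
    if total == 15:
        points += 2            # "fifteen two"
    if total == 31:
        points += 2            # exactly 31 pegs two
    same = 1                   # how many cards at the end share a rank
    for card in reversed(cards_played[:-1]):
        if card == cards_played[-1]:
            same += 1
        else:
            break
    points += {1: 0, 2: 2, 3: 6, 4: 12}[same]
    return points

# Examples from the text: two "ten cards" and a 4 make 24, then a 7 makes 31;
# an 8 followed by a 7 makes "fifteen two".
print(peg_points(["K", "Q", "4", "7"]))   # -> 2 (for 31)
print(peg_points(["8", "7"]))             # -> 2 ("fifteen two")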
{"url":"http://www.centralconnector.com/GAMES/Cribbage.htm","timestamp":"2014-04-20T13:19:05Z","content_type":null,"content_length":"22462","record_id":"<urn:uuid:2c459a40-4b0e-4b67-a8d5-20c7e4bf7ce5>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00314-ip-10-147-4-33.ec2.internal.warc.gz"}
Mplus Discussion >> Data interpretation Anonymous posted on Tuesday, January 16, 2001 - 3:47 pm In a time-invariant conditional LTM, how do you interpret the significant effects of the predictors on the slope (in this case, an overall downward trend in health over time). Education: estimate .074 (se .056) Energy: estimate -.231 (se .054) For a 1 unit increase in education, you see... For a 1 unit increase in energy, you see... Thanks in advance. Bengt O. Muthen posted on Wednesday, January 17, 2001 - 9:22 am Can you describe your model more fully? Does LTM stand for latent transition or latent trait modeling? Do you have dichotomous dependent variables? Anonymous posted on Tuesday, January 23, 2001 - 9:31 am The model has four time-points with a categorical dependent health variable (measured on a 5-point scale). The model is time-invariant with baseline predictors (e.g. education) predicticing the intercept and slope of the model. The education and energy variables are categorical, with higher numbers representing higher levels of education and energy to participate in daily activities. I used the acronym LTM to mean latent trajectory modeling. Bengt O. Muthen posted on Tuesday, January 23, 2001 - 11:47 am With categorical repeated measures, the interpretation of the effects of time-invariant covariates on the slope growth factor can be expressed in several different ways. First, the coefficient (unstandardized) can be simply interpreted as in regular regression in terms of change in the slope for a unit change in a covariate (holding other covariates constant). This may not carry much meaning because the scale of the slope is arbitrary. Second, one can consider the standardized coefficient, in which case the change in the slope is expressed in slope standard deviations. This still doesn't mean much given that the outcome is categorical. Third, one can express the ultimate effect of the change in the covariate on the outcome variable probabilities. This may give a more "down to earth" interpretation. For instance, you can compute the outcome probabilities for some chosen values of your covariates. You do this by first computing the mean values of the slope given the chosen covariate values and then computing the outcome variable probabilities for these mean values. Anonymous posted on Thursday, June 28, 2001 - 2:00 pm How do you calcute the mean value of the slope given the chosen covariate value and then compute the outcome variable probabilities? Can this be done directly in MPLUS or hand-calculated. Thanks for your assistance. bmuthen posted on Sunday, July 01, 2001 - 12:10 pm I assume that you have a categorical outcome and that by slope you mean the slope growth factor. The mean value of the slope is obtained in TECH4 and is s_m = a+g*x for covariate value x, where a is the intercept of the slope factor and g is the regression coefficient for the slope regressed on x. The probability has to be computed by hand, for instance with unit scale factors delta (see User's Guide), you have for a binary y scored 0/1, P (y=1 |x) = F(-tau + s_m*x_t), where F is the normal distribution function, tau is the threshold parameter held equal across time, and x_t are the time scores (the slope loadings). Louise Sullivan posted on Wednesday, November 10, 2004 - 11:17 am I'm looking at social mobility across 4 different time points and according to the BIC value the best fitting unrestricted latent class growth analysis is a 7 class model. 
Seven different classes is not substantively useful and I notice from your June 2000 paper (Muthen and Muthen 'Integrating Person-Centered and Variable-Centered Analyses: Growth Mixture Modeling with Latent Variables' in Alchoholism: Clinical and Experimental Research) that you outline other criteria on page 887 for assessing how many latent classes to use. Please could you explain how high the average posterior probabilites should be - for my 7 class model I've got cross-classification figures as low as 0.683, 0.610 and 0.662. I would prefer to use a 4 class model which has a higher BIC (26224 compared to 26165) but better cross-classification values ie between 0.764 and 0.91. Am I justified in using a 4 class model? bmuthen posted on Sunday, November 14, 2004 - 12:35 pm The posterior probabilities tell you how useful the model is, but not how many classes fit the data best. You can consider other fit statistics. For example, several simulation studies indicate that Mplus' sample-size adjusted BIC is better than BIC. Also, the Lo-Mendell-Rubin test in Mplus' Tech11 can be used. Ultimately, the usefulness of the model is a key consideration besides statistical fit indices, e.g. predictive performance. Marion Sheenan posted on Saturday, August 13, 2005 - 3:38 pm I want to perform hypothesis testing on the individual parameters in my model. I know that I can use est/error but should I use a T or Z distribution? Linda K. Muthen posted on Sunday, August 14, 2005 - 11:14 am The estimate divided by the standard error shown in the Mplus output follows an approximate z distribution. Gwen Marchand posted on Saturday, February 02, 2008 - 4:02 pm I am relatively new to growth modeling and have managed to confuse myself. I have a simple question regarding interpretation of coefficients. I understand that including a time-invariant covariate into a model influences the latent slope and intercept, so that estimates listed under the "intercept" section of the output for the slope and intercept account for the influence of the covariate on the latent factors. In my case, the slope mean in the tech4 output is negative, but I have a positive coefficient estimate for my slope in the intercept section of the output (slope estimate = .113). I'm currently exploring why that may have occurred. But in the meantime, the covariate has a negative association with the slope. (-.05). Would I interpret this so that higher scores on the covariate at time 1 are associate with more slowly increasing slopes (based on the slope coefficent)? Thank you in advance. Linda K. Muthen posted on Monday, February 04, 2008 - 9:12 am If s is the slope growth factor, mean (s) = a + b mean (x) When x is zero, the mean (s) is equal to the intercept (s), that is, a. In your case, a is positive and b is negative, so the mean of x must be a positive value large enough to cause the product to be negative and larger than a resulting in a negative mean of s. The interpretation is that as x increases the slope is a larger negative value. anonymous posted on Saturday, February 14, 2009 - 8:50 am First, I'd like to thank you for making this forum available, it is such a great help! I am attempting to revise a paper and have some questions related to interpreting the correlation between intercept and growth factors. The LGM focuses on symptoms from time 1 to time 7. 1. 
Given a positive intercept mean (0.82), a negative linear slope mean (-0.16), and a positive quadratic slope mean (0.02), how do you interpret a negative correlation between the slope and intercept? Is it that the higher individuals are on symptoms at time 1, the slower the rate of decline in symptoms? 2. If the variance of the quadratic factor is fixed to 0, is it necessary to include it in your interpretation or to include a correlation between the intercept or linear slope and quadratic factor? 3. Given a positive intercept mean (0.97), a negative linear slope mean (-0.13), and a positive quadratic slope mean (0.02), how do you interpret: a) a positive (but nonsignificant) correlation between the intercept and quadratic slope b) a negative correlation between the linear and quadratic slope? Thanks very much in advance! Bengt O. Muthen posted on Sunday, February 15, 2009 - 11:17 am 1. If you center at time 1, then the higher an individual is at time 1, the lower his/her slope - that is, the steeper the decline. I am referring to the correlation between the intercept and the linear slope (but see also the caveat in 3 below). 2. With Var(q)=0 you don't have covariances between q and other growth factors. You still have the mean of q to explain. 3. With a quadratic growth model the linear and quadratic terms are partly confounded and are not easy to give separate interpretations for (this is why orthogonal polynomials are sometimes used). Vaguely speaking, with centering at time 1 the linear slope has the biggest influence in the beginning of the growth and the quadratic the end of the growth. Because of the confounding, I would not go into interpretations of correlations among growth factors in a quadratic model. With that caveat, a) if it were significant this would probably mean that a person with a high intercept also has a high upturn towards the end. b) when the initial decline is steeper, the ending upturn is higher. anonymous posted on Tuesday, February 17, 2009 - 3:20 pm I conducted a conditional two-group LGM and I'm having some trouble wrapping my head around interpreting the effect of two predictors on the slope functions. In the first group, which consists of only an intercept (intcpt=.315) and linear factor (intercept = .023), how do you suggest I interpret the following: 1. a positive path coefficient (0.032) for the regression of the slope on predictor 1. 2. a negative path coefficient of -0.034 for the regression of the slope on predictor 2. In the second group, which consists of an intercept (int= 0.844), linear factor (int=-0.015), and a quadratic factor (int=-0.004), how do you suggest I interpret the following: 1. a negative path coefficient of -0.062 for the regression of the linear slope on predictor 2. 2. a positive path coefficient of 0.012 for the regression of the quadratic slope on predictor 2. Thanks for your assistance! Bengt O. Muthen posted on Thursday, February 19, 2009 - 9:21 am Use the rules for interpreting coefficients in linear regression. As in that case, these path coefficients are partial regression coefficients, so giving the effect on the DV as the predictor changes 1 unit while holding the other predictors constant. anonymous posted on Thursday, February 19, 2009 - 11:52 am Thanks for your help. However, I still am not clear on whether the predictor is predicting a faster or slower rate of change. Bengt O. 
Muthen posted on Friday, February 20, 2009 - 5:01 am Since you mention predictor (singular form) I assume you refer to the question you have for the second group, regarding the quadratic model. Is that right? anonymous posted on Friday, February 20, 2009 - 5:43 am Yes, that is correct. I think (but please correct me if I'm wrong!) that for the group with the negative linear trend (the first group), predictor 1 (with a positive coefficient) predicts a slower decline and predictor 2 (with a negative coefficient) predicts a faster decline However, I am completely confused as to how to interpret the effect of the predictor on the quadratic group. Thanks again in advance! Linda K. Muthen posted on Friday, February 20, 2009 - 6:24 am See the following post from Sunday, February 15: 3. With a quadratic growth model the linear and quadratic terms are partly confounded and are not easy to give separate interpretations for (this is why orthogonal polynomials are sometimes used). Vaguely speaking, with centering at time 1 the linear slope has the biggest influence in the beginning of the growth and the quadratic the end of the growth. Because of the confounding, I would not go into interpretations of correlations among growth factors in a quadratic model. With that caveat, a) if it were significant this would probably mean that a person with a high intercept also has a high upturn towards the end. b) when the initial decline is steeper, the ending upturn is higher. anonymous posted on Friday, February 20, 2009 - 9:05 am Thanks - is this also true for the effect of covariates on linear and quadratic slope? Linda K. Muthen posted on Saturday, February 21, 2009 - 9:35 am Ingrid Holsen posted on Thursday, February 17, 2011 - 7:45 am I am really workinng on this to get it right, but I am now confused. I have several covariates predicting latent growth in body image measured at several time points between ages 13 and 30. (I am here using the ´model results´,is STDYX preferable?) To use the covariate close parent-adolescent relationship as an example; for boys there is a positive estimate at initial level at age 13 (0.14) (understandable!), negative sign. estimate for slope (-0.25), and a positive estimate for q (0.14). Do I interpete the s and q as that parent adolescent relationship influences body image growth to a less degree (during adolescence) for so to be of more importance again in early adulthood (q)? The body image curve for boys are increasing between the ages 13 and 18, so leveling off and decresing some at ages 21 and 23 .(so an increase again up to age 30). Ingrid Holsen posted on Thursday, February 17, 2011 - 12:58 pm Hi again, We are treating close adolescent relationship and peer relationship as ´time invariant covariates´from time 1. I have BMI as a time varying covariate at six points in time. When we include BMI in the model the effects of the time invariant covariates for girls on slope and quadratic growth disappears, while almost no difference for boys. Is there a way that we in one model (one step) can reveal this effect. Now we run it twice. Thanks in advance (for both my two posts) Bengt O. Muthen posted on Thursday, February 17, 2011 - 3:52 pm Regarding your first post, it is difficult to separately interpret effects on linear and quadratic slopes. This is why sometime "orthogonal polynomials" are used. The effect on the intercept is straightforward, however, and one approach to this issue is shown in Muthén, B. & Muthén, L. (2000). 
The development of heavy drinking and alcohol-related problems from ages 18 to 37 in a U.S. national sample. Journal of Studies on Alcohol, 61, 290-300. which is on our web site under Papers. Regarding the choice of standardization, see the UG. Bengt O. Muthen posted on Thursday, February 17, 2011 - 3:56 pm When you say run it twice, I think you don't mean for boys and girls but with and without BMI. If so, it seems difficult to capture the changing gender role in one model. The growth is different with BMI as a tvc. I wonder if having BMI as a parallel growth process instead of as a tvc would be useful. Olli Kiviruusu posted on Wednesday, February 23, 2011 - 7:27 am I'm analysing a latent growth curve model with four timepoints and time-invariant and time-varying covariates. I'm interested in the total effect of TVCs on the growth factors, especially on the mean (or intercept) of the slope factor i.e. does the mean growth rate change significantly after TVCs are specified. In the model without the TVCs the intercept of the slope factor is .35 and in the model with the TVCs .45 indicating that growth rate would be higher if the effects of TVCs were removed from the equation. How can I assess the significance of this change? Is it okay to constrain the intercept of the slope factor in the model with TVCs to the value it had in the model without TVCs and then analyse the chi-square change in model fit? Or is there a better/correct way to do Thanks in advance. Bengt O. Muthen posted on Wednesday, February 23, 2011 - 5:07 pm No, that doesn't sound correct. As a first step you want to think about how to make the question well defined. What is the intercept/mean of the slope growth factor when the model includes the TVCs - does it mean the same thing as when TVCs are not included? When included, does your model let the TVCs influence the slope growth factor? If not, doesn't the slope refer to the development of the Ys at zero values of the TVCs? Which raises the question, are the TVCs centered (sample means subtracted)? Olli Kiviruusu posted on Thursday, February 24, 2011 - 6:44 am My model looks like this: ! Non-linear crowth curve; ylevel yslope | y1@0 y2@0.6 y3@1.6 y4@1.6; ! TICs; ylevel on tica ticb; yslope on tica ticb; ! TVCs/concurrent effects; y1 on tvc1; y2 on tvc2; y3 on tvc3; y4 on tvc4; ! TVCs/lagged effects; y2 on tvc1; y3 on tvc1 tvc2; y4 on tvc1 tvc2 tvc3; Y is a personality variable and TVCs are the number of certain types of events. Regressions of Ys on TVCs show small, but significant negative effects. If I understand your point right, to me, the essential meaning of Ys and hence (I think) its growth parameters is the same whether TVCs are specified or not. I can do the centering of TVCs, but the interpretation of growth parameters at zero number of events as the TVCs now stands is also well motivated. TVCs do not directly influence the growth factors - actually regressing the slope factor on the TVCs would in a way be the easiest solution to my problem, but I don't think that it is allowed here, or is it? Thanks again. Bengt O. Muthen posted on Thursday, February 24, 2011 - 10:37 am Say that the TVC means decline linearly over time. Then the direct negative effects onto the Ys will help pull down the Y means instead of the slope mean being the only source affecting the Y means. This affects the interpretation of the slope mean changing across the two models. You can regress the slope on TVCs. For instance, TVC1 happens before the slope affects the change from time 1 to time 2. 
TVC1 might also be correlated with the intercept. These points illustrate the complexity of models with TVCs. Olli Kiviruusu posted on Friday, February 25, 2011 - 9:12 am Thanks for your help. I regressed the slope (and intercept) factors on TVC1 (and on TVC2 in a three timepoint model) and there were no significant effects on the slope. These models seem to me less than perfect however as TVCs 3 and 4 can't (I think) be used in them. If there is no way to assess the joint effect of all TVCs on the growth factors I guess I need to consider some other models than LGC. Any suggestions? Bengt O. Muthen posted on Friday, February 25, 2011 - 1:28 pm You might want to take a look at how intercept changes can be modeled by TVCs - see slides 157-159 of the Topic 3 handout of 05/17/2010. You can also formulate a growth model for the TVC process and do parallel growth modeling where the TVC growth factors influence the growth factors for the Y process. Elizabeth Adams posted on Tuesday, March 08, 2011 - 3:15 pm I am modeling gender and race centrality as predictors of change in cross race contact. In terms of the analysis interpretation we are unsure of how to interpret the output from MPlus (gender is 0=female and 1=Male)). GENDER -0.040 0.073 -0.540 0.589 CENTRALITY 0.101 0.060 1.679 0.093 GENDER -0.111 0.061 -1.802 0.071 CENTRALITY 0.102 0.055 1.866 0.062 Bengt O. Muthen posted on Tuesday, March 08, 2011 - 6:42 pm You interpret these slopes just like you would in a regular linear regression with a continuous dependent variable - that is, if INT was observed and if SLOPE was observed. Ingrid Holsen posted on Saturday, May 07, 2011 - 2:53 am I have a question regarding the interpretion of BMI as a time varying covariate. It is a latent growth curve, i s and q, outcome; body image at 6 ages from 13 to 30, also background variables. The time varying covariate BMI has a sign neg estimate for males at age 13 (-.18) and 30 (-.27), but a sign. pos at age 21 (.04, p<0.01). Females have a sign pos at age 21 and a neg at age 30. I have checked this several times now, it seems correct. So BMI has an additional effect on body image at this ages; at ages 13 and 30 boys´ relatively high BMI led to further decline in body satisfaction, while at age 21 the opposite occured? Or am I interpreting the estimates with time varying covariates wrong here? How is it best to express it? Many thanks! Linda K. Muthen posted on Saturday, May 07, 2011 - 8:12 am See the following book whcih has a section on the interpreation of time-varying covariates: Bollen, K.A. and Curran, Patrick, J. Latent Curve Models: A Structural Equation Modeling Perspective. Wiley 2006. Ingrid Holsen posted on Sunday, May 08, 2011 - 2:42 am Thanks for quick reply! My worry is A) that the results (see above) might be incorrect. Based on previous research I just cannot see how a positive prediction age 21 (BMI (tvc)/body satisf.) can be correct, particularly not for girls. Also the correlations are negative, around -.30. B)The fit measures for the model are not that good, cfi .93, RMSEA 0.04, SRMR 0.08. I have looked at mod. indic.: When I include a path Q on bmi30 the fit is much better, cfi .97, RMSEA 0.03 and SRMR 0.05.. Chi square much lower too (but still sign, sample is 1082). It makes sense to me that BMI30 predicts Q; males (-.20), females (-.39). (curve in body image adulthood levels off), but can I do that? Then BMI age21 are no longer positive! and not significant, all effects go through Q. Linda K. 
Muthen posted on Sunday, May 08, 2011 - 6:20 am A) This I cannot comment on without more information than can be handled on Mplus Discussion. B) If the model doesn't fit, any interpretation of the results is invalid. Ingrid Holsen posted on Sunday, May 08, 2011 - 6:34 am Ok, we are struggling with this. Can I post more information here or can we do it another way? Carolin posted on Monday, August 22, 2011 - 2:38 am I'm analyzing a quadratic GMM with four timepoints and covariates. One covariate has a significant influence on the linear slope factor, but insignificant influence on the quadratic factor. How can I interpret this? Does this mean that the covariate only affects the change between T1 and T2 and after this there is no influence? Thanks a lot Bengt O. Muthen posted on Monday, August 22, 2011 - 10:07 am Not quite. Telling effects on the linear and quadratic parts growth factors is difficult because those two factors interact. The covariate that influnces the linear factor significantly continues to have an influence after T2 because the linear slope continues to have an influence beyond T2. But beyond that it is hard to parse out the influences via the two growth factors. Karen Offermans posted on Friday, September 09, 2011 - 1:22 am Dear Linda and Bengt, I am analyzing the following longitudinal growth model on a continuous variable (perceived alcohol availability), including multiple group analyses on age: GROUPING is age (13=13 14=14 15=15); iPalav sPAlav | M1PAlAv@0 M2PAlAv@1 M3PAlAv@2; iPAlav on group1; sPAlav on group1; I get the following warning in the output: WARNING: THE LATENT VARIABLE COVARIANCE MATRIX (PSI) IN GROUP 14 However I do not know what to check in the technical output 4 and what I can conclude from this?? I hope you can help me further. Karen Offermans posted on Friday, September 09, 2011 - 1:55 am Just to give some more information in relation to my previous question; In the tech 4 outcome concerning 14 year olds Mplus doesn't give the correlations between sPalav and the other latent variables (stated as 999) or itself. Furthermore, in the estimated covariance matrix for latent variables I can see a negative covariance between spalav and spalav of -.107. Is there anything we can do to solve this problem? Looking forward to your reply. Linda K. Muthen posted on Friday, September 09, 2011 - 7:01 am It sounds like spalav has a negative variance. This makes the model inadmissible. You would need to change the model. Gareth posted on Monday, April 09, 2012 - 4:55 am I have two questions about this formula for calculating outcome variable probabilities (binary y scored 0/1) at different levels of a covariate over time, in categorical growth models: "P (y=1 |x) = F(-tau + s_m*x_t), where F is the normal distribution function, tau is the threshold parameter held equal across time, and x_t are the time scores (the slope loadings). The mean value of the slope is obtained in TECH4 and is s_m = a+g*x for covariate value x, where a is the intercept of the slope factor and g is the regression coefficient for the slope regressed on 1. The mean value of the slope obtained in TECH4 is different from the the intercept of the slope factor. If the intercept of the slope factor is used in the formula, why is the mean value of the slope obtained in TECH4 relevant? 2. Is this formula the same for logit and probit coefficients? If different, how should it be modified? Linda K. Muthen posted on Monday, April 09, 2012 - 9:12 am 1. The mean and intercept are different parameters. 
The mean of y, y_bar, is y_bar = a + b*x_bar; the intercept is a = y_bar - b*x_bar. 2. See Chapter 14 of the user's guide. There is a section on probit and another on logit. Meg posted on Monday, June 11, 2012 - 6:48 am I have a question regarding the interpretation of my growth model. I am looking at depression (outcome) across four time points and assessing the influence of time-variant and invariant predictors on the slope and intercept of the depression curve. The UGM suggests that depression decreases from mid adolescence through young adulthood (I used a freed-factor loading approach). I am a little confused as to how to interpret the regression coefficients because most of the examples have positive slopes. Here are my questions: 1. If the estimate for gender (boys) predicting the slope of depression is positive, does this mean that boys have a slower rate of decline in depression over time? 2. The time-varying covariate also has a declining slope. The regression estimate for the effect of the intercept of the TVC on the slope of depression is positive (.085). Does this mean that those people with higher levels on the TVC have a slower rate of decline in depression? Linda K. Muthen posted on Monday, June 11, 2012 - 4:56 pm 1. Yes. 2. Yes. Gareth posted on Thursday, November 01, 2012 - 7:02 am Suppose I have a parallel process growth model with categorical outcomes. The intercepts and slopes are regressed on covariates, and the intercepts and slopes are correlated. For each covariate, I have calculated probabilities using the formulae in the discussion above: (1) s_m = a+g*x for covariate value x, (2) F(-tau + s_m*x_t). How can this formula be modified to estimate the probability at the first time point that the one outcome is already present, when the other outcome is already present at baseline? The two intercepts are correlated, so I want to illustrate that someone already having outcome 1 is more likely to already have outcome 2 at baseline. The correlation between the intercepts is captured by a covariance rather than by a regression coefficient. Bengt O. Muthen posted on Thursday, November 01, 2012 - 8:59 pm You say P(y=1 | x) = F(-tau + s_m*x_t), but that is P(y=1 | x, s=s_m). If s is a random effect, to get P(y=1 | x) you have to integrate over s. P(y1=1, y2=1 | x) requires bivariate integration over s1, s2. Cameron McIntosh posted on Wednesday, December 05, 2012 - 6:40 pm I am estimating longitudinal models with binary and ordinal observed outcomes using the multilevel features (TWOLEVEL RANDOM) in Mplus (as opposed to the SEM/LGCM approach). I know that with SEM/LGCM, the regressions of growth factors on covariates are linear regressions (as the intercept and slope are continuous latent variables with arbitrary metrics) and the factor loadings for the intercept and slope are fixed logit or probit coefficients. One can, however, get predicted probabilities for the observed categorical indicators of growth by combining the appropriate parameter estimates. But in the case of MLM, where the intercept and slope are estimated by directly regressing the categorical outcome on an observed ordinal time variable using stacked (long) data (rather than creating a measurement model for the growth factors) and a logit or probit link, can the estimated growth parameters themselves be more directly interpreted on the logit (or probit) scale... following which one could simply convert these to odds ratios and thus predicted probabilities? Is this correct or am I missing something? Bengt O.
Muthen posted on Wednesday, December 05, 2012 - 8:29 pm I think this is the right way to look at it. This is like UG ex 9.16 but with categorical outcomes. Here x1 and x2 influence s and y on the Between (subject) level) which in turn influence the outcome at each time point as the figure implies. So x1 and x2 ultimately have a logit/probit influence on the outcomes. Dustin Pardini posted on Thursday, May 16, 2013 - 9:41 am I am using a simple linear growth curve to predict a distal outcome. In explaining the contribution of the slope as a predictor, in addition to interpreting the estimates provided, is it practical to convey this information by using R2? Specifically, would it be feasible to run the model with only the intercept or slope being used as a predictor (ex.- y on I), then the same model but with both I and S as predictors (y on I S) and report the differences in R2 between these two models? Bengt O. Muthen posted on Thursday, May 16, 2013 - 11:28 am That doesn't seem unreasonable, as long as i and s are not too highly correlated. B Chenoworth posted on Wednesday, June 12, 2013 - 2:11 am I am estimating a linear latent growth curve model across three time points using ordinal-categorical data. Specifically the response scale of the outcome variable is a 4-point likert scale (none, 1-2 days, 3-5 days, 6-7 days). As such I have used WLSMV to estimate the model. In the output, the mean of the intercept is fixed at 0 and the mean of the slope is estimated. The output tells me that the mean of the slope is -0.3 (p<.05). So the outcome variable decreases by 0.3 points between each time point. Reviewers of my work continue to ask me, what does this mean in term of how much change occurred. Because the outcome variable is ordinal-categorical I am finding it difficult to answer this. Would this best be answered in terms of calculating an effect size for the slope, and if so, how would I do this? Thank you in advance for your help. Linda K. Muthen posted on Wednesday, June 12, 2013 - 1:13 pm I think looking at a plot of probabilities would be helpful. See the SERIES option of the PLOT command. B Chenoworth posted on Wednesday, June 12, 2013 - 3:15 pm Thank you for your response Linda. Just to follow up, it is possible to calculate an effect size for the slope, in order to report that the decrease represented a small, medium, or large change? Bengt O. Muthen posted on Wednesday, June 12, 2013 - 3:39 pm It is possible, but does not convey how that impacts the probabilities of the observed variables. Ivana Igic posted on Wednesday, August 14, 2013 - 3:17 am Dear Drs. Muthen, I’m running a 3-step GMM (Mplus Web Notes: No. 15). In the first step after I have tested different models I got a 5 class curvilinear solution as the most suitable.In the 3-step I have predicted the distal outcome in T5, while I have controlled for the T1 value of distal outcome. 1.Is the intercept value of distal outcome within the class the mean value of the distal outcome per class? 2. The values of distal outcome are 1-6, what is wrong if I got the negative values for intercept or the values higher than 6 for the intercept per class? 3.I also analyzed the same data in SPSS using ANCOVA and I got very different values for the estimated mean value of distal outcome per class. 4.I used Wald test for distal outcome means comparison as suggested but this doesn’t work. Did I do something wrong? [t5_y ] (m1); [t5_y ] (m2); [t5_y ] (m3); [t5_y ] (m4); [t5_y ] (m5); Model test: Thank you very much for your help. Linda K. 
Muthen posted on Wednesday, August 14, 2013 - 9:17 am 1-3. The intercepts not means are being estimated. 4. Remove The other tests imply those tests. Ivana Igic posted on Wednesday, August 14, 2013 - 10:49 am Thank you very much for answering me! 1. I want to compare the value of distal outcome within diff. classes, how should I then interpret the intercept values? I want to be able to say that people within one class feel better/worse (my distal outcome) compare to people in other classes and to test the significance of these differences using the wald test. 2.The model test is still not working. Thank you very much for your help and have a nice day. Linda K. Muthen posted on Thursday, August 15, 2013 - 8:45 am 1. The intercept is the mean controlled for the covariate. 2. Please send the output and your license number to support@statmodel.com. Maike Theimann posted on Wednesday, August 28, 2013 - 12:37 am my question relates to conditional linear latent growth curve models. I have an unconditional model which shows that the endogenous variable declines over time. If I regress the slope Factor on an exogenous variable, its effect on the slope factor is negative. Unconditional model: Unstandardized Means I 3.902 0.028 141.290 0.000 S -0.037 0.010 -3.506 0.000 Conditional Model: S ON AGSLB -0.194 0.039 -4.941 0.000 As you see there is a negative effect of AGSLB on the slope Factor. Does this mean that if AGSLB increases by one, the curve of the endogenous variable will move (0.194 units) towards zero? And in general does a positive effect on the Slope-Factor mean that if the exogenous variable increases, the curve of the endogenous variable will move more into the direction it has in the unconditional model and a negative Effect that it will move more towards zero, indifferent of the curve’s shape in the unconditional model? Bengt O. Muthen posted on Wednesday, August 28, 2013 - 6:20 pm A covariate that has a negative effect on a slope is interpreted as follows. As the covariate value increases, the slope value decreases. It doesn't matter if the slope mean is negative or positive. If the mean is negative, increasing covariate value beyond its mean makes it even more negative. Back to top
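The probability calculation that recurs in this thread can be sketched numerically. The code below is an illustrative outline only, not taken from the thread, and its parameter values are made up. It evaluates P(y=1 | x, s = s_m) = F(-tau + s_m*x_t) with s_m = a + g*x for a probit link, i.e. the probability at a fixed value of the slope growth factor; as Bengt Muthen notes above, obtaining the marginal P(y=1 | x) for a random slope would additionally require integrating over the slope distribution.

import math

def probit_prob(tau, slope_mean, time_score):
    # P(y=1 | x, s = s_m) = Phi(-tau + s_m * x_t) for a binary 0/1 outcome.
    z = -tau + slope_mean * time_score
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Hypothetical parameter values, not taken from any post above:
a, g = -0.30, 0.15        # intercept of the slope factor, effect of covariate x on the slope
tau = 0.80                # threshold, held equal across time
time_scores = [0.0, 1.0, 2.0, 3.0]

for x in (0.0, 1.0):      # two chosen covariate values
    s_m = a + g * x       # mean of the slope factor given x (as reported in TECH4)
    probs = [probit_prob(tau, s_m, t) for t in time_scores]
    print("x =", x, "probabilities:", ", ".join("%.3f" % p for p in probs))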
{"url":"http://www.statmodel.com/cgi-bin/discus/discus.cgi?pg=prev&topic=14&page=100","timestamp":"2014-04-16T08:26:23Z","content_type":null,"content_length":"116218","record_id":"<urn:uuid:7228a239-ae9c-46eb-8cf3-891af81ec2c4>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00443-ip-10-147-4-33.ec2.internal.warc.gz"}
Degrees of freedom

A square is composed of many particles, with the constraint that the distance between every pair of particles is fixed, and the square moves in the XY plane. If I consider two particles, they have 4 - 1 = 3 degrees of freedom (one is subtracted because of the constraint that the distance between the particles is fixed). If I consider a third particle, it is defined by two coordinates and two constraints, and therefore contributes no degrees of freedom; the same holds for the fourth, fifth, and so on. So my answer is three, and I don't see any difference in this respect from a triangular lamina. I do not know the answer to the question; please comment on this and say if you have any other opinion.
{"url":"http://www.physicsforums.com/showthread.php?t=670269","timestamp":"2014-04-17T21:25:34Z","content_type":null,"content_length":"33950","record_id":"<urn:uuid:685fd5f5-d6fc-4ede-bc18-c74a489066db>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00178-ip-10-147-4-33.ec2.internal.warc.gz"}
Report by the Institute of Marine and Antarctic Studies: Reproducing the mortality model in Neira 2011

Andrew Wadsley has made several recent statements in the media (e.g. Tasmania Times 26/8/2012) that Neira's (2011) calculation of spawning biomass is wrong and that his results cannot be reproduced. We have conducted an internal review of Neira's analysis and requested an independent external review from SARDI. Both supported the analysis by Neira and showed that his method was easily reproduced. Despite making this review known (Attachment 1), Wadsley's message persists: that this work is bad science, is falsified, that scientific method hasn't been followed and that our results have been fabricated to suit industry. He has been active in the media with this message and it's been repeated by others including the Tasmanian Greens and the Tasmanian Conservation Trust. This message has also been communicated through mainstream media (e.g. ABC News TV and radio) and in Parliament, and more recently presented at a public rally. We do not wish to debate the merits of Neira's study in the media, but believe that a more detailed explanation of why we support his work is warranted.

Neira 2011's mortality model

Neira (2011) produced a spawning stock biomass estimate using the daily egg production method (DEPM), a method that is widely used to produce biomass estimates for pelagic fish stocks. A central component of the DEPM is the mortality model, which estimates the daily egg production, P[0]. The data in DEPM studies are typically skewed and noisy. Consequently care needs to be taken with the method used to fit the mortality model to the data. Neira (2011) used two methods: non-linear regression (although citing Lo et al., 1996, the actual method is a standard non-linear regression using non-linear least squares (NLS)) and a generalized linear model with a negative binomial error distribution (Cubillos et al., 2007; Neira and Lyle, 2011). These methods are well established in the DEPM peer-reviewed literature and provided comparable results. Neira (2011) describes the methods he used with the following statement: "Two functions were fitted to the daily egg abundance-at-age data, namely the traditional least squares non-linear regression (NLS) model (Lo et al., 1996), and a generalized linear model (GLM) using a negative binomial error distribution (Cubillos et al., 2007; Neira and Lyle, 2011)." This description is sufficient to enable fisheries scientists with experience in DEPM studies or a statistical background to repeat the analysis, as the methods are standard and the name provides sufficient description (in the same way basic statistical operations are not referenced in scientific reports). For example, non-linear regression is a completely standard routine and NLS refers to non-linear least squares, the statistical method used to fit non-linear regressions. Indeed many statistical tools like R have routines called "NLS". However, to someone new to DEPM studies or applied statistics these methods may be unfamiliar. Should such a person want to apply these methods they may choose to examine the citations (although perusal of a statistics textbook or Google would provide a succinct description). In this case the chosen citation for non-linear regression is not the most helpful. Lo et al.
(1996) only mentions non-linear regression in the methods section on temperature dependent mortality: “All coefficients were estimated by nonlinear regression (Chambers and Hastie 1992) assuming additive errors.” Reference to the actual method used to fit the mortality model was given as Picquelle and Stauffer (1985), and no mention of NLS. Fortunately the topic can be found in many statistical textbooks or through internet searches for “NLS statistics” or “NLS regression” or “non linear regression (NLS)”. Andrew Wadsley’s Analyses Wadsley published an analysis of Neira’s mortality curve in the Tasmanian Times (TT) article: “Margiris: UTAS VC must investigate” (Attachment 2) (Note that he then subsequently published additional analyses as errors in his approach were progressively identified by respondents to the article). Wadsley may be unfamiliar with non-linear regression and NLS and has consequently resorted to the citations in Neira to determine what non-linear regression (NLS) is. Unfortunately those citations do not give an explicit account of NLS and Wadsley has interpreted unrelated sections of those texts to be a description of NLS. Here we examine how Wadsley’s analyses vary from Neira’s and why they are flawed. Method 1 Source: This method is described in Wadsley’s original analysis (Attachment 2). Description: This method fitted an exponential curve in Microsoft Excel. The method was only applied to non-zero abundances. Flaws in Wadsley’s approach: 1. Zero egg abundances are valid and supply informative data which has been completely disregarded. 2. Microsoft Excel fits exponential curves using linear regression (fitted with ordinary least squares) on log transformed data. This is a completely different method to those applied by Neira, which consequently is expected to provide a different answer. The closest method in the literature is the log-linear model, however this has a substantial negative downward bias that must be corrected (Ward et al., 2011) and was not performed by Wadsley. Applying this correction to Dr. Wadsley’s approach (where only positive abundances are considered) increases P[0 ]to exceed Neira’s original estimate (ie implies a higher harvest than would be indicated through the Niera analysis). Origin: Neira (2011) cannot be interpreted to justify this method. It suggests that Wadsley was either i) unaware of the difference between linear regression of log transformed data and non-linear regression or ii) unaware that the two methods were likely to provide substantially different answers. Method 2 Source: This method is described as the “Picquelle and Stauffer” method in Wadsley’s subsequent analysis (Attachment 3). Description: This method claims to have “used the inbuilt non-linear trend function in MS Excel to calculate NLS exponential trends” Flaw in Wadsley’s approach: MS Excel has no in built non-linear regression methods. Non linear regressions (using non linear least squares,) must be fitted using custom application of the Solver add-in or other third party add-ins. In fact Wadsley has applied the same method as Method 1 (which we confirmed by obtaining Wadsley’s results by applying his Method 1 to the new data set). Origin: Neira (2011) cannot be interpreted to justify this method. Wadsley’s statement above indicates that he is unaware of the difference between fitting log transformed data using linear regression and using non-linear regression. This distinction is crucial in DEPM studies. 
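The practical difference between the two fitting approaches discussed above can be illustrated with a short sketch. The code below is not from the report or from Neira (2011), and the egg abundance-at-age data are invented for illustration. It fits the exponential mortality model P_t = P0 * exp(-z*t) in two ways: by non-linear least squares on the raw data including zero counts (the NLS approach), and by ordinary least squares on log-transformed positive abundances (what an Excel exponential trendline does). The two generally give different estimates of P0, and the back-transformed log-linear fit would additionally require the bias correction of Ward et al. (2011), which is deliberately omitted here to mirror the flaw described under Method 1.

import numpy as np
from scipy.optimize import curve_fit

# Synthetic egg abundance-at-age data (ages in days); values are invented.
age = np.array([0.25, 0.50, 0.75, 1.00, 1.25, 1.50, 1.75, 2.00, 2.25, 2.50])
abundance = np.array([310.0, 240.0, 0.0, 150.0, 90.0, 0.0, 60.0, 25.0, 0.0, 10.0])

def mortality(t, p0, z):
    # Exponential egg mortality model: P_t = P0 * exp(-z * t).
    return p0 * np.exp(-z * t)

# 1) Non-linear least squares on the raw data, zeros included (NLS).
(p0_nls, z_nls), _ = curve_fit(mortality, age, abundance, p0=(300.0, 1.0))

# 2) Linear regression on log-transformed data, positive abundances only
#    (the Excel exponential-trendline approach criticised above).
pos = abundance > 0
slope, intercept = np.polyfit(age[pos], np.log(abundance[pos]), 1)
p0_log, z_log = np.exp(intercept), -slope

print("NLS fit:        P0 = %.1f, z = %.2f" % (p0_nls, z_nls))
print("Log-linear fit: P0 = %.1f, z = %.2f" % (p0_log, z_log))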
Method 3 Source: This method is described as the “Lo et al.” method in Wadsley’s subsequent analysis (Attachment 3). Description: This method bins the data into half day groups and calculates the mean age and abundance for each group of data points. Method 1 is then applied to these means (note that there are no zero abundances in these binned data points). Flaw in Wadsley’s approach: This is not the method used in Neira. There is no rationale for this approach and it severely reduces the available data for fitting the mortality model. This is evident from the variability in parameters observed by Wadsley (P[0] varies by a factor of four between Wadsley’s analyses of 8a and 8b). Origin: Lo et al. (1996) binned their data in this manner before using an unspecified method to fit the mortality model. Wadsley has mistakenly considered this as a possible description of NLS (It is not labelled or referred to by Lo et al. 1996 as such). In Lo et al. (2005), they state that they have discontinued using this aggregation step. Further considerations The methods used by Neira (2011) have been found to be completely reproducible and have been independently verified by both IMAS and SARDI. In addition to using an inappropriate method to reproduce Neira’s results, Wadsley appears to have misunderstood the procedures used in the literature, following the wrong citations in attempting to justify his claims. Scientists familiar with DEPM studies or common statistical terminology would not have the same problem. Nancy Lo, widely considered to be the doyen of DEPM, provided an unsolicited critique of Wadley’s original discussion paper, and reached a similar conclusion to ours (Attachment 4) However, as DEPM studies are likely to come under greater scrutiny (and from individuals with a limited fisheries background), we suggest the following: • Citations for standard statistical methods should be to statistical papers (rather than other DEPM papers) and names of corresponding R functions to be stated in the text (where applicable). • The analysis method and data should be publicly and readily available. • While seminal papers in DEPM (e.g. Lo et al. 1996) are frequently cited, recent papers that are more closely aligned with the methods in the study should be cited. • Recognising the inherent variability of the input data, multiple statistical methods should be investigated as part of sensitivity analysis when reporting DEPM studies. Cubillos, L.A., Ruiz, P., Claramunt, G., Gacitua, S., Nunez, S., Castro, L.R., Riquelme, K., Alarcon, C., Oyarzun, C. and Sepulveda, A. (2007). Spawning, daily egg production, and spawning stock biomass estimation for common sardine (Strangomera bentincki) and anchovy (Engraulis ringens) off central southern Chile in 2002. Fisheries Research 86: 228-240. Lo, N.C.H., Macewicz B.J. and Griffith D.A. (2005). Spawning biomass of Pacific sardine (Sardinops sargax), from 1994-2004 off California. CalCOFI Report 46, 93-112. Lo, N.C.H., Ruiz, Y.A.G., Cervantes, M.J., Moser, H.G. and Lynn, R.J. (1996). Egg production and spawning biomass of Pacific sardine (Sardinops sagax) in 1994, determined by the daily egg production method. CalCOFI Report 37, 160-174. Neira, F.J. (2011) Application of daily egg production to estimate biomass of jack mackerel, Trachurus declivis – a key fish species in the pelagic ecosystem of south-eastern Australia. IMAS Report, Neira, F.J. and Lyle, J.M. (2011). 
DEPM-based spawning biomass of Emmelichthys nitidus (Emmelichthyidae) to underpin a developing mid-water trawl fishery in south-eastern Australia. Fisheries Research 110, 236-243. Picquelle, S. and Stauffer, G. (1985). Parameter estimation for an egg production method of northern anchovy biomass assessment. In R. Lasker (Editor), An egg production method for estimating spawning biomass of pelagic fish: application to the northern anchovy, Engraulis mordax, pp: 7-15. NOAA Technical Report NMFS 36. Ward,T.M, Burch,P., McLeay, L.J., and Ivey, A.R. (2011): Use of the Daily Egg Production Method for Stock Assessment of Sardine, Sardinops sagax; Lessons Learned over a Decade of Application off Southern Australia, Reviews in Fisheries Science, 19, 1-20. Wadsley, A. Super Trawler: The UTAS Vice-Chancellor must investigate. Tasmania Times 2012-08-26. http://tasmaniantimes.com/index.php?/weblog/article/super-trawler-the-utas-vice-chancellor-must-investigate/
{"url":"http://www.afma.gov.au/2012/09/report-by-the-institute-of-marine-and-antarctic-studies-reproducing-the-mortality-model-in-neira-2011/","timestamp":"2014-04-19T04:19:22Z","content_type":null,"content_length":"29491","record_id":"<urn:uuid:fd125858-c92c-4def-a008-7a810c2396c7>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00174-ip-10-147-4-33.ec2.internal.warc.gz"}
State of the Art

A 222 page progress report of the COCONUT project is available as Algorithms for Solving Nonlinear Constrained and Optimization Problems: The State of the Art (ps.gz, 699K; pdf, 2387K).

The goal of this document is to summarize the state of the art of algorithms for solving nonlinear constrained and optimization problems. These problems have received attention in different research areas, and as a result different approaches exist to solve them. Each of the chapters below attempts to summarize the techniques developed in one particular area.

In Chapter 2 a summary of nonlinear local optimization techniques is given. It covers the state of the art in the area of traditional numerical analysis. With respect to the other techniques described in this document, this area is the oldest and most established one. The algorithms are able to handle large scale nonlinear optimization problems. However, as the title indicates, these algorithms perform local optimization; that is, the solutions they produce are locally optimal, but not necessarily globally optimal. In certain problem classes local optimality implies global optimality. This is for example the case for convex problems. For these problems nonlinear local optimization algorithms can provide good approximations to global optima. Note however that these solutions remain approximations; local optimization algorithms do not provide conservative bounds on rounding errors.

Many techniques for solving nonlinear problems use derivative information. For a long time derivative information was obtained by numerical approximation; for example, finite differences were used to approximate gradients. About 20 years ago a different technique was developed to compute derivatives. This technique, called automatic differentiation, is able to compute derivatives that are exact up to machine precision, provided the computer codes that define the functions are available. Automatic differentiation has since found its way into the area of numerical analysis. It is also extensively used by interval arithmetic and constraint satisfaction techniques. In this case the derivatives are not evaluated at a given point but over a given box instead. By taking proper care of the rounding error, this process can produce conservative enclosures of the derivatives over the box. In this project we focus on problems that can be stated in the form of arithmetic expressions. In Chapter 3 we review the state of the art in automatic differentiation for this type of problem.

Over the last decade a number of researchers have become interested in the area of nonlinear global optimization. Chapter 4 provides an overview of the techniques that have been developed so far for solving these problems, and puts a number of methods, including heuristic ones, in perspective. The chapter summarises a number of results on optimality conditions and certificates which most global optimization methods exploit in one form or another. It discusses various branching methods that can be used and also describes techniques related to linear programming and mixed integer linear programming.

The area of constraint programming solves optimization problems by exploiting the constraints of the problem to eliminate infeasible or suboptimal instantiations. This area has long been focused mainly on discrete problems; the interest in continuous domain problems is relatively recent. In contrast to interval arithmetic or global optimization, this area has mainly concentrated on algorithms for propagating nonlinear constraints. These constraint propagation techniques are reviewed in Chapter 5.

The various techniques described in this document are frequently used in isolation. However, one cannot say that one technique outperforms and is more general than all the others. It turns out that the techniques are often complementary in performance and applicability. For example, approximations by linear interval systems usually perform well close to a solution, whereas propagation techniques are often most effective far from the solution. Also, some techniques may not be applicable to some problems. For example, linear programming techniques cannot be used to solve nonlinear problems; they can, however, be used for the linear part of a nonlinear problem. The state of the art in solver cooperation is presented in Chapter 6.

In a number of engineering applications, it is not possible, or not desired, to find an optimal solution. Instead one wishes to explore the sets of solutions of a problem and proceed to incrementally refine the problem formulation. This is often the case in the area of engineering design, for example with non-routine design problems. The constraint propagation techniques described in Chapter 5 are well adapted to solve this type of problem. However, the enclosure of the sets of solutions they provide may not be very accurate. To address this problem a number of complete enumeration techniques have been developed. They are reviewed in Chapter 7.
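As a concrete illustration of the interval techniques referred to above, the following sketch (which is not part of the COCONUT report; the class and variable names are invented here) implements a minimal interval type, uses it to enclose the range of a nonlinear expression over a box, and then performs a single constraint-propagation step for the constraint x + y = z. A rigorous implementation would also need outward (directed) rounding to make the enclosures conservative; that is omitted here.

from dataclasses import dataclass

@dataclass
class Interval:
    lo: float
    hi: float

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __sub__(self, other):
        return Interval(self.lo - other.hi, self.hi - other.lo)

    def __mul__(self, other):
        p = [self.lo * other.lo, self.lo * other.hi,
             self.hi * other.lo, self.hi * other.hi]
        return Interval(min(p), max(p))

    def intersect(self, other):
        return Interval(max(self.lo, other.lo), min(self.hi, other.hi))

# Enclosure of f(x, y) = x*y - x over the box x in [1, 2], y in [-1, 3].
# Without directed rounding the result is an approximate, not rigorous, enclosure.
x = Interval(1.0, 2.0)
y = Interval(-1.0, 3.0)
print(x * y - x)                 # contains the true range [-4, 4] of f on the box

# One hull-consistency propagation step for the constraint x + y == z:
# the domain of x can be narrowed to (z - y) intersected with its current domain.
dx, dy, dz = Interval(0.0, 4.0), Interval(0.0, 1.0), Interval(0.0, 2.0)
dx = dx.intersect(dz - dy)
print(dx)                        # Interval(lo=0.0, hi=2.0): narrowed from [0, 4]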
{"url":"http://www.mat.univie.ac.at/~neum/glopt/coconut/StArt.html","timestamp":"2014-04-19T19:34:46Z","content_type":null,"content_length":"5693","record_id":"<urn:uuid:98d366e0-e298-4e5f-a545-0ad89e010528>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00250-ip-10-147-4-33.ec2.internal.warc.gz"}
Alternatives to the Big Bang - G.F.R. Ellis
Annu. Rev. Astron. Astrophys. 1984. 22: 157-184. Copyright © 1984. All rights reserved.

2.5. Implications

The discussion above makes clear the nature of the available alternatives if one is to avoid the conclusion that the Universe originates in a SHBB. One can question
1. the nature of the observed redshifts (Equations 1-3), by adopting either a different theory of light propagation or a different astrophysical interpretation;
2. the conservation laws and/or gravitational field equations (Equations 5-7);
3. the nature of matter in the Universe, e.g. by assuming some effective contribution to the matter stress tensor that violates the energy conditions (Equation 9).
{"url":"http://ned.ipac.caltech.edu/level5/Sept01/Ellis/Ellis2_5.html","timestamp":"2014-04-16T19:20:08Z","content_type":null,"content_length":"3048","record_id":"<urn:uuid:99fae474-32c2-4a92-a2e9-d453514d2771>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00552-ip-10-147-4-33.ec2.internal.warc.gz"}
[FOM] Are proofs in mathematics based on sufficient evidence?
Irving ianellis at iupui.edu
Fri Jul 16 13:00:44 EDT 2010

In my previous post, I began by saying that Monroe Eskew, Michael Barany and Vaughan Pratt raise some important, and I think related, questions, principally historical, and in particular concerning: (1) whether my interpretation of Russell's criticisms of Euclid reflects what Russell may in fact have had in mind; (2) whether the same criticisms that I claim Russell raised of Euclid could not just as well be directed at other mathematicians; (3) where and how to draw a line between a computation, an axiomatic system, and a formal deductive system; and (4) whether Euclid's Elements and Aristotle's Analytics present deductive systems.

I might have added in my previous post that, not long after Russell in "On Teaching Euclid" questioned the stringency and correctness of some of Euclid's proofs, William Betz in "Intuition and Logic in Geometry" (The Mathematics Teacher II (1909-10), 3-31) argued that Euclid was no longer the model of mathematical rigor that he had been historically, and that the prime exemplar was now Hilbert.

In that previous post I dealt with (1) and (4), the latter in particular in connection with (1). As I turn to a consideration of (2) and (3), I would observe that these are similarly connected.

In the course of the discussions about Euclid, reference was made to Reviel Netz's The Shaping of Deduction in Greek Mathematics: A Study in Cognitive History and to Netz's understanding in that work of the nature of Euclid's methods of demonstration (and of Greek mathematics generally). Examining reviews of Netz's book reveals a mixed reaction (I admit to relying on the reviews, having not yet myself gotten hold of Netz's book). Historian and philosopher of logic Paolo Mancosu noted, for example, that Netz uses "deduction" as virtually synonymous with "argumentation". Historians of mathematics Len Berggren and Jens Hoyrup, both specialists in ancient and medieval mathematics, find Netz employing "deduction" to mean diagrammatic reasoning, and point out that, in carrying out his demonstrations, Euclid often refers back to previous results, but does not stipulate anything that we would today recognize as an explicit logical chain of rules for proceeding from one proposition to the next. If, as I suggested in the previous post, formal deductive system = axiomatic system + inference rule(s), with the inference rule(s) explicitly stated at the outset and cited in a proof for getting from one line of the proof to the next, then, in our terminology, Euclid would be seen by each of the reviewers (Mancosu, Berggren, Hoyrup) as providing us with an axiomatic system, but not a formal deductive system.

The other aspect of Netz's thesis, as described in the reviews that I have examined, argues that Euclid and ancient Greek mathematics are "formal" only in the sense that Euclid and his colleagues treated mathematical objects linguistically, rather than taking a metaphysical position with regard to their ontological status. Thus, for example, labeled diagrams were being dealt with, rather than physical lines, circles, squares, etc., or, if they were Platonists, ideal lines, circles, squares, etc. having some kind of extra-linguistic existence.

(The reviews I scanned are: Nathan Sidoli, Educational Studies in Mathematics, Vol. 58, No. 2 (2005), pp. 277-282; Paolo Mancosu, Early Science and Medicine, Vol. 6, No. 2 (2001), pp. 132-134; Markus Asper, Gnomon, 75. Bd., H. 1 (2003), pp. 7-12; J. L. Berggren, Isis, Vol. 94, No. 1, 50th Anniversary of the Discovery of the Double Helix (Mar., 2003), pp. 134-136; Jens Hoyrup, Studia Logica: An International Journal for Symbolic Logic, Vol. 80, No. 1 (Jun., 2005), pp. 143-147. I should also perhaps mention that I am otherwise unfamiliar with the work of Sidoli and Asper.) The one point upon which the reviewers agree is that Netz's book is an important contribution to the literature on the history of classical Greek mathematics.

In responding to the issue of whether Russell's criticisms of Euclid (to the extent that they amount to the claim that the Elements is an axiomatic system rather than a formal deductive system, or, as Russell asserted, not rigorously logical) might also be applied to the work of other great mathematicians, including those listed by Monroe, namely Gauss, Weierstrass, Cayley, Cauchy, etc., I would begin by noting that, unlike Peano, who intended the Arithmetices principia to be a formal deductive system, or the claims which were made on behalf of Euclid's Elements for its being THE exemplary model of rigorous logical proof, mathematicians such as Gauss, et alia, were not claiming to devise either formal deductive systems or even axiomatic systems, but were, in the case, for example, of an Euler or a Gauss, working on solving specific mathematical problems, and it has been recognized that much of their work was "computational" (or, in the 18th century, taken as a synonym, "algorithmic").

Even in the case of Weierstrass, the intent was to build analysis upon the basis of a strict definition of the limit concept, presented in terms of functions and built from the elements of the real continuum. (As an historical point, it should be noted (a) that much of what we have in print of Weierstrass's rigorization of analysis comes from the edition of his lectures by Dedekind; and (b) that it was probably Otto Stolz who adopted and provided expositions of Weierstrass's approach, and who, starting with his textbook Vorlesungen ueber allgemeine Arithmetik: nach den neueren Ansichten (1885-86), began to develop the foundations of Weierstrass's real analysis in the style of a formal deductive system.) This is NOT in the least to suggest that Frege's 1879 Begriffsschrift was not the first effort to undertake the foundations of arithmetic and analysis within a formal deductive system, of course. The differences are that (i) Frege's was the first explicitly declared effort to do so, and specifically in terms of logic, and (ii) his work was for the most part ignored until Russell began to call attention to it in his 1903 Principles of Mathematics.

I will, hopefully, in one more installment, have an opportunity to explicitly take an historical look at the issue of (3): whether, and if so where and how, to draw a distinction between computational, axiomatic, and formal deductive approaches or styles, or whether, and if so how, they belong to a continuum.

Irving H. Anellis
Visiting Research Associate
Peirce Edition, Institute for American Thought
902 W. New York St.
Indiana University-Purdue University at Indianapolis
Indianapolis, IN 46202-5159
URL: http://www.irvinganellis.info

More information about the FOM mailing list
{"url":"http://www.cs.nyu.edu/pipermail/fom/2010-July/014916.html","timestamp":"2014-04-18T18:16:17Z","content_type":null,"content_length":"9439","record_id":"<urn:uuid:39871eef-dda2-4d66-a086-ab09f03dcc61>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00517-ip-10-147-4-33.ec2.internal.warc.gz"}
Radius of convergence

This is just a geometric series, so we need the absolute value of the ratio, $e^{\sin(x)}$ in this case, to be less than one. So the series converges for all values of x in the set $S=\left\{x:|e^{\sin(x)}|=e^{\sin(x)}<1\right\}$.

Note that $0<e^x<1$ iff $x<0$. So in our case we need $\sin(x)<0$, that is, $\pi<x<2\pi$ or $3\pi<x<4\pi$, or in general $(2n-1)\pi<x<2n\pi$ for $n\in\mathbb{Z}$.
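A quick numerical spot-check can back this up. The snippet below is a minimal sketch of my own (not part of the original thread), assuming the series in question is the geometric series with ratio $e^{\sin(x)}$ as stated above; it evaluates the ratio at a few sample angles.

```python
import math

# Minimal sketch: a geometric series with ratio r = e^{sin(x)} converges iff r < 1,
# i.e. iff sin(x) < 0. Spot-check a few sample angles.
def converges(x):
    return math.exp(math.sin(x)) < 1.0

for x in [0.5, math.pi + 0.5, 2 * math.pi - 0.5, 3 * math.pi + 0.1]:
    r = math.exp(math.sin(x))
    print(f"x = {x:6.3f}   ratio = {r:5.3f}   converges: {converges(x)}")
```

The points inside $(\pi, 2\pi)$ and $(3\pi, 4\pi)$ report convergence, while $x = 0.5$ does not, matching the intervals above.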
{"url":"http://mathhelpforum.com/calculus/69423-radius-convergence.html","timestamp":"2014-04-20T18:01:36Z","content_type":null,"content_length":"40388","record_id":"<urn:uuid:c7e2b44f-5dc0-4ccd-ab6d-6911aa9a9eec>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00000-ip-10-147-4-33.ec2.internal.warc.gz"}
Odd squares

May 27th 2009, 06:37 PM

Good night. I have a proof question here that needs validating. I was also wondering if I could receive alternative solutions for the question. Thanks in advance for the help.

Prove that the square of an odd integer is always of the form $8q+1$ where $q$ is an integer.

We wish to prove that $k^2=8q+1$ for some integer $q$, where $k$ is an odd integer.

Let $k$ be an odd integer of the form $2I+1$. As $I$ is an integer, it can either be odd or even.

Case 1: $I$ is odd, i.e. it is of the form $2m+1$:
$(2I+1)^2= 4(2m+1)^2+4(2m+1)+1 = 8(2m^2+3m+1)+1$
Let $2m^2+3m+1=q$.

Case 2: $I$ is even, i.e. it is of the form $2m$:
$(2I+1)^2= 4(2m)^2+4(2m)+1 = 8(2m^2+m)+1$
Let $2m^2+m=q$.

Proven in both cases, hence the statement is true. End of solution. Q.E.D.

I welcome your critiques and say thanks in advance.

May 27th 2009, 08:15 PM

Hello, I-Think! Your proof is correct.

Prove that the square of an odd integer is always of the form $8q+1$ where $q$ is an integer.

Let the odd integer be $n \:=\:2p+1\,\text{ for some integer }p.$

Then: $n^2 \:=\:(2p+1)^2 \:=\:4p^2 + 4p + 1$

Here is a very sneaky step . . .

$\text{We have: }\;n^2 \;=\;4\underbrace{p(p+1)}_{\text{two consecutive integers}} + 1\;\;{\color{blue}[1]}$

With two consecutive integers, one of them is even. Hence, their product is even: $p(p+1) \:=\:2q\,\text{ for some integer }q.$

Then ${\color{blue}[1]}$ becomes: $n^2 \;=\;4(2q) + 1 \;=\;8q+1 \quad\hdots\;\text{ta-DAA!}$
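A brief empirical check (my own addition, not part of the original thread) can complement the proof; the sketch below confirms the claim for the first few odd integers.

```python
# Minimal sketch: check that k^2 leaves remainder 1 when divided by 8 for odd k,
# i.e. k^2 = 8q + 1 with q = (k^2 - 1) // 8 an integer.
for k in range(1, 100, 2):
    q, r = divmod(k * k, 8)
    assert r == 1, (k, r)
print("k^2 = 8q + 1 verified for all odd k < 100")
```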
{"url":"http://mathhelpforum.com/algebra/90759-odd-squares-print.html","timestamp":"2014-04-18T11:10:29Z","content_type":null,"content_length":"10949","record_id":"<urn:uuid:177ce272-5c10-419f-ae99-d3dd6774e09e>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00326-ip-10-147-4-33.ec2.internal.warc.gz"}
couple problems i have

April 20th 2009, 05:54 PM

I need help with a few problems I have left.

1. Given r = csc²Θ, how do I find the range of all possible values of r, and how do I find a Cartesian equation without fractions?

2. Given √(1 + 3cos²Θ(2 − cos²Θ)) dΘ, how do I show that the expression under the radical is positive for all angles Θ?

April 20th 2009, 07:12 PM

To solve the first one, it's best to change the problem to Cartesian form first, just to see it more easily. In order to do that, you'll want to do this:

$r \sin^2\Theta=1$
$r^2 \sin^2\Theta = r$  (multiply both sides by r)
$y^2 = r$  (substitute $y^2$ for $r^2 \sin^2\Theta$)
$y^2 = \pm\sqrt{x^2 + y^2}$  (substitute $\pm\sqrt{x^2 + y^2}$ for r)

Now you have your Cartesian equation. So let's solve for x, which is much easier than solving for y. You should get $x = \pm y \sqrt{y^2 - 1}$.

Now, since we want the inverse, just look at the domain of the above function and you have the range you wanted originally, which is $\mathbb{R}\setminus(-1,1)$.

To answer part 2, look first at $\cos^2\Theta$: although $\cos\Theta$ is negative for some angles, $\cos^2\Theta$ is nonnegative for all $\Theta$, and since the largest value of $\cos^2\Theta$ is 1, the factor $2-\cos^2\Theta$ is at least 1. Hence the product $3\cos^2\Theta(2-\cos^2\Theta)$ is nonnegative, so the expression under the radical is at least 1, hence positive.

Hope this helps.

April 20th 2009, 07:21 PM

Thanks soooooo much! You helped me out a ton!! Thanks also for explaining how it works.
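For part 2, a quick numerical sanity check (my own addition, not from the thread) can back up the argument; the sketch below samples the expression under the radical over a full period.

```python
import numpy as np

# Minimal sketch: sample 1 + 3*cos(t)^2 * (2 - cos(t)^2) over [0, 2*pi] and confirm
# it never drops below 1, so the quantity under the radical is always positive.
t = np.linspace(0.0, 2.0 * np.pi, 10001)
expr = 1.0 + 3.0 * np.cos(t) ** 2 * (2.0 - np.cos(t) ** 2)
print("minimum sampled value:", expr.min())   # close to 1, attained where cos(t) = 0
assert np.all(expr >= 1.0)
```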
{"url":"http://mathhelpforum.com/calculus/84737-couple-problems-i-have.html","timestamp":"2014-04-17T13:45:04Z","content_type":null,"content_length":"36807","record_id":"<urn:uuid:c33da00b-9480-457c-8c22-eaec8dc086a4>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00299-ip-10-147-4-33.ec2.internal.warc.gz"}
Henri Poincaré died 100 years ago today

Henri Poincaré died 100 years ago today. He is most famous for the conjecture (now theorem) which carries his name and which remained open for almost 100 years, until Grigori Perelman announced a proof in 2003. But the conjecture isn't all there was to Poincaré. One of his teachers reportedly described him as a "monster of maths" who, perhaps because of his poor eyesight, developed immense powers of visualisation, which must have helped him particularly in his work on geometry and topology. He has been hailed as one of the last people whose understanding of maths was truly universal. And he also thought about the philosophy of mathematics. He believed that intuition has an important role to play in maths, and anticipated the work of Kurt Gödel, who proved that maths cannot ever be completely formalised. Finally, and extremely pleasingly for us here at Plus, Poincaré was one of the few scientists of his time to share his knowledge by writing numerous popular science articles.

You can find out more about the Poincaré conjecture and related maths in Plus articles on the topic, and there is more on Poincaré's life and work on the MacTutor history of maths archive.
{"url":"http://plus.maths.org/content/comment/reply/5732","timestamp":"2014-04-21T12:21:21Z","content_type":null,"content_length":"22203","record_id":"<urn:uuid:13f0eec9-6130-445c-a061-e217ec021f78>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00501-ip-10-147-4-33.ec2.internal.warc.gz"}
Let $\{G_{p_1}, G_{p_2}, \ldots\}$ be an infinite sequence of graphs with $G_{p_n}$ having $p_n$ vertices. This sequence is called $K_p$-removable if $G_{p_1} \cong K_p$, and $G_{p_n} - S \cong G_{p_{n-1}}$ for every $n \ge 2$ and every vertex subset $S$ of $G_{p_n}$ that induces a $K_p$. Each graph in such a sequence has a high degree of symmetry: every way of removing the vertices of any fixed number of disjoint $K_p$'s yields the same subgraph. Here we construct such sequences using componentwise Eulerian digraphs as generators. The case in which each $G_{p_n}$ is regular is also studied, where Cayley digraphs based on a finite group are used.
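As a concrete, much simpler illustration than the constructions in the abstract, the sketch below (my own addition, assuming the networkx library) checks the defining property for the toy sequence $K_p, K_{2p}, K_{3p}, \ldots$ of complete graphs.

```python
import networkx as nx
from itertools import combinations

# Minimal sketch (toy example, not the paper's construction): the sequence
# K_p, K_{2p}, K_{3p}, ... is K_p-removable, since any p vertices of a complete
# graph induce a K_p, and deleting them from K_{np} leaves K_{(n-1)p}.
p, n = 3, 3
G = nx.complete_graph(n * p)            # plays the role of G_{p_n}
prev = nx.complete_graph((n - 1) * p)   # plays the role of G_{p_{n-1}}

for S in combinations(G.nodes, p):
    assert nx.is_isomorphic(G.subgraph(S), nx.complete_graph(p))   # S induces a K_p
    H = G.copy()
    H.remove_nodes_from(S)
    assert nx.is_isomorphic(H, prev)    # G - S is isomorphic to the previous graph
print("K_p-removability verified for complete graphs with p = 3, n = 3")
```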
{"url":"http://opensiuc.lib.siu.edu/math_articles/39/","timestamp":"2014-04-21T10:44:23Z","content_type":null,"content_length":"20593","record_id":"<urn:uuid:03c16c71-1923-4c93-b237-5c928558249d>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00278-ip-10-147-4-33.ec2.internal.warc.gz"}
Rowlett ACT Tutor

Find a Rowlett ACT Tutor

...ACT Math problems test reasoning skills as well as math knowledge. It can be difficult for students to finish this section in time, particularly if they've forgotten some math fundamentals. My ACT Math sessions cover test-taking strategies, test practice, and review of Prealgebra, Algebra 1, Algebra 2, Geometry and basic trigonometry topics as needed.
15 Subjects: including ACT Math, reading, writing, geometry

...As far as the tutoring space goes, I'll hold sessions in public libraries or any other place that suits you, including your home. I don't mind traveling to a place where you feel comfortable. I look forward to hearing from you and helping you to accomplish the goals of your studies. Currently, I am working on my master's in Chemistry at Texas Woman's University.
19 Subjects: including ACT Math, chemistry, physics, geometry

...As a Biochemistry major, I have been exposed to and mastered many advanced chemistry concepts. I have many years of experience in this field. I have taught at various institutions and universities, both in Mexico and the United States.
37 Subjects: including ACT Math, reading, chemistry, English

...After determining where and why a student is struggling, whether it be fundamentals, problem-solving approach or something else, I put together a plan that begins with small successes. Even a small success can do a lot for boosting confidence. And lack of confidence is often one of the things that holds students back from being successful in math.
11 Subjects: including ACT Math, geometry, GRE, algebra 1

...I also tutor students regularly on Precalculus and Calculus. I am a physicist with both a bachelor's and a master's degree in physics. As a physicist, I am well exposed to mathematical concepts, including trigonometry.
25 Subjects: including ACT Math, chemistry, geology, TOEFL
{"url":"http://www.purplemath.com/rowlett_act_tutors.php","timestamp":"2014-04-18T04:01:46Z","content_type":null,"content_length":"23480","record_id":"<urn:uuid:0ccc05a7-1619-42b3-9e04-a39ca89c9521>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00136-ip-10-147-4-33.ec2.internal.warc.gz"}
Tewksbury Algebra Tutor

Find a Tewksbury Algebra Tutor

I am a great tutor. Proof is that all three of my boys (now 40, 38 and 30) did very well through my tutoring in high school and beyond. I strongly believe in life-long learning; that's why I applied as an Engineering Master with my company VTech in 2004, when I was 63, to study for my MBA at Suffolk University, where I graduated in 2006 as BEST IN CLASS. -Rolf S.
28 Subjects: including algebra 1, reading, English, physics

...Through this work I assist them in determining the best path for their ultimate career and academic options. I also assist students in writing highly literate, targeted personal statements, ensuring that they are presenting an engaged and authentic self to admissions officers. I am a consulting...
41 Subjects: including algebra 1, chemistry, English, reading

...I find real-life examples and a crystal-clear explanation are crucial for success. My schedule is flexible as I am a part-time graduate student. I am new to Wyzant but very experienced in tutoring, so if you would like to meet first before a real lesson to see if we are a good fit, I am willing to arrange that. I was a swim teacher for 8 years at swim facilities and summer camps.
19 Subjects: including algebra 1, algebra 2, chemistry, Spanish

...I also worked as a teaching assistant and ran a series of classes during MIT's Independent Activities Period (IAP) for other students, faculty and staff. I can meet near Alewife, Harvard or MIT, and at your house by previous arrangement. I have presented before groups as large as 4,500, led course...
63 Subjects: including algebra 2, GRE, English, writing

...I was a GMAT instructor for Princeton Review and Kaplan. As someone who has had to juggle many subjects, papers, projects, tests, and deadlines in my undergraduate and graduate studies, I am delighted to help and have helped numerous students throughout the years to improve their study effectivene...
67 Subjects: including algebra 1, algebra 2, English, calculus
{"url":"http://www.purplemath.com/Tewksbury_Algebra_tutors.php","timestamp":"2014-04-18T21:35:39Z","content_type":null,"content_length":"23956","record_id":"<urn:uuid:86bbd4d6-f0d1-47b8-b716-29322368a6e7>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00197-ip-10-147-4-33.ec2.internal.warc.gz"}
Re: Identity

This one is easy:

Please use the hide tag when you get it.

Last edited by bobbym (2009-10-17 17:52:57)

In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
{"url":"http://www.mathisfunforum.com/viewtopic.php?id=12735","timestamp":"2014-04-19T05:22:38Z","content_type":null,"content_length":"16368","record_id":"<urn:uuid:36bc630f-284e-4ad2-9225-42430776a552>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00649-ip-10-147-4-33.ec2.internal.warc.gz"}