Summary: From Grushin to Heisenberg via an isoperimetric problem
Nicola Arcozzi and Annalisa Baldi
The Grushin plane is a right quotient of the Heisenberg group.
Heisenberg geodesics' projections are solutions of an isoperimetric
problem in the Grushin plane.
1 Introduction
It is a known fact that there is a correspondence between isoperimetric problems in Riemannian surfaces and sub-Riemannian geometries in three-dimensional manifolds. The most significant example is the isoperimetric problem in the plane, corresponding to the sub-Riemannian geometry of the Heisenberg group H.
We briefly recall this connection following the exposition in [Mont].
Consider, on the Euclidean plane, the one-form ω = (1/2)(x dy - y dx), which satisfies dω = dx ∧ dy and which vanishes on straight lines through the origin. By Stokes' Theorem, the signed area enclosed by a curve γ is ∫_γ ω. Let c : [a, b] → R² be a curve. For each s in [a, b], let γ_s be the union of the curve c restricted to [a, s], of the segment
---
Predicting Disease Onset from Mutation Status Using Proband and Relative Data with Applications to Huntington's Disease
Journal of Probability and Statistics
Volume 2012 (2012), Article ID 375935, 19 pages
Research Article
^1Department of Biostatistics, Mailman School of Public Health, Columbia University, 722 West 168th Street, New York, NY 10032, USA
^2Department of Statistics, Texas A&M University, College Station, TX 77843, USA
^3Departments of Neurology and Psychiatry and Sergievsky Center and the Taub Institute, Columbia University Medical Center, New York, NY 10032, USA
^4Department of Psychiatry and Biostatistics (Secondary), University of Iowa, Iowa City, IA 52242, USA
Received 15 December 2011; Accepted 22 February 2012
Academic Editor: Yongzhao Shao
Copyright © 2012 Tianle Chen et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any
medium, provided the original work is properly cited.
Huntington's disease (HD) is a progressive neurodegenerative disorder caused by an expansion of CAG repeats in the IT15 gene. The age-at-onset (AAO) of HD is inversely related to the CAG repeat
length and the minimum length thought to cause HD is 36. Accurate estimation of the AAO distribution based on CAG repeat length is important for genetic counseling and the design of clinical trials.
In the Cooperative Huntington's Observational Research Trial (COHORT) study, the CAG repeat length is known for the proband participants. However, whether a family member shares the huntingtin gene
status (CAG expanded or not) with the proband is unknown. In this work, we use the expectation-maximization (EM) algorithm to handle the missing huntingtin gene information in first-degree family
members in COHORT, assuming that a family member has the same CAG length as the proband if the family member carries a huntingtin gene mutation. We perform simulation studies to examine performance
of the proposed method and apply the methods to analyze COHORT proband and family combined data. Our analyses reveal that the estimated cumulative risk of HD symptom onset obtained from the combined
data is slightly lower than the risk estimated from the proband data alone.
1. Introduction
Huntington’s disease (HD) is a severe, autosomal dominantly inherited neurodegenerative disorder that affects motor, cognitive, and psychiatric function and is uniformly fatal. HD is caused by the
expansion of CAG trinucleotide repeats at the huntingtin gene (IT15) [1, 2]. Affected individuals typically begin to show motor signs around 30–50 years of age and typically die 15–20 years after the
disease onset [3]. Despite identification of the causative gene, there is currently no treatment that modifies disease progression.
One large genetic epidemiological study of HD, the Cooperative Huntington’s Observational Research Trial (COHORT), including 42 Huntington study group research centers in North America and Australia,
was initiated in 2005 and concluded in 2011 [4–6]. Participants in COHORT (probands) underwent a clinical evaluation and DNA from whole blood was genotyped for the length of the CAG-repeat huntingtin
mutation. Since 2005, COHORT probands from sites with IRB approval have participated in family history interviews and have provided information on HD affection status in their family members. While
CAG repeat length is ascertained in probands, the high cost of conducting in-person interviews of family members prevents the collection of all family members’ blood samples. However, family members’
age-at-onset (AAO) of HD and vital status are obtained through systematic interviews of the probands or the family members themselves. Although a relative’s HD genotype is unavailable, the
corresponding distribution of the HD gene can be estimated based on the relative’s relationship with the proband, the proband’s mutation status, and assumptions regarding within-family similarity of
CAG length [7, 8].
In a genetic counseling setting, subjects with CAG repeats of 36 or greater are defined as carrying the HD mutation (carrier; [9]), and CAG less than 36 is defined as screened negative, or noncarrier
[9]. It is known that there is an inverse association between the CAG repeat length and AAO of HD, that is, the longer the repeat length, the earlier the motor onset [10]. Modeling such a
relationship as well as the conditional distribution of HD onset given CAG repeat length accurately and precisely is important for genetic counseling and the design of clinical trials for HD. The AAO
of HD onset is subject to right censoring by constraints of the observation periods. Carriers who have not been diagnosed with HD are right-censored for AAO. Several formulae were proposed in the
literature to estimate the survival function of age at HD diagnosis given CAG repeat length (e.g., [9–11]). Langbehn et al. [10] have shown that the standard semiparametric survival models, such as
the Cox proportional hazards model, do not fit the HD data and proposed a new logistic-exponential parametric model. Specifically, the conditional distribution of HD onset given the CAG repeat length
is modeled as a logistic function, with a location and a scale parameter both depending on CAG through nonlinear relationships. Using a large clinical data set, they observed that separate
exponential relationships with CAG length gave excellent empirical goodness of fit to both the mean AAO and its variance. Other parametric models, such as Gamma distribution, have also been proposed
in the literature [12, 13]. Langbehn et al. [14] examine several AAO models in the literature and show the superior performance of Langbehn et al. [10] in terms of predicting the two-year probability
of new HD diagnosis with independent prospective data.
None of the aforementioned existing methods can be directly used to analyze COHORT family data because family members are not always genotyped and their HD mutation status is unknown. The inclusion
of family data contributes additional information; however, the unobserved HD mutation sharing status in family members (CAG-elongated or not) complicates the analysis. To see this, note that the
affected parent carrying huntingtin mutation has a 50% chance of transmitting the mutation to an offspring. An added complexity is that the likelihood of the offspring having a higher CAG repeat than
the parent is higher if the parent is the father. Since the offspring is not genotyped, whether he or she carries expanded CAG repeats is unknown. In this work, we treat the unknown huntingtin gene
sharing status in first-degree family members (CAG-elongated or not) as missing data and use the EM algorithm to carry out the maximum likelihood estimation of the proband and family data jointly.
Conditionally on the transmission status in family members, we use the logistic-exponential model in Langbehn et al. [14] to model the AAO as a function of CAG repeat length. We perform simulation
studies to examine finite sample performances of the proposed methods. Finally, we apply these methods to analyze the COHORT proband and family combined data. Our results show a slightly lower
estimated cumulative risk of HD symptom onset using the combined data compared to using proband data alone.
2. Methods
We start by introducing some notation. For each subject we observe the age-at-onset of HD, an event indicator, the censoring time, and hence the observed time (the minimum of the onset age and the censoring age), together with the CAG repeat length. Langbehn et al. [10] model the distribution of AAO given CAG repeat length by a logistic function: the cumulative distribution function (CDF) of onset age is logistic, with a location parameter and a scale parameter that both depend on the CAG repeat length, and the density function is obtained by differentiating the CDF with respect to age. The survival function of HD onset is one minus the CDF. The location and scale parameters determine the mean and variance of the AAO given CAG repeat length. Various parametric functions for the location and scale parameters were compared in Langbehn et al. [10, 14], and the exponential function provided the best fit. Therefore, we use the same model, in which the mean and the variance of the AAO are each modeled as exponential functions of the CAG repeat length. Substituting these into the location and scale parameters yields a parametric model for the distribution of the AAO of HD with six parameters; Langbehn et al. [10] fitted estimates of these parameters.
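To make the functional form concrete, here is a minimal Python sketch of the logistic-exponential model as described verbally above: a logistic CDF in age whose mean and variance are exponential functions of CAG length. The parameter vector beta and the helper names onset_cdf/onset_pdf are hypothetical placeholders, not the estimates or notation used in the paper or in Langbehn et al. [10].

```python
import numpy as np

def onset_cdf(age, cag, beta):
    """Logistic-exponential sketch: P(onset age <= age | CAG = cag).

    beta = (b1, ..., b6) is a hypothetical parameter vector:
      mean AAO     = b1 + exp(b2 - b3 * cag)
      variance AAO = b4 + exp(b5 - b6 * cag)
    The logistic location equals the mean; the scale follows from the
    logistic-distribution identity variance = (pi^2 / 3) * scale^2.
    """
    b1, b2, b3, b4, b5, b6 = beta
    mean = b1 + np.exp(b2 - b3 * cag)
    var = b4 + np.exp(b5 - b6 * cag)
    scale = np.sqrt(3.0 * var) / np.pi
    return 1.0 / (1.0 + np.exp(-(age - mean) / scale))

def onset_pdf(age, cag, beta):
    """Density of the logistic AAO model (derivative of the CDF in age)."""
    F = onset_cdf(age, cag, beta)
    b1, b2, b3, b4, b5, b6 = beta
    scale = np.sqrt(3.0 * (b4 + np.exp(b5 - b6 * cag))) / np.pi
    return F * (1.0 - F) / scale
```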
2.1. Proband-Only Analysis
First, consider the probands' data, where all CAG repeat lengths are observed. Since a subject's AAO of HD is subject to right censoring, each subject contributes the density of onset at the observed age if onset was observed, and the survival function at the censoring age otherwise; the log-likelihood is the sum of the corresponding log contributions. The maximum likelihood estimate (MLE) of the parameters can be obtained via a general-purpose optimization algorithm such as Newton-Raphson or Nelder-Mead, implemented in the R program, version 2.13.1. The variance-covariance matrix of the MLE is estimated by the inverse of the estimated Hessian matrix of the negative log-likelihood. The standard error of the estimated survival function is then estimated by the Delta method, using the gradient of the survival function with respect to the parameters. Since the parameters are estimated by maximum likelihood, it is straightforward to carry out likelihood ratio tests (LRTs) to compare the model fit from the COHORT data with the one obtained by applying parameters from other studies, such as Langbehn et al. [10], to the COHORT data. Here, twice the difference in the log-likelihood follows an asymptotic chi-square distribution with 6 degrees of freedom.
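A hedged sketch of the proband-only fit described above: a right-censored log-likelihood maximized with a general-purpose optimizer and standard errors taken from the inverse Hessian. It reuses the hypothetical onset_cdf/onset_pdf helpers from the previous sketch and SciPy's minimize; it is an illustration under those assumptions, not the authors' R implementation.

```python
import numpy as np
from scipy.optimize import minimize

def neg_loglik(beta, age, event, cag):
    """Right-censored negative log-likelihood: observed onsets contribute
    log f(age | cag), censored subjects contribute log S(age | cag)."""
    f = onset_pdf(age, cag, beta)
    S = 1.0 - onset_cdf(age, cag, beta)
    eps = 1e-12                                   # guard against log(0)
    return -np.sum(event * np.log(f + eps) + (1 - event) * np.log(S + eps))

def fit_probands(age, event, cag, beta0):
    """Maximize the likelihood (Nelder-Mead, one of the options named in the
    text) and estimate the covariance of the MLE from a numerical Hessian."""
    res = minimize(neg_loglik, beta0, args=(age, event, cag), method="Nelder-Mead")
    k, h = len(res.x), 1e-4
    H = np.zeros((k, k))
    for i in range(k):                            # central-difference Hessian
        for j in range(k):
            e_i, e_j = np.eye(k)[i] * h, np.eye(k)[j] * h
            H[i, j] = (neg_loglik(res.x + e_i + e_j, age, event, cag)
                       - neg_loglik(res.x + e_i - e_j, age, event, cag)
                       - neg_loglik(res.x - e_i + e_j, age, event, cag)
                       + neg_loglik(res.x - e_i - e_j, age, event, cag)) / (4 * h * h)
    cov = np.linalg.inv(H)                        # variance-covariance of the MLE
    return res.x, cov
```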
2.2. Incorporating Family Members
Next, we consider incorporating family members' AAO data. We do not directly observe whether a family member shares the huntingtin mutation with the proband, but we do have data on family members' age-at-onset of first symptoms, as well as the family members' current ages. When we incorporate the additional family data, the likelihood for the survival takes a mixture form. Let each subject's carrier probability denote the probability of sharing a deleterious allele with a proband and therefore being a carrier. Such probabilities are calculated based on Mendelian transmission and a family member's relationship to the proband [8]. For example, offspring and siblings of a carrier proband have a probability of 50% of receiving the huntingtin allele that contains the CAG expansion (homozygotes for HD are extremely rare, since the prevalence of HD in the general population is low). We assume that, conditional on a family member receiving the expanded huntingtin allele, the CAG repeat length is the same as that observed in the proband, although this is a simplification [7]. For subjects who receive a wild-type allele (CAG < 36), the probability of developing HD is zero, so they never contribute an onset event. For the family members, the likelihood of each subject is therefore a mixture of the carrier and noncarrier contributions, weighted by the carrier probability, where the noncarrier term follows from the assumption that noncarriers do not develop HD. Note that for all carrier probands the carrier probability equals one, so the likelihood reduces to the proband-only likelihood (2.5).
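As an illustration of the mixture form just described, the per-subject likelihood contribution for a family member with carrier probability w can be written as follows; the helper functions are the hypothetical ones sketched in Section 2, and this is not the paper's exact expression.

```python
import numpy as np

def family_loglik(beta, age, event, cag, w):
    """Mixture log-likelihood for family members.

    For each subject:
      observed onset (event = 1): contribution w * f(age | cag)
      censored       (event = 0): contribution w * S(age | cag) + (1 - w)
    The (1 - w) term encodes the assumption that noncarriers never develop HD.
    For carrier probands w = 1, and this reduces to the proband likelihood.
    """
    f = onset_pdf(age, cag, beta)
    S = 1.0 - onset_cdf(age, cag, beta)
    eps = 1e-12
    contrib = np.where(event == 1, w * f, w * S + (1.0 - w))
    return np.sum(np.log(contrib + eps))
```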
The above likelihood can be maximized by a combination of EM and Newton-Raphson algorithms. Let the unobserved carrier status indicator for each family member equal one if the family member receives a mutation and zero otherwise. Then the complete-data log-likelihood is the sum of the subjects' log contributions weighted by their carrier status indicators. At each iteration of the E-step, we compute the conditional expectation of the complete-data log-likelihood given the observed data; essentially, we compute each family member's conditional probability of being a carrier given the observed data and the current parameter values. In the M-step, we update the parameters by maximizing the resulting weighted log-likelihood using the Newton-Raphson algorithm developed for the proband data.
Since, for the combined analysis, the parameters are estimated by maximizing the likelihood through an EM algorithm, the standard asymptotic theory applies, and the standard errors of the parameters can be estimated by inverting the expected or observed information matrix based on the log-likelihood of the observed data. When there are missing data and an EM algorithm is used to obtain the MLE, the information matrix based on the observed-data likelihood can be difficult to compute analytically or numerically. In such situations, Louis [15] proposed computing the observed information matrix in terms of the conditional moments of the first and second derivatives of the complete-data log-likelihood, which can be obtained easily under the EM algorithm framework. In some cases, these moments are easier to compute than the corresponding derivatives of the incomplete, observed-data log-likelihood.
However, in our application, the derivatives of the observed-data log-likelihood are easy to compute. Thus, we computed the gradient and Hessian matrix of the observed-data log-likelihood directly, estimated the standard errors of the parameter estimates by the inverse of the Hessian matrix, and estimated the standard errors of the estimated survival function by the Delta method, similar to the proband-only analysis. Simulation studies in the next section show satisfactory performance of this direct and relatively simpler approach.
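A minimal EM sketch along the lines described above, again assuming the hypothetical helpers from the earlier code blocks: the E-step computes each censored family member's posterior carrier probability, and the M-step maximizes the resulting weighted censored-data log-likelihood. Convergence checks and the standard-error calculations are omitted.

```python
import numpy as np
from scipy.optimize import minimize

def weighted_neg_loglik(beta, age, event, cag, z):
    """Weighted complete-data negative log-likelihood; z holds the current
    posterior carrier probabilities (noncarriers contribute nothing)."""
    f = onset_pdf(age, cag, beta)
    S = 1.0 - onset_cdf(age, cag, beta)
    eps = 1e-12
    return -np.sum(z * (event * np.log(f + eps) + (1 - event) * np.log(S + eps)))

def em_fit(age, event, cag, w, beta0, n_iter=50):
    beta = np.asarray(beta0, dtype=float)
    for _ in range(n_iter):
        # E-step: posterior probability of being a carrier. Subjects with an
        # observed onset are carriers with probability 1 (noncarriers never
        # develop HD); censored subjects get a Bayes update.
        S = 1.0 - onset_cdf(age, cag, beta)
        z = np.where(event == 1, 1.0, w * S / (w * S + (1.0 - w)))
        # M-step: maximize the weighted log-likelihood.
        res = minimize(weighted_neg_loglik, beta,
                       args=(age, event, cag, z), method="Nelder-Mead")
        beta = res.x
    return beta
```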
3. Simulation Studies
We conducted two simulation studies closely related to the observed COHORT data to illustrate the performance of the Newton-Raphson optimization and the EM algorithm [16]. In all our optimization procedures, we centered both variables. Since the direct optimization and the EM algorithm need reasonable initial values, we fitted two nonlinear least squares (NLS) regressions to the observed sample means and variances of the AAO computed at each observed CAG repeat length. The six NLS estimators were used as the initial values for further optimization. The parameters were first estimated from the centered data; for each simulation, the uncentered parameters were then recovered from the centered estimates and the sample means of the centered variables.
We restricted simulations to CAG repeat lengths between 41 and 56, to guard against sensitivity to extremely high or low CAG repeats and to be consistent with Langbehn et al. [10]. For the analysis of proband data, we generated a sample of 2000 subjects, each with a CAG length ranging from 41 to 56 that follows a multinomial distribution in which the probability of each length equals the observed proportion of that length in the COHORT proband data set. The failure times were simulated from the logistic-exponential distribution (2.1), with the parameters fixed at the values fitted from the COHORT proband data (see the next section for their values). The censoring times were generated from a rescaled Beta distribution with scale and shape parameters of four. The parameters for the Beta distribution were chosen so that the proportion of censored subjects is the same in the simulated data as in the observed COHORT proband data.
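A sketch of this proband-data generation step under stated assumptions: CAG lengths drawn from a multinomial matching observed proportions, onset times drawn from the logistic model by inverse-CDF sampling, and censoring from a rescaled Beta(4, 4). The proportions, rescaling range, and parameter vector are placeholders rather than the values used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_probands(n, cag_values, cag_probs, beta, cens_max=100.0):
    """Generate n proband records: CAG length, observed time, event indicator."""
    cag = rng.choice(cag_values, size=n, p=cag_probs)
    # Inverse-CDF sampling from the hypothetical logistic onset model.
    b1, b2, b3, b4, b5, b6 = beta
    mean = b1 + np.exp(b2 - b3 * cag)
    scale = np.sqrt(3.0 * (b4 + np.exp(b5 - b6 * cag))) / np.pi
    u = rng.uniform(size=n)
    onset = mean + scale * np.log(u / (1.0 - u))   # logistic quantile function
    cens = cens_max * rng.beta(4.0, 4.0, size=n)   # rescaled Beta(4, 4) censoring
    time = np.minimum(onset, cens)
    event = (onset <= cens).astype(int)
    return cag, time, event
```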
For the analysis of the combined proband and family data, we generated a sample of 4000 subjects. We assumed the same proportion of probands and relatives as observed in the combined COHORT data. For the family members, the carrier probabilities were generated by resampling the observed carrier probabilities in the COHORT data. Given the carrier probability for each subject, we simulated his or her huntingtin carrier status from a Bernoulli distribution with that success probability. For family members simulated to receive an expanded CAG repeat (carriers), their CAG repeats were set to be the same as the probands' and their failure times were simulated from the mixture model (2.5), with parameters fixed at the estimates from the COHORT combined data. For noncarrier family members, the failure times were set to infinity, so that they are always censored. We used the same censoring distribution for generating censoring times as in the first simulation study.
We provide simulation results of the proband-only and combined analyses in Tables 1 and 2. We present the mean of the estimates, the empirical standard deviation of the estimates, and the mean estimated standard error at various ages. We see from these tables that the mean estimates are very close to the true values in both studies. The mean estimated standard errors are close to the empirical standard deviations, indicating that the estimation of variability is appropriate. Figures 1 and 2 present three estimated curves at CAG = 41, 46, and 50 and their 95% empirical confidence intervals for the proband data and combined data, respectively. We see that the estimated curves coincide with the circles representing the true values at various ages.
4. COHORT Data Analysis Results
COHORT is a multicenter observational study of individuals in the HD community. COHORT recruitment is open to subjects who have HD symptoms and signs (manifest HD), subjects who have an expanded CAG
repeat but have not yet developed symptoms of HD (presymptomatic), subjects who have an HD affected parent but have not been tested and do not have symptoms (at risk), subjects who have an affected
grandparent (secondary risk), and control subjects who are not at risk for HD. Information available on participating probands includes genetic status (whether or not they carry the HD mutation, and the number of CAG repeats), clinical diagnosis of HD, and the timing of symptom onset and timing of diagnosis. In our analyses, only probands with an expanded CAG repeat and their family members were included.
Details of the cohort are cited in a publication in press [6].
We first describe the proband and family data in the COHORT study. Information on CAG repeat length and age was available for 1357 probands with CAG repeats varying from 36 to 100 (Table 3). There
were 3409 first-degree relatives available from 675 probands. We do not have information on whether some of the probands are from the same family. We show the descriptive statistics for the relatives
stratified by relationship type in Table 4. Each proband potentially has three versions of the age-at-first-symptom (the rater's report, the subject's self-report, and a family member's report). We gave the rater-reported AAO of symptoms the highest priority. If the rater-reported version is not available, we then used the subject's self-report. If neither the rater's nor the subject's self-report is available, we then used the family member's report. Twenty-one subjects whose self-reported and rater-reported AAO of symptoms differed by more than 15 years were removed. Our proband data set has 1151 subjects with CAG length between 41 and 56 and was used for the proband-only analysis. Similar to Langbehn et al. [10], we restricted the analysis to CAG repeat lengths between 41 and 56 to guard against sensitivity to the extremely high or low CAG repeats and against bias due to likely underascertainment (relative to the population) of subjects with CAG length between 36 and 40.
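As an illustration of the reporting-priority rule described above (rater report first, then self-report, then family report, excluding records whose self- and rater-reported AAO differ by more than 15 years), here is a small sketch; the argument names are hypothetical.

```python
def select_aao(rater, self_report, family_report):
    """Return the AAO by priority, or None to flag the record for exclusion.

    Any of the three inputs may be None when that report is unavailable.
    """
    if rater is not None and self_report is not None and abs(rater - self_report) > 15:
        return None                 # exclude: reports differ by more than 15 years
    for value in (rater, self_report, family_report):
        if value is not None:
            return value
    return None
```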
Information on CAG repeat length, age at time of evaluation and the probability of being a carrier (receiving huntingtin mutation from the proband) was available for 2851 family members of 1151
probands. In the proband data set, both individuals with manifest HD and presymptomatic carriers (24%) are included. Their age-at-diagnosis and age-at-first-motor-sign were recorded. Among 1151
probands, 876 (76%) subjects had experienced HD onset and the average AAO of the HD diagnosis was 44 years of age (standard deviation: 10.7). There were 54% females and 94% Caucasians. Our combined
proband and family data set has 4002 subjects. In this combined data set, 51% were females and 35% subjects had experienced HD onset. Among the 4002 subjects, 467 are singletons (probands with no
family member included). The other 3535 subjects belong to 623 pedigrees with an average size of 5.674 (sd = 2.609) members. In the combined data, there are two different probabilities of being a
carrier: one value for the 1199 subjects with known CAG expansions or known HD onset, and another for the remaining 2803 subjects. Among the 2851 family members, 966 are parents of the probands, 1095 are siblings of the probands, and 790 are
children of the probands.
When using the age-at-diagnosis in our proband data as the outcome, we obtain the estimated cumulative risk of HD. The estimated parameters for the CDF from the proband-only analysis are slightly different from the ones obtained from Langbehn et al. [10]. Our estimated mean AAO of HD is about 1 to 3 years later than the one obtained in Langbehn et al. [10], and the standard deviation (SD) is slightly smaller (Table 5). In addition, the estimated CDF is smaller for most values using COHORT data. We ran a joint likelihood ratio test on the goodness-of-fit of the parameters obtained in Langbehn et al. [10], and the p-value was less than 0.001 (test statistic = 66.0). When analyzing the age-at-first-symptom in our proband data, we obtain the corresponding estimated cumulative risk of HD. We present curves for age-at-diagnosis and age-at-symptom at various CAG lengths and their 95% confidence intervals for the proband data in Figure 3. It can be seen that, with a given CAG length, the estimated
probability of having the first symptoms of HD is higher than the probability of a diagnosis of HD at the same age. This is consistent with the intuition that symptoms of HD will be observed before a
diagnosis. The mean AAO of first symptom is estimated to be about 2 years earlier than AAO of diagnosis (Table 5) and the standard deviation of the former is slightly larger, indicating that reported
age-at-first-symptom is more variable. It is unclear to what extent this difference represents true physical variability in illness development versus possibly lower reliability in the retrospective
reporting of symptom onset [17].
As a sensitivity analysis, we compared the estimated CDF based on the parametric model with a nonparametric Kaplan-Meier estimator for subjects with a given CAG length. Figure 4 presents this comparison using
probands’ age-at-diagnosis data. We show in the figure that the parametric model fit is consistent with the Kaplan-Meier fit. However, as expected, the confidence interval for the parametric model
estimate at a given age is narrower than the Kaplan-Meier estimate (results not shown). The figure comparing age-at-symptom models is similar and therefore omitted.
We reanalyzed only the AAO of the first symptom using the combined proband and family data, since the age-at-diagnosis was not available for family members who were not seen in person. We obtain the estimated cumulative risk of HD as a function of age. The corresponding curves at various CAG lengths and their 95% confidence intervals are shown in Figure 5. In Table 5, we compare the estimated mean and SD of the AAO
from the proband and combined data. We can see that the estimated mean AAOs for several CAGs are similar regardless of whether family members are included. The SD estimated from the model is larger
for the combined data. This is a reflection of the observed data, in that there is a wider range of AAO in the combined data than in the proband data. For example, the SD for CAG = 41 in the combined data is 11 years, whereas it is 10 years in the probands, and the SD for CAG = 42 is 10 in the combined data and 8 in the probands.
One of the utilities of the estimated curves is to estimate the conditional probability of having an HD onset (or staying HD free) in the next five or ten years, given a subject has not had an onset
by a given age. Similar to Langbehn et al. [10], in Table 6, we present such conditional probabilities in five-year intervals for a subject without HD at age 40 and with given CAG repeats. For
example, a 40-year-old presymptomatic subject with a CAG of 42 has a probability of 34% (CI: 32%, 36%) of developing HD in the next 10 years (by age 50), while for a subject with a CAG of 50 this
probability increases to 0.93 (CI: 0.91, 0.95).
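The conditional probabilities in Table 6 follow from the fitted CDF by elementary conditioning: the probability of onset by age t2, given no onset by age t1, is (F(t2) - F(t1)) / (1 - F(t1)). A sketch using the hypothetical onset_cdf helper from Section 2:

```python
def conditional_onset_prob(t1, t2, cag, beta):
    """P(onset by age t2 | no onset by age t1, CAG = cag)."""
    F1 = onset_cdf(t1, cag, beta)
    F2 = onset_cdf(t2, cag, beta)
    return (F2 - F1) / (1.0 - F1)

# Example in the spirit of Table 6: risk over the next 10 years for a
# 40-year-old presymptomatic carrier (beta_hat is a hypothetical fit).
# risk_10y = conditional_onset_prob(40.0, 50.0, cag=42, beta=beta_hat)
```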
5. Discussion
We propose methods to predict disease risk from a known mutation (or to estimate the penetrance function). For most complex diseases, predicting the AAO of a disease from genetic markers such as single-nucleotide polymorphisms (SNPs) continues to be a challenging issue [18]. Even with diseases like HD, where the gene is identified, the predictive model can be complicated: a special feature of
HD is that the mutation severity is quantifiable and varies significantly among the affected population. This contrasts with the typical categorical approach needed, for example, in genome-wide
association studies. The proposed methods are also applicable to other expanded trinucleotide repeat diseases similar to HD.
One of the contributions of this work is to use the family data as well as the proband data to maximize available information in building a model. Our results reveal that the estimated risk obtained
from the combined proband and family data is slightly lower than the risk estimated from the proband data alone. It is possible that the proband data consists of a biased clinical sample of gene
positive or HD-affected subjects (e.g., subjects with more severe disease or with earlier onset may be more likely to participate; presymptomatic subjects might be undersampled) and is therefore not
a fair representative sample of the entire HD population, especially underrepresenting subjects at risk. The plausibility of such underascertainment is so strong for CAG lengths of 40 or less [7] that we excluded observations in that range from the analysis. The family data may be more representative of the population, since the family members are included in the analysis only through the inclusion of the probands. Although probands may have participated in the study because they had HD or had more severe symptoms of HD, the relatives were not included based on their CAG repeat lengths
or affection status. Of course, some of the family members will not share an expanded CAG repeat huntingtin with the probands and therefore are noncarriers who will never develop HD.
Note that our estimated cumulative risk of onset of a positive HD diagnosis in the proband data is also slightly lower than Langbehn et al. [10] which also examined age-at-HD diagnosis. We estimated
later mean AAO for each CAG repeat length shorter than 54 than did Langbehn et al. [10]. For example, the mean AAO of HD diagnosis for probands with a CAG of 42 in the former data was 3 years later
and, for a CAG of 43, it was 4 years later (Table 3). On average, for all subjects with a CAG between 41 and 50, the mean AAO in Langbehn data was 2 years earlier than in the COHORT data. More
detailed comparisons are presented in Table 5. There are several possible reasons for these differences. The model end point, AAO, should probably be considered to be slightly different in the two
models. The outcome in Langbehn et al. [10] was earliest age at which a clinician documented an irreversible objective sign of the illness. This may occur earlier than the point at which an actual
diagnosis of manifest HD is given. (Many clinicians wait until there are several such signs.) This may also occur, however, at a point that is later than the proband’s or family’s first report of
subjective symptoms or their first perception of disease signs. In the CAG range of 41–49, the Langbehn et al. means are very close to the symptom onset means in the current data. For longer CAG
lengths, the Langbehn et al. estimates more closely resemble the current models for disease diagnosis. Possible systematic variability between the clinicians in the two studies may also account for
the differences in the estimates.
Other potential differences between the data sources include potential research-center-specific heterogeneity in diagnostic and rating conventions and slight variations in the methods used to
determine CAG repeat length. In the Langbehn study, these were measured by a variety of laboratories while in the COHORT they were all measured in the same laboratory.
We do note that the differences between the fitted models here and those in Langbehn et al. are substantially smaller than differences among other formulae in the literature [14]. AAO probabilities,
conditioned on current age, are especially similar. In HD research and genetic counseling, these conditional probabilities are perhaps the most commonly used statistic deriving from these formulae.
Finally, the logistic-exponential form of the parametric model proposed in Langbehn et al. [10] does indeed fit the empirical AAO distributions quite well in the COHORT data. This validates use of
this relatively complicated survival model for HD AAO research and may encourage considerations of quantitative biological mechanisms that would generate exponential relationships between CAG and
both AAO and its variance.
There has often been ambiguity in the modeling literature concerning the exact meaning of HD “onset.” The first onset of observable signs or reportable symptoms of HD generally occurs before the
actual diagnosis of clinically manifest HD is given. Much of the earlier modeling literature, reviewed in Langbehn et al. [14], does not clearly address this distinction, although the resultant
formulas have often been used for subsequent prediction of HD diagnosis [14]. The event modeled in Langbehn et al. [10] was “the first time that neurological signs representing a permanent change
from the normal state was identified in a patient.” This might be considered closer to the concept of “subject’s first noted symptom” rather than age of diagnosis. Nonetheless, this model has been used
frequently as a predictor of future diagnosis in HD [14]. In the current study, we do distinguish between first symptom onset and diagnosis.
Here, we assumed Mendelian transmission of huntingtin without interference so that the CAG length does not change from parents to offspring. There are several possible violations of these
assumptions. CAG lengths do, in reality, vary somewhat among family members, and those inheriting the gene from their father have, on average, a slightly longer CAG repeat length than their father.
The probability of this occurring is much lower if inheritance is from the mother [19]. An explanation is that there are many more biological opportunities for the CAG length to change in the
father’s process of sperm formation than in the mother’s process of egg formation. These processes and their dynamics have been studied extensively in vitro [7, 20], but we know of no well-verified
in vivo dynamic population genetics models. Assuming the CAG length does not change from father to offspring may lead to a slightly lower estimated risk for affected fathers of probands.
Consistent with Langbehn et al. [10] and other studies [20, 21], we estimated reduced penetrance for lower CAG repeat lengths (≤40). We point out that the parameter estimates from the current model
do not include subjects with CAG less than 41; therefore, the risk estimates for these subjects are extrapolations. However, it is conceivable that as long as the inverse relationship between AAO and
CAG still holds for the lower CAGs, the life time disease risk for these subjects will be less than 100%, since the life time risk for a CAG of 41 is about 100%.
In the literature, no proportional odds model has been fitted to the age-at-onset of HD. The proportional odds model, or, along a similar line, the transformation model, belongs to the semiparametric modeling framework and is beyond the scope of this paper. We are currently investigating semiparametric models other than the Cox proportional hazards model.
Finally, we stress that our current model does not include other observed covariates, such as additional genetic polymorphisms. In addition, we assumed conditional independence of family members’
age-at-onset (AAO) of HD given their CAG repeats. This assumption implies that we do not account for residual correlation among family members’ AAO caused by factors other than the CAG repeats, such
as lifestyle factors. When such residual correlation exists, point estimates from our current approach are still consistent and hence still valid, although the standard error estimates are no longer correct. A practical limitation of using family members’ AAO data is that they may be less reliable than the data directly collected from the probands. This limitation applies to all other diseases, especially those with late onset, and can be more pronounced when there is incomplete penetrance and variability of phenotype. Future work would consider incorporating such
measurement error in the analysis. Lastly, the proposed methods do not include possible unobserved effects that may be site or clinician-specific and perhaps related to the interpretation of the
point of “onset.” Future research will focus on incorporating observed covariates and adding family-specific random effects to account for residual familial aggregation.
Y. Wang’s research is supported by NIH Grants R03AG031113-01A2 and R01NS073671-01. Samples and/or data from the COHORT study, which receives support from HP Therapeutics, Inc., were used in this
study. The authors thank the Huntington Study Group COHORT investigators and coordinators who collected data and/or samples used in this study, as well as participants and their families, who made
this work possible.
1. C. A. Ross, “When more is less: pathogenesis of glutamine repeat neurodegenerative diseases,” Neuron, vol. 15, no. 3, pp. 493–496, 1995.
2. C. A. Ross and S. J. Tabrizi, “Huntington's disease: from molecular pathogenesis to clinical treatment,” The Lancet Neurology, vol. 10, pp. 83–98, 2010.
3. T. Foroud, J. Gray, J. Ivashina, and P. M. Conneally, “Differences in duration of Huntington's disease based on age at onset,” Journal of Neurology, Neurosurgery and Psychiatry, vol. 66, no. 1, pp. 52–56, 1999.
4. K. Kieburtz and Huntington Study Group, “The unified Huntington's disease rating scale: reliability and consistency,” Movement Disorders, vol. 11, pp. 136–142, 1996.
5. E. R. Dorsey, C. A. Beck, M. Adams, et al., “TREND-HD communicating clinical trial results to research participants,” Archives of Neurology, vol. 65, no. 12, pp. 1590–1595, 2008.
6. E. R. Dorsey and Huntington Study Group COHORT Investigators, “Characterization of a large group of individuals with Huntington disease and their relatives enrolled in the COHORT study,” PLoS ONE, vol. 7, no. 2, Article ID e29522, 2012.
7. D. Falush, E. W. Almqvist, R. R. Brinkmann, Y. Iwasa, and M. R. Hayden, “Measurement of mutational flow implies both a high new-mutation rate for Huntington disease and substantial underascertainment of late-onset cases,” The American Journal of Human Genetics, vol. 68, pp. 373–385, 2000.
8. Y. Wang, L. N. Clark, E. D. Louis, et al., “Risk of Parkinson disease in carriers of Parkin mutations: estimation using the kin-cohort method,” Archives of Neurology, vol. 65, no. 4, pp. 467–474, 2008.
9. D. C. Rubinsztein, J. Leggo, R. Coles, et al., “Phenotypic characterization of individuals with 30–40 CAG repeats in the Huntington disease (HD) gene reveals HD cases with 36 repeats and apparently normal elderly individuals with 36–39 repeats,” American Journal of Human Genetics, vol. 59, no. 1, pp. 16–22, 1996.
10. D. R. Langbehn, R. R. Brinkman, D. Falush, J. S. Paulsen, and M. R. Hayden, “A new model for prediction of the age of onset and penetrance for Huntington's disease based on CAG length,” Clinical Genetics, vol. 65, no. 4, pp. 267–277, 2004.
11. O. C. Stine, N. Pleasant, M. L. Franz, M. H. Abbott, S. E. Folstein, and C. A. Ross, “Correlation between the onset age of Huntington's disease and length of the trinucleotide repeat in IT-15,” Human Molecular Genetics, vol. 2, no. 10, pp. 1547–1549, 1993.
12. C. Gutierrez and A. MacDonald, Huntington Disease and Insurance. I: A Model of Huntington Disease, Genetics and Insurance Research Centre (GIRC), Edinburgh, UK, 2002.
13. C. Gutierrez and A. MacDonald, “Huntington disease, critical illness insurance and life insurance,” Scandinavian Actuarial Journal, vol. 4, pp. 279–313, 2004.
14. D. R. Langbehn, M. R. Hayden, and J. S. Paulsen, “CAG-repeat length and the age of onset in Huntington disease (HD): a review and validation study of statistical approaches,” American Journal of Medical Genetics, vol. 153, no. 2, pp. 397–408, 2010.
15. T. Louis, “Finding the observed information matrix when using the EM algorithm,” Journal of the Royal Statistical Society, Series B, vol. 44, pp. 226–233, 1982.
16. N. M. Laird and J. H. Ware, “Random-effects models for longitudinal data,” Biometrics, vol. 38, no. 4, pp. 963–974, 1982.
17. K. Marder, G. Levy, E. D. Louis, et al., “Accuracy of family history data on Parkinson's disease,” Neurology, vol. 61, no. 1, pp. 18–23, 2003.
18. J. Kang, J. Cho, and H. Zhao, “Practical issues in building risk-predicting models for complex diseases,” Journal of Biopharmaceutical Statistics, vol. 20, no. 2, pp. 415–440, 2010.
19. B. Kremer, E. Almqvist, J. Theilmann, et al., “Sex-dependent mechanisms for expansions and contractions of the CAG repeat on affected Huntington disease chromosomes,” American Journal of Human Genetics, vol. 57, no. 2, pp. 343–350, 1995.
20. C. T. McMurray, “Mechanisms of trinucleotide repeat instability during human development,” Nature Reviews Genetics, vol. 11, no. 11, pp. 786–799, 2010.
21. R. R. Brinkman, M. M. Mezei, J. Theilmann, E. Almqvist, and M. R. Hayden, “The likelihood of being affected with Huntington disease by a particular age, for a specific CAG size,” The American Journal of Human Genetics, vol. 60, no. 5, pp. 1202–1210, 1997.
---
The number of people flying first class on domestic flights

bigoyal - The number of people flying first class on domestic flights - 14 Jul 2009, 09:25

The number of people flying first class on domestic flights rose sharply in 1990, doubling the increase of the previous year.

A. doubling the increase of
B. doubling that of the increase in
C. double as much as the increase of
D. twice as many as the increase in
E. twice as many as the increase of
Re: double vs twice - 24 Feb 2010, 19:58

A sounds right. Waiting to see the OA.
Re: double vs twice - 29 May 2011, 08:37

bigoyal wrote:
The number of people flying first class on domestic flights rose sharply in 1990, doubling the increase of the previous year.

Simple and tricky, and not many explanations on this. I encountered it in the GMAT Prep test and chose at random because I couldn't figure it out, though I suspected A and C.

What I understood later: whatever comes after the comma needs to modify/explain the preceding clause (an absolute phrase). In such a scenario, the C, D and E type of construction does not work. A correct construction of that type would be something like "The man is twice as old as his son."

So we are left with A and B; of these, A is correct, as B changes the meaning.
Re: The number of people flying first class on domestic flights - e-GMAT Representative - 23 Aug 2012, 08:09

Hi All,

The number of people flying first class on domestic flights rose sharply in 1990, doubling the increase of the previous year.

Let's first understand the meaning of this sentence. The year 1990 experienced a sharp rise in the number of people flying first class on domestic flights. This rise doubled the increase seen the previous year.

Error Analysis: This sentence uses the verb-ing modifier "doubling" preceded by a comma. This means that the modifier modifies the entire preceding clause. The usage of "doubling" is correct here because it correctly presents the result of the preceding clause: there was a sharp rise in the number of these passengers, and this rise doubled the increase witnessed the previous year. Hence there is no error in this sentence.

A. doubling the increase of: correct, for the reason stated above.

B. doubling that of the increase in:
1. There is no antecedent for the pronoun "that".
2. When we say "increase in something", the phrase means that the "something" has increased itself. Hence, this phrase does not make sense in this choice, as it suggests that "the previous year" increased itself.

C. double as much as the increase of:
1. Here "double", a noun modifier, has no particular noun to refer to.
2. The correct way to say it is "double the increase", not "double as much as the increase".

D. twice as many as the increase in:
1. The noun modifier "twice" does not have a noun to refer to.
2. The use of "many" for the uncountable noun "increase" is incorrect.
3. This choice repeats the idiom error of choice B.

E. twice as many as the increase of: this choice repeats the first two errors of choice D.

Hope this helps.
Re: double vs twice - ugimba - 14 Jul 2009, 09:32

A sounds right to me. "The number of people" is singular, so that makes C, D, and E wrong. A sounds better than B.
Re: double vs twice - 14 Jul 2009, 09:37

I think here 'increase in' is better. Will go with B.
Re: double vs twice - 14 Jul 2009, 09:44

I will say A. This is a tricky one.
Re: double vs twice - 14 Jul 2009, 09:53

Now it seems that in B, 'that' has no referent, so it is an incorrect choice. Only A is left! Am I right?
Re: double vs twice - 14 Jul 2009, 09:55

IMO: A. "Doubling the increase of" seems correct.
Re: double vs twice - Hussain15 - 15 Jul 2009, 00:06

ugimba wrote:
A sounds right to me. "The number of people" is singular, so that makes C, D, and E wrong. A sounds better than B.

No one has explained the reasons. @ugimba: why are you not considering C, D and E? What is the philosophy behind "the number of people"?
Re: double vs twice - ugimba - 15 Jul 2009, 05:15

Hussain15 wrote:
@ugimba: why are you not considering C, D and E? What is the philosophy behind "the number of people"?

'The number of' is singular, and C, D and E are all talking about plural things, so the comparison here is not correct. Look into the MGMAT SC book for 'the number of' usage; that book has good examples.
Re: double vs twice - Hussain15 - 15 Jul 2009, 05:37

Thanks for the explanation. My MGMAT books are on their way to Pakistan from the US.
Re: double vs twice - 15 Jul 2009, 08:36

A sounded right. Thanks for the explanation too.
Re: double vs twice - WhyabloodyMBA - 15 Jul 2009, 11:30

Hey, can anyone explain in what contexts we make use of the following phrases:

Doubled...
Twice...
Two times...

I believe there is a subtle difference in the use of these 3, but I don't know exactly how to tell them apart.
Re: double vs twice - bigoyal - 16 Jul 2009, 22:14

Thanks a lot guys. OA is A.

WhyabloodyMBA wrote:
Hey, can anyone explain in what contexts we make use of the following phrases: Doubled... Twice... Two times...

That's a nice question. I'm also looking for some explanations on this.
Re: double vs twice - WhyabloodyMBA - 18 Jul 2009, 10:05

As far as I could make out, it's somewhat like this:

Twice - the stuff (noun), e.g., "includes twice as many students."
Double - the number, e.g., "the number of students has doubled."
Two times - I don't know.

Please correct me.
Re: double vs twice - bigoyal - 19 Jul 2009, 21:21

WhyabloodyMBA wrote:
Twice - the stuff (noun), e.g., "includes twice as many students."
Double - the number, e.g., "the number of students has doubled."
Two times - I don't know.

Agree with you. Also, the Manhattan SC says that "two times" and "twice" are the same. I guess the only difference would be wordiness.
Re: double vs twice - whatthehell - 20 Jul 2009, 10:34

IMO:
twice - a number and something that can be counted, e.g., "twice the sum"
double - something that cannot be counted, e.g., "double the volume"
Re: double vs twice - 20 Jul 2009, 17:08

WhyabloodyMBA wrote:
Twice - the stuff (noun), e.g., "includes twice as many students."
Double - the number, e.g., "the number of students has doubled."
Two times - I don't know.

"Twice" is an adverb and not a noun; you can check this. What this means is that "twice" cannot be used as the object of a prepositional phrase.
Re: double vs twice - adalfu - 24 Feb 2010, 13:26

whatthehell wrote:
twice - a number and something that can be counted, e.g., "twice the sum"
double - something that cannot be counted, e.g., "double the volume"

I think you can use double for countable things: "double the number of apples", "the number of cats doubled", etc.
---
How many blades should a HAWT have?
Recently a participant asked the question '3 blades? 5 blades? 20 blades?' in the Wind Professionals Group I am a member of on LinkedIn. There are many variables that affect the selection of the number of blades, such as wind speed, TSR (Tip Speed Ratio), weight, drag, cost and so on. Commercial three-bladed turbines operate very efficiently over a wide range of wind speeds and TSRs due to their pitch adjustment mechanisms. With so many variables it is hard to answer this question easily. However, if we keep wind speed and TSR constant and do not worry about cost, this question becomes relatively easy to answer.
We know from physics that power is the work done per unit time. The unit of time being a second, the more work a turbine does in a given second, the more power we can extract from it. Let us ask ourselves the following question: what is the lowest RPM (revolutions per minute) a single-bladed turbine can make without losing its efficiency drastically? Please note the emphasis on the word lowest. It is 60 RPM, or one revolution per second. Why? Because it should sweep the whole rotor area in one second. If the turbine rotates slower than this, it will miss some air particles and its output power will be low. However, if it rotates faster than one revolution per second, it will produce more power, which is better. But faster rotating blades create more stress on the overall structure. Therefore a one-bladed turbine should make at least one revolution per second to cover the whole rotor surface and produce optimum power. However, if it rotates faster, it will produce more power.
What will the RPM of a two-bladed turbine be to produce optimum power? By the same reasoning, it would be 30 RPM, that is, half a revolution per second for the two-bladed turbine. This is because while one blade is sweeping one half of the circle area, the other blade is sweeping the other half. By the same analogy, a three-bladed turbine will make 20 RPM (one third of a revolution per second) and a four-bladed turbine will make 15 RPM (one fourth of a revolution per second). I will STRESS one more time that, if turbines rotate faster than these speeds, up to a certain point, they will be more efficient. For example, a single-bladed turbine making 2 revolutions per second will be more efficient than the same turbine making one revolution per second. This is because wherever the blade is during its revolution, wind is escaping at the opposite end; by turning faster it will catch some of that wind. But there is an upper limit on how fast a blade should turn; if it turns much faster, it will act like a solid surface and the stresses in the system will be enormous. The following video shows what happens when a three-bladed wind turbine's rotation speed exceeds its design limit for a given wind speed.
Now we know that the more blades the turbine has, the slower it can rotate and still cover the rotor sweep area. Since the rotor rotation also depends on wind speed and TSR, we now know that there is a relationship between wind speed, TSR and the number of blades. But we still don't know how many blades we should have for a given wind speed and TSR. The wind speed and TSR are the two constraints needed to answer this question. The wind speed is the biggest and most important constraint that nature puts on the wind turbine, and we have no control over it. You can build the world's most efficient and powerful turbine, but if you put it in a location where there is no wind, you will get nothing from it. The second constraint is the TSR, and PLEASE DO NOT FORGET THAT TSR IS NOT APPLICABLE TO DRAG-TYPE VAWTs. Unlike the wind speed, over which we have no control, the TSR constraint is set by the designer of the HAWT. The TSR for a given wind speed is always taken as high as possible so that we extract more power due to the higher rotation. Please note that for a 10 m/s wind speed the TSR could be 7 or 8 (I do not know the actual number), but for, say, a 30 m/s wind speed, the TSR could be 1 or less. This is because high wind puts a lot of pressure on the system. If we kept the TSR at 8, the blade tip speed would be 240 m/s. I hope you watched the wind turbine destruction video given above. That destruction happened because the TSR was too high for that wind speed. I assume that this happened because the pitch control mechanism failed to adjust the TSR to lower values. By changing the pitch angle in one direction the blade TSR can be reduced, and by turning it in the opposite direction the TSR can be increased. Now, by using the wind speed constraint put by nature and the TSR constraint set by the designer, and in light of the RPM discussion above, we can find the maximum rotor diameter for a given number of blades.
For the sake of argument, let's build a HAWT which will work with optimum performance for wind speeds between 10 m/s and 20 m/s, with the TSR being 6 (this is our choice; we could just as well choose another number such as 5 or any reasonable value). Since this turbine will work between 10 m/s and 20 m/s and does not have a pitch control mechanism, we should design the turbine for the average wind speed of 15 m/s.
The tip speed of a given blade can be found separately from the two formulas given below.

TSR = V_tip_speed / V_wind_speed = 6
V_tip_speed = 6 * V_wind_speed = 6 * 15 = 90 m/s (1)
V_tip_speed = Angular_velocity * R (2)
Angular_velocity = 2*PI*n/60 (3)

where n is the rotor speed in revolutions per minute (RPM) and R is the radius of the blade.
By using formulas (1), (2) and (3) we find that
2*PI*R*n/60 = 90 ==> n*R = 860 (approximately) (4)
From our previous discussion we know that for a single-bladed turbine the lowest RPM is 60.
If we use formula (4) with n = 60 we find that R = 14.33. This is the optimum radius for a one-bladed turbine at 15 m/s wind velocity and TSR = 6. Note that the circumference of the circle with radius 14.33 m is 90 m, which is equal to the turbine blade tip speed. This means the turbine blade returns to its starting point in one second. If we take a smaller radius for the one-bladed turbine, it will rotate faster and its efficiency will increase, but the swept area will decrease. Since the area decreases with the square of the radius, the power output will drop accordingly. If the blade gets longer than 14.33 m, the efficiency of the turbine will drop due to the blade's inability to sweep the whole rotor surface. However, the extra length added to the blade creates an annulus around the optimum circle, which increases the power output. At some point the efficiency lost by sweeping less area and the power gained from the larger annulus balance out, and increasing the length further is no longer worthwhile for a single-bladed turbine. When this point is reached, the blade number should be increased. There will always be a window where an N-bladed turbine and an (N+1)-bladed turbine perform equally well.
Let's find the optimum radius for 1-, 2-, 3- and 4-bladed HAWTs.
Using formula (4) and the minimum RPM for a given blade number, we can conclude the following.
For a one-bladed turbine the optimum radius is
60*R = 860 ==> R = 14.33 m.
For a two-bladed turbine the optimum radius is
30*R = 860 ==> R = 28.66 m.
For a three-bladed turbine the optimum radius is
20*R = 860 ==> R = 43 m.
And for a four-bladed turbine it is
15*R = 860 ==> R = 57.33 m.
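For readers who want to reproduce these numbers, here is a short Python sketch (not part of the original article) that applies formulas (1)-(4). The 60/N rule for the slowest RPM that still sweeps the whole rotor is the simplifying assumption discussed above, and the variable names are mine; because 860 is itself a rounded constant, the printed radii may differ from the values above by a few centimetres.

    import math

    TSR = 6
    V_WIND = 15.0          # m/s, design (average) wind speed
    V_TIP = TSR * V_WIND   # 90 m/s, from formula (1)

    for blades in (1, 2, 3, 4):
        rpm = 60.0 / blades              # slowest RPM that still sweeps the whole rotor
        omega = 2 * math.pi * rpm / 60   # angular velocity in rad/s, formula (3)
        radius = V_TIP / omega           # optimum radius from formula (2)
        print(f"{blades} blade(s): {rpm:4.0f} RPM -> R = {radius:5.2f} m")

Running it gives about 14.32 m, 28.65 m, 42.97 m and 57.30 m, matching the rounded values in the list above.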
To demonstrate this concept, the following flash animation for 1-, 2-, 3- and 4-bladed HAWTs has been made. Please note that the interval from the time you press the "Rotate" button to the time the rotation stops represents 1 second. Note that during this time interval the 1-bladed turbine makes 1 revolution, the 2-bladed turbine makes half a revolution, the 3-bladed turbine makes one third of a revolution and the 4-bladed turbine makes a quarter of a revolution. In each case, however, the whole rotor area is swept by the blades. It is also important to note that the tip of every blade travels a distance of 90 m, because V_tip_speed is 90 m/s and the time interval is 1 second. The circumferences of the swept areas are 90, 180, 270 and 360 m respectively for the 1-, 2-, 3- and 4-bladed turbines in this design.
Please note that although it was said that for a 4-bladed turbine the optimum radius should be 57.33 m, we may see a 3-bladed turbine operating in this range. This is because, while the 3-bladed turbine loses some power by sweeping less of the rotor area, it gains additional power from the increased annulus around its optimum radius. Since the annulus area grows with the square of the radius, the gain will be much higher than the loss. If the wind speed, TSR, air density and efficiency are kept constant, the only variable that determines the output of the turbine is the area that extracts power from the wind. Note that I did not say the rotor sweep area. Assume that we have a 3-bladed turbine operating at its optimum radius of 43 m. The power output of this turbine will be C*43^2, where C is a constant that includes the air density, PI and the cube of the wind velocity, all of which we assume to be fixed.
Therefore the power output of the three-bladed turbine at its optimum radius will be
P = 43*43*C = 1849*C Hp.
Now assume that we use the same 3-bladed turbine at the optimum radius of the 4-bladed turbine, which is 57.33 m.
What will be the power output? Is it
P = 57.33*57.33*C = 3286.73*C Hp ?
We think that it will be
P = 3*3286.73*C/4 = 2465*C Hp.
Why? Because the blades are not sweeping the whole rotor area. Please look at the following animation and figure out what the output of the one- and two-bladed turbines will be if their rotor radius is taken to be 57.33 m.
Hint: For 1 blade the power output will be 821.7*C Hp and for the two-bladed turbine it will be 1643.4*C Hp.
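The hint can be checked with the article's own crude "swept fraction" reasoning. The Python sketch below is only an illustration of that reasoning, not an aerodynamic model; the helper name and the cap at full coverage are my assumptions. With the tip speed fixed at 90 m/s, a rotor of radius R turns V_tip/(2*pi*R) revolutions per second, so N blades cover at most N times that fraction of the disc each second.

    import math

    V_TIP = 90.0  # m/s (TSR = 6, wind speed = 15 m/s)

    def relative_power(blades, radius, c=1.0):
        # fraction of the rotor disc swept per second, capped at 1 (full coverage)
        coverage = min(1.0, blades * V_TIP / (2 * math.pi * radius))
        return coverage * c * radius ** 2

    for blades in (1, 2, 3, 4):
        print(blades, "blade(s) at R = 57.33 m:",
              round(relative_power(blades, 57.33), 1), "* C Hp")

The printed values land within a few tenths of a percent of the hint's 821.7 and 1643.4; the small gap comes from the rounding already present in the constant 860 and in the radius 57.33.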
What will happen if we keep the TSR at 6 and change the wind speed to 30 m/s? Since the wind speed has doubled and the TSR is kept constant, the tip speed of the blades will be 180 m/s rather than 90 m/s. This causes the rotation speed (angular velocity) to double too. The following flash animation demonstrates what happens when the rotation speed is increased.
Note that even at this doubled speed the one-bladed turbine is not efficient when its rotor radius is taken as 57.33 m: it sweeps half of the rotor area but misses the air in the other half. The two-bladed turbine performs at its optimum level, since all of the rotor area is swept. For the three-bladed turbine, the blades sweep 1.5 times the rotor area per second. You can observe the blade colors (shown on the yellow blade): dark where a blade passes twice and light where a blade passes only once in one second. For the four-bladed turbine, two blades pass any given point per second, so the blades sweep the whole rotor area twice. Here there are only darker regions, since blades pass every location twice.
It should be emphasized that the discussion given above is not a representation of real-world practice. It is a very crude approximation of how many blades we might need for a given set of design considerations. The turbines we see far away, lumbering slowly, are technological marvels and very sophisticated machines, with variable pitch and computer control mechanisms. They work very efficiently over a wide range of wind speeds. Nevertheless, the approximation given above offers a logical explanation of what the appropriate blade number is for a given turbine.
perplexus.info :: Numbers : Animal Magic
Can you find one five figure number, with distinct digits between 1 and 9, which satisfies all four of the following equations?
SNAKE * 2 = MERES
COYPU * 8 = POODLE
TIGER * 13 = BEWAIL
OKAPI * 14 = HIJACK
Repeated letters within an equation refer to the same digit. The same letter appearing in different equations does not necessarily refer to the same digit.
The puzzle can be solved from just two equations. The other two are for fun/reference.
This is similar to Can't see the wood for the trees
Patent US7173633 - Method and system for inversion of detail-in-context presentations
This application is a continuation of U.S. patent application Ser. No. 09/932,088, filed Aug. 20, 2001, now U.S. Pat. No. 6,727,910, and incorporated herein by reference, which claims priority from
Canadian Patent Application Nos. 2,328,794 and 2,341,965, filed Dec. 19, 2000 and Mar. 23, 2001, respectively, and incorporated herein by reference.
The invention relates to the field of computer graphics processing. More specifically, the invention relates to detail-in-context presentations and the inversion of distortions in detail-in-context presentations.
Since the advent of video display terminals as the primary interface to the computer, making the best use of the available screen space has been a fundamental issue in user interface design. This
issue has been referred to as the “screen real estate problem”. The necessity for effective solutions to this problem is growing as the ability to produce and store visual information in great
volumes is outstripping the rate at which display technology is advancing. One solution to the screen real estate problem is the use of detail-in-context presentation techniques. Detail-in-context
presentations are useful for displaying large amounts of information on limited-size computer screens.
Now, in the detail-in-context discourse, differentiation is often made between the terms “representation” and “presentation”. A representation is a formal system, or mapping, for specifying raw
information or data that is stored in a computer or data processing system. For example, a digital map of a city is a representation of raw data including street names and the relative geographic
location of streets and utilities. Such a representation may be displayed visually on computer screen or printed on paper. On the other hand, a presentation is a spatial organization of a given
representation that is appropriate for the task at hand. Thus, a presentation of a representation organizes such things as the point of view and the relative emphasis of different parts or regions of
the representation. For example, a digital map of a city may be presented with a work route magnified to reveal street names. Thus, detail-in-context presentations allow for magnification of a
particular region of interest (the “focal region”) in a representation while preserving visibility of the surrounding representation. In other words, in detail-in-context presentations focal regions
are presented with an increased level of detail without the removal of contextual information from the original representation. In general, a detail-in-context presentation may be considered as a
distorted view (or distortion) of a portion of the original representation where the distortion is the result of the application of a “lens” like distortion function to the original representation.
For reference, a detailed review of various detail-in-context presentation techniques may be found in Carpendale's A Framework for Elastic Presentation Space (Carpendale, Marianne S. T., A Framework
for Elastic Presentation Space (Burnaby, British Columbia: Simon Fraser University, 1999)).
One shortcoming of the prior art detail-in-context presentation methods is their inability to effectively invert distortions in a detail-in-context presentation back to an original or undistorted
presentation of the representation. The ability to perform such an inversion or inverse mapping would be of great value in extending the capabilities of detail-in-context presentations to
applications such as image editing. For example, the editing of a focal region in a representation may be facilitated more easily in a distorted presentation rather than in an undistorted presentation.
The ability to perform an inverse mapping is also necessary for applications involving the subsequent distortion of a previously distorted presentation. In other words, inversion would allow a
presentation system user to accurately position or reposition one or more distortion producing “lenses” within a given presentation that has already been distorted. Hence, the distorted presentation
ultimately viewed by the user may be the end result of a series of distortion steps wherein the individual distortion steps are not known or are difficult to invert. In fact, the need for inversion
arises whenever it is necessary to position a lens based on observed coordinates in the distorted presentation. This is so because the lens may be directly generated only from coordinate information
in the undistorted presentation. As such, an inversion is necessary to produce the source coordinates for generating the lens.
Moreover, inversion provides a means to calculate real distances in an undistorted presentation based on locations within one or more lenses in a corresponding distorted presentation. For example, if
a user wants to know the distance in the undistorted presentation between the focal points of two separate lenses in a corresponding distorted presentation of a map, such as the distance between a
current location and a destination location, this distance can be computed via inversions of the focal points of these lenses.
Several systems are known which provide techniques for converting distorted or warped three-dimensional (3D) images into corrected, undistorted, or dewarped two-dimensional (2D) images. In U.S. Pat.
No. 6,005,611 (Gullichsen, et al.), a system is disclosed wherein a distorted image captured by a wide-angle or fisheye lens is corrected through the use of a specially generated polynomial transform
function that maps points from the distorted image into rectangular points. A more complex transform function is described in U.S. Pat. No. 5,185,667 (Zimmerman). In U.S. Pat. No. 5,329,310
(Liljegern, et al.) a similar objective is achieved in the context of motion picture images through the use of multiple lens (camera and projector) transfer functions. The result being the ability to
project an image, taken from a particular point of view, onto a screen, especially a curved wide angle screen, from a different point of view, to be viewed from the original point of view, without
distortion. In U.S. Pat. No. 5,175,808 (Sayre), a method and apparatus for non-affine image warping is disclosed that uses displacement tables to represent the movement of each pixel from an original
location in a source image to a new location in a warped destination image. Through these displacement tables and a resampling method, the need for inversion of the underlying transform equation that
specify the distortion or warp is eliminated. Finally, in U.S. Pat. No. 4,985,849 (Hideaki), look-up tables are used in combination with the forward evaluation of the transform equation in order to
avoid the step of transform equation inversion. However, none of these systems disclose a method and system for inverting distortions in a manner that is optimized for detail-in-context
A need therefore exists for a method and system that will allow for the effective inversion of distortions in detail-in-context presentations. Therefore, it is an object of the present invention to
obviate or mitigate at least some of the above mentioned disadvantages.
The invention provides a method and system for the inversion of distortions in detail-in-context presentations. According to one aspect of the invention, a method is provided that allows a distortion
in a detail-in-context presentation to be inverted. The method comprises the steps of locating a first approximation point in an undistorted surface for the inversion of a point in a distorted
surface, determining if the approximation point is acceptable as an inversion of the point in the distorted surface, locating a next approximation point in the undistorted surface if the first
approximation point is not acceptable, and repeating this process until an acceptable approximation point is located for the inversion of the point in the distorted surface. According to another
aspect of the invention, the use of this method to obtain the distance between points on an undistorted surface from the relative distances between corresponding points on a plurality of distorted
surfaces in a detail-in-context presentation is provided. According to another aspect of the invention, a data processing system is provided. This data processing system has stored therein data
representing sequences of instructions which when executed cause the above-described method to be performed. The data processing system generally has an input device, a central processing unit,
memory, and a display.
The invention may best be understood by referring to the following description and accompanying drawings which illustrate the invention. In the drawings:
FIG. 1 is a cross-sectional view of a presentation illustrating a point X on a distorted surface and a first approximation point P[o ]for its inversion back to an original basal plane in accordance
with the preferred embodiment;
FIG. 2 is a cross-sectional view of a presentation illustrating the displacement of a first approximation point P[o ]onto a distorted surface by application of a distortion function D resulting in a
point P[o] ^D in accordance with the preferred embodiment;
FIG. 3 is a cross-sectional view of a presentation illustrating the projection of a point P[o] ^D onto a line RVP-X resulting in a point P[o] ^P in accordance with the preferred embodiment;
FIG. 4 is a cross-sectional view of a presentation illustrating the projection of a point P[o] ^P onto a basal plane resulting in a second approximation point P[1 ]and a corresponding displaced point
P[1] ^D in accordance with the preferred embodiment;
FIG. 5 is a cross-sectional view of a presentation illustrating the projection of a point P[1] ^D onto a line RVP-X resulting in a point P[1] ^P which is then projected onto a basal plane resulting
in a third approximation point P[2 ]in accordance with the preferred embodiment;
FIG. 6 is a block diagram of a data processing system in accordance with the preferred embodiment;
FIG. 7 is a flow chart illustrating an iterative method for inversion in accordance with the preferred embodiment.
In the following description, numerous specific details are set forth to provide a thorough understanding of the invention. However, it is understood that the invention may be practiced without these
specific details. In other instances, well-known software, circuits, structures and techniques have not been described or shown in detail in order not to obscure the invention. The term data
processing system is used herein to refer to any machine for processing data, including the computer systems and network arrangements described herein. The term “Elastic Presentation Space” or “EPS”
is used herein to refer to techniques that allow for the adjustment of a visual presentation without interfering with the information content of the representation. The adjective “elastic” is
included in the term as it implies the capability of stretching and deformation and subsequent return to an original shape. EPS graphics technology is described by Carpendale in A Framework for
Elastic Presentation Space (Carpendale, Marianne S. T., A Framework for Elastic Presentation Space (Burnaby, British Columbia: Simon Fraser University, 1999)) which is incorporated herein by
reference. Basically, in EPS graphics technology, a two-dimensional visual representation is placed onto a surface; this surface is placed in three-dimensional space; the surface, containing the
representation, is viewed through perspective projection; and the surface is manipulated to effect the reorganization of image details. The presentation transformation is separated into two steps:
surface manipulation or distortion and perspective projection.
In general, the invention described herein provides a method and system for the inversion of distortions in detail-in-context presentations. The method and system described is applicable to
detail-in-context navigation within computer graphics processing systems including EPS graphics technology and to computer graphics processing systems in general.
According to one aspect of the invention, a method is described that allows a distortion in a detail-in-context presentation to be inverted. The method comprises the steps of locating a first
approximation point in an undistorted surface for the inversion of a point in a distorted surface, determining if the approximation point is acceptable as an inversion of the point in the distorted
surface, locating a next approximation point in the undistorted surface if the first approximation point is not acceptable, and repeating this process until an acceptable approximation point is
located for the inversion of the point in the distorted surface.
According to another aspect of the invention, the use of this method to obtain the distance between points on an undistorted surface from the relative distances between corresponding points on a
plurality of distorted surfaces in a detail-in-context presentation is described.
According to another aspect of the invention, a data processing system is described. This data processing system has stored therein data representing sequences of instructions which when executed
cause the above-described method to be performed. The data processing system generally has an input device, a central processing unit, memory device, and a display.
Referring to FIG. 6, there is shown a block diagram of an exemplary data processing system 600 according to one embodiment of the invention. The data processing system is suitable for implementing
EPS graphics technology. The data processing system 600 includes an input device 610, a central processing unit or CPU 620, memory 630, and a display 640. The input device 610 may be a keyboard,
mouse, trackball, or similar device. The CPU 620 may include dedicated coprocessors and memory devices. The memory 630 may include RAM, ROM, databases, or disk devices (e.g., a computer program
product). And, the display 640 may include a computer screen or terminal device. The data processing system 600 has stored therein data representing sequences of instructions (e.g., code) which when
executed cause the method described herein to be performed. Of course, the data processing system 600 may contain additional software and hardware a description of which is not necessary for
understanding the invention.
Referring to FIGS. 1 through 7 the method of one embodiment of the invention will now be described. With this method, a point in an undistorted presentation or data space is found, which when
distorted, yields a specified point in a distorted presentation or data space. Then, if desired, the inversion of the entire distorted presentation or data space to an original undistorted
presentation or data space may be obtained as the inverse mapping of the locus of points in the distorted presentation or data space. The method is iterative and makes use of the distortion process
itself as a component in an approximation technique for computing the inverse of the distortion.
Referring to FIG. 1, there is shown a cross-sectional view of a presentation 100 in accordance with EPS graphics technology and in accordance with the preferred embodiment. EPS graphics technology
employs viewer-aligned perspective projections to produce detail-in-context presentations in a reference view plane 101 which may be viewed on a display 640. Undistorted two-dimensional (2D) data
points are located in a basal plane 110 of a three-dimensional (3D) perspective viewing volume 120 which is defined by extreme rays 121 and 122 and the basal plane 110. A reference viewpoint (RVP)
140 is located above the centre point of the basal plane 110 and reference view plane 101. Points in the basal plane 110 are displaced upward onto a distorted surface 130 which is defined by a
general three-dimensional distortion function D. The direction of the viewer-aligned perspective projection corresponding to the distorted surface 130 is indicated by the line F[o]-F 131 drawn from a
point F[o ] 132 in the basal plane 110 through the point F 133 which corresponds to the focus or focal region of the distorted surface 130. The method of the present invention locates a point P[i ]in
the basal plane 110 that corresponds to a point X 150 on the distorted surface 130 through a series of steps, involving iteration and approximation, as follows. Successive approximations of the point
P[i ]are represented by the subscript i where i≧0.
Referring to FIG. 7, where there is shown a flow chart 700 illustrating the method of one embodiment of the invention, and again referring to FIG. 1, at step 1, an acceptable tolerance δ is selected
for the magnitude of the difference between the point X 150 on the distorted surface 130 and the point P[i] ^D, where P[i] ^D represents the result of the mapping of a point P[i ]in the basal plane
110 onto the distorted surface 130 through the function D. The value of δ is application dependent. For example, an acceptable δ could be less than half the width of a pixel for a typical display
surface such as a monitor 640. In general, successive approximations of the point P[i ]will continue until the magnitude of the difference between the point X 150 and the point P[i] ^D is less than
δ, that is, until |P[i] ^D−X|<δ.
At step 2, a first approximation point P[o ] 160 for the inversion of X 150 is located at the intersection point in the basal plane 110 of a line RVP-X 170 drawn through RVP 140, X 150, and the basal
plane 110. Here, i=0.
Referring to FIG. 2, at step 3, point P[o ] 160 is displaced onto the distorted surface 130 by the application of D. The resultant point on the distorted surface 130 is represented by P[o] ^D 210.
At step 4, the magnitude of the difference between the point X 150 and the point P[o] ^D 210 is calculated. If |P[o] ^D−X|<δ, then an acceptable value for the inversion of the point X 150 will have
been found and the method is complete for the point X 150. The method may then proceed with the inversion of another point on the distorted surface 130. If |P[o] ^D−X|>δ, then the method will
continue and will generate a next approximation for the inversion of the point X 150.
Referring to FIG. 3, at step 5, the point P[o] ^D 210 is projected onto the line RVP-X 170 to locate the point P[o] ^P 310 which is the closest point to P[o] ^D 210 on the line RVP-X 170.
Referring to FIG. 4, at step 6, the point P[o] ^P 310 is projected onto the basal plane 110 to produce the next approximation P[1 ] 410 for the inversion of the point X 150. This projection is made
in the direction parallel to a line F-F[o], that is, in the direction from point F 133 to point F[o ] 132 parallel to the line F[o]-F 131. Alternately, this direction can also be established by
applying the distortion function D to any point in the basal plane within the lens extent (as defined by the distortion function D). The direction of the displacement of such a point by the
distortion function D will be antiparallel to the line F-F[o]. The point P[1 ] 410 is thus located on the basal plane 110 at the point of intersection of the basal plane 110 and a line 420 drawn
parallel to the line F[o]-F 131 and passing through the point P[o] ^P 310. Now, i=1 and a second iteration may begin from step 3.
Referring to FIG. 5, a second iteration is illustrated resulting in point P[2 ] 510 as an approximation of the inversion of the point X 150.
Again referring to FIG. 1, in certain cases such as folding, which is the lateral displacement of a focal region 133 through shearing of the viewer-aligned vector defining the direction of distortion
131, it is possible for successive approximations for P[i ]to diverge. This may be caused, for example, by a fold in which an undistorted area of the basal plane 110 is hidden by a portion of the
distorted surface when viewed from RVP 140 such that a line drawn through RVP 140, the distorted surface, and the basal plane 110 intersects the distorted surface at multiple points. In these
circumstances, a bisection of approximation points P[i ]may be used to search for the desired intersection of RVP-X 170 with the basal plane 110.
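As a concrete illustration of steps 1 through 6, the following Python sketch implements the iteration for the simplest special case, in which the displacement direction F[o]-F is vertical, so the step-6 projection back to the basal plane simply discards the height. The lens function D, the viewpoint, the tolerance and all names are invented for the example and are not taken from the patent; the patent's general method projects along an arbitrary F[o]-F direction and falls back to bisection when folding makes the iteration diverge.

    import numpy as np

    RVP = np.array([0.0, 0.0, 10.0])   # reference viewpoint above the basal plane z = 0

    def D(p):
        # illustrative "lens": a Gaussian bump of height 3 centred at (1, 1)
        height = 3.0 * np.exp(-((p[0] - 1.0) ** 2 + (p[1] - 1.0) ** 2))
        return np.array([p[0], p[1], height])

    def invert(X, delta=1e-6, max_iter=100):
        # step 2: first approximation = intersection of the line RVP-X with the basal plane
        t = RVP[2] / (RVP[2] - X[2])
        P = RVP + t * (X - RVP)
        P[2] = 0.0
        direction = (X - RVP) / np.linalg.norm(X - RVP)
        for _ in range(max_iter):
            PD = D(P)                                  # step 3: displace onto the distorted surface
            if np.linalg.norm(PD - X) < delta:         # step 4: accept if within tolerance
                return P
            PP = RVP + np.dot(PD - RVP, direction) * direction   # step 5: closest point on line RVP-X
            P = np.array([PP[0], PP[1], 0.0])          # step 6: project down (vertical Fo-F case)
        return P

    X = D(np.array([1.3, 0.8, 0.0]))    # a point known to lie on the distorted surface
    print(invert(X))                    # recovers approximately (1.3, 0.8, 0)

Inverting two such points and taking the magnitude of their difference gives the undistorted distance discussed in the next paragraph.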
The method of the embodiment of the invention described above may be used to obtain the distance between points on an undistorted surface from the relative distances between corresponding points on
one or more distorted surfaces in a detail-in-context presentation. For example, if one point is selected on a first distorted surface and a second point is selected on a second distorted surface,
both surfaces being contained in a detail-in-context presentation, then the distance between these two points on the undistorted surface may be found by first inverting each point on each distorted
surface, using the method of the invention, to obtain corresponding points on the undistorted surface. Then, the required distance may be calculated as the magnitude of the difference between the two
inverted points.
To reiterate and expand, the method and system of the present invention includes the following unique features and advantages: it facilitates the location of a point in an undistorted presentation
which, when distorted, yields a specified point in a distorted presentation and then, from this point, the inversion of the entire distorted presentation back to the original undistorted presentation
may be accomplished through the inverse mapping of the locus of the points in the distorted presentation; it employs an iterative approach to inversion to facilitate general distortion functions; in
other words, having knowledge of the location of the point to be inverted in the distorted presentation and through an iterative process, it computes a series of points in the undistorted
presentation space until a point is found, which when displaced by a general distortion function, yields a point that is coincident with the point to be inverted in the distorted presentation; it is
not specific to a particular distortion function or transform equation; it does not require the maintenance of a copy of the undistorted presentation in computer memory; it does not use look-up
tables and hence does not put unacceptable demands on computing system resources, including memory, especially for undistorted presentations that are large in size; and, it may be used to obtain the
distance between points in an undistorted presentation from the relative distances between corresponding points on one or more distorted surfaces in a detail-in-context presentation.
Although the invention has been described with reference to certain specific embodiments, various modifications thereof will be apparent to those skilled in the art without departing from the spirit
and scope of the invention as outlined in the claims appended hereto.
numpy.polynomial.hermite_e.hermeder(c, m=1, scl=1, axis=0)
Differentiate a Hermite_e series.
Returns the series coefficients c differentiated m times along axis. At each iteration the result is multiplied by scl (the scaling factor is for use in a linear change of variable). The argument
c is an array of coefficients from low to high degree along each axis, e.g., [1,2,3] represents the series 1*He_0 + 2*He_1 + 3*He_2 while [[1,2],[1,2]] represents 1*He_0(x)*He_0(y) + 1*He_1(x)
*He_0(y) + 2*He_0(x)*He_1(y) + 2*He_1(x)*He_1(y) if axis=0 is x and axis=1 is y.
Parameters:
    c : array_like
        Array of Hermite_e series coefficients. If c is multidimensional the different axis correspond to different variables with the degree in each axis given by the corresponding index.
    m : int, optional
        Number of derivatives taken, must be non-negative. (Default: 1)
    scl : scalar, optional
        Each differentiation is multiplied by scl. The end result is multiplication by scl**m. This is for use in a linear change of variable. (Default: 1)
    axis : int, optional
        Axis over which the derivative is taken. (Default: 0). New in version 1.7.0.

Returns:
    der : ndarray
        Hermite series of the derivative.
In general, the result of differentiating a Hermite series does not resemble the same operation on a power series. Thus the result of this function may be “unintuitive,” albeit correct; see
Examples section below.
>>> from numpy.polynomial.hermite_e import hermeder
>>> hermeder([ 1., 1., 1., 1.])
array([ 1., 2., 3.])
>>> hermeder([-0.25, 1., 1./2., 1./3., 1./4 ], m=2)
array([ 1., 2., 3.])
MathGroup Archive: October 2006 [00392]
Re: Convert expression to polynomial
• To: mathgroup at smc.vnet.net
• Subject: [mg70442] Re: Convert expression to polynomial
• From: Bill Rowe <readnewsciv at sbcglobal.net>
• Date: Mon, 16 Oct 2006 02:35:51 -0400 (EDT)
On 10/15/06 at 12:18 AM, diana.mecum at gmail.com (Diana) wrote:
>I am generating a list of partial sums which look like:
>x = 1 + (-t + t^2)^(-1) + 1/((-t + t^2)^2*(-t + t^4)) + 1/((-t +
>t^2)^4*(-t + t^4)^2*(-t + t^8))
>I then try to calculate the PolynomialQuotient[Numerator[x],
>Denominator[x], t], etc., and I get an error saying that x is not a
>polynomial function.
>I tried to find the command to put everything over a common
>denominator, but was unable to find this. Can someone help?
I think what you want is Together i.e., the common denominator
would be
(t - 1)^7*t^7*(t^2 + t + 1)^2*(t^6 + t^5 + t^4 + t^3 +
t^2 + t + 1)
To reply via email subtract one hundred and four
MathGroup Archive: July 2001 [00434]
Re: Algorithm
• To: mathgroup at smc.vnet.net
• Subject: [mg30087] Re: [mg30072] Algorithm
• From: Daniel Lichtblau <danl at wolfram.com>
• Date: Fri, 27 Jul 2001 03:52:22 -0400 (EDT)
• References: <200107260520.BAA01183@smc.vnet.net>
• Sender: owner-wri-mathgroup at wolfram.com
Yannis.Paraskevopoulos at ubsw.com wrote:
> Hi all,
> lately I've been trying to translate to mathematica the following
> algorithm and it seems that everything is horribly slow and frustrating
> to a degree that I think that your kind help will be necessary.
> The algorithm goes like that:
> 1)define t_i=2*i/100 ,i=1,2,...,500
> and X_t0=X
> 2)for j=1,2,...,10,000
> Do
> a) generate x_i from N(0.05,1),p_i from N(0,1) and
> let y_i be 0, with probability 0.9 and 1 with probability
> 0.1
> b) calculate ln(X_t(i))=ln(X_t(i-1))+(x_i)+(y_i)*(p_i)
> c) for the smallest i<=500 such that ln(X_t(i))<=0 set
> W_j=1-0.4*X_t(i). Otherwise W_j=0
> 3) calculate final=1-Sum[W_j/10,000, {j,1,10,000}]
> any help will be much appreciated.
> best regards
> yannis
> [...]
I think you may be having problems for a few reasons. First, you may not
be familiar with some relevant Mathematica standard add-on package
functions that can handle the generation according to distributions.
Second, your specification of the problem could stand some refinement.
For example, so far as I can tell, the "t" array (time steps?) is really
not needed; we can work with a common index. Moreover there is an
initial value, "X", not provided. Also the phrase "Otherwise W_j=0" is
unclear; I am guessing you meant "...if no such i exists"?
I gave an initial value of 0 for the logarithm of what you refer to as
X_t0 (because it is primarily the logs we work with anyway). Making
whatever assumptions seem reasonable to interpret the algorithm
description, I get the code below. It ran in around 12 minutes on a 300
MHz machine, in version 4.1 of Mathematica.
In[1]:= <<Statistics`
In[2]:= len = 500;
In[3]:= biglen = 10000;
In[4]:= initlogbigxx = 0;
In[5]:= Timing[
          ww = Table[
            xx = RandomArray[NormalDistribution[.05,1], {len}];
            pp = RandomArray[NormalDistribution[0,1], {len}];
            yy = RandomArray[BernoulliDistribution[.1], {len}];
            prev = initlogbigxx;
            logbigxx = Table[
              prev = prev + xx[[i]] + yy[[i]]*pp[[i]],
              {i, len}];
            smallindx = Position[logbigxx, _?Negative, 1, 1];
            If[smallindx === {}, 0, 1 - 0.4*Exp[logbigxx[[smallindx[[1,1]]]]]],
            {biglen}];
        ]
Out[5]= {716.35 Second, Null}
In[7]:= InputForm[Take[ww,20]]
{0.9179154777814722, 0.8966881389164947, 0.9604955637123224,
0.6978081187113281, 0.6387668067441197, 0.9383270710223752, 0,
0.8077776474444456, 0.9758281835484863, 0.6906528869276962,
0.8691539653461342, 0.7665120788037165, 0.9481427893555122, 0,
0.6751854977258047, 0.6533416011239968, 0.70666992443381,
0.9249455563234696, 0.6909018312677464, 0.7051942355651819}
This 12 minute run computed 2000 values each time through a loop of
10000 iterations. There may be ways to make it faster (e.g. if the first
i with log(Xi) tends to be small, generate xx et al values one at a
time until we find our negative value). That said, considering that a lot of the computations involve generation of random values, this is fairly good speed. One sees the occasional response to the effect that
of the computations involve generation of random values, this is fairly
good speed. One sees the occasional response to the effect that
Mathematica cannot do effective numerical simulations and dedicated
numerical software must instead be used. For a problem such as the one
above I would not take such a suggestion very seriously.
Daniel Lichtblau
Wolfram Research
Risk Free Option Trading
Risk Free Option Trading Using Arbitrage Techniques
People are always wondering if there is a way that you can engage in risk free option trading and get away with it. Is it possible that, once you enter your position, there is 100 percent certainty
that you will make a profit?
The answer is 'yes'.
In this article, we will discuss how risk free option trading works, but need to preface our remarks by saying that we assume you understand how stock options work and in particular, concepts such as
'in the money' 'out of the money' etc ... 'time decay' 'strike price' 'assignment at expiry' and 'expiry date'. If you're a bit more advanced and know what 'implied volatility' means, it will be a
bonus but not essential.
If you don't understand the above concepts, you need do some basic reading first, then return and have a look at this.
How Risk Free Option Trading is Structured
You can do it one of two ways. The first way will require a larger amount of capital and therefore, your return on risk will be smaller. The second way achieves the same result but with less capital
Let's discuss the first way.
You have probably heard of a 'covered call'. This is where you purchase shares and simultaneously write (or sell) call options over the same number of shares. On the USA markets for example, it would
be multiples of 100 shares.
The vital part of this strategy is that the written call options are "in the money". You want the current market price of the stock to be above the strike price of the call options at the time of entering the trade.
The next thing you do, is buy the same amount of put options, at the same strike price and expiration date as your 'sold' call options. Your put options will be 'out of the money' and will therefore
be cheaper than the written call options. The difference between option premiums from your sold and bought positions will produce a credit to your account.
Now here's the important part.
You need to ensure that the difference between the current market price of the underlying stock and the strike price of the bought and sold options, when you do this, is less than the credit you have
received from the call/put setup above. Don't forget to take brokerage costs into account, which would normally be around $90 to enter and exit the trade.
This difference is your locked in profit. Whatever happens from now on, you cannot lose money. Let's take an example to illustrate the point.
A Risk Free Option Trading Example
The market price of XYZ is currently $61.35. You buy 1,000 shares and simultaneously sell 10 x $60 call option contracts, receiving a premium of $4.90 per share, or $4,900 in total. You also buy 10 x $60 put option contracts at $3.10 per share (they are 'out of the money' and therefore cheaper), which costs you $3,100. The overall credit is $1,800.
The difference in option premiums above is $1.80 per share, but the difference between the $61.35 market price and the $60.00 strike price is only $1.35. The remaining 45 cents per share ($450 over 1,000 shares) is therefore immediate locked-in profit, no matter what happens after that.
Let's say that by expiry date, the share price has risen to $65. Your bought put options will expire worthless and your sold call options will be $5,000 in loss. But your purchased shares will be
$3,650 in profit. The difference between these two is $1,350 loss. But you have received $1,800 credit from your option strategy so you make an overall $450 profit, less brokerage costs.
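To see that the $450 really is locked in wherever the stock finishes, you can tabulate the payoff at expiry. The short Python sketch below simply re-does the arithmetic of this example (the helper name is mine); it ignores brokerage, interest, dividends and early assignment, and the figures are the example's, not advice.

    shares      = 1000
    share_price = 61.35
    strike      = 60.0
    net_credit  = (4.90 - 3.10) * shares          # $1,800 from selling the calls and buying the puts

    def profit_at_expiry(expiry_price):
        stock_pnl = (expiry_price - share_price) * shares
        call_pnl  = -max(expiry_price - strike, 0.0) * shares   # short calls
        put_pnl   =  max(strike - expiry_price, 0.0) * shares   # long puts
        return stock_pnl + call_pnl + put_pnl + net_credit

    for price in (40, 55, 60, 65, 80):
        print(price, round(profit_at_expiry(price), 2))   # $450 at every expiry price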
Call options by nature, are normally more expensive than put options, because their upside potential intrinsic value is unlimited, whereas the intrinsic value in put options can only be the
difference between the current share price and zero. But if you understand something about implied volatility in option pricing, you will understand that this may not always be the case.
Risk Free Option Trading - The Cheaper Way
Looking at the above, you're probably thinking that $61,350 is a lot of money to invest in shares for a tiny $450 profit at option expiration date. You would be right of course - it's only about 1
percent return on investment. But what if you could achieve the same result without such a large outlay? Would that be more attractive?
Remember, the only reason you bought shares in the above example, was to hedge against the loss on your sold call options. What if there was another way you could achieve the same result, but with
only about 5 percent of the outlay?
There are other derivative type instruments you can use to hedge your position instead of buying the shares, including futures and CFDs. For our purpose, let's illustrate with contracts for
difference (CFDs). CFDs don't have fixed 'strike prices' like option contracts, so instead of buying 1,000 shares, you can take advantage of this by going long 1,000 XYZ contracts for difference at the current market price of $61.35.
You would do exactly the same as outlined above, except that instead of needing $61,350 in your account to buy the shares, you only outlay 5 percent of the overall share value, which is $3,068 plus
brokerage, plus interest on the remaining 95 percent ($58,282) for the duration of the trade. If the share price rises to $65 your CFDs would be $3,650 in profit, replacing the share profit mentioned earlier. A guaranteed profit of around $400 after brokerage on an outlay of $3,068 is roughly a 13 percent return on that outlay, per option expiry cycle, totally risk free.
Now that's more like it!
Doing it in Reverse
Why limit yourself to selling calls and buying puts? You may be able to reverse the above structure, given option implied volatility at times. Under these conditions, why enter a CFD contract when
you can simply sell short 1,000 XYZ shares at $58.65 and collect $58,650 plus interest for the duration of the option period instead, then offset it with your sold $60 ITM put option and hedge it
with your bought OTM $60 call option. Put options will often become more expensive than calls, due to increased implied volatility, at the top of a trading range when a reversal is expected.
Final Thoughts
For the above risk free option trading strategy to work, you may have to do some homework, including researching broker fees for the above transactions and constructing a spreadsheet that will allow
you to quickly analyse the return on outlay, after brokerage.
For the cheaper strategy, using CFDs, you will want to ensure that your broker will accept the long CFD contract as an acceptable hedge against your 'naked' sold call options. In other words, a
broker who only provides option trading services may not recognize your CFDs in another broker account, so you may want to find one broker who offers both. Some CFD brokers such as IG Markets,
include both CFDs and options on indexes.
Finally, always always know what your broker fees are for the above, at entry and option expiry. They will be critical in determining how many option contracts you need to enter to make a profit.
If this all seems a bit too complicated, there is another strategy aptly named 'Victory Spreads' you may like to consider. These are a variation of the back-ratio spread, but with unlimited upside
profit potential and virtually zero downside risk - a truly "set and forget" type of option trade.
Math Forum: Problem of the Week
Please keep in mind that this is a research project, and there may sometimes be glitches with the interactive software. Please let us know of any problems you encounter, and include the computer
operating system, the browser and version you're using, and what kind of connection you have (dial-up modem, T1, cable).
In the Fractris game, you will be trying to finish as many rows as possible using different combinations of fractions. The computer will start you off with a random fraction and then it will be
up to you to add your fractions to fill up the row exactly to score points. You will gain more points if you can avoid using the same fraction twice.
To start playing, click on the Run button. You will see a block fall immediately on the left of the grid. To add to that block, select a fraction on the right side by clicking on it. You should
then see a block fall into place. Keep adding blocks until you have filled the row. If you fill your row up without using any fraction more than once, it will disappear so you can start a new
row. Otherwise, your row will remain and you will start on the next row up. You'll have 250 seconds to finish as many rows as possible!
To play the game again, press the Reset button, then press the Run button.
1. If the computer sends down a 1/3 block, how can you finish the row with the fewest number of blocks and without using the same size block twice?
2. If the computer sent down 1/5, would you be able to fill the row? If so, how could you do it with the fewest blocks? If not, explain why not and tell how close you could get to completing the row.
3. What do all the fractions in the Fractris game (1/2, 1/3, 1/4, 1/6, 1/12, 5/12) have in common?
Bonus: What are all the different combinations of the fractions 1/2, 1/3, 1/4, 1/6, 1/12, and 5/12 that will sum to 1 without using any fraction twice? Explain how you know that you have found
all the ways.
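One way to double-check your list for the bonus question, after working it out by hand, is to have a computer try every combination of the six fractions with exact arithmetic. The short Python sketch below is such a checker (it is not part of the puzzle page) and prints each combination that sums to exactly 1.

    from fractions import Fraction
    from itertools import combinations

    # the six block sizes used in the game
    pieces = [Fraction(1, 2), Fraction(1, 3), Fraction(1, 4),
              Fraction(1, 6), Fraction(1, 12), Fraction(5, 12)]

    for size in range(1, len(pieces) + 1):
        for combo in combinations(pieces, size):    # each fraction used at most once
            if sum(combo) == 1:
                print(" + ".join(str(f) for f in combo), "= 1")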
Pushing back Pi
September 1997
The decimal number system was introduced to Europe nearly 800 years ago and is a vast improvement on the previous system of Roman numerals (see "The life and numbers of Fibonacci" elsewhere in this
issue). But good though it is, the decimal number system cannot represent all numbers exactly.
Although sums like 4 divided by 33 result in values with an infinite number of digits to the right of the decimal point, they always have repeating patterns. We can use special dots placed above the
digits to show this.
Numbers like Pi, on the other hand, have no repeating pattern. So just how accurately do we know what it is? To find out you might like to talk to Yasumasa Kanada and his colleagues at the University
of Tokyo. They have recently broken the world record for calculating the most accurate approximation using a Hitachi supercomputer. The Japanese researches have calculated 50 billion decimal digits
of Pi, that's about 10 digits for every person in the world today.
So if this sequence of digits contains no repeating pattern does that mean it's completely random? It depends who you ask. Mathematicians have been grappling with the idea of just what makes a random
sequence for decades. A simple way of measuring randomness could have applications in a wide range of fields from the analysis of the stock market to the prevention of sudden infant death syndrome.
A generalization of short-cut fusion and its correctness proof
Results 1 - 10 of 13
, 2004
"... Parametric polymorphism constrains the behavior of pure functional programs in a way that allows the derivation of interesting theorems about them solely from their types, i.e., virtually for
free. Unfortunately, the standard parametricity theorem fails for nonstrict languages supporting a polymorph ..."
Cited by 36 (12 self)
Add to MetaCart
Parametric polymorphism constrains the behavior of pure functional programs in a way that allows the derivation of interesting theorems about them solely from their types, i.e., virtually for free.
Unfortunately, the standard parametricity theorem fails for nonstrict languages supporting a polymorphic strict evaluation primitive like Haskell's $\mathit{seq}$. Contrary to the folklore
surrounding $\mathit{seq}$ and parametricity, we show that not even quantifying only over strict and bottom-reflecting relations in the $\forall$-clause of the underlying logical relation --- and
thus restricting the choice of functions with which such relations are instantiated to obtain free theorems to strict and total ones --- is sufficient to recover from this failure. By addressing the
subtle issues that arise when propagating up the type hierarchy restrictions imposed on a logical relation in order to accommodate the strictness primitive, we provide a parametricity theorem for the
subset of Haskell corresponding to a Girard-Reynolds-style calculus with fixpoints, algebraic datatypes, and $\mathit{seq}$. A crucial ingredient of our approach is the use of an asymmetric logical
relation, which leads to ``inequational'' versions of free theorems enriched by preconditions guaranteeing their validity in the described setting. Besides the potential to obtain corresponding
preconditions for standard equational free theorems by combining some new inequational ones, the latter also have value in their own right, as is exemplified with a careful analysis of $\mathit{seq}
$'s impact on familiar program transformations.
- Conference record of the ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages , 2008
"... GADTs are at the cutting edge of functional programming and become more widely used every day. Nevertheless, the semantic foundations underlying GADTs are not well understood. In this paper we
solve this problem by showing that the standard theory of data types as carriers of initial algebras of fun ..."
Cited by 22 (4 self)
Add to MetaCart
GADTs are at the cutting edge of functional programming and become more widely used every day. Nevertheless, the semantic foundations underlying GADTs are not well understood. In this paper we solve
this problem by showing that the standard theory of data types as carriers of initial algebras of functors can be extended from algebraic and nested data types to GADTs. We then use this observation
to derive an initial algebra semantics for GADTs, thus ensuring that all of the accumulated knowledge about initial algebras can be brought to bear on them. Next, we use our initial algebra semantics
for GADTs to derive expressive and principled tools — analogous to the well-known and widely-used ones for algebraic and nested data types — for reasoning about, programming with, and improving the
performance of programs involving, GADTs; we christen such a collection of tools for a GADT an initial algebra package. Along the way, we give a constructive demonstration that every GADT can be
reduced to one which uses only the equality GADT and existential quantification. Although other such reductions exist in the literature, ours is entirely local, is independent of any particular
syntactic presentation of GADTs, and can be implemented in the host language, rather than existing solely as a metatheoretical artifact. The main technical ideas underlying our approach are (i) to
modify the notion of a higher-order functor so that GADTs can be seen as carriers of initial algebras of higherorder functors, and (ii) to use left Kan extensions to trade arbitrary GADTs for
simpler-but-equivalent ones for which initial algebra semantics can be derived.
- Journal of Functional Programming , 2005
"... Monads are commonplace programming devices that are used to uniformly structure computations with effects such as state, exceptions, and I/O. This paper further develops the monadic programming
paradigm by investigating the extent to which monadic computations can be optimised by using generalisatio ..."
Cited by 15 (7 self)
Add to MetaCart
Monads are commonplace programming devices that are used to uniformly structure computations with effects such as state, exceptions, and I/O. This paper further develops the monadic programming
paradigm by investigating the extent to which monadic computations can be optimised by using generalisations of short cut fusion to eliminate monadic structures whose sole purpose is to “glue
together ” monadic program components. We make several contributions. First, we show that every inductive type has an associated build combinator and an associated short cut fusion rule. Second, we
introduce the notion of an inductive monad to describe those monads that give rise to inductive types, and we give examples of such monads which are widely used in functional programming. Third, we
generalise the standard augment combinators and cata/augment fusion rules for algebraic data types to types induced by inductive monads. This allows us to give the first cata/augment rules for some
common data types, such as rose trees. Fourth, we demonstrate the practical applicability of our generalisations by providing Haskell implementations for all concepts and examples in the paper.
Finally, we offer deep theoretical insights by showing that the augment combinators are monadic in nature, and thus that our cata/build and cata/augment rules are arguably the best generally
applicable fusion rules obtainable.
- Fundamenta Informaticae , 2006
"... Parametric polymorphism constrains the behavior of pure functional programs in a way that allows the derivation of interesting theorems about them solely from their types, i.e., virtually for
free. Unfortunately, standard parametricity results — including so-called free theorems — fail for nonstrict ..."
Cited by 13 (5 self)
Add to MetaCart
Parametric polymorphism constrains the behavior of pure functional programs in a way that allows the derivation of interesting theorems about them solely from their types, i.e., virtually for free.
Unfortunately, standard parametricity results — including so-called free theorems — fail for nonstrict languages supporting a polymorphic strict evaluation primitive such as Haskell’s seq. A folk
theorem maintains that such results hold for a subset of Haskell corresponding to a Girard-Reynolds calculus with fixpoints and algebraic datatypes even when seq is present provided the relations
which appear in their derivations are required to be bottom-reflecting and admissible. In this paper we show that this folklore is incorrect, but that parametricity results can be recovered in the
presence of seq by restricting attention to left-closed, total, and admissible relations instead. The key novelty of our approach is the asymmetry introduced by left-closedness, which leads to
“inequational” versions of standard parametricity results together with preconditions guaranteeing their validity even when seq is present. We use these results to derive criteria ensuring that both
equational and inequational versions of short cut fusion and related program transformations based on free theorems hold in the presence of seq.
- Proceedings, Typed Lambda Calculus and Applications, 2007
Cited by 8 (5 self)
Abstract. Initial algebra semantics is a cornerstone of the theory of modern functional programming languages. For each inductive data type, it provides a fold combinator encapsulating structured
recursion over data of that type, a Church encoding, a build combinator which constructs data of that type, and a fold/build rule which optimises modular programs by eliminating intermediate data of
that type. It has long been thought that initial algebra semantics is not expressive enough to provide a similar foundation for programming with nested types. Specifically, the folds have been
considered too weak to capture commonly occurring patterns of recursion, and no Church encodings, build combinators, or fold/build rules have been given for nested types. This paper overturns this
conventional wisdom by solving all of these problems. 1
- In Proceedings, Mathematics of Program Construction, 2008
Cited by 7 (0 self)
Abstract. We present a low-effort program transformation to improve the efficiency of computations over free monads in Haskell. The development is calculational and carried out in a generic setting,
thus applying to a variety of datatypes. An important aspect of our approach is the utilisation of type class mechanisms to make the transformation as transparent as possible, requiring no
restructuring of code at all. There is also no extra support necessary from the compiler (apart from an up-to-date type checker). Despite this simplicity of use, our technique is able to achieve true
asymptotic runtime improvements. We demonstrate this by examples for which the complexity is reduced from quadratic to linear. 1
- IN PARTIAL EVALUATION AND PROGRAM MANIPULATION, PROCEEDINGS, 2008
Cited by 6 (4 self)
Free theorems feature prominently in the field of program transformation for pure functional languages such as Haskell. However, somewhat disappointingly, the semantic properties of so based
transformations are often established only very superficially. This paper is intended as a case study showing how to use the existing theoretical foundations and formal methods for improving the
situation. To that end, we investigate the correctness issue for a new transformation rule in the short cut fusion family. This destroy/build-rule provides a certain reconciliation between the
competing foldr/build- and destroy/unfoldr-approaches to eliminating intermediate lists. Our emphasis is on systematically and rigorously developing the rule’s correctness proof, even while paying
attention to semantic aspects like potential nontermination and mixed strict/nonstrict evaluation.
Cited by 5 (3 self)
Abstract: Free theorems establish interesting properties of parametrically polymorphic functions, solely from their types, and serve as a nice proof tool. For pure and lazy functional programming
languages, they can be used with very few preconditions. Unfortunately, in the presence of selective strictness, as provided in languages like Haskell, their original strength is reduced. In this
paper we present an approach for restrengthening them. By a refined type system which tracks the use of strict evaluation, we rule out unnecessary restrictions that otherwise emerge from the general
suspicion that strict evaluation may be used at any point. Additionally, we provide an implemented algorithm determining all refined types for a given term. 1
- In Asian Symposium on Programming Languages, Proceedings, 2004
Cited by 4 (3 self)
Abstract. We give a semantic footing to the fold/build syntax of programming with inductive types, covering shortcut deforestation, based on a universal property. Specifically, we give a semantics
for inductive types based on limits of algebra structure forgetting functors and show that it is equivalent to the usual initial algebra semantics. We also give a similar semantic account of the
augment generalization of build and of the unfold/destroy syntax of coinductive types. 1
- In Proc. ACM International Conference on Functional Programming, 2006
Cited by 1 (0 self)
We present a unifying solution to the problem of fusion of functions, where both the producer function and the consumer function have one accumulating parameter. The key idea in this development is
to formulate the producer function as a function which computes over a monoid of data contexts. Upon this formulation, we develop a fusion method called algebraic fusion based on the elementary
theory of universal algebra and monoids. The producer function is fused with a monoid homomorphism that is derived from the definition of the consumer function, and is turned into a higher-order
function f that computes over the monoid of endofunctions. We then introduce a general concept called improvement, in order to reduce the cost of computing over the monoid of endofunctions (i.e.,
function closures). An improvement of the function f via a monoid homomorphism h is a function g that satisfies f = h ◦ g. This provides a principled way of finding a first-order function
representing a solution to the fusion problem. It also presents a clean and unifying account for varying fusion methods that have been proposed so far. Furthermore, we show that our method extends to
support partial and infinite data structures, by means of an appropriate monoid structure. Categories and Subject Descriptors D.3.2 [Programming Languages]: Language Classifications—Applicative
(functional) languages;
|
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=2443863","timestamp":"2014-04-23T23:07:24Z","content_type":null,"content_length":"41885","record_id":"<urn:uuid:235481ef-cc38-413b-808c-f0ae26d63cdd>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00117-ip-10-147-4-33.ec2.internal.warc.gz"}
|
New Article on the Non-Zero Value of the Higgs Field
Following on my series of articles on Fields and Particles, I’m building my next series of articles, on How the Higgs Field Works. (These sets of articles require a little math and physics
background, the sort you’d get in your first few months of a beginning university or pre-university physics class.)
The first article in the new series was an overview of The Basic Idea behind how the Higgs field works. I recently revised it to make it easier to read.
The next article, just completed, is about why and how the Higgs field becomes non-zero — to the extent that we understand it. (The following article will explain how the Higgs particle arises.)
2 responses to “New Article on the Non-Zero Value of the Higgs Field”
1. I obviously like your web-site, but you need to check the spelling on quite a few of your posts. A number of them are rife with spelling problems, and to tell the truth I find it very bothersome; however, I'll definitely come back again.
2. Simultaneously? …
Now, for the first time, a new type of experiment has shown light behaving like both a particle and a wave simultaneously, …
I guess it all depends on how one defines simultaneously. The energy-time uncertainty principle states,
dE x dT ~ h-bar
… where the time T is the time in which the state of any observed variable, with an energy dE potential, remains unchanged. This definition of time is very different than the time in
Schrodinger’s equation, which formulates the motion of all particle/waves.
What this means is that instantaneity does not exist, which is required for any one particle (wave) to be in two states at the same time, simultaneously. There is a finite time interval for a
single state to change from one quantum number to another. So what the experiment is showing is a photon oscillating back and forth from particle to wave, but the time is so small we cannot sense
it. The time is so small we don’t even have the physics to understand what is happening in that tiny, tiny point in spacetime.
What I would like to know is how does the quantum number(s) vary with temperature? The conjecture being that at absolute “zero” temperature the photon should be exhibiting only wave-like
characteristics which it transforms into a particle as the temperature increases. In other words, the state of a point in space (the quanta of spacetime if you will) can be characterized by the
energy it contains, energy density.
Energy density … –> B(x,y,z,t) ~ variation of energy density in Schrodinger’s time.
This entry was posted in Higgs, Particle Physics and tagged fields, Higgs.
|
{"url":"http://profmattstrassler.com/2012/10/17/new-article-on-the-non-zero-value-of-the-higgs-field/","timestamp":"2014-04-18T13:55:41Z","content_type":null,"content_length":"101558","record_id":"<urn:uuid:ef5a527c-7496-4c3a-8cc7-3a802b92bff6>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00167-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Digital differential analyzer with an increment output function
4293918 Digital differential analyzer with an increment output function
(4 images)
Inventor: Asakawa
Date Issued: October 6, 1981
Application: 06/086,444
Filed: October 19, 1979
Inventors: Asakawa; Yukio (Hitachiota, JP)
Assignee: Hitachi, Ltd. (Tokyo, JP)
Primary Examiner: Malzahn; David H.
Attorney Or Agent: Craig and Antonelli
U.S. Class: 708/102
Field Of Search: 364/702
U.S. Patent Documents: 3555514; 3586837; 3598974; 3601591; 4106100
Abstract: A digital differential analyzer has a Y register, an adder, an arithmetic logic unit, an R register, a logic circuit for producing increments, and incremental registers. These circuit components are controlled by a control unit storing previous programs. Another arithmetic/logic unit is provided which receives the data in the R register and increments. The Y register converts the whole-value data coming from the exterior of the digital differential analyzer into incremental data. When there is a difference between the data in the R register and the data in the Y register, an increment Δz is produced and added to the content of the R register. The increment Δz is successively outputted until the content of the R register is equal to that of the Y register.
Claim: I claim:
1. A digital differential analyzer having a function of producing an increment of a time-variable input signal and comprising:
a first latch register for latching first data representing an instant value of said input signal at a predetermined timing,
a first register for storing second data whose value is renewed when a new value of said second data is applied thereto,
a second latch register coupled to said first register for latching the second data stored in said first register,
a first arithmetic logic unit coupled to said first and second latch registers for performing a first operation in which a first value represented by said first data latched in said
first latch register has subtracted therefrom a second valuerepresented by said second data latched in said second latch register, and for performing a second operation which
determines whether or not said first value is equal to said second value,
a logic circuit coupled to said first and second latch registers and said first arithmetic logic unit for determining whether or not any increment is present in said first value with
respect to said second value, and for determining the sign ofany such increment from the signs of said first and second values, the sign of the result of said first operation and the
determination by said second operation,
a second register coupled to said logic circuit for storing a first signal indicative of the sign of said increment determined by said logic circuit,
a third register coupled to said logic circuit for storing a second signal indicative of the result of the determination by said logic circuit as to the presence of any increment in
said first value,
a second arithmetic logic unit coupled to said second and third registers for performing a third operation in which the value of said second data latched in said second latch register
is modified by an increment determined according to said firstand second signals stored in said second and third registers, and
means coupled between said second arithmetic logic unit and said first register for renewing the value of said second data stored in said first register by the result of said third
2. An analyzer according to claim 1, in which said third operation is effective to modify the value of said second data by a unit increment when said second signal indicates that
there exists any increment in said first value with respect tosaid second value.
3. An analyzer according to claim 2, further comprising means coupled between said second and third registers and said second arithmetic logic unit for adjusting the amount of said
unit increment relative to the value of said second data.
4. An analyzer according to claim 1, further comprising a fourth register for receiving and storing an instant value of said time-variable input signal and means for supplying the
stored instant value to said first latch register at saidpredetermined timing.
5. An analyzer according to claim 4, further comprising an adder coupled to said fourth register and said second and third registers for adding to the contents of said fourth register
a value of an increment stored in said second and thirdregisters as determined according to said first and second signals.
6. A method for producing an increment of a time-variable input signal by using a digital differential analyzer having a register whose content is initially set to zero, comprising
the steps of:
introducing to said digital differential analyzer an instant value of said time-variable input signal,
effecting a first operation in which the difference between said introduced instant value and the content of said register is determined,
effecting a second operation which determines whether or not said introduced instant value is equal to the content of said register,
determining whether or not there exists any increment in said introduced instant value with respect to the content of said register, and determining the sign of any such increment
from the results of said first and second operations,
producing a sum of the content of said register and a predetermined unit value depending on the results of said determining step,
renewing the content of said register by the result of said sum producing step, and
repeating the above-mentioned steps from said instant value introducing step.
Description: BACKGROUND OF THE INVENTION
This invention relates to a digital differential analyzer and more particularly to a digital differential analyzer of the kind in which the arithmetic operations for the differential
analysis of data are performed in accordance with a storedprogram.
As is commonly known, the digital differential analyzer (DDA) is used to solve various linear and nonlinear differential equations at high speed and yet with a relatively high degree
of precision and to produce signals representing complicatedcurves or complicated curved surfaces. The DDA basically performs an integrating operation but it can perform various kinds
of operations when the operation logic contained therein is properly modified. Further, if the operation logic is controlled bya computer with a given stored program, as described in
U.S. Pat. No. 3,555,514 entitled "Digital Computer Instruction Sequencing to Carry Out Digital Differential Analysis" and U.S. Pat. No. 3,274,376 entitled "Digital Differential
Analyzer inConjunction with a General Purpose Computer", a DDA arranged to perform proper processings of data can be designed. The DDA is a digital apparatus; however, if the
operation logic thereof is properly controlled, it can serve as an operator with afunction equivalent to that of an analog operator such as an integrator, a counter, an adder, a
servo, and a decision maker. Accordingly, the processes controlled by a conventional analog operator may also be controlled by using the DDA with theoperation logic controlled.
Compared with a control system exclusively used for analog operation, a control system using the DDA is inferior to the former in the response speed but superior to the same in
accuracy of the control. Further compared with a control systemusing a usual digital computer for control, the DDA control system is comparable with the digital computer control in
high accuracy in control but superior thereto in the response characteristic.
As described above, the DDA is useful in its performance, but the expression of input and output used in the operation of the DDA is different from that in the so-called general
purpose computer (GPC) as follows:
Now assume that y is a time-variable quantity and z is a function of y, and that it is desired to obtain a value of z with variation of the value of y. Generally, any of the DDA and
the GPC performs a specific operation repeatedly at apredetermined iteration timing in order to obtain such a value of z.
In the operation of the GPC, an instant value of y is latched at the beginning of each iteration cycle and subjected to a specific operation which directly produces the value of z
corresponding to the input value of y. That is, in the GPC, inputsof y.sub.0, y.sub.1, . . . y.sub.i are latched successively at the 0-th, 1st, 2nd, . . . i-th iteration cycles,
respectively, and as the results of the repeated operations, there are produced outputs of z.sub.0, z.sub.1, . . . z.sub.i, respectively.
On the other hand, the DDA uses, as inputs, y.sub.0, y.sub.1 -y.sub.0, y.sub.2 -y.sub.1, . . . y.sub.i -y.sub.i-1, respectively in the 0-th, 1st, 2nd, . . . i-th iteration cycles, and
produces, as outputs, z.sub.0, z.sub.1 -z.sub.0, z.sub.2-z.sub.1, . . . z.sub.i -a.sub.i-1, respectively. Hereinafter, the value of such y.sub.0, y.sub.1 . . . or z.sub.0, z.sub.1 . .
. will be called a whole value, while the value of such y.sub.1 -y.sub.0, y.sub.2 -y.sub.1 . . . or z.sub.1 -z.sub.0,z.sub.2 -z.sub.1 will be called a value of an increment or simply
an increment. For this reason, it is troublesome in its handling in practical use. More specifically, when the DDA is used with an automatic process control system, it is required
toapply to the DDA operator only increments extracted from a whole value of each of various signals obtained from the process and to accumulate the increments resulting from its
operation thereby to reconstruct the whole value to be applied as a controlsignal to the process.
Of those problems, the reconstruction of the whole value from increments, or the conversion of the increments into the whole value can be easily realized by using the contents of an
integrand register, as called hereinafter "Y register", of theDDA which stores an integrand y, since the Y register holds an accumulation of increments operated. The problem to still
be solved is how to obtain increments of various signals accurately and simply. The techniques so far proposed are: to providehardware between the process and the DDA for producing
increments of whole values of various signals from the process; or to input whole values of various signals from the process, as integrands y into the Y registers of a specific DDA
operator therebyproducing integration of y with respect to time, which is applied to a y increment generator composed of a combination of specific DDA operators so as to produce the
The former technique is uneconomical because of use of the special and additional hardware and, when the number of signals from the process from which increments are derived are
large, the additional hardware may be more expensive than the DDAproper. Also, an increased number of parts of the hardware which necessarily causes the reliability to degrade is
undesirable for the process control system which inherently needs a fairly higher reliability.
The latter technique needs a plurality (e.g. 3 to 4) of DDA operators. Further, it is difficult to stably obtain increments when the control system is designed with an intention of a
high response speed. Conversely, when the system is designedso as to stably obtain increments, the response speed is slow. Thus, it is very difficult to obtain an optimum compromise
between the requirements of stable increment derivation and high speed response.
SUMMARY OF THE INVENTION
In order to eliminate the defects of the prior arts, the invention provides a DDA having a function of converting a whole value into an increment value.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 shows a block diagram of an embodiment of a DDA according to the invention.
FIG. 2 shows a circuit diagram of a logic circuit as an example used in the circuit in FIG. 1.
FIG. 3 shows an equivalent block diagram for explaining the operation of an integrator.
FIGS. 4 and 5 show the symbols of an integrator and a servo which are constructed by DDAs.
FIGS. 6 and 7 show Karnaugh maps for studying a construction of a part of the logic circuit shown in FIG. 2.
FIGS. 8 and 9 show the symbols for illustrating the addition function of the DDA according to the invention and that of the conventional DDA.
DESCRIPTION OF THE PREFERRED EMBODIMENT
FIG. 1 shows a block diagram of a DDA which is an embodiment according to the invention. In FIG. 1, input units I.sub.1 to I.sub.n receive input signals in a given format from a
process to be controlled (not shown). Y registers Y.sub.1 toY.sub.n are coupled with the input units I.sub.1 to I.sub.n, by way of a data bus DB.sub.1. An adder AD is coupled with the
Y register Y.sub.1 to Y.sub.n by way of a data bus DB.sub.2, and to an incremental register (referred to as ".DELTA.Z"register"), as described later and an incremental sign register
(referred to as ".DELTA.Z' register"), as also described later, by way of data buses DB.sub.3 and DB.sub.4. The output of the adder AD is coupled with the Y register via a data
busDB.sub.5. A latch register LH.sub.1 latches the data in a given Y register at a given timing under control of a control unit SPC. A decoder DC produces logic `0` onto a signal line
C.sub.6 only when the data latched in the Y register is 0 and produceslogic `1` when the data takes any other value. An arithmetic logic unit ALU.sub.1 receives the output of the
latch register LH.sub.1 via a data bus DB.sub.8, the output of a latch register LH.sub.2 which latches any one of the data in R registersR.sub.1 to R.sub.n via a data bus DB.sub.9,
increment data via data buses DB.sub.3 and DB.sub.4, and a signal for providing an increment .DELTA.t to effect a time integration through a data bus DB.sub.6, respectively, and
performs a given operation undercontrol of the control unit SPC. A bit shifter SF shifts the incremental data coming through the data bus DB.sub.3 to a bit position to which the
incremental data is added in accordance with a signal from the control unit SPC which indicates the bitposition of the data latched in the latch register LH.sub.2 to which the
incremental data is added. Another arithmetic logic unit ALU.sub.2 performs a given operation of a signal applied thereto in accordance with a control signal delivered from thecontrol
unit SPC. A selector SL receives the output of the latch LH.sub.2 through the data bus DB.sub.9, and the add/substraction outputs of the arithmetic logic units ALU.sub.1 and ALU.sub.2
through the data buses DB.sub.10 and DB.sub.11, respectively,and selects one of them under control of a signal applied from the control unit SPC. The R registers R.sub.1 to R.sub.n
are coupled with the selector SL through a data bus DB.sub.12. The R register and the latch register LH.sub.2 are coupled with eachother through a data bus DB.sub.13. A logic circuit
LC serves as a .DELTA.Z increment generator, receiving the signal C.sub.6 from the decoder DC and further signals C.sub.1 to C.sub.3 to C.sub.5. In those signals, signals C.sub.1 and
C.sub.5 representsign bits of the data latched in the latch registers LH.sub.1 and LH.sub.2. The signal C.sub.3 becomes logic `1` when the data of the latch registers LH.sub.1 and
LH.sub.2 are equal to each other, i.e. when A=B holds. The signal C.sub.4 designates asign bit of the operation result of A.+-.B. .DELTA.Z' and .DELTA.Z" are the .DELTA.Z and
.DELTA.Z' registers which operate in accordance with the output of the logic circuit LC and a control signal from the control unit SPC. Output units O.sub.1 toO.sub.n produce the data
held in the Y register as a process control signal under control of a signal from the control unit SPC. The control unit SPC stores a predetermined program for performing a given
function of the DDA to issue necessary controlsignals and timing signals.
On the basis of the aforementioned construction of the DDA, the operation of the DDA, when it acts as an integrator which is the basic function of the DDA, will be described briefly.
Let it be assumed that the Y register Y.sub.2 and the Rregister R.sub.2 are used. The control unit SPC delivers commands for the use of those registers to the related components of
the DDA, so that the related gates of the data buses associated with the components are controlled at given timings. As aresult, the DDA operates with a function of an equivalent
circuit as shown in a block diagram of FIG. 3. As shown, the circuit is comprised of an adder AD, the arithmetic and logic unit ALU.sub.1 (it basically operates as an adder, in this
case), the Yregister Y.sub.2 storing the result of the operation by the adder AD and the R register R.sub.2 storing the result of the operation by the unit ALU.sub.1. As shown, the
circuit does not use the unit ALU.sub.2 in this case. In the figure, .DELTA.y.sub.1to .DELTA.y.sub. n are minute increments (referred to as input increments) of n input variables. At
the ith operation, the following integrand y is stored in the Y register Y.sub.2 ##EQU1## where a suffix i designates a value of the result of the i-thoperation and i-1 designates a
value of the operation result of the (i-1) operation preceding to the i-th operation.
In accordance with the equation (1), the increments of input variables are accumulated, and the result is stored as a whole value of the integrand y in the Y.sub.2 register.
An integral I of the integrand y with respect to an integral independent variable x can be obtained according to the well-known method of piecewise mensulation. That is, the integral
I of a curve y=f(x) can approximately be given by the equation##EQU2## where I.sub.1 indicates the integration from the first to the i-th division and y.sub.i .DELTA.x.sub.i indicates
the measurement of the i-th division.
In the equation (2), if the value of .DELTA.x is sufficiently small, an integral with a sufficient accuracy in practical use can be obtained. Assuming that u is the minimum
quantitized unit of the integral independent variable x and Δx is defined as follows:
when x increases by u, Δx_i = +1;
when x decreases by u, Δx_i = -1;
when a change of x is below u, Δx_i = 0;
then the equation (2) becomes

  I_i = I_{i-1} + y_i · Δx_i   (3)
Therefore, a multiplier for calculating y_i Δx_i in the equation (2) may be omitted by using a control pulse, in place of Δx_i, for controlling the operation. The operation of the equation (3) is performed by the arithmetic logic unit ALU_1 shown in FIG. 3 under control of the Δx_i pulse. The integrated value I_i is stored in the R register R_2 as r_i. Since the capacity of the R register R_2 is finite, however, an overflow might occur in the R register R_2. Hence, y_i and r_i are constrained within the ranges -1 ≤ y_i < +1 and 0 ≤ r_i < +1, respectively, and the overflows thereof are related to the increment Δz_i of the output variable as follows (equation (4)):

  Δz_i = +1, r_i = r_{i-1} + y_i - 1   when r_{i-1} + y_i ≥ 1
  Δz_i = 0,  r_i = r_{i-1} + y_i       when 0 ≤ r_{i-1} + y_i < 1
  Δz_i = -1, r_i = r_{i-1} + y_i + 1   when r_{i-1} + y_i < 0

As seen from the equation (4), when the value r_{i-1} + y_i to be stored in the R register R_2 reaches or exceeds 1, the increment Δz_i = +1 is produced and the overflow (remainder) exceeding the capacity is loaded as r_i in the R register R_2. When 0 ≤ r_{i-1} + y_i < 1, which is within the range of values the R register R_2 can store, the value r_{i-1} + y_i is stored as r_i in the R register R_2 and Δz_i is made 0, i.e. Δz_i = 0. When r_{i-1} + y_i is smaller than 0, Δz_i = -1 is outputted and r_{i-1} + y_i + 1 ≥ 0, i.e. the difference between r_{i-1} + y_i and -1, is loaded as r_i into the R register R_2. Δz_i is a pulse representing a quantitized increment having a weight of u and is used as Δx_i and Δy_i to another DDA operator. In other words, the data in the Δz' register and Δz" register are introduced into the adder AD and the unit ALU_1, via the data buses DB_3 and DB_4, respectively, and are used as increments.
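The overflow rule of equation (4) is straightforward to model in software. The following short Python sketch is an editorial illustration only (not part of the patent text); it mimics the Y and R registers with floating-point values, assumes Δx = +1 on every iteration, and emits the output increment Δz each step:

```python
def dda_integrate(dy_stream):
    """Model of the DDA integrator: accumulate input increments into y and
    emit an output increment dz whenever the remainder register overflows."""
    y = 0.0   # integrand register (Y register), constrained to -1 <= y < 1
    r = 0.0   # remainder register (R register), constrained to 0 <= r < 1
    for dy in dy_stream:       # dy stands for the sum of the input increments
        y += dy                # equation (1): accumulate the integrand
        s = r + y              # equation (3) step, with dx = +1 each iteration
        if s >= 1.0:           # overflow: equation (4), first case
            dz, r = +1, s - 1.0
        elif s < 0.0:          # underflow: equation (4), third case
            dz, r = -1, s + 1.0
        else:                  # no overflow: equation (4), second case
            dz, r = 0, s
        yield dz
```

Fed a constant integrand of 0.25 (dy_stream = [0.25] followed by zeros), the model emits a +1 pulse every fourth iteration, which is exactly the piecewise (rectangular) integration the patent describes.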
When the DDA operates as an integrator, it is represented by a symbolic presentation as shown in FIG. 4 and the integral operation logic is arranged in the following ##EQU4##
When it operates as a servo, it is symbolized as shown in FIG. 5 and the operation logic of y.sub.i is defined by the equation (1) as in the case of the integrator. The operation
logic of .DELTA.z.sub.i is independent of .DELTA.x.sub.i andr.sub.i, and defined by ##EQU5## The equation (5) implies that, so long as y.sub.i .noteq.0, .DELTA.z.sub.i pulse is
exclusively dependent on the sign of y.sub.i. Such as operation logic is realized in such a manner that, either .DELTA.z.sub.i =+1 or.DELTA.z.sub.i =-1 is determined depending on a
state of a sign bit representing a sign of y.sub.i in the Y register and the condition of .DELTA.z.sub.i =0 is determined by detecting a bit state of each digit of the Y register.
Therefore, the increment.DELTA.z.sub.i may be formed by the logic circuit LC on the basis of the output signal C.sub.6 of the decoder DC shown in FIG. 1 and the signal C.sub.1 of the
sign bit of the Y register latched in the latch LH.sub.1. In other word, when the DDA operatesas a servo, it is enough to perform the control as mentioned above by the control unit
As described above, the DDA commonly uses the adder AD, the arithmetic logic unit ALU.sub.1, and the logic circuit LC, and controls a number of registers under control of the control
unit SPC so as to satisfy the objects and the characteristicsof the process control. In this way, the DDA digitally realizes the same control function as that of the conventional
analog control device. As described above, however, the DDA must extract increments from the whole value of various signals from theprocess, and this task is troublesome.
The present invention is to provide a DDA capable of providing increments from the whole value. To achieve this function, the DDA is provided with an additional arithmetic logic unit
ALU.sub.2 and operates in the following manner.
When the increment request is issued from the control unit SPC, an input signal from the signal point of the process from which the increments are extracted is introduced through the
input unit I, for example, I.sub.1, to the Y register Y.sub.1. Then, y.sub.i and r.sub.i-1 are read out from the Y register Y.sub.1 and the R register R.sub.1, respectively, and then
are loaded into the latch registers LH.sub.1 and LH.sub.2 to set the A and B inputs of the arithmetic logic unit ALU.sub.1. Simultaneously, r.sub.i-1 is set to the B input of the
arithmetic logic unit ALU.sub.2. In the next step, the control unit SPC directs the arithmetic logic unit ALU.sub.1 to caulculate A-B i.e. y.sub.i -r.sub.i-1 and judge whether A=B or
A.noteq.B. IfA=B, the unit ALU.sub.1 produces a state judge signal C.sub.3 =1. Depending on the states of C.sub.1 and C.sub.3 to C.sub.5, the control unit SPC directs the logic unit
LC to judge whether y.sub.i >r.sub.i-1, y.sub.i =r.sub.i-1, or y.sub.i<r.sub.i-1. On the basis of the judgement, the logic circuit LC produces .DELTA.z.sub.i pulse as described in
detail later. .DELTA.z.sub.i is set to the A input of the arithmetic logic unit ALU.sub.2 through the shifter SF. In this case, theshifter SF, according to the command from the
control unit SPC, shifts the .DELTA.z.sub.i pulse to a digit position in the A input where the .DELTA.z.sub.i pulse has a weight u and then sets it therein. The control unit SPC
instructs the arithmeticlogic unit ALU.sub.2 to calculate A+B=u+r.sub.i-1 and loads the result of the calculation as r.sub.i into the R register R.sub.1 by way of the selector SL. At
this step of the operation, a first cycle of the operation is completed.
The explanation will be given of how .DELTA.z.sub.i is set to the A input of the arithmetic logic unit ALU.sub.2, together with the construction of the logic circuit LC.
For convenience of explanation, the following condition or assumption is set up:
(1) Scope of y and r: -1.ltoreq.y<1, -1.ltoreq.r<1
(2) Expression of y and r: Binary expression of 16 bits, a negative number is represented by a complement of 2 and a sign is given by the most significant bit. Accordingly, the
minimum quantitized unit is 2.sup.-15. ##EQU6##
Under these conditions (1) to (7), the logic circuit LC is so designed as to produce an increment Δz in accordance with the logic of the increment output operation, defined as follows (equation (6)):

  Δz_i = +1   when y_i > r_{i-1}
  Δz_i = 0    when y_i = r_{i-1}
  Δz_i = -1   when y_i < r_{i-1}

In the above equation, Δz_i is expressed by the unit u obtained by quantitizing y and r; however, the expression is the same as that in which Δz_i is expressed in terms of incremental pulses +1, 0, and -1, each having a weight u. According to this logic, when y changes from y_{i-1} to y_i, y_i is given by

  y_i = y_{i-1} + Δy_i

where Δy_i is an increment. Δy_i is expressed in terms of u as

  Δy_i = m · u

where m is an integer. In the incremental output operation, from the i-th iteration to the (i+m-1)-th iteration, a total of m incremental pulses Δz having a weight u are outputted, while r is corrected by u every iteration, so that finally y = r at the (i+m-1)-th iteration.
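Again as an editorial illustration (not part of the patent text), the whole-value-to-increment conversion reduces to a few lines: each iteration the latched whole value y is compared with the register r, one increment pulse of weight u is emitted, and r is stepped by u toward y:

```python
def increments_from_whole_values(samples, u=2**-15):
    """Emit one increment pulse (+1, 0 or -1) per iteration, stepping the
    R register toward the latched whole value by one quantisation unit u."""
    r = 0.0                     # R register, initially zero
    for y in samples:           # whole value latched from the Y register
        if y > r:
            dz = +1
        elif y < r:
            dz = -1
        else:
            dz = 0
        r += u * dz             # r is corrected by u every iteration
        yield dz                # stream of quantised increments of y
```

If y jumps by m·u, the next m iterations each emit one pulse, exactly as described above; if the samples are not exact multiples of u, the register simply dithers about y by one unit.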
A generation rule of the increment .DELTA.z is shown in Table 1. At No. 1 the states of yi.ltoreq.0 and r.sub.i-1 .ltoreq.0 are established and hence C.sub.1 =0 and C.sub.5 =0 are
given in accordance with the above conditions (3) and (6). Dueto r.sub.i-1 -y.sub.i, C.sub.3 =0 and C.sub.4 =0 are given in accordance with the conditions (4) and (5). Further, in
accordance with the equation (6) of the definition of the incremental output operation logic and the condition (7) of the.DELTA.z.sub.i, .DELTA.z'.sub.i =0 and .DELTA.z".sub.i =1 are
TABLE 1 __________________________________________________________________________ No. y.sub.i r.sub.i-1 C.sub.1 C.sub.5 C.sub.3 C.sub.4 .DELTA. z.sub.i ' .DELTA. z.sub.i " y.sub.i,
r.sub.i-1 __________________________________________________________________________ 1 0 0 0 1 r.sub.i-1 < y.sub.i 2 y.sub.i .gtoreq. 0 r.sub.i-1 .gtoreq. 0 0 0 0 1 1 1 r.sub.i-1 >
y.sub.i 3 1 0 0 0 r.sub.i-1 = y.sub.i 4 0 0 0 1 r.sub.i-1 < y.sub.i 5 y.sub.i < 0 r.sub.i-1 < 0 1 1 0 1 1 1 -1 < y.sub.i - r.sub.i-1 < 0 6 1 0 0 0 r.sub.i-1 = y.sub.i 7 0 0 0 1 0 <
y.sub.i - r.sub.i-1 < 1 y.sub.i .gtoreq. 0 r.sub.i-1 < 0 0 1 8 0 1 0 1 1 .ltoreq. y.sub.i - r.sub.i-1 9 0 0 1 1y.sub.i - r.sub.i-1 < -1 y.sub.i < 0 r.sub.i-1 .gtoreq. 0 1 0 10 0 1 1 1
-1 .ltoreq. y.sub.i - r.sub.i-1 __________________________________________________________________________ < 0
Other conditions are obtained in the same manner. Since the arithmetic logic units ALU.sub.1 and ALU.sub.2 calculate the values within the scope defined by the condition (1), when the
result of y.sub.i -r.sub.i-1 is below -1 or not less than 1,the sign bit C.sub.4 of the output signal of the arithmetic logic unit ALU.sub.1 does not satisfy the condition (5). This
situation correspondings to those in the columns (8) and (9) in Table 1. Let us consider the logic circuit LC to satisfy thegeneration rule shown in Table 1 even in such a situation.
The Karnaugh maps relating to .DELTA.z'.sub.i and .DELTA.z".sub.i prepared on the basis of Table 1 are shown in FIGS. 6 and 7. As known, the Karnaugh map is used to express
.DELTA.z'.sub.i and.DELTA.z".sub.i in terms of C.sub.1, C.sub.3, C.sub.4 and C.sub.5 in accordance with the Boolean algebra. In FIG. 6, a cross-point between a column of C.sub.1
C.sub.5 =00 and a row of C.sub.3 C.sub.4 =00 designates a state of .DELTA.z'.sub.i in No. 1 inTable 1. This is correspondingly applied to other situations.
In accordance with the maps of FIGS. 6 and 7, .DELTA.z'.sub.i and .DELTA.z".sub.i are expressed by
In the equation (9), C.sub.3 is a logic NOT of C.sub.3, C.sub.1 C.sub.5 is a logic AND of C.sub.1 and C.sub.5, and C.sub.1 C.sub.5 +C.sub.1 C.sub.5 is a logic OR of C.sub.1 C.sub.5
and C.sub.1 C.sub.5. Accordingly, .DELTA.z'.sub.i is the logicAND of C.sub.3 and C.sub.4 (C.sub.1 C.sub.5 +C.sub.1 C.sub.5)+C.sub.1 C.sub.5. From the equation (10), .DELTA.z".sub.i is
equal to C.sub.3. The block diagram 21 shown in FIG. 2 is the expression of the equations (9) and (10) in terms of the logicalelements. In FIG. 2, reference numerals 28, 29, 31, 32,
41 and 42 are AND gates: 30 and 33 are OR gates. When the control unit SPC requests the increment output, logical `1` is applied to one of the inputs of each AND gate 41 and 42 to
enable thecircuit 21 to use the increments .DELTA.z'.sub.i and .DELTA.z".sub.i. The outputs of the logic elements with respect to C.sub.1, C.sub.3, C.sub.4 and C.sub.5 are tabulated
in Table 2. Table 2 is so designed that the relations between the respectivenumbers 1 to 10 and C.sub.1, C.sub.3, C.sub.4 and C.sub.5 are the same as those in Table 1. When both the
tables are compared with each other, those tables are coincident with each other with respect to C.sub.1, C.sub.3 , C.sub.4 and C.sub.5, andz'.sub.i and .DELTA.z".sub.i. Therefore,
the circuit 21 in FIG. 2 satisfies Table 1. The logic circuit for producing .DELTA.z for effecting the increment outputting is constructed as mentioned above.
TABLE 2 __________________________________________________________________________ Control Signal Logic Elements No. C.sub.1 C.sub.5 C.sub.3 C.sub.4 25 26 27 28 29 30 31 32 33 34
.DELTA.z.sub.i ' .DELTA.z.sub.i " __________________________________________________________________________ 1 0 0 0 0 1 1 1 0 1 1 0 0 0 0 0 1 2 0 0 0 1 1 1 1 0 1 1 0 1 1 1 1 1 3 0 0
0 0 0 0 0 0 1 9 1 0 0 0 1 1 0 0 1 1 1 0 1 1 1 1 10 1 0 0 1 1 1 0 0 1 1 1 1 1 1 1 __________________________________________________________________________
How to set the A input of the arithmetic logic unit ALU.sub.2 will now be described. As indicated by the condition (2) the unit u allowable for quantitizing y is 2.sup.-15.
Accordingly, u=2.sup.-h may be used as the unit for quantization for aninterger h which is 0<h.ltoreq.15. In this case, the equation (8) is rearranged as follows:
In other words, y is inputted into the Y register with the unit of 2.sup.-h. At this time, the shifter SF set the u in the A input of the arithmetic logic unit ALU.sub.2 so that ##
EQU8## The hardware of the above relations may be realized in sucha way that .DELTA.z'.sub.i is placed to fill the upper bit positions including the h-th bit in the A input, and
.DELTA.z".sub.i is placed at the (h+1)th bit position, and 0 is placed to fill the remaining lower bit positions.
The operations of the DDA as an integrator and a servo will be described referring again to FIG. 1, together with the logic circuit for producing the increment.
When the control unit SPC directs the circuit to operate as an integrator, the contents y.sub.i of the Y register and the contents r.sub.i-1 of the R register are read out and the
read-out ones are loaded into the latch registers LH.sub.1 andLH.sub.2, and then are set into the A and B inputs of the arithmetic logic unit ALU.sub.1.
After the settings, the increment pulses of independent variable are applied, through the data buses DB.sub.3, DB.sub.4 or DB.sub.6, to the unit ALU.sub.1 where the operation is
carried out for A+B or A-B i.e. y.sub.i +r.sub.i-1 or y.sub.i-r.sub.i-1. The result of the operation is stored as r.sub.i into the R register. In the storing into the R register, the
sign bit of the (A.+-.B) is ignored and a positive value r.sub.i given by the equation (4) is always loaded into the R register. The overflow of the R register is determined by the
logic circuit LC so as to satisfy the equation (4) by using the sign bit C.sub.1 of the y.sub.i, the sign bit C.sub.5 of r.sub.i-1 and the sign bit C.sub.4 of the output signal from
the arithmetic logicunit ALU.sub.1 and the increment .DELTA.z is produced on the basis of the determination.
Accordingly, the generation rule of the increment .DELTA.z when the DDA operates as an integrator is shown as in Table 3, and its logic circuit is expressed as shown in the block
diagram 22 in FIG. 2.
TABLE 3
  No.   y_i         y_i + r_{i-1}            C_1   C_4   Δz'_i   Δz"_i
  1     0 ≤ y_i     1 ≤ y_i + r_{i-1}         0     1     0       1
  2     0 ≤ y_i     0 ≤ y_i + r_{i-1}         0     0     0       0
  3     y_i < 0     0 ≤ y_i + r_{i-1} < 1     1     0     0       0
  4     y_i < 0     y_i + r_{i-1} < 0         1     1     1       1
When the control unit SPC specifies a servo, the incremental pulse is generated only dependent on the sign of the y.sub.i and whether y.sub.i is=0 or not, as described relating to the
equation (5). Therefore, the generation rule of increment.DELTA.z is given as shown in Table 4 and its logical circuit 23 is expressed as shown in FIG. 2.
TABLE 4
  y_i        C_1   C_6   Δz'_i   Δz"_i
  y_i > 0     0     1     0       1
  y_i = 0     0     0     0       0
  y_i < 0     1     1     1       1
In the block diagram 22, reference numerals 43, 44 and 53, are AND gates. Only when the control unit SPC directs the integrator operation, the SPC applied a signal to one of the
inputs of each of the AND gate 43 and 44, so that the circuit 22operates as an integrator to produce increments.
In the block diagram 23, 56, 45 and 46 designate AND gates. When the control unit SPC directs the circuit 23 to operate as a servo, a signal is applied to one of the input terminals
of each AND gate 45 and 46, so that it operates as a servo toproduce increments.
As seen from the foregoing, the DDA according to the invention is able to produce increments of a signal representing the whole value of the signal from a process when the DDA is
applied to the process control. Therefore, the problem of theincrement generation of the conventional device can be solved.
An application of the invention will be described. In the DDA of the invention, the Y register portion, particularly the function thereof, is not modified, so that the inputting of
the increment input .DELTA.y.sub.k and the directly inputting ofthe whole value from exterior are performed in the same way as that of the other DDA devices. Accordingly, the
.DELTA.y.sub.k may be defined as shown in FIG. 8. In FIG. 8, an increment of y is expressed by ##EQU9## In other words, the total sum of ninput increments is outputted as an output
increment. Therefore, the DDA serves as an adder. The conventional adder uses a servo, as shown in FIG. 9. In this case, of the inputs .DELTA.y.sub.k, the n-th input is used to
negatively feed back .DELTA.z. The servo continues to output .DELTA.z until the total sum of the increment inputs .DELTA.y.sub.i .about..DELTA.y.sub.n-1 is cancelled by the input
.DELTA.y.sub.n. Therefore, when it produces .DELTA.z equal to the total sum of the increment inputs.DELTA.y.sub.1 to .DELTA.y.sub.n-1, y=0 is established and the generation of the
increment .DELTA.z stops. However, since the whole value is not accumulated in the Y register, the Y register can not produce a signal corresponding to the whole value toan external
device and the maximum number of the input increments to be added is n-1. On the other hand, in the present invention, since the whole value is accumulated in the adder, the R
register can produce the whole value to an external device andthe maximum number of the input increments to be added is n. Therefore, the device according to the invention may be used
not only for inputting a signal of the whole value delivered from the process but also for outputting a control signal of the wholevalue to the process.
* * * * *
|
{"url":"http://www.patentgenius.com/patent/4293918.html","timestamp":"2014-04-19T20:17:05Z","content_type":null,"content_length":"53632","record_id":"<urn:uuid:4a279f3f-fe23-44dd-a51a-07b228d2d588>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00619-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Electricity and Magnetism
Electric Field Shapes
Electric fields are often represented by electric lines of force.
A line of force is a line showing the direction of the force acting on a positive charge placed in the field.
The "density" of the lines represents the magnitude of the field strength.
To draw a diagram showing the shape of an electric field, imagine a small positive charge (a test charge) to be placed in the field at different points.
Field due to a single charge
Wherever the test charge is placed, the force will be directed away from the charge (or towards the charge if it is negative). Therefore, in this case, the shape of the field is radial.
Field due to two opposite point charges of equal magnitude
In this slightly more complicated case, a vector addition is needed to predict the direction of the line of force at the point considered.
By considering a number of such additions, we obtain the following shape.
Field due to two similar point charges of equal magnitude
The same process gives the following result.
At the centre of this field is a place where the magnitude of the electric field strength is zero. This is called a neutral point.
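The vector addition described above is easy to carry out numerically. The short Python sketch below is added here as an illustration (the charge values and positions are arbitrary); it sums the Coulomb fields of two equal positive point charges and confirms that the field vanishes at the midpoint, the neutral point:

```python
import numpy as np

K = 8.99e9                                # Coulomb constant, N·m²/C²
Q = 1e-9                                  # each charge, in coulombs
POSITIONS = [(-0.5, 0.0), (0.5, 0.0)]     # two equal positive charges on the x-axis, metres

def field(x, y):
    """Total electric field at (x, y) by superposition of the two point-charge fields."""
    e = np.zeros(2)
    for cx, cy in POSITIONS:
        r = np.array([x - cx, y - cy])          # vector from the charge to the field point
        e += K * Q * r / np.linalg.norm(r)**3   # magnitude kQ/r², directed along r
    return e

print(field(0.0, 0.0))   # ~[0, 0]: the neutral point midway between the like charges
print(field(0.0, 0.3))   # non-zero, directed away from the line joining the charges
```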
Field between two oppositely charged parallel plates
In between the plates the field is uniform except near the ends.
|
{"url":"http://www.saburchill.com/physics/chapters/0029.html","timestamp":"2014-04-19T17:11:56Z","content_type":null,"content_length":"14218","record_id":"<urn:uuid:873ea16e-0e73-4036-9ce4-85e1c8304b9c>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00426-ip-10-147-4-33.ec2.internal.warc.gz"}
|
MAC 1140 Self-assessment Exam Sanchez ... free ebook download
Size: 200 KB
Pages: n/a
Date: 2011-07-29
Related Documents
Size: 482 KB
Pages: n/a
Date: 2011-04-05
Directions: Show all work neatly in the space provided for each question. PENCIL ONLY a EMBED Equation. 3 EMBED Equation. 3 b 8 2. Solve: a EMBED.
faculty.mdc.edu/.../final exam self-assessments/final self-assessment..answers.doc
Size: 316 KB
Pages: n/a
Date: 2011-10-30
Exponential and Logarithmic functions Problem Answer 1. Give the number e to the nearest 5 decimal places 1. 2. 71828 2. Write the logarithmic equation for EMBED Equation.
faculty.mdc.edu/.../self-assessment/self assessment c-answers..and..
Size: 1.6 MB
Pages: n/a
Date: 2011-11-19
Math112vs. Math213 100points 1. Theanswersare: a 10points Z2xexdx 2xex 2ex C b 10points Zdxx2 1 1 2Zdxx 1 1 2Zdxx 1 1 2lnjx 1j 1 2lnjx 1j C 2. 10points dx x C. Since y 0 0 C C 8 andthereforey.
Size: 291 KB
Pages: n/a
Date: 2012-07-05
Problem Answer: a Give exact answer in simplest form b Round answer to the nearest thousandth 2. Find the midpoint of the line segment connecting.
faculty.mdc.edu/.../10. final exam self-assessments/final self..answers.doc
Size: 218 KB
Pages: n/a
Date: 2011-12-22
Polynomials and rational functions 1. For the polynomial function EMBED Equation. 3 y-intercept: __108____ d Local maximum points SHAPE MERGEFORMAT Use synthetic division.
faculty.mdc.edu/.../..and../self-assessments..and../self-assessment a - answers..and..
Size: 228 KB
Pages: n/a
Date: 2013-12-13
Self-Assessment Final 3-answers Name: EMBED Equation. 3 , use the Remainder Theorem and synthetic division to find P 9. Show the work. 2. Consider the polynomial function.
faculty.mdc.edu/.../final exam self-assessments/final self-assessment 3 - answers.doc
Size: 392 KB
Pages: n/a
Date: 2012-07-05
EMBED Equation. 3 , use the Remainder Theorem and synthetic division to find P -11 2. Consider the polynomial function EMBED Equation. 3 a Use synthetic division to find the zeros.
faculty.mdc.edu/.../final exam self-assessments/final self-assessment 2 -answers.doc
Size: 201 KB
Pages: n/a
Date: 2011-12-23
Polynomials and Rational Expression 1. Consider a polynomial function y P x of degree 3 with leading coefficient -5. a x 3 is a root or zero of the polynomial function therefore P 3 _0________.
faculty.mdc.edu/.../..and../self-assessments..and../self-assessment c-answers..and..
Size: 172 KB
Pages: n/a
Date: 2012-01-03
1. According to the factor theorem if x – c is a factor of a polynomial function P x then P c is a zero. If c is a real zero then x c is an x-intercept. For the polynomial function EMBED Equation.
faculty.mdc.edu/.../..and../self-assessments..and../self-assessment b - answers..and..
Size: 168 KB
Pages: n/a
Date: 2012-06-30
Functions and relations Name: 1. Consider the graph of the mathematical relation given below: a Is this relation a function a _Yes_________ b Is this relation a one-to-one.
faculty.mdc.edu/.../..self-assessments/self-assessment f - answers..and..
Size: 176 KB
Pages: n/a
Date: 2012-06-20
Functions and relations Name: 1. Find the linear function y mx b a with slope EMBED Equation. 3 and y-intercept 5. a EMBED Equation. 3 _ 2. Find the equation of the circle.
faculty.mdc.edu/.../..self-assessments/self-assessment b-answers..and..
Size: 214 KB
Pages: n/a
Date: 2011-06-07
I. Classify each of the following conics or degenerate form of a conic section as parabola, ellipse or hyperbola. a 3x2 -2xy -3y2 4x -3y -2 0. Give answer to the nearest.
faculty.mdc.edu/.../self-assessments/self assessment a - answers..
Size: 1.5 MB
Date: 2012-12-15
3. 091 OCW Scholar Self-Assessment Exam Crystalline Materials SolutionKey Write your answers on these pages. State your assumptions and show calculat.
Size: 288 KB
Pages: n/a
Date: 2011-07-25
Hypothesis testing Self-assessment exam A Classical approach Name: 1. True-false test. T As n increases the student s t distribution approaches the standard normal distribution.
faculty.mdc.edu/.../self-assessment/.../self assessment..answers.doc
Size: 289 KB
Pages: n/a
Date: 2011-10-20
Hypothesis testing - p-value approach Name: 1. True-false test. 2. Complete a According to the central limit theorem, for any population the distribution of sample means.
faculty.mdc.edu/.../self-assessment/.../self -assessment b - answers..
Size: 164 KB
Pages: n/a
Date: 2011-11-10
NAME: 1. If w EMBED Equation. 2 , find the gradient of w Solution: EMBED Equation. 3 2. Find the divergence of the vector field determined in problem. Evaluate the divergence.
faculty.mdc.edu/.../self-assessment/self-assessment exam..answers.doc
Size: 216 KB
Date: 2013-01-15
Visit - prepcast. com for more Exam Resources Page 1 12 by OSP International LLC. All rights reserved. Questionsfor Self- Assessment Session 11 Answer the following questions.
Size: 216 KB
Date: 2013-01-02
Visit - prepcast. com for more Exam Resources Page 1 12 by OSP International LLC. All rights reserved. Questionsfor Self- Assessment Session04 Answer the following questions.
Size: 215 KB
Date: 2012-12-30
Visit - prepcast. com for more Exam Resources Page 1 12 by OSP International LLC. All rights reserved. Questionsfor Self- Assessment Session06 Answer the following questions.
Size: 220 KB
Date: 2013-01-15
Visit - prepcas t. com for more Exam Resources Page 1 12 by OSP International LLC. All rights reserved. Questions Session02 Answer the following questions after studying.
|
{"url":"http://ebookbrowsee.net/self-assessment-a-answers-sequences-and-the-binomial-expansion-doc-d154130122","timestamp":"2014-04-20T06:58:28Z","content_type":null,"content_length":"56802","record_id":"<urn:uuid:7f48c237-db12-418f-9afa-375b6623ee22>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00483-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Items tagged with equation
I'm trying to solve this trigonometric equation:
e := 4·cos(t)+2·cos(2·t)=0. In the interval 0..2·Pi
However, when I try:
solve([e, 0 <= t <= 2 Pi], t, AllSolutions, explicit)
It won't give me a straight answer.
I have also given Student:-Calculus1:-Roots a try:
Student:-Calculus1:-Roots(e, t=0..2·Pi)
But it will only give me an answer when I use the numeric...
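For reference (this note is not part of the original post), the equation does have a simple closed form, which is what an exact solver should return. Using the identity cos(2·t) = 2·cos(t)^2 - 1, the equation 4·cos(t) + 2·cos(2·t) = 0 becomes 2·cos(t)^2 + 2·cos(t) - 1 = 0, so cos(t) = (-1 ± sqrt(3))/2. Only (-1 + sqrt(3))/2 ≈ 0.366 lies in [-1, 1], giving the two solutions t = arccos((sqrt(3) - 1)/2) ≈ 1.196 and t = 2·Pi - arccos((sqrt(3) - 1)/2) ≈ 5.087 in the interval 0..2·Pi.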
|
{"url":"http://www.mapleprimes.com/tags/equation?page=9","timestamp":"2014-04-16T16:08:14Z","content_type":null,"content_length":"95781","record_id":"<urn:uuid:fa5450f6-8acd-496b-afd9-dffbc51dd08e>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00203-ip-10-147-4-33.ec2.internal.warc.gz"}
|
[SciPy-user] linsolve.factorized was: Re: Using umfpack to calculate a incomplete LU factorisation (ILU)
Neilen Marais nmarais@sun.ac...
Thu Mar 22 09:34:12 CDT 2007
Hi Robert!
On Thu, 08 Mar 2007 18:10:30 +0100, Robert Cimrman wrote:
> Robert Cimrman wrote:
> Well, I did it since I am going to need this, too :-)
> In [3]:scipy.linsolve.factorized?
> ...
> Definition: scipy.linsolve.factorized(A)
> Docstring:
> Return a function for solving a linear system, with A pre-factorized.
> Example:
> solve = factorized( A ) # Makes LU decomposition.
> x1 = solve( rhs1 ) # Uses the LU factors.
> x2 = solve( rhs2 ) # Uses again the LU factors.
> This uses UMFPACK if available.
This is a useful improvement, thanks. But why not just extend
linsolve.splu to use umfpack so we can present a consistent interface? The
essential difference between factorized and splu is that you get to
explicitly control the storage of the LU factorisation and get some
additional info (i.e. the number of nonzeros), whereas factorised only
gives you a solve function. The actual library used to do the sparse LU is
just an implementation detail that should be abstracted wherever possible, no?
If nobody complains about the idea I'm willing to implement it.
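[A minimal sketch of the two interfaces being compared, written against the current scipy.sparse.linalg names rather than the old scipy.linsolve module; details of the 2007 API discussed above may have differed:]

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

A = sp.csc_matrix(np.array([[4.0, 1.0], [1.0, 3.0]]))
b = np.array([1.0, 2.0])

# factorized: hides the factor storage and returns only a solve function
solve = spla.factorized(A)      # LU decomposition happens here
x1 = solve(b)                   # reuses the stored factors

# splu: exposes the factorisation object itself (e.g. lu.nnz, lu.perm_c)
lu = spla.splu(A)
x2 = lu.solve(b)

print(np.allclose(x1, x2))      # True
```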
> cheers,
> r.
you know its kind of tragic
we live in the new world
but we've lost the magic
-- Battery 9 (www.battery9.co.za)
More information about the SciPy-user mailing list
|
{"url":"http://mail.scipy.org/pipermail/scipy-user/2007-March/011441.html","timestamp":"2014-04-20T03:37:49Z","content_type":null,"content_length":"4459","record_id":"<urn:uuid:6244f0fb-7eec-40f7-a97d-c9f3ee9f5def>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00148-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Test Credit Card Numbers | Credit Card Validation | Money Blue Book
Test Credit Card Numbers With Luhn Credit Card Validation
Published 7/11/08 (Modified 6/17/11)
By MoneyBlueBook
What Do the Credit Card Numbers Mean, and How Are Valid Credit Card Accounts Generated?
Have you ever wondered how credit card companies generate all those account numbers that appear on the face of the credit cards you carry around in your wallet? At first glance the numbers, while
neatly arranged, appear to be completely random. But would it surprise you to know that there is indeed meaning and actual mathematical methodology to the way the numbers are sequenced? The process
of generating real credit card numbers and validating them based on a proven mathematical formula is not only intriguing on an intellectual level and a hacker's dream, the carefully calculated way
the numbers are ordered is actually quite beautiful and elegant when you come to understand how it works.
Before I get down to explaining the anatomy of credit card numbers and discussing how credit card numbers are generated, I think it's prudent to remind everyone the intent of this article. The goal
of this blog post is not to encourage or get people thinking about how to go out and create fake credit card numbers on their own for improper means. The purpose is to shed some light on the science
behind the mathematical sequencing technology of valid credit card numbers and offer some insight into something that many of us frequently see and use everyday, but oftentimes don't pay much
attention to.
Please take in the information provided for purely academic and entertainment reasons. I'm not trying to encourage anyone to create fake credit card numbers and get themselves in trouble with the
law. For anyone even thinking about engaging in fake credit card number hacking, keep this in mind - using mathematically generated credit card numbers to purchase products over the Internet or in
real life is not only unethical and highly illegal, it's also not yet technologically possible (yet), based on the sheer probability of long shot odds of 1 in trillions. After reading everything I'm
about to say carefully, you'll also realize that there is no realistic way to generate actual working credit card numbers that could be used for anything but entertainment reasons. The math and
science behind generating authentic credit card numbers are only good for validation purposes and not sufficient for creating workable numbers as several highly encrypted numerical components are
still needed. So, with that obligatory disclaimer out of the way, here is a short guide on how anyone can generate and verify the authenticity of any credit card number.
Basic Background About Credit Card Numbers and How They Work
Rather than ask you to take out a credit card out of your wallet to examine it, I've provided a picture of a prototypical card - in this case, it's a Visa credit card. While different card types
offer different lengths of numerical digits, most major credit card issuers popular in the United States have 16 primary numbers on the front face of the card. Visa, MasterCard, and Discover cards
all have 16 digits. American Express is the only major credit card issuer in the U.S. with one less number - at 15 digits. Regardless of the length of numbers, their numerical sequencing is still
guided by the same Luhn validation formula, the mathematical check sum equation that makes all valid credit card numbers error free.
As you can see from the picture of the Visa card above, the first 6 digits of the credit card number are known as the issuer identification number (IIN) or bank identification number (BIN). These
first 6 numerical digits denote the credit card network and the banking institution the card is a member of. The issuer identifier number also incorporates the card type's special identifying
numerical prefix.
• All typical 16 digit Visa account credit card numbers start with a prefix of 4.
• All 16 digit MasterCard account numbers start with a prefix of 5.
• All 16 digit Discover account numbers start with a prefix of 6011.
• All 15 digit American Express credit card numbers start with a prefix of 37.
There is less randomization during this initial set of 6 digits as the numbers are determined purely by the card issuing source. Validation systems that want to go the extra mile in verifying
authenticity oftentimes scan this first numerical sequence to match the known bank and issuing location of the card with the provided customer billing address for further validation accuracy.
The lone digit at the very right end of the complete 15 or 16 digit credit card number sequence is known as the "check digit", which often is the final number that is computer generated to satisfy
the mathematical formulations of the Luhn check sum process. Meanwhile, in between the first 6 digits and the last single check digit is the actual personalized account number - the 8 or 9 digit
sequence given by the card issuer. For more basic background information about credit card numbers, check out this credit card features brochure for more useful knowledge about the embossed and
printed information found on your typical plastic credit card.
What's The Secret Behind The Luhn Algorithm, Also Known As The "Modulus 10" Or "Mod-10" Formula?
The Luhn Algorithm is the check sum formula used by payment verification systems and mathematicians to verify the sequential integrity of real credit card numbers. It's used to help bring order to
seemingly random numbers and used to prevent erroneous credit card numbers from being cleared for use. The Luhn Algorithm is not used for straight credit card number generation from scratch, but
rather utilized as a simple computational way to distinguish valid credit card numbers from random collections of numbers put together. The validation formula also works with most debit cards as well.
The Luhn formula was created and filed as a patent (now freely in the public domain) in 1954 by Hans Peter Luhn of IBM to detect numerical errors found in pre-existing and newly generated
identification numbers. Since then, its primary use has been in the area of check sum validation, made popular with its use to verify the validity of important sequences such as credit card numbers.
Currently, almost all credit card numbers issued today are generated and verified using the Luhn Algorithm or Modulus, Mod-10 Formula. Needless to say, if you come upon some existing credit card
numbers that fail the Luhn algorithm when put to the test, it is safe to assume that they are not valid or genuine numbers.
The one thing to keep in mind is that validity in terms of passing the Luhn test only means that it is mathematically valid for computational compliance purposes. It does not guarantee that the
credit card number sequence is indeed a working number that is backed up with a valid credit card account on the card issuer's end. It is entirely possible to artificially generate a
mathematically valid credit card number that passes the Luhn validation check, but still ultimately fails as a fake credit card number with no actual substance. The Luhn algorithm only validates the
15-16 digit credit card number and not the other critical components of a genuine working credit card account such as the expiration date and the commonly used Card Verification Value (CVV) and Card
Verification Code (CVC) numbers (used to prove physical possession of the debit or credit card).
The Nerdy Process Of Applying The Luhn Algorithm To The Creation and Validation Of Credit Card Number Sequences
For those who hate math or get scared when they encounter a bunch of scary looking mathematical formulas and numerically inspired descriptions, you are not alone. I personally hate math as an
academic subject and was rather terrible at it back in high school and college, but if you like visual, thinking puzzles like Sudoku, you'll like working with the Luhn Algorithm. It's pretty clever
and remarkably well put together. It's also pretty easy to explain.
1. First, you'll need to lay out all 15 or 16 numerical digits of the credit or debit card number. The Luhn Algorithm always starts from right to left, beginning with the rightmost digit on the
credit card face (the check digit). Starting with the check digit and moving left, double the value of every alternate digit. Non-doubled digits will remain the same. Remember that the check
digit is never doubled. For example, if the credit card is a 16 digit Visa card, the check digit would be the rightmost 16th digit. Thus you would double the value of the 15th, 13th, 11th, 9th
digits, and so on until all odd digits have been doubled. The even digits would be left the same.
2. For any digit that becomes a two digit number of 10 or more when doubled, add the two digits together. For example, the digit 5 when doubled will become 10, which turns into a 1 (when 1 and 0 are
added together). Likewise, the digit 9 when doubled will become 18, which becomes 9 (as 1 and 8 are added together). Obviously, 0 when doubled will remain 0.
3. Now, lay out the new sequence of numbers. The new doubled digits will replace the old digits. Non-doubled digits will remain the same. Thus, you should be able to come up with a new sequence of
15 or 16 numerical digits depending on card type.
4. Add up the new sequence of numbers together to get a sum total. If the combined tally is perfectly divisible by ten (ends in 0, like 60 for example), then the account number is mathematically
valid according to the Luhn formula. If not, the credit card number provided is not valid and thus fake or improperly generated.
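The four steps above translate almost line for line into code. Here is a short, self-contained Python sketch of the check, written from this article's description; it covers the validation math only, and the example numbers used are the made-up ones from later in the article:

```python
def luhn_total(number: str) -> int:
    """Sum of the Luhn-transformed digits, working right to left."""
    total = 0
    digits = [int(ch) for ch in number if ch.isdigit()]
    for position, digit in enumerate(reversed(digits), start=1):
        if position % 2 == 0:              # every second digit; the check digit is never doubled
            digit *= 2
            if digit > 9:                  # e.g. 14 -> 1 + 4 = 5
                digit -= 9
        total += digit
    return total

def luhn_valid(number: str) -> bool:
    return luhn_total(number) % 10 == 0

# The article's fictional Amex number sums to 57 (invalid); swapping the
# check digit 1 -> 4 brings the total to 60, which passes the check.
print(luhn_total("3759-876543-21001"), luhn_valid("3759-876543-21001"))  # 57 False
print(luhn_total("3759-876543-21004"), luhn_valid("3759-876543-21004"))  # 60 True
```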
An Example Of the Luhn Validation Technique In Action - Using Homemade Graphics
For the visual types like myself, let's use the American Express credit card on the right to better demonstrate the doubling and addition mathematics of the Luhn Algorithm. Follow the numbers and
you'll realize that it's not as difficult as it may first appear. It's actually very easy once you get the hang of it. You won't look at credit card numbers the same way ever again after you get a
good grip of it - I assure you. You'll find yourself testing credit card numbers for fun!
Ignoring the obvious Amex logo on the card, right off the bat it's clear the account number is that of an American Express number - denoted by the numerical prefix - "37". Now let's crunch the numbers
through the Luhn Algorithm using the following displayed Amex credit card number: 3759-876543-21001. It doesn't matter if the credit card number sequence has 15 numbers like the American Express or
16 numbers like Visa, MasterCard, or Discover, the Luhn validation check should be able to verify whether this card number is a mathematically authentic credit card number regardless. Follow the Luhn
steps from #1 to #4 below, starting with the rightmost check digit.
In this case, the total calculated sum was 57, which is not divisible by 10 (the added up sum does not end with zero). Thus the number fails the Luhn Algorithm validation check. According to the Luhn
test, this particular Amex credit card number is completely bogus and fake. The numbers were likely randomly slapped together. To make this particular set of numbers Luhn compliant and error free,
all we would have to do is change the all important "check digit" number from 1 to a 4, which would result in a total sum of 60, thereby becoming Luhn compliant.
If you want to test this mathematical theory out in real life, I recommend pulling out your own credit cards and spending a few seconds to run a quick Luhn screening on them just for your own
amusement and education. Pretty neat isn't it? If you want another credit card number to test on, try using the credit card number that is displayed on the cartoon "VIZA Card" [sic] that Bart Simpson
is holding up in the graphic at the top right of this article - the card is in the name of "Rod Flanders", and the credit card number is: 8525-4941-2525-4158. Tip: Just by looking at the prefix
numbers you probably should already be able to tell that the account number's completely random and fake.
Use The Luhn Formula To Validate Existing Accounts But Don't Attempt To Create and Use Fake Credit Card Numbers
Cracking and hacking the security codes found on credit cards is currently impossible. To calculate a workable 3 digit CVV2 security code, the algorithm requires a primary account number (PAN), the 4
digit expiration date, a special 3 digit service code, and a pair of DES keys. With such heavy encryption and billions to trillions of numerical possibilities, unless you have God-like mental
processing power and a fleet of super computers at your disposal, you won't be able to use brute force guessing attempts to crack the codes.
While it's good to use this type of information to education yourself on the inner workings of credit cards and mathematical validation theory, it's best to stay away from trying to further crack the
secret of credit card codes to come up with free workable account numbers. Don't use the Luhn Algorithm for anything else but personal entertainment and amusement. Please don't go around trying
to generate a list of fake credit card numbers on your own and trying to buy stuff with them. I know some of you out there may be tempted to try, but you'll just get yourself in trouble.
419 Responses to “Test Credit Card Numbers | Credit Card Validation | Money Blue Book”
1. Lettia says:
October 7, 2010 at 5:53 pm
Wow. I am amazed at some of these posts! They can't be serious! Anyway, I was hoping to be able to find out how to tell if a card is a bank card or a P card a Debt card an so on.
2. seriously says:
October 10, 2010 at 1:57 am
Seriously. Get a life everybody.
Tried this nifty thing with my visa and it worked, but my american express didn't. checked my math and everything...
3. Sandrine says:
October 10, 2010 at 9:39 am
Please i need just one valid visa credit card with full details and i will be very greatful to whomever helps me with one and i will reward that person with good.
Thanks and best regards
4. cuppycake143445254 says:
October 10, 2010 at 4:08 pm
it's around 4am here in our beautiful country, i should be sleeping between my 2 beautiful kids, but all the stupid people asking for complete credit card information have driven me laughin'
crazy! Raymond's intention of writing this blog was really great. I was asked by my mother to create a Paypal account for her using her CC, but it was denied, maybe because of incorrect mailing
address (as we transferred from the city down to province). I thought this article would help me have my mom's cc validated, but i found out that this article is more helpful than i thought
(though mommy's cc wasn't validated at all). And this article just became so much interesting upon reading all the crazy comments written by those effin' soy nuts!
To all the desperate freakin' fools out there, GO TO THE BANK NEAREST YOU AND PERSONALLY APPLY FOR A CREDIT CARD!
OH AND FYI! THE FBI PEOPLE WILL FLY ALL THE WAY FROM THE U.S. TO YOUR COUNTRY JUST TO HAVE YOU NAILED BEHIND BARS! NICE TRY PEEPZ!
RAYMOND, excellent article you have here buddy :)
5. Best says:
October 15, 2010 at 8:56 pm
I just wanna know how to get a credit card or create a credit card account online.
6. HAHAHAHAHAHA!!! says:
October 16, 2010 at 5:46 pm
Guts for you cuppycake!!! The FBI would not be able to fly to another country to arrest someone. They wouldn't get past customs because of Jurisdiction rights. They will have none in any other
country. That's why criminals in car chases( in USA) head for the border because the police cannot follow them there(to Mexico or something). If anyone could have jurisdiction to arrest in a
country they are not from is Interpol because they cover international laws. If the FBI try to make an arrest in another country they themselves will get arrested by the Local police. HAHAHA. And
to those dummies wanting credit card numbers. YOU WILL NEVER FIND ONE LEGALLY!!! NOONE WILL BE STUPID ENOUGH TO GIVE OUT THEIR OWN CREDIT CARD INFO TO RANDOM PEOPLE!!!!!!!!!!!!!!!!!!!
7. Hamid says:
October 20, 2010 at 11:42 am
I need a valid Visa card number with CVV2 and expiration date.
8. RAYMOND PLEASE HELP ME says:
October 21, 2010 at 10:18 pm
i want to know if there is a difference between debit cards and credit cards. and if there is, could you say what it is?
9. RAYMOND PLEASE HELP ME says:
October 21, 2010 at 10:20 pm
your answer would be extremely appreciated (for a project I'm working on).
10. Dave says:
November 9, 2010 at 6:05 am
I just came across your bloq today and i read about the Luhn Formula and was interested. I think I understand the basics behind it. I was wondering if the formula is the same for Mobile airtime
prepaid vouchers? Your post was very educative to me as I am computer engineering student and I do programming quite a lot with C++ and Java.
I kept laughing while going through the posts on the blog: from people begging you to those threatening to end their lives! Some people are sick. I couldn't help noting that if you obliged their
requests and gave them your own credit card numbers, you'd be bankrupt within one hour! I know statistics don't lie but I didn't like the statistics about Africa originating 75% of all the
inappropriate posts. Who are these bastards giving Africa a bad name?
I am from Kenya and I know that it is really stupid to expect someone to give a total stranger credit info or to generate a credit card number that really works (that means with a valid name,
verification code and expiry date). It can be possible - just like it is possible to be struck by lightning and yet billions of people go through their entire lifetime without ever getting to be
struck. Honestly, the time and energy required to hack into the credit card technology would be better spent getting a degree, securing a job and applying for a legit credit card! The sooner
people realize that in the their lives, the better for them. And the better for us all!
Thanks for educating me on the Luhn Formula. I will try to code a simple program for checking if a sample card number is compliant to the formula. it should be really simple.
11. Dave says:
November 9, 2010 at 6:14 am
Someone asked for the difference between Credit cards and Debit cards. I am not an expert but I can give my take on the issue.
A Debit card is a card issued by a bank or agency with which you have deposits. The card allows you to withdraw cash or buy items ONLY with the money that you already have in your account. Of
course banks can arrange overdrafts for you, which you will pay interest for but the basic fact is that you spend what you already have.
A Credit card, as the name, suggests allows you to spend on credit. You are allowed to withdraw cash at ATM or buy items with money that you have borrowed from the issuing agency. At the end of
the month or three months etc, you get a statement indicating all the money you spent on your credit card and you have to refund the agency - with interest, of course.
Raymond, correct me if I am wrong. I would appreciate additional information especially on the credit card.
I have a Debit Card and I know about that!
12. marten says:
November 20, 2010 at 4:26 pm
this really opened my mind again, i knew it once but forgot it, there's just one thing i need to ask you, could you give me a few valid visa numbers, fake of course, not from real cards, i'd like
to examine them because i doon't think i entirely understand it and that would really help me, thank you.
many thanks for placing this.
13. Jodi Berndt says:
December 8, 2010 at 7:47 pm
What machine do company's use to imprint the credit card numbers on the card?
14. Adam K. says:
December 13, 2010 at 5:40 am
Thank you for the article about credit card numbers. I thought it was quite interesting. Amusingly, my debit card validates as a credit card number... anyways.
have a great day!
15. Annoymus says:
December 15, 2010 at 8:11 pm
Could someone give me a valid Visa card number along with a expiration date and Security Code? Not a card that someone actually uses in ever day life of course.. I just want something that'll
work for free trails.
16. lost says:
December 17, 2010 at 3:50 am
Nice article.
Btw, it's homer simpson holding up the credit card :P
17. christof says:
December 23, 2010 at 2:24 pm
want a CC????????
18. Vinay Vyas (The Producer) says:
January 1, 2011 at 2:20 pm
Now look, Raymond made this article for our own info not for actual use by stupid people. You know there could be many FBI agents reading this and could arrest you? It is a federal crime to
misuse a credit card and could lead to jail. I myself got arrested because I used an uzi on a police car... Thats a different story though. Stop the idiopathy guys!
19. Evalyn Bede says:
February 13, 2011 at 9:33 am
thanks for great post. I been working with e-gold long time ago, now starting with liberty reserve and investing in hyips.
20. MoneyNing says:
October 7, 2012 at 11:23 am
I never knew this. I am not sure how or why I will ever need to know about the Luhn's credit card validation test but it is nice to know. This was a well written article with a great explanation.
Thank you so much for writing it. I greatly appreciate you sharing it with me.
|
{"url":"http://www.moneybluebook.com/how-to-create-and-generate-valid-credit-card-numbers/","timestamp":"2014-04-18T13:36:00Z","content_type":null,"content_length":"78597","record_id":"<urn:uuid:6f676aa0-e643-48ce-8040-389e5c94b660>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00612-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Posts about quantum technologies on Azimuth
Network Theory I
2 March, 2014
Here’s a video of a talk I gave last Tuesday—part of a series. You can see the slides here:
• Network Theory I: electrical circuits and signal-flow graphs.
Click on items in blue, or pictures, for more information.
One reason I’m glad I gave this talk is because afterwards Jamie Vicary pointed out some very interesting consequences of the relations among signal-flow diagrams listed in my talk. It turns out they
imply equations familiar from the theory of complementarity in categorical quantum mechanics!
This is the kind of mathematical surprise that makes life worthwhile for me. It seemed utterly shocking at first, but I think I’ve figured out why it happens. Now is not the time to explain… but I’ll
have to do it soon, both here and in the paper that Jason Erbele and I are writing about control theory.
For now, besides the slides, the best place to read more about this program is here:
• Brendan Fong, A compositional approach to control theory.
The Elitzur–Vaidman Bomb-Testing Method
24 August, 2013
Quantum mechanics forces us to refine our attitude to counterfactual conditionals: questions about what would have happened if we had done something, even though we didn’t.
“What would the position of the particle be if I’d measured that… when actually I measured its momentum?” Here you’ll usually get no definite answer.
But sometimes you can use quantum mechanics to find out what would have happened if you’d done something… when classically it seems impossible!
Suppose you have a bunch of bombs. Some have a sensor that will absorb a photon you shine on it, and make the bomb explode! Others have a broken sensor, that won’t interact with the photon at all.
Can you choose some working bombs? You can tell if a bomb works by shining a photon on it. But if it works, it blows up—and then it doesn’t work anymore!
So, it sounds impossible. But with quantum mechanics you can do it. You can find some bombs that would have exploded if you had shone photons at them!
Here’s how:
Put a light that emits a single photon at A. Have the photon hit the half-silvered mirror at lower left, so it has a 50% chance of going through to the right, and a 50% chance of reflecting and going
up. But in quantum mechanics, it sort of does both!
Put a bomb at B. Recombine the photon’s paths using two more mirrors. Have the two paths meet at a second half-silvered mirror at upper right. You can make it so that if the bomb doesn’t work, the
photon interferes with itself and definitely goes to C, not D.
But if the bomb works, it absorbs the photon and explodes unless the photon takes the top route… in which case, when it hits the second half-silvered mirror, it has a 50% chance of going to C and a
50% chance of going to D.
• If the bomb doesn’t work, the photon has a 100% chance of going to C.
• If the bomb works, there’s a 50% chance that it absorbs the photon and explodes. There’s also a 50% chance that the bomb does not explode—and then the photon is equally likely to go to either C or
D. So, the photon has a 25% chance of reaching C and a 25% chance of reaching D.
So: if you see a photon at D, you know you have a working bomb… but the bomb has not exploded!
For each working bomb there’s:
• a 50% chance that it explodes,
• a 25% chance that it doesn’t explode but you can’t tell if it works,
• a 25% chance that it doesn’t explode but you can tell that it works.
This is the Elitzur–Vaidman bomb-testing method. It was invented by Avshalom Elitzur and Lev Vaidman in 1993. One year later, physicists actually did an experiment to show this idea works… but alas,
not using actual bombs!
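If you want to check those percentages yourself, here is a minimal NumPy sketch of the two-path amplitudes (a Mach–Zehnder-style calculation; the 50/50 beam-splitter matrix below is one standard convention, and the mirror phases are dropped since a phase common to both arms does not affect the probabilities):

```python
import numpy as np

# One common convention for a lossless 50/50 beam splitter acting on the
# two path amplitudes; any equivalent unitary gives the same probabilities.
B = np.array([[1, 1j],
              [1j, 1]], dtype=complex) / np.sqrt(2)

photon = np.array([1, 0], dtype=complex)        # photon enters along path 0

# Dud bomb: nothing measures the photon, so the two paths interfere.
out = B @ B @ photon
# All the probability lands in output port 1, so call that port "C".
print("dud:     P(C) =", abs(out[1])**2, " P(D) =", abs(out[0])**2)   # 1.0, 0.0

# Working bomb in path 1: it acts as a which-path measurement.
mid = B @ photon                                # amplitudes after the first splitter
p_boom = abs(mid[1])**2                         # bomb absorbs the photon: 0.5
survivor = np.array([mid[0], 0], dtype=complex) # otherwise collapse onto path 0
survivor /= np.linalg.norm(survivor)
out_ok = B @ survivor
print("working: P(explode) =", p_boom,
      " P(C) =", (1 - p_boom) * abs(out_ok[1])**2,    # 0.25
      " P(D) =", (1 - p_boom) * abs(out_ok[0])**2)    # 0.25
```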
In 1996, Kwiat showed that using more clever methods, you can reduce the percentage of wasted working bombs as close to zero as you like. And pushing the idea even further, Graeme Mitchison and
Richard Jozsa showed in 1999 that you can get a quantum computer to do a calculation for you without even turning it on!
This sounds amazing, but it’s really no more amazing than the bomb-testing method I’ve already described.
For details, read these:
• A. Elitzur and L. Vaidman, Quantum mechanical interaction-free measurements, Found. Phys. 23 (1993), 987–997.
• Paul G. Kwiat, H. Weinfurter, T. Herzog, A. Zeilinger, and M. Kasevich, Experimental realization of “interaction-free” measurements.
• Paul G. Kwiat, Interaction-free measurements.
• Graeme Mitchison and Richard Jozsa, Counterfactual computation, Proc. Roy. Soc. Lond. A457 (2001), 1175–1194.
The picture is from the Wikipedia article, which also has other references:
• Elitzur–Vaidman bomb tester, Wikipedia.
Bas Spitters pointed out this category-theoretic analysis of the issue:
• Robert Furber and Bart Jacobs, Towards a categorical account of conditional probability.
Centre for Quantum Mathematics and Computation
6 March, 2013
This fall they’re opening a new Centre for Quantum Mathematics and Computation at Oxford University. They’ll be working on diagrammatic methods for topology and quantum theory, quantum gravity, and
computation. You’ll understand what this means if you know the work of the people involved:
• Samson Abramsky
• Bob Coecke
• Christopher Douglas
• Kobi Kremnitzer
• Steve Simon
• Ulrike Tillman
• Jamie Vicary
All these people are already at Oxford, so you may wonder what’s new about this center. I’m not completely sure, but they’ve gotten money from EPSRC (roughly speaking, the British NSF), and they’re
already hiring a postdoc. Applications are due on March 11, so hurry up if you’re interested!
They’re having a conference October 1st to 4th to start things off. I’ll be speaking there, and they tell me that Steve Awodey, Alexander Beilinson, Lucien Hardy, Martin Hyland, Chris Isham, Dana
Scott, and Anton Zeilinger have been invited too.
I’m really looking forward to seeing Chris Isham, since he’s one of the most honest and critical thinkers about quantum gravity and the big difficulties we have in understanding this subject—and he
has trouble taking airplane flights, so it’s been a long time since I’ve seen him. It’ll also be great to see all the other people I know, and meet the ones I don’t.
For example, back in the 1990′s, I used to spend summers in Cambridge talking about n-categories with Martin Hyland and his students Eugenia Cheng, Tom Leinster and Aaron Lauda (who had been an
undergraduate at U.C. Riverside). And more recently I've been talking a lot with Jamie Vicary about categories and quantum computation—since he was in Singapore some of the time while I was there.
(Indeed, I’m going back there this summer, and so will he.)
I’m not as big on n-categories and quantum gravity as I used to be, but I’m still interested in the foundations of quantum theory and how it’s connected to computation, so I think I can give a talk
with some new ideas in it.
Quantum Computing Position at U.C. Riverside
6 October, 2012
Here at U.C. Riverside, Alexander Korotkov wants to hire a postdoc in quantum measurement and quantum computing with superconducting qubits.
He writes:
The work will be mainly related to quantum feedback of superconducting qubits. The first experiment was published in Nature today. (Some News & Views discussion can be seen here.) The theory is
still rather simple and needs improvement.
Time Crystals
26 September, 2012
When water freezes and forms a crystal, it creates a periodic pattern in space. Could there be something that crystallizes to form a periodic pattern in time? Frank Wilczek, who won the Nobel Prize
for helping explain why quarks and gluons trapped inside a proton or neutron act like freely moving particles when you examine them very close up, dreamt up this idea and called it a time crystal:
• Frank Wilczek, Classical time crystals.
• Frank Wilczek, Quantum time crystals.
‘Time crystals’ sound like something from Greg Egan’s Orthogonal trilogy, set in a universe where there’s no fundamental distinction between time and space. But Wilczek wanted to realize these in our
Of course, it’s easy to make a system that behaves in an approximately periodic way while it slowly runs down: that’s how a clock works: tick tock, tick tock, tick tock… But a system that keeps
‘ticking away’ without using up any resource or running down would be a strange new thing. There’s no telling what weird stuff we might do with it.
It’s also interesting because physicists love symmetry. In ordinary physics there are two very important symmetries: spatial translation symmetry, and time translation symmetry. Spatial translation
symmetry says that if you move an experiment any amount to the left or right, it works the same way. Time translation symmetry says that if you do an experiment any amount of time earlier or later,
it works the same way.
Crystals are fascinating because they ‘spontaneously break’ spatial translation symmetry. Take a liquid, cool it until it freezes, and it forms a crystal which does not look the same if you move it
any amount to the right or left. It only looks the same if you move it certain discrete steps to the right or left!
The idea of a ‘time crystal’ is that it’s a system that spontaneously breaks time translation symmetry.
Given how much physicists have studied time translation symmetry and spontaneous symmetry breaking, it’s sort of shocking that nobody before 2012 wrote about this possibility. Or maybe someone
did—but I haven’t heard about it.
It takes real creativity to think of an idea so radical yet so simple. But Wilczek is famously creative. For example, he came up with anyons: a new kind of particle, neither boson nor fermion, that
lives in a universe where space is 2-dimensional. And now we can make those in the lab.
Unfortunately, Wilczek didn’t know how to make a time crystal. But now a team including Xiang Zhang (seated) and Tongcang Li (standing) at U.C. Berkeley have a plan for how to do it.
Actually they propose a ring-shaped system that’s periodic in time and also in space, as shown in the picture. They call it a space-time crystal:
Here we propose a space-time crystal of trapped ions and a method to realize it experimentally by confining ions in a ring-shaped trapping potential with a static magnetic field. The ions
spontaneously form a spatial ring crystal due to Coulomb repulsion. This ion crystal can rotate persistently at the lowest quantum energy state in magnetic fields with fractional fluxes. The
persistent rotation of trapped ions produces the temporal order, leading to the formation of a space-time crystal. We show that these space-time crystals are robust for direct experimental
observation. The proposed space-time crystals of trapped ions provide a new dimension for exploring many-body physics and emerging properties of matter.
The new paper is here:
• Tongcang Li, Zhe-Xuan Gong, Zhang-Qi Yin, H. T. Quan, Xiaobo Yin, Peng Zhang, L.-M. Duan and Xiang Zhang, Space-time crystals of trapped ions.
Alas, the press release put out by Lawrence Berkeley National Laboratory is very misleading. It describes the space-time crystal as a clock that
will theoretically persist even after the rest of our universe reaches entropy, thermodynamic equilibrium or “heat-death”.
First of all, ‘reaching entropy’ doesn’t mean anything. More importantly, as time goes by and things fall apart, this space-time crystal, assuming anyone can actually make it, will also fall apart.
I know what the person talking to the reporter was trying to say: the cool thing about this setup is that it gives a system that’s truly time-periodic, not gradually using up some resource and
running down like an ordinary clock. But nonphysicist readers, seeing an article entitled ‘A Clock that Will Last Forever’, may be fooled into thinking this setup is, umm, a clock that will last
forever. It’s not.
If this setup were the whole universe, it might keep ticking away forever. But in fact it’s just a small, carefully crafted portion of our universe, and it interacts with the rest of our universe, so
it will gradually fall apart when everything else does… or in fact much sooner: as soon as the scientists running it turn off the experiment.
Classifying space-time crystals
What could we do with space-time crystals? It’s way too early to tell, at least for me. But since I’m a mathematician, I’d be happy to classify them. Over on Google+, William Rutiser asked if there
are 4d analogs of the 3d crystallographic groups. And the answer is yes! Mathematicians with too much time on their hands have classified the analogues of crystallographic groups in 4 dimensions:
• Space group: classification in small dimensions, Wikipedia.
In general these groups are called space groups (see the article for the definition). In 1 dimension there are just two, namely the symmetry groups of this:
— o — o — o — o — o — o —
and this:
— > — > — > — > — > — > —
In 2 dimensions there are 17 and they’re called wallpaper groups. In 3 dimensions there are 230 and they are called crystallographic groups. In 4 dimensions there are 4894, in 5 dimensions there are…
hey, Wikipedia leaves this space blank in their table!… and in 6 dimensions there are 28,934,974.
So, there is in principle quite a large subject to study here, if people can figure out how to build a variety of space-time crystals.
There’s already book on this, if you’re interested:
• Harold Brown, Rolf Bulow, Joachim Neubuser, Hans Wondratschek and Hans Zassenhaus, Crystallographic Groups of Four-Dimensional Space, Wiley Monographs in Crystallography, 1978.
Quantizing Electrical Circuits
2 February, 2012
As you may know, there’s a wonderful and famous analogy between classical mechanics and electrical circuit theory. I explained it back in “week288″, so I won’t repeat that story now. If you don’t
know what I’m talking about, take a look!
This analogy opens up the possibility of quantizing electrical circuits by straightforwardly copying the way we quantize classical mechanics problems. I’d often wondered if this would be useful.
It is, and people have done it:
• Michel H. Devoret, Quantum fluctuations in electrical circuits.
Michel Devoret, Rob Schoelkopf and others call this idea quantronics: the study of mesoscopic electronic effects in which collective degrees of freedom like currents and voltages behave quantum
I just learned about this from a talk by Sean Barrett here in Coogee. There are lots of cool applications, but right now I’m mainly interested in how this extends the set of analogies between
different physical theories.
One interesting thing is how they quantize circuits with resistors. Over in classical mechanics, this corresponds to systems with friction. These systems, called ‘dissipative’ systems, don’t have a
conserved energy. More precisely, energy leaks out of the system under consideration and gets transferred to the environment in the form of heat. It’s hard to quantize systems where energy isn’t
conserved, so people in quantronics model resistors as infinite chains of inductors and capacitors: see the ‘LC ladder circuit’ on page 15 of Devoret’s notes. This idea is also the basis of the
Caldeira–Leggett model of a particle coupled to a heat bath made of harmonic oscillators: it amounts to including the environment as part of the system being studied.
A Quantum Hammersley–Clifford Theorem
29 January, 2012
I’m at this workshop:
• Sydney Quantum Information Theory Workshop: Coogee 2012, 30 January – 2 February 2012, Coogee Bay Hotel, Coogee, Sydney, organized by Stephen Bartlett, Gavin Brennen, Andrew Doherty and Tom Stace.
Right now David Poulin is speaking about a quantum version of the Hammersley–Clifford theorem, which is a theorem about Markov networks. Let me quickly say a bit about what he proved! This will be a
bit rough, since I’m doing it live…
The mutual information between two random variables is
$I(A:B) = S(A) + S(B) - S(A,B)$
The conditional mutual information between random variables $A$ and $B$ given $C$ is
$I(A:B|C) = \sum_c p(C=c) I(A:B|C=c)$
It’s the average amount of information about $B$ learned by measuring $A$ when you already knew $C.$
All this works for both classical (Shannon) and quantum (von Neumann) entropy. So, when we say ‘random variable’ above, we
could mean it in the traditional classical sense or in the quantum sense.
If $I(A:B|C) = 0$ then $A, C, B$ has the following Markov property: if you know $C,$ learning $A$ tells you nothing new about $B.$ In condensed matter physics, say a spin system, we get (quantum)
random variables from measuring what’s going on in regions, and we have short range entanglement if $I(A:B|C) = 0$ when $C$ corresponds to some sufficiently thick region separating the regions $A$
and $B.$ We’ll get this in any Gibbs state of a spin chain with a local Hamiltonian.
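As a tiny classical sanity check of this shielding idea (my example, not from the talk): for a Markov chain A → C → B, conditioning on C wipes out all correlation between A and B, and one can verify $I(A:B|C) = 0$ numerically using the identity $I(A:B|C) = S(A,C) + S(B,C) - S(C) - S(A,B,C)$:

```python
import itertools
import numpy as np

def H(p):
    """Shannon entropy (bits) of a probability table, flattened."""
    p = np.asarray(p, dtype=float).ravel()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

# Toy Markov chain A -> C -> B: A is a fair bit, C copies A with a 10% flip,
# B copies C with a 10% flip.  Index order of the table is (a, c, b).
eps = 0.1
p = np.zeros((2, 2, 2))
for a, c, b in itertools.product(range(2), repeat=3):
    p[a, c, b] = 0.5 * (eps if c != a else 1 - eps) * (eps if b != c else 1 - eps)

I_AB_given_C = (H(p.sum(axis=2)) + H(p.sum(axis=0))
                - H(p.sum(axis=(0, 2))) - H(p))
print(I_AB_given_C)   # ~ 0: once you know C, A tells you nothing new about B
```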
A Markov network is a graph with random variables at vertices (and thus subsets of vertices) such that $I(A:B|C) = 0$ whenever $C$ is a subset of vertices that completely ‘shields’ the subset $A$
from the subset $B$: any path from $A$ to $B$ goes through a vertex in $C.$
The Hammersley–Clifford theorem says that in the classical case we can get any Markov network from the Gibbs state
$\exp(-\beta H)$
of a local Hamiltonian $H,$ and vice versa. Here a Hamiltonian is local if it is a sum of terms, one depending on the degrees of freedom in each clique in the graph:
$H = \sum_{C \in \mathrm{cliques}} h_C$
Hayden, Jozsa, Petz and Winter gave a quantum generalization of one direction of this result to graphs that are just ‘chains’: a line of vertices, each joined only to its neighbors.
Namely: for such graphs, any quantum Markov network is the Gibbs state of some local Hamiltonian. Now Poulin has shown the same for all graphs. But the converse is, in general, false. If the
different terms $h_C$ in a local Hamiltonian all commute, its Gibbs state will have the Markov property. But otherwise, it may not.
For some related material, see:
• David Poulin, Quantum graphical models and belief propagation.
Probabilities Versus Amplitudes
5 December, 2011
Here are the slides of the talk I’m giving at the CQT Annual Symposium on Wednesday afternoon, which is Tuesday morning for a lot of you. If you catch mistakes, I’d love to hear about them before then:
• Probabilities versus amplitudes.
Abstract: Some ideas from quantum theory are just beginning to percolate back to classical probability theory. For example, there is a widely used and successful theory of “chemical reaction
networks”, which describes the interactions of molecules in a stochastic rather than quantum way. If we look at it from the perspective of quantum theory, this turns out to involve creation and
annihilation operators, coherent states and other well-known ideas—but with a few big differences. The stochastic analogue of quantum field theory is also used in population biology, and here the
connection is well-known. But what does it mean to treat wolves as fermions or bosons?
Liquid Light
28 November, 2011
Elisabeth Giacobino works at the Ecole Normale Supérieure in Paris. Last week she gave a talk at the Centre for Quantum Technologies. It was about ‘polariton condensates’. You can see a video of her
talk here.
What’s a polariton? It’s a strange particle: a blend of matter and light. Polaritons are mostly made of light… with just enough matter mixed in so they can form a liquid! This liquid can form eddies
just like water. Giacobino and her team of scientists have actually gotten pictures:
Physicists call this liquid a ‘polariton condensate’, but normal people may better appreciate how wonderful it is if we call it liquid light. That’s not 100% accurate, but it’s close—you’ll see what
I mean in a minute.
Here’s a picture of Elisabeth Giacobino (at right) and her coworkers in 2010—not exactly the same team who is working on liquid light, but the best I can find:
How to make liquid light
How do you make liquid light?
First, take a thin film of some semiconductor like gallium arsenide. It’s full of electrons roaming around, so imagine a sea of electrons, like water. If you knock out an electron with enough energy,
you’ll get a ‘hole’ which can move around like a particle of its own. Yes, the absence of a thing can act like a thing. Imagine an air bubble in the sea.
All this so far is standard stuff. But now for something more tricky: if you knock an electron just a little, it won’t go far from the hole it left behind. They’ll be attracted to each other, so
they’ll orbit each other!
What you’ve got now is like a hydrogen atom—but instead of an electron and a proton, it’s made from an electron and a hole! It’s called an exciton. In Giacobino’s experiments, the excitons are 200
times as big as hydrogen atoms.
Excitons are exciting, but not exciting enough for us. So next, put a mirror on each side of your thin film. Now light can bounce back and forth. The light will interact with the excitons. If you do
it right, this lets a particle of light—called a photon—blend with an exciton and form a new particle called polariton.
How does a photon ‘blend’ with an exciton? Umm, err… this involves quantum mechanics. In quantum mechanics you can take two possible situations and add them and get a new one, a kind of ‘blend’
called a ‘superposition’. ‘Schrödinger’s cat’ is what you get when you blend a live cat and a dead cat. People like to argue about why we don’t see half-live, half-dead cats. But never mind: we can
see a blend of a photon and an exciton! Giacobino and her coworkers have done just that.
The polaritons they create are mostly light, with just a teeny bit of exciton blended in. Photons have no mass at all. So, perhaps it’s not surprising that their polaritons have a very small mass:
about 10^-5 times as heavy as an electron!
They don’t last very long: just about 4-10 picoseconds. A picosecond is a trillionth of a second, or 10^-12 seconds. After that they fall apart. However, this is long enough for polaritons to do lots
of interesting things.
For starters, polaritons interact with each other enough to form a liquid. But it’s not just any ordinary liquid: it’s often a superfluid, like very cold liquid helium. This means among other things,
that it has almost no viscosity.
So: it’s even better than liquid light: it’s superfluid light!
The flow of liquid light
What can you do with liquid light?
For starters, you can watch it flow around obstacles. Semiconductors have ‘defects’—little flaws in the crystal structure. These act as obstacles to the flow of polaritons. And Giacobino and her team
have seen the flow of polaritons around defects in the semiconductor:
The two pictures at left are two views of the polariton condensate flowing smoothly around a defect. In these pictures the condensate is a superfluid.
The two pictures in the middle show a different situation. Here the polariton condensate is viscous enough so that it forms a trail of eddies as it flows past the defect. Yes, eddies of light!
And the two pictures at right show yet another situation. In every fluid, we can have waves of pressure. This is called… ‘sound’. Yes, this is how ordinary sound works in air, or under water. But we
can also have sound in a polariton condensate!
That’s pretty cool: sound in liquid light! But wait. We haven’t gotten to the really cool part yet. Whenever you have a fluid moving past an obstacle faster than the speed of sound, you get a ‘shock
wave’: the obstacle leaves an expanding trail of sound in its wake, behind it, because the sound can’t catch up. That’s why jets flying faster than sound leave a sonic boom behind them.
And that’s what you’re seeing in the pictures at right. The polariton condensate is flowing past the defect faster than the speed of sound, which happens to be around 850,000 meters per second in
this experiment. We’re seeing the shock wave it makes. So, we’re seeing a sonic boom in liquid light!
It’s possible we’ll be able to use polariton condensates for interesting new technologies. Giacobino and her team are also considering using them to study Hawking radiation: the feeble glow that
black holes emit according to Hawking’s predictions. There aren’t black holes in polariton condensates, but it may be possible to create a similar kind of radiation. That would be really cool!
But to me, just being able to make a liquid consisting mostly of light, and study its properties, is already a triumph: just for the beauty of it.
Scary technical details
All the pictures of polariton condensates flowing around a defect came from here:
• A. Amo, S. Pigeon, D. Sanvitto, V. G. Sala, R. Hivet, I. Carusotto, F. Pisanello, G. Lemenager, R. Houdre, E. Giacobino, C. Ciuti, and A. Bramati, Hydrodynamic solitons in polariton superfluids.
and this is the paper to read for more details.
I tried to be comprehensible to ordinary folks, but there are a few more things I can’t resist saying.
First, there are actually many different kinds of polaritons. In general, polaritons are quasiparticles formed by the interaction of photons and matter. For example, in some crystals sound acts like
it’s made of particles, and these quasiparticles are called ‘phonons’. But sometimes phonons can interact with light to form quasiparticles—and these are called ‘phonon-polaritons’. I’ve only been
talking about ‘exciton-polaritons’.
If you know a bit about superfluids, you may be interested to hear that the wavy patterns show the phase of the order parameter ψ in the Landau-Ginzburg theory of superfluids.
If you know about quantum field theory, you may be interested to know that the Hamiltonian describing photon-exciton interactions involves terms roughly like
$\alpha a^\dagger a + \beta b^\dagger b + \gamma (a^\dagger b + b^\dagger a)$
where $a$ is the annihilation operator for photons, $b$ is the annihilation operator for excitons, the Greek letters are various constants, and the third term describes the interaction of photons and
excitons. We can simplify this Hamiltonian by defining new particles that are linear combinations of photons and excitons. It’s just like diagonalizing a matrix; we get something like
$\delta c^\dagger c + \epsilon d^\dagger d$
where $c$ and $d$ are certain linear combinations of $a$ and $b$. These act as annihilation operators for our new particles… and one of these new particles is the very light ‘polariton’ I’ve been
talking about!
Is Life Improbable?
31 May, 2011
Mine? Yes. And maybe you’ve wondered just how improbable your life is. But that’s not really the question today…
Here at the Centre for Quantum Technologies, Dagomir Kaszlikowski asked me to give a talk on this paper:
• John Baez, Is life improbable?, Foundations of Physics 19 (1989), 91-95.
This was the second paper I wrote, right after my undergraduate thesis. Nobody ever seemed to care about it, so it’s strange—but nice—to finally be giving a talk on it.
My paper does not try to settle the question its title asks. Rather, it tries to refute the argument here:
• Eugene P. Wigner, The probability of the existence of a self-reproducing unit, Symmetries and Reflections, Indiana University Press, Bloomington, 1967, pp. 200-208.
According Wigner, his argument
purports to show that, according to standard quantum mechanical theory, the probability is zero for the existence of self-reproducing states, i.e., organisms.
Given how famous Eugene Wigner is (he won a Nobel prize, after all) and how earth-shattering his result would be if true, it’s surprising how little criticism his paper has received. David Bohm
mentioned it approvingly in 1969. In 1974 Hubert Yockey cited it saying
for all physics has to offer, life should never have appeared and if it ever did it would soon die out.
As you’d expect, there are some websites mentioning Wigner’s argument as evidence that some supernatural phenomenon is required to keep life going. Wigner himself believed it was impossible to
formulate quantum theory in a fully consistent way without referring to consciousness. Since I don’t believe either of these claims, I think it’s good to understand the flaw in Wigner’s argument.
So, let me start by explaining his argument. Very roughly, it purports to show that if there are many more ways a chunk of matter can be ‘dead’ than ‘living’, the chance is zero that we can choose
some definition of ‘living’ and a suitable ‘nutrient’ state such that every ‘living’ chunk of matter can interact with this ‘nutrient’ state to produce two ‘living’ chunks.
In making this precise, Wigner considers more than just two chunks of matter: he also allows there to be an ‘environment’. So, he considers a quantum system made of three parts, and described by a
Hilbert space
$H = H_1 \otimes H_1 \otimes H_2$
Here the first $H_1$ corresponds to a chunk of matter. The second $H_1$ corresponds to another chunk of matter. The space $H_2$ corresponds to the ‘environment’. Suppose we wait for a certain amount
of time and see what the system does; this will be described by some unitary operator
$S: H \to H$
Wigner asks: if we pick this operator $S$ in a random way, what’s the probability that there’s some $n$-dimensional subspace of ‘living organism’ states in $H_1$, and some ‘nutrient plus environment’
state in $H_1 \otimes H_2$, such that the time evolution sends any living organism together with the nutrient plus environment to two living organisms and some state of the environment?
A bit more precisely: suppose we pick $S$ in a random way. Then what’s the probability that there exists an $n$-dimensional subspace
$V \subseteq H_1$
and a state
$w \in H_1 \otimes H_2$
such that $S$ maps every vector in $V \otimes \langle w \rangle$ to a vector in $V \otimes V \otimes H_2$? Here $\langle w \rangle$ means the 1-dimensional subspace spanned by the vector $w$.
And his answer is: if
$\mathrm{dim}(H_1) \gg n$
then this probability is zero.
You may need to reread the last few paragraphs a couple times to understand Wigner’s question, and his answer. In case you’re still confused, I should say that $V \subseteq H_1$ is what I’m calling
the space of ‘living organism’ states of our chunk of matter, while $w \in H_1 \otimes H_2$ is the ‘nutrient plus environment’ state.
Now, Wigner did not give a rigorous proof of his claim, nor did he say exactly what he meant by ‘probability’: he didn’t specify a probability measure on the space of unitary operators on $H$. But if
we use the obvious choice (called ‘normalized Haar measure’) his argument can most likely be turned into a proof.
So, I don’t want to argue with his math. I want to argue with his interpretation of the math. He concludes that
the chances are nil for the existence of a set of ‘living’ states for which one can find a nutrient of such nature that interaction always leads to multiplication.
The problem is that he fixed the decomposition of the Hilbert space $H$ as a tensor product
$H = H_1 \otimes H_1 \otimes H_2$
before choosing the time evolution operator $S$. There is no good reason to do that. It only makes sense split up a physical into parts this way after we have some idea of what the dynamics is. An
abstract Hilbert space doesn’t come with a favored decomposition as a tensor product into three parts!
If we let ourselves pick this decomposition after picking the operator $S$, the story changes completely. My paper shows:
Theorem 1. Let $H$, $H_1$ and $H_2$ be finite-dimensional Hilbert spaces with $H \cong H_1 \otimes H_1 \otimes H_2$. Suppose $S : H \to H$ is any unitary operator, suppose $V$ is any subspace of
$H_1$, and suppose $w$ is any unit vector in $H_1 \otimes H_2$ Then there is a unitary isomorphism
$U: H \to H_1 \otimes H_1 \otimes H_2$
such that if we identify $H$ with $H_1 \otimes H_1 \otimes H_2$ using $U$, the operator $S$ maps $V \otimes \langle w \rangle$ into $V \otimes V \otimes H_2$.
In other words, if we allow ourselves to pick the decomposition after picking $S$, we can always find a ‘living organism’ subspace of any dimension we like, together with a ‘nutrient plus
environment’ state that allows our living organism to reproduce.
However, if you look at the proof in my paper, you’ll see it’s based on a kind of cheap trick (as I forthrightly admit). Namely, I pick the ‘nutrient plus environment’ state to lie in $V \otimes H_2$
, so the nutrient actually consists of another organism!
This goes to show that you have to be very careful about theorems like this. To prove that life is improbable, you need to find some necessary conditions for what counts as life, and show that these
are improbable (in some sense, and of course it matters a lot what that sense is). Refuting such an argument does not prove that life is probable: for that you need some sufficient conditions for
what counts as life. And either way, if you prove a theorem using a ‘cheap trick’, it probably hasn’t gotten to grips with the real issues.
I also show that as the dimension of $H$ approaches infinity, the probability approaches 1 that we can get reproduction with a 1-dimensional ‘living organism’ subspace and a ‘nutrient plus
environment’ state that lies in the orthogonal complement of $V \otimes H_2$. In other words, the ‘nutrient’ is not just another organism sitting there all ready to go!
More precisely:
Theorem 2. Let $H$, $H_1$ and $H_2$ be finite-dimensional Hilbert spaces with $\mathrm{dim}(H) = \mathrm{dim}(H_1)^2 \cdot \mathrm{dim}(H_2)$. Let $\mathbf{S'}$ be the set of unitary operators
$S: H \to H$ with the following property: there’s a unit vector $v \in H_1$, a unit vector $w \in V^\perp \otimes H_2$, and a unitary isomorphism
$U: H \to H_1 \otimes H_1 \otimes H_2$
such that if we identify $H$ with $H_1 \otimes H_1 \otimes H_2$ using $U$, the operator $S$ maps $v \otimes w$ into $\langle v\rangle \otimes \langle v \rangle \otimes H_2$. Then the normalized
Haar measure of $\mathbf{S'}$ approaches 1 as $\mathrm{dim}(H) \to \infty$.
Here $V^\perp$ is the orthogonal complement of $V \subseteq H_1$; that is, the space of all vectors perpendicular to $V$.
I won’t include the proofs of these theorems, since you can see them in my paper.
Just to be clear: I certainly don’t think these theorems prove that life is probable! You can’t have theorems without definitions, and I think that coming up with a good general definition of ‘life’,
or even supposedly simpler concepts like ‘entity’ and ‘reproduction’, is extremely tough. The formalism discussed here is oversimplified for dozens of reasons, a few of which are listed at the end of
my paper. So far we’re only in the first fumbling stages of addressing some very hard questions.
All my theorems do is point out that Wigner’s argument has a major flaw: he’s choosing a way to divide the world into chunks of matter and the environment before choosing his laws of physics. This
doesn’t make much sense, and reversing the order dramatically changes the conclusions.
By the way: I just started looking for post-1989 discussions of Wigner’s paper. So far I haven’t found any interesting ones. Here’s a more recent paper that’s somewhat related, which doesn’t mention
Wigner’s work:
• Indranil Chakrabarty and Prashant, Non existence of quantum mechanical self replicating machine, 2005.
The considerations here seem more closely related to the Wootters–Zurek no-cloning theorem.
|
{"url":"http://johncarlosbaez.wordpress.com/category/quantum-technologies/","timestamp":"2014-04-21T04:37:34Z","content_type":null,"content_length":"108666","record_id":"<urn:uuid:ececd59c-00a1-4971-aeff-53178bcfeb26>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00151-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Chebyshev's Inequality (Statistics Question)
Could someone tell me how to find the k in Chebyshev's inequality??
Parameter "k" is the number of standard deviations "σ" on either side of the mean "μ" for which a
lower bound
of the included distribution {
fraction between (μ-kσ) and (μ+kσ)
} is required {and given by this inequality to be
(1 - 1/k^2)
}. See also Msg #4 at the following site (Form #1 in this Msg is most commonly used):
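As a concrete illustration (my addition, not part of the original thread), here is a minimal Python sketch: given k it prints the Chebyshev lower bound, and given a desired coverage it solves for the k that guarantees it. The function names are my own.

```python
import math

def chebyshev_bound(k):
    """Lower bound on P(|X - mu| < k*sigma) for any distribution with finite variance."""
    if k <= 1:
        return 0.0                      # the bound is only informative for k > 1
    return 1.0 - 1.0 / k**2

def k_for_coverage(p):
    """Smallest k such that Chebyshev guarantees at least a fraction p within mu +/- k*sigma."""
    if not 0 <= p < 1:
        raise ValueError("p must lie in [0, 1)")
    return 1.0 / math.sqrt(1.0 - p)

print(chebyshev_bound(2))               # 0.75 -> at least 75% within 2 standard deviations
print(k_for_coverage(0.75))             # 2.0
print(k_for_coverage(0.90))             # ~3.16
```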
|
{"url":"http://www.physicsforums.com/showthread.php?t=73530","timestamp":"2014-04-17T07:37:21Z","content_type":null,"content_length":"23213","record_id":"<urn:uuid:fd396252-5a3d-4482-b9f5-c89cf4ff006e>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00074-ip-10-147-4-33.ec2.internal.warc.gz"}
|
IEOR 6614, Spring 2009
Assigned: Thursday, February 19, 2009
Due: Thursday, February 26, 2009
General Instructions
1. Please review the course information.
2. You must write down with whom you worked on the assignment. If this changes from problem to problem, then you should write down this information separately with each problem.
3. Numbered problems are all from the textbook Network Flows .
1. Problem 6.28. Maximum flows and minimum cuts.
2. Problem 6.46. Matrix covering.
3. Problem 6.40. Converting non-integer flows to integer flows.
4. Problem 6.48. Ford-Fulkerson with irrational capacities
5. In class, we showed that the shortest augmenting path algorithm for maximum flow performs O(nm) iterations. Give an example of a family of networks for which the shortest augmenting path algorithm performs X iterations, and try to make X as large as possible. (For reference, a sketch of the algorithm follows this list.)
6. Problem 6.38. Submodularity of cuts.
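For reference only (this sketch is my addition, not part of the assignment handout), a minimal Python implementation of the shortest-augmenting-path (Edmonds–Karp) maximum-flow algorithm that problem 5 refers to; the graph representation and names are my own choices.

```python
from collections import deque

def max_flow(capacity, s, t):
    """Edmonds-Karp: repeatedly augment along a shortest (fewest-edge) s-t path.
    capacity: dict of dicts, capacity[u][v] = capacity of edge (u, v)."""
    residual = {u: dict(nbrs) for u, nbrs in capacity.items()}
    for u in list(capacity):
        for v in capacity[u]:
            residual.setdefault(v, {}).setdefault(u, 0)   # reverse edges for the residual graph
    flow = 0
    while True:
        # BFS for a shortest augmenting path in the residual graph
        parent = {s: None}
        queue = deque([s])
        while queue and t not in parent:
            u = queue.popleft()
            for v, cap in residual[u].items():
                if cap > 0 and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if t not in parent:
            return flow                                   # no augmenting path left
        # find the bottleneck on the path, then push that much flow along it
        bottleneck, v = float('inf'), t
        while parent[v] is not None:
            bottleneck = min(bottleneck, residual[parent[v]][v])
            v = parent[v]
        v = t
        while parent[v] is not None:
            u = parent[v]
            residual[u][v] -= bottleneck
            residual[v][u] += bottleneck
            v = u
        flow += bottleneck

# tiny example: two parallel two-edge paths from 's' to 't'
caps = {'s': {'a': 3, 'b': 2}, 'a': {'t': 2}, 'b': {'t': 2}, 't': {}}
print(max_flow(caps, 's', 't'))                           # 4
```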
|
{"url":"http://www.columbia.edu/~cs2035/courses/ieor6614.S09/hw5.html","timestamp":"2014-04-20T22:10:41Z","content_type":null,"content_length":"1740","record_id":"<urn:uuid:b64a84d5-33e9-48a4-bace-e082db3a543f>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00501-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Inference in Curved Exponential Family Models for Networks
Inference in Curved Exponential Family Models for Networks (2006)
by David R. Hunter , Mark S. Handcock
Venue: Journal of Computational and Graphical Statistics
Citations: 42 - 9 self
@article{HunterHandcock2006,
  author  = {David R. Hunter and Mark S. Handcock},
  title   = {Inference in Curved Exponential Family Models for Networks},
  journal = {Journal of Computational and Graphical Statistics},
  year    = {2006},
  volume  = {15},
  pages   = {565--583}
}
Network data arise in a wide variety of applications. Although descriptive statistics for networks abound in the literature, the science of fitting statistical models to complex network data is still
in its infancy. The models considered in this article are based on exponential families; therefore, we refer to them as exponential random graph models (ERGMs). Although ERGMs are easy to postulate,
maximum likelihood estimation of parameters in these models is very difficult. In this article, we first review the method of maximum likelihood estimation using Markov chain Monte Carlo in the
context of fitting linear ERGMs. We then extend this methodology to the situation where the model comes from a curved exponential family. The curved exponential family methodology is applied to new
specifications of ERGMs, proposed by Snijders et al. (2004), having non-linear parameters to represent structural properties of networks such as transitivity and heterogeneity of degrees. We review
the difficult topic of implementing likelihood ratio tests for these models, then apply all these model-fitting and testing techniques to the estimation of linear and non-linear parameters for a
collaboration network between partners in a New England law firm.
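To make the fitting idea concrete (this toy sketch is my own illustration, not the authors' code; in practice one would use specialised software such as the R packages developed around ERGMs), here is a minimal Python version of Markov chain Monte Carlo maximum likelihood for the simplest possible ERGM, whose only statistic is the edge count. For this model the MLE is known in closed form (the log-odds of the observed density), so the Monte Carlo estimate is easy to check.

```python
import numpy as np

rng = np.random.default_rng(0)

def edge_count(y):
    return int(np.triu(y, 1).sum())

def sample_networks(theta, n_nodes, n_samples, burn=2000, thin=50):
    """Metropolis sampler for the edge-only ERGM  P(y) proportional to exp(theta * edges(y))."""
    y = np.zeros((n_nodes, n_nodes), dtype=int)
    stats = []
    for step in range(burn + n_samples * thin):
        i, j = rng.choice(n_nodes, size=2, replace=False)
        delta = 1 - 2 * y[i, j]                   # +1 if the toggle adds the edge, -1 if it removes it
        if np.log(rng.random()) < theta * delta:  # Metropolis acceptance for a symmetric proposal
            y[i, j] = y[j, i] = 1 - y[i, j]
        if step >= burn and (step - burn) % thin == 0:
            stats.append(edge_count(y))
    return np.array(stats)

def mcmc_mle(obs_stat, theta0, sim_stats, grid):
    """Geyer-Thompson: maximise the importance-sampled log-likelihood ratio l(theta) - l(theta0)."""
    vals = [(theta - theta0) * obs_stat
            - np.log(np.mean(np.exp((theta - theta0) * sim_stats))) for theta in grid]
    return grid[int(np.argmax(vals))]

n = 20
upper = np.triu((rng.random((n, n)) < 0.3).astype(int), 1)
y_obs = upper + upper.T
s_obs = edge_count(y_obs)

theta_hat = 0.0
for _ in range(2):                                # in practice one iterates, refreshing theta0
    sims = sample_networks(theta_hat, n, n_samples=500)
    theta_hat = mcmc_mle(s_obs, theta_hat, sims, grid=np.linspace(-3, 1, 401))

density = s_obs / (n * (n - 1) / 2)
print("MCMC MLE:", round(float(theta_hat), 2),
      "  closed form:", round(float(np.log(density / (1 - density))), 2))
```

The two printed numbers should agree up to Monte Carlo error; richer statistics (triangles, degree-based terms, the curved specifications of Snijders et al.) change only the sufficient statistics and the change-statistic computation, not the overall scheme.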
1637 Social Network Analysis: Methods and Applications - Wasserman, Faust - 1994
1144 Spatial interaction and the statistical analysis of lattice systems - Besag - 1974
560 A stochastic approximation method - Robbins, Monro - 1951
523 Theory of Point Estimation - Lehmann, Casella - 1998
204 Constrained Monte Carlo maximum likelihood for dependent data, (with discussion - Geyer, Thompson - 1992
200 On the orientation of graphs - Frank - 1980
146 Simulating normalizing constants: from importance sampling to bridge sampling to path sampling - Gelman, Meng - 1998
109 Simulating ratios of normalizing constants via a simple identity: A theoretical exploration - Meng, Wong - 1996
105 Markov chain monte carlo estimation of exponential random graph models - Snijders - 2002
94 An exponential family of probability distributions for directed graphs - Holland, Leinhardt - 1981
92 Markov chain concepts related to sampling algorithms - Roberts - 1996
81 New specifications for exponential random graph models - Snijders, Pattison, et al. - 2004
68 Pseudolikelihood estimation for social networks - Strauss, Ikeda - 1990
61 Defining the curvature of a statistical problem (with applications to second order efficiency - Efron - 1975
58 On the convergence of Monte Carlo maximum likelihood calculations - Geyer - 1994
56 Inference and monitoring convergence - Gelman - 1996
55 Assessing degeneracy in statistical models of social networks - Handcock - 2003
38 Generalized Monte Carlo significance tests - Besag, Clifford - 1989
28 The geometry of exponential families - Efron - 1978
26 Statistical analysis of change in networks - Frank - 1991
21 Goodness of fit of social network models - Hunter, Goodreau, et al.
18 Markov chain Monte Carlo for statistical inference - Besag - 2000
16 Multiplexity, Generalized Exchange and Cooperation in Organizations: A Case Study.” Social Networks - Lazega, Pattison - 1999
16 Possible Biases Induced by MCMC Convergence Diagnostics - Cowles, Roberts, et al. - 1999
15 The collegial phenomenon: the social mechanisms of co-operation among peers in a corporate law partnership - Lazega - 2001
14 P.: Maximum likelihood estimation for Markov graphs - Corander, Dahmström, et al. - 1998
14 F.: Markov Chain Monte Carlo maximum likelihood estimation for p ∗ social network models. Paper presented at - Crouch, Wasserman, et al. - 1998
12 Statistical models for social networks: Inference and degeneracy - Handcock - 2003
6 Ml-estimation of the clustering parameter in a markov graph model - Dahmström, Dahmström - 1993
1 The geometry of exponential families - unknown authors - 1978
|
{"url":"http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.112.9997","timestamp":"2014-04-17T16:55:15Z","content_type":null,"content_length":"32849","record_id":"<urn:uuid:ec721ffe-d4f1-4178-a588-6d71307d117c>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00593-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Cauchy-Schwarz Inequality
June 28th 2009, 02:48 PM #1
Jun 2009
Cauchy-Schwarz Inequality
Prove the Cauchy-Schwarz inequality |<u, v>| <= |u| |v| (here <= means less than or equal to, and |u| denotes the norm of u).
I attempted it a couple times, but didn't get anywhere. Help would be great, thanks.
I've seen that already, but I couldn't understand it all that well.
Please post an attempt at a solution or clarify what you don't understand in the proof provided (preferrably the former).
If $v=0_v,$ then we have the equality, so there's nothing to prove there. Thus let's take care of the interesting case when $v \ne 0_v.$
Under this assumption, put $t=u-\frac{\left\langle u,v \right\rangle }{\left\| v \right\|^{2}}\cdot v$ and then
$\begin{aligned} \left\langle t,v \right\rangle &= \left\langle u-\frac{\left\langle u,v \right\rangle }{\left\| v \right\|^{2}}\cdot v,\ v \right\rangle \\ &= \left\langle u,v \right\rangle -\frac{\left\langle u,v \right\rangle }{\left\| v \right\|^{2}}\cdot \left\langle v,v \right\rangle \\ &= 0. \end{aligned}$
From here we have $0\le\|t\|^2=\left\langle t,t \right\rangle =\left\| u \right\|^{2}-\frac{\left\langle u,v \right\rangle ^{2}}{\left\| v \right\|^{2}}.$ Hence,
$0\le \left\| t \right\|^{2}\cdot \left\| v \right\|^{2}=\left\| u \right\|^{2}\cdot \left\| v \right\|^{2}-\left\langle u,v \right\rangle ^{2}\implies \left| \left\langle u,v \right\rangle \right|\le \left\| u \right\|\cdot \left\| v \right\|.\quad\blacksquare$
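As a quick numerical sanity check of the inequality (my addition, not part of the thread), the following short NumPy snippet compares $\left| \left\langle u,v \right\rangle \right|$ with $\left\| u \right\|\cdot \left\| v \right\|$ for a few random vectors:

```python
import numpy as np

rng = np.random.default_rng(1)
for _ in range(5):
    u, v = rng.normal(size=4), rng.normal(size=4)
    lhs = abs(np.dot(u, v))                      # |<u, v>|
    rhs = np.linalg.norm(u) * np.linalg.norm(v)  # ||u|| ||v||
    print(f"{lhs:.4f} <= {rhs:.4f}  ->  {lhs <= rhs + 1e-12}")
```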
|
{"url":"http://mathhelpforum.com/advanced-algebra/93952-cauchy-schwartz-inequality.html","timestamp":"2014-04-20T01:48:11Z","content_type":null,"content_length":"42802","record_id":"<urn:uuid:a7a0b4f2-7bab-465f-bb94-067f100e2f49>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00325-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Lie group
The topic Lie group is discussed in the following articles:
history of mathematics
• TITLE: mathematics; SECTION: Riemann’s influence
Yet another setting for Lebesgue’s ideas was to be the theory of Lie groups. The Hungarian mathematician Alfréd Haar showed how to define the concept of measure so that functions defined on Lie
groups could be integrated. This became a crucial part of Hermann Weyl’s way of representing a Lie group as acting linearly on the space of all (suitable) functions on the group (for technical...
• TITLE: mathematics; SECTION: Mathematical physics and the theory of groups
...are made, were all representable as algebras of matrices, and, in a sense, Lie algebra is the abstract setting for matrix algebra. Connected to each Lie algebra there were a small number of
Lie groups, and there was a canonical simplest one to choose in each case. The groups had an even simpler geometric interpretation than the corresponding algebras, for they turned out to
|
{"url":"http://www.britannica.com/print/topic/339804","timestamp":"2014-04-25T04:12:17Z","content_type":null,"content_length":"7493","record_id":"<urn:uuid:2c87351c-a570-4f26-bc4a-0f342c44402b>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00160-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Find the slope of the tangent to the curve y =(1/2x) + 3; at the point where x = -1. Find the angle which this tangent makes with the curve y = 2x^2 + 2
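A worked sketch of this problem (my addition). The question is ambiguous: I read "y = (1/2x) + 3" as y = 1/(2x) + 3; if y = (1/2)x + 3 is meant instead, the slope is simply 1/2. The angle at each intersection point is computed from the two tangent slopes via tan(θ) = |(m1 - m2)/(1 + m1·m2)|.

```python
import sympy as sp

x = sp.symbols('x')
y1 = 1/(2*x) + 3                        # assumed reading of y = (1/2x) + 3
y2 = 2*x**2 + 2

m1 = sp.diff(y1, x).subs(x, -1)         # slope of the tangent at x = -1  ->  -1/2
tangent = y1.subs(x, -1) + m1*(x + 1)   # tangent line through (-1, y1(-1))

# angle between that tangent line and the curve y2 at each point where they meet
for p in sp.solve(sp.Eq(tangent, y2), x):
    m2 = sp.diff(y2, x).subs(x, p)
    angle = sp.atan(sp.Abs((m1 - m2) / (1 + m1*m2)))
    print(f"x = {p}: angle = {(angle * 180 / sp.pi).evalf(4)} degrees")
```

With this reading the slope is -1/2, and the tangent line meets y = 2x^2 + 2 at x = 0 and x = -1/4, making angles of roughly 26.6 and 18.4 degrees with the curve there.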
|
{"url":"http://openstudy.com/updates/50754dd6e4b05254de012f3e","timestamp":"2014-04-18T03:47:55Z","content_type":null,"content_length":"37226","record_id":"<urn:uuid:b95d311a-21a3-49b6-9128-1cd4f44bc411>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00103-ip-10-147-4-33.ec2.internal.warc.gz"}
|
100 cc equals how many ounces
You asked:
100 cc equals how many ounces
3.3814022701 US fluid ounces
|
{"url":"http://www.evi.com/q/100_cc_equals_how_many_ounces","timestamp":"2014-04-24T21:39:57Z","content_type":null,"content_length":"57961","record_id":"<urn:uuid:4326ccfe-d86c-4a69-912d-d731f891bcb8>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00657-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Geodesics in Graphs:Shortest Paths vs Going as Straight Ahead as Possible
up vote 2 down vote favorite
According to Wikipedia http://en.wikipedia.org/wiki/Geodesic, a geodesic " is a generalization of the notion of a "straight line" to "curved spaces " and further " In the presence of an affine
connection, a geodesic is defined to be a curve whose tangent vectors remain parallel if they are transported along it. If this connection is the Levi-Civita connection induced by a Riemannian
metric, then the geodesics are (locally) the shortest path between points in the space "
My questions are:
1. whether graphs in general, resp. which kinds of graphs, fulfill the cited conditions that imply that the geodesics are (locally) shortest paths?
2. are there alternative definitions of geodesics in graphs that are based on the generalization of straight lines, resp. on a measure of the deviation from a straight line (e.g. the angle between successive edges in the case of geometric graphs)?
Background of the question:
My interest in that question comes from an attempt to generalize planar convex hulls to graphs and that in turn from the observation, that an optimal round trip through all elements of a finite
subset of the points of an Euclidean plane encounters the points in the same (or reverse) order, in which they are encountered around the convex hull.
Relation of geodesics to planar convex hulls
I finally realized that the gift-wrapping algorithm http://en.wikipedia.org/wiki/Gift_wrapping_algorithm for constructing planar convex hulls could be w.l.o.g. interpreted as starting at a point with
minimal $y$-coordinate, heading in positive $x$-direction and that proceeding by chosing as the next point the one that required the least change of direction, the measure of change in that case is
the angle between current and subsequent direction.
The method of going as straight ahead as possible yields the convex hull as the limit cycle independent of the starting point.
Unfortunately the limit cycle need not be unique and can also have self-intersection in the case of general graphs, for which an angle between edges is defined
(however, the collection of limit cycles along with their topologies might be interesting on their own).
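Since the gift-wrapping construction is central to the background above, here is a minimal Python sketch of it (my own illustration, not part of the original question): start at a lowest point heading in the positive x-direction, and at every step move to the point that requires the least change of direction.

```python
import math

def convex_hull_gift_wrap(points):
    """Jarvis march phrased as 'turn as little as possible'."""
    pts = list(set(points))
    if len(pts) <= 2:
        return pts
    start = min(pts, key=lambda p: (p[1], p[0]))       # lowest point, leftmost among ties
    hull, current, heading = [start], start, 0.0       # heading 0.0 = positive x-direction
    while True:
        best, best_turn = None, None
        for q in pts:
            if q == current:
                continue
            ang = math.atan2(q[1] - current[1], q[0] - current[0])
            turn = (ang - heading) % (2 * math.pi)     # counter-clockwise turn needed to face q
            if best is None or turn < best_turn:
                best, best_turn = q, turn
        heading = math.atan2(best[1] - current[1], best[0] - current[0])
        current = best
        if current == start:
            return hull                                # wrapped all the way around
        hull.append(current)

print(convex_hull_gift_wrap([(0, 0), (2, 0), (2, 2), (0, 2), (1, 1)]))
# -> [(0, 0), (2, 0), (2, 2), (0, 2)]   (the interior point (1, 1) is never chosen)
```

From a hull vertex, heading along the previous hull edge, every other point lies ahead or to the left, so minimising the counter-clockwise turn is the same as minimising the unsigned angle between the current and the subsequent direction.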
dg.differential-geometry graph-theory mg.metric-geometry algorithms
2 Answers
An interesting class of graphs are the median graphs (see here http://en.wikipedia.org/wiki/Median_graph). If you fill in all (maximal) subgraphs which are Hamming cubes you will get a $CAT(0)$ cubical complex. The $CAT(0)$ distance is given by putting the Euclidean metric in each cube and then taking the shortest path in the whole complex. It is really an $L_2$ metric versus the initial graph metric, which is $L_1$. The new $CAT(0)$ geodesics (which are unique between two points, versus the many shortest paths in the initial graph) look more like "straight lines" in the usual sense (and indeed you can embed bounded parts of the complex into Euclidean spaces such that the geodesic you are interested in is straight in the usual sense). E.g. $\mathbb{Z}^2$ is turned into $\mathbb{R}^2$ this way.
The connection between median graphs and $CAT(0)$ cubical complexes is worked out by Chepoi here: http://pageperso.lif.univ-mrs.fr/~victor.chepoi/cat0.pdf
I hope it helps a bit.
thanks Dan, your answer is in the spirit of replies I hope for. – Manfred Weis Aug 21 '13 at 19:54
My pleasure. I highly recommend the following paper for related algorithmic questions: math.sfsu.edu/federico/Articles/cat0.pdf – Dan Sălăjan Aug 21 '13 at 20:05
I imagine the property you want is very rare, regardless of how you define "straightest path". Here is one example though. Consider the Cayley graph of the free group presented by $\langle x,y|\rangle$, i.e., the infinite 4-valent regular graph. Take the angle between two edges leaving a vertex to be $\pi$ if they are distinct and $0$ if they are the same edge. Then geodesics are curves which curve as little as possible, i.e., always go through straight angles.
using shortest paths in case of complete metric graphs yields the edge connecting start- and target vertex; not very interesting either - here the "straight-line approach" can exhibit
more interesting behaviour. – Manfred Weis Aug 21 '13 at 19:45
|
{"url":"http://mathoverflow.net/questions/140041/geodesics-in-graphsshortest-paths-vs-going-as-straight-ahead-as-possible","timestamp":"2014-04-17T12:32:27Z","content_type":null,"content_length":"59634","record_id":"<urn:uuid:79f3dad4-096a-47e0-b2b1-4aab57469b74>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00607-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Type Fixpoints: Iteration vs. Recursion
I have the pleasure to announce the availability of the following paper:
Type Fixpoints: Iteration vs. Recursion
(To appear in Proc. 4th ICFP, Paris, France, September 1999.)
Authors: Zdzislaw Splawski, Faculty of Informatics and Management,
Wroc\l aw University of Technology, Poland
Pawel Urzyczyn, Institute of Informatics, Warsaw University, Poland
The paper can be downloaded from http://zls.mimuw.edu.pl/~urzy/ftp.html
Positive recursive (fixpoint) types can be added to the polymorphic
(Church-style) lambda calculus \lambda 2 (System {\bf F}) in several different
ways, depending on the choice of the elimination operator. We compare several
such definitions and we show that they fall into two equivalence classes with
respect to mutual interpretability. Elimination operators for fixpoint types
are thus classified as either ``iterators'' or ``recursors''. This
classification has an interpretation in terms of the Curry-Howard
correspondence: types of iterators and recursors can be seen as images
of induction axioms under different dependency-erasing maps.
Systems with recursors are equivalent to a calculus of recursive types
with the operators
  Fold   : sigma[mu alpha.sigma/alpha] -> mu alpha.sigma
  Unfold : mu alpha.sigma -> sigma[mu alpha.sigma/alpha],
where Unfold o Fold =_beta I (i.e. Unfold composed with Fold is beta-equal to the identity).
It is known that systems with iterators can be defined within lambda 2.
We show that systems with recursors can not. For this we study the notion
of polymorphic type embeddability (via (beta) left-invertible terms) and
we show that if a type sigma is embedded into another type tau
then sigma can not be longer than tau.
Pawel Urzyczyn urzy@mimuw.edu.pl
Institute of Informatics http://zls.mimuw.edu.pl/~urzy/home.html
University of Warsaw direct phone: +48-22-658-43-77
Banacha 2, room #4280 main office: +48-22-658-31-65
02-097 Warszawa, Poland fax: +48-22-658-31-64
|
{"url":"http://www.seas.upenn.edu/~sweirich/types/archive/1999-2003/msg00164.html","timestamp":"2014-04-16T07:28:41Z","content_type":null,"content_length":"4504","record_id":"<urn:uuid:bb1fb9b9-b14b-4d4f-9940-b2a8093c179e>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00107-ip-10-147-4-33.ec2.internal.warc.gz"}
|
The number that (usually) comes before three. Square root of four. Also the amount of people it takes to tango.
one two three four...
-The ultimate numerical paradox.
-A perfect balance of one on each side for perfect harmony, never being able to counter balance since its perfectly even. Yin and Yang. You have a brain for logic, but then you have the heart to
counter balance with emotions.
-Not only used for balancing, but combined for the ultimate pair. In order to start a family, you need 2. Every hero needs a sidekick. you have not 1 but 2 eyes for ultimate depth perception. 2 arms,
2 legs, 2 ears, 2 lungs, 2 testicals.
-most powerful number in math. 2 is the number most divisible into other numbers (not counting 1 since one is itself and more or less is the building block of all numbers). 2 is what we build off of.
It's so simple, yet so strong. Subconsciously you know this to be true since when refering to something simple or basic, we say "put 2 and 2 together." Why dont we say "put 1 and 1 together?" We
don't see 1 as a credible number, holding 2 in higher regards.
-2 is the only number that can't make a shape(number of sides). 1 is a circle. 3 is a triangle. But what is 2? Think about it...
-There are only 2 things to do with your life, live or die.
-For every action, there is a re-action. one things leads to one other thing. that equals 2. we live our lives between 2's. we go to work to get money to live. we work out to get buff. we eat to
satiate our hunger.
-Everything has an opposite, or equal other, that makes them a pair of 2. There is only good or evil. There is only right or wrong.
2 packs quite a punch, don't you think?
Two is the master to key to everything in this world:
Yin and Yang
Husband and Wife
noun. to give someone twos on a cigarette is to smoke half of it then give them the rest to finish. More commonly done by boys as it is less considerate.
verb. to "twos" a cigarette is to smoke one together, taking a couple of
s then swapping. More commonly done by girls.
"I met this well safe
the other day; he gave me twos on a rollie"
"That's so dirty..."
"Don't worry babe; tell you what, lets have some wine and twos a fag outside"
n. The loneliest number since the number one.
Twos is a special word. While it can be used in relation to cigarettes, it makes more sense to apply it to joints.
It makes sense as a noun -
Adjective -
imperative -
Basically the person who rolls it starts it - the person who has called 'twos' then gets it afterwards.
There is no threes or fours. It just flows from there.
got any twos? (weed, dope etc)
I'm twosed (baked)
TWOS! - order someone to roll a joint.
Im twosing (smoking a joint)
To defecate
I have to go number two.
The number that comes after 1 and before 3. it is the answer to the worst math problem ever.
|
{"url":"http://ru.urbandictionary.com/define.php?term=two","timestamp":"2014-04-16T19:04:25Z","content_type":null,"content_length":"61359","record_id":"<urn:uuid:3c653138-939c-4018-a138-7d4b02e6f1a6>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00295-ip-10-147-4-33.ec2.internal.warc.gz"}
|
(NIOS Syllabus) Class 10 NIOS Syllabus | Mathematics
(NIOS Syllabus) Class 10 NIOS Syllabus | Mathematics 2012
Mathematics Syllabus Class X
Secondary Course (Mathematics)
Mathematics is an important discipline of learning at the secondary stage. It helps the learners in acquiring decision- making ability through its applications to real life both in familiar and
unfamiliar situations. It predominately contributes to the development of precision, rational and analytical thinking, reasoning and scientific temper. One of the basic aims of teaching Mathematics
at the Secondary stage is to inculcate the skill of quantification of experiences around the learner. Mathematics helps the learners to understand and solve the day to day life problems faced by them
including those from trade, banking, sales tax and commission in transaction. It also helps them to acquire the skill of representing data in the form of tables/graphs and to draw conclusions from
the same.
The present curriculum in Mathematics includes the appreciation of the historical development of mathematical knowledge with special reference to the contribution of Indian mathematicians
particularly in the introduction of zero, the decimal system of numeration in the international form (popularly known as Hindu – Arabic numerals ). The learners are encouraged to enhance their
computational skills using Vedic Mathematics.
The main objectives of teaching Mathematics at the Secondary stage are to enable the learners to :
· acquire knowledge and understanding of the terms, concepts, symbols, principles and processes.
· acquire the skill of quantification of experiences around them.
· acquire the skill of drawing geometrical figures, charts and graphs representing given data.
· interpret tabular/graphical representation of the data.
· articulate logically and use the same to prove results.
· translate the word problems in the mathematical form and solve them.
· appreciate the contribution of Indian mathematicians towards the development of the subject.
· develop interest in Mathematics.
The present syllabus in Mathematics has been divided into six modules namely Algebra
,Commercial Mathematics ,Geometry, Mensuration ,Trigonometry and Statistics .
The marks allotted , number of lessons and suggested study time for each module are as under :
│Name of the module │Number of│Study time │Marks│
│ │lessons │( in hours )│ │
│1. Algebra │8 │50 │26 │
│2. Commercial Mathematics │4 │35 │15 │
│3. Geometry │10 │75 │25 │
│4. Mensuration │2 │25 │10 │
│5. Trigonometry │2 │20 │12 │
│6. Statistics │4 │35 │12 │
│ │30 │240 │100 │
There will be three Tutor Marked Assignments (TMA’s) to be attempted by the learner. The awards/grades of the best two TMA’s will be reflected in the Mark sheet.
Module 1 : Algebra
Study time : 50 Hours Marks : 26
Scope and Approach : Algebra is generalized form of arithmetic. Here we would deal with unknowns in place of knowns as in arithmetic. These knowns are, in general, numbers. It may be recalled that
the study of numbers begin with natural numbers without which we would not be able to count. The system of natural numbers is extended to rational number system. To be able to measure all lengths in
terms of a given unit, the rational numbers have to be extended to real numbers. Exponents and indices would simplify repeated multiplication and their laws would be introduced. These would be used
to write very large and very small numbers in the scientific notation.
Algebraic expressions and polynomials would be introduced with the help of four fundamental operations on unknowns. Equating two algebraic expressions or polynomials leads to equations. In the module
a study of linear and quadratic equations would be taken up to solve problems of daily life.
The learners would be acquainted with different number patterns. One such pattern, namely Arithmetic Progression would be studied in details.
• 1.1 Number Systems
–Review of natural numbers ,integers and rational numbers, rational numbers as terminating or non – terminating decimals. Introduction of irrational numbers as nonterminating and non – recurring
–Rounding of rational numbers and irrational numbers. Real numbers.
–Representation of irrational numbers such as √2, √3 and √5 on the number line.
–Operations on rational and irrational numbers.
• 1.2 Indices ( Exponents )
–Exponential notation ,meaning of exponent ,laws of exponents. Applications of laws of exponents. Expressing numbers as product of powers of prime numbers. Scientific notation.
• 1.3 Radicals( Surds )
–Meaning of a radical, index and radicand. Laws of radicals. Simplest form of a radical.
–Rationalising a radical in the denominator. Simplification of expressions involving
• 1.4 Algebraic Expressions and Polynomials
–Introduction to variables. Algebraic expressions and polynomials. Operations on algebraic expressions and polynomials. Degree of a polynomial. Value of an algebraic expression .
• 1.5 Special Products and Factorisation
–Special products of the type (a ± b)², (a + b)(a – b), (a ± b)³.
–Application of these to calculate squares and cube of numbers.
–Factorisation of the algebraic expressions.
–Factorisation of expressions of the form a² – b², a³ ± b³.
–Factorisation of the polynomial of the form ax² + bx + c (a ≠ 0) by splitting the middle term.
–H.C.F and L.C.M of two polynomials in one variable only by factorisation.
–Rational expressions. Rational expression in the simplest form.
–Operations on rational expressions.
• 1.6 Linear Equations
–Linear equations in one variable and in two variables. Solution of a linear equation in one variable.
–System of linear equations in two variables. Graph of a linear equation in two variables.
–Solution of a system of linear equations in two variables ( graphical and algebraic methods).
–Solving word problems involving linear equations in one or two variables.
• 1.7 Quadratic Equations
–Standard form of a quadratic equation : ax² + bx + c = 0, a ≠ 0.
–Solution of ax² + bx + c = 0, a ≠ 0 by (i) factorization (ii) quadratic formula (a worked sketch follows this list).
–Formation of quadratic equation with given roots. Application of quadratic equations to solve word problems.
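As an illustration of the item above (my addition, not part of the official syllabus), a short Python sketch that solves ax² + bx + c = 0, a ≠ 0, by the quadratic formula:

```python
import math

def solve_quadratic(a, b, c):
    """Real roots of a*x**2 + b*x + c = 0 (a != 0) via the quadratic formula."""
    if a == 0:
        raise ValueError("a must be non-zero for a quadratic equation")
    d = b * b - 4 * a * c                 # discriminant
    if d < 0:
        return None                       # no real roots
    r = math.sqrt(d)
    return ((-b + r) / (2 * a), (-b - r) / (2 * a))

print(solve_quadratic(1, -5, 6))          # (3.0, 2.0): the roots of x² - 5x + 6 = 0
```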
• 1.8 Number Patterns
-Recognition of number patterns. Arithmetic and Geometric progressions. nth term and sum to n terms of an Arithmetic Progression.
Module 2 : Commercial Mathematics
Study time : 35 Hours Marks : 15
Scope and Approach : After passing the Secondary level examination, some learners may work in banks, business houses, or insurance companies dealing with sales tax, income tax, excise duty etc. Some others may enter business and industry. Some may go for higher studies. All of them will need the mathematics of finance. In any case, every citizen has to deal with problems involving interest, investment, purchases etc. It is in this context that the present module would be developed.
may go for higher studies. All of them will need mathematics of finance. In any case ,every citizen has to deal with problems involving interest , investment , purchases etc. It is in this context
,the present module would be developed.
In this module , applications of compound interest in the form of rate of growth ( appreciation ) and depreciation(decay) will be dealt. In solving problems related to all the stated areas , the
basic concepts of direct and inverse proportion (variation) ,and percentage are all pervading.
• 2.1 Ratio and Proportion
Review of ratio and proportion. Application of direct and inverse proportion (variation).
• 2.2 Percentage and its Applications
Concept of percentage. Conversion of percents to a decimal ( fraction ) and vice – versa. Computations involving percentage.
Applications of percentage to (i) profit and loss (ii) simple interest
(iii) discount (rebate ) (iv) sales tax
(v) commission in transaction (vi) instalment buying
• 2.3 Compound Interest
Compound interest and its application to rate of growth and depreciation.
(conversion periods not more than 4; a worked sketch follows below)
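A small worked example for the item above (my addition, not part of the official syllabus), computed in Python:

```python
def compound_amount(principal, annual_rate, periods_per_year, years):
    """Amount after compound interest; conversion periods = periods_per_year * years."""
    n = periods_per_year * years
    return principal * (1 + annual_rate / periods_per_year) ** n

# Rs 10,000 at 8% per annum, compounded half-yearly for 2 years (4 conversion periods)
amount = compound_amount(10000, 0.08, 2, 2)
print(round(amount, 2))                   # 11698.59
print(round(amount - 10000, 2))           # compound interest earned: 1698.59
```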
• 2.4 Banking
Concept of Banking. Types of accounts : (a) Saving (b) Fixed/term deposit
Calculation of interest in saving account and on fixed deposit with not more than 4 conversion periods.
Module 3 : Geometry
Study time : 75 Hours Marks : 25
Scope and Approach : Looking at the things around him , the learner sees the corners ,edges , top of a table , circular objects like rings or bangles and similar objects like photographs of different
sizes made from the same negative which arouse his curiosity to know what they represent geometrically.
To satisfy the learners curiosity and to add to his knowledge about the above things, the lessons on Lines and Angles, congruent and similar triangles and circles will be introduced. Some of the
important results dealing with above concepts would be verified experimentally while a few would be proved logically. Different types of quadrilaterals would also be introduced under the lessons on
Quadrilaterals and Areas.
The learners would also be given practice to construct some geometrical figures using geometrical instruments. In order to strengthen graphing of linear equations , the basic concept of coordinate
geometry has been introduced.
Note : Proofs of only “ * ” marked propositions and riders based on “ * ” marked propositions using unstarred propositions may be asked in the examination.
However direct numerical problems based on unstarred propositions may also be asked in the examination.
• 3.1 Lines and Angles
Basic geometrical concepts : point ,line ,plane,parallel lines and intersecting lines in a plane. Angles made by a transversal with two or more lines.
–If a ray stands on a line, the sum of the two angles so formed is 180°.
–If two lines intersect, then vertically opposite angles are equal.
–If a transversal intersects two parallel lines then corresponding angles are equal.
–If a transversal intersects two parallel lines then
(a) alternate angles are equal
(b) interior angles on the same side of the transversal are supplementary.
–If a transversal intersects two lines in such a way that
(a) alternate angles are equal ,then the two lines are parallel.
(b) interior angles on the same side of the transversal are supplementary ,then the two lines are parallel.
*Sum of the angles of a triangle is 180°.
–An exterior angle of a triangle is equal to the sum of the interior opposite angles.
–Concept of locus (daily life examples may be given)
–The locus of a point equidistant from two given :
(a) points (b) intersecting lines.
• 3.2 Congruence of Triangles
–Concept of congruence through daily life examples . Congruent figures.
–Criteria for congruence of two triangles namely : SSS,SAS,ASA,RHS
*Angles opposite to equal sides of a triangle are equal.
*Sides opposite to equal angles of a triangle are equal.
*If two sides of a triangle are unequal ,then the longer side has the greater angle opposite to it.
–In a triangle , the greater angle has the longer side opposite to it.
–Sum of any two sides of a triangle is greater than the third side.
• 3.3 Concurrent Lines
–Concept of concurrent lines.
–Angle bisectors of a triangle pass through the same point.
–Perpendicular bisectors of the sides of a triangle pass through the same point.
–In a triangle the three altitudes pass through the same point.
–Medians of a triangle pass through the same point which divides each of the medians in the ratio 2 : 1.
• 3.4 Quadrilaterals
–Quadrilateral and its types.
–Properties of special quadrilaterals viz. trapezium ,parallelogram ,rhombus , rectangle ,square.
–In a triangle , the line segment joining the mid points of any two sides is parallel to the third side and is half of it.
–The line drawn through the mid point of a side of a triangle parallel to another side bisects the third side.
–If there are three or more parallel lines and the intercepts made by them on a transversal are equal, the corresponding intercepts on any other transversal are also equal.
–A diagonal of a parallelogram divides it into two triangles of equal area.
*Parallelograms on the same or equal bases and between the same parallels are equal in area.
–Triangles on the same or equal bases and between the same parallels are equal in area.
–Triangles on equal bases having equal areas have their corresponding altitudes equal.
• 3.5 Similarity of Triangles
–Similar figures ,concept of similarity in geometry. Basic proportionality theorem and its converse.
–If a line is drawn parallel to one side of a triangle , the other two sides are divided in the same ratio.
–If a line divides any two sides of a triangle in the same ratio , it is parallel to the third side.
–Criteria for similarity of triangles : AAA, SSS and SAS .
–If a perpendicular is drawn from the vertex of the right angle of a triangle to its hypotenuse , the triangles on each side of the perpendicular are similar to the whole triangle and to each
–The internal bisector of an angle of a triangle divides the opposite side in the ratio of the sides containing the angle.
–Ratio of the areas of two similar triangles is equal to the ratio of the squares on their corresponding sides.
*In a right triangle ,the square on the hypotenuse is equal to the sum of the squares on the other two sides (Baudhayan / Pythagoras theorem)
In a triangle ,if the square on one side is equal to the sum of the squares on the remaining two sides ,the angle opposite to the first side is a right angle
( converse of Baudhayan /Pythagoras theorem)
• 3.6 Circles
Definition of a circle and related concepts. Concept of concentric circle.
Congruent circles :
–Two circles are congruent if and only if they have equal radii.
–Two arcs of a circle (or congruent circles) are congruent, if the angles subtended by them at the centre(s) are equal, and its converse.
–Two arcs of a circle( or congruent circles)are congruent ,if their corresponding chords are equal , and its converse.
–Equal chords of a circle( or congruent circles) subtend equal angles at the centre(s) and conversely , if the angles subtended by the chords at the centre of a circle are equal , then the chords
are equal.
–Perpendicular drawn from the centre of a circle to a chord bisects the chord.
–The line joining the centre of a circle to the mid point of a chord is perpendicular to the chord.
–There is one and only one circle passing through three given non collinear points.
–Equal chords of a circle (or of congruent circles) are equidistant from the centre (centres) and its converse.
• 3.7 Angles in a Circle and Cyclic Quadrilateral
The angle subtended by an arc at the centre is double the angle subtended by it at any point on the remaining part of the circle.
*Angles in the same segment of a circle are equal.
Angle in a semi circle is a right angle.
Concyclic points.
*Sum of the opposite angles of a cyclic quadrilateral is 180°.
If a pair of opposite angles of a quadrilateral is supplementary , then the quadrilateral is cyclic.
• 3.8 Secants , Tangents and their Properties
Intersection of a line and a circle. Point of contact of a line and a circle.
A tangent at any point of a circle is perpendicular to the radius through the point of contact.
Tangents drawn from an external point to a circle are of equal length.
If two chords AB and CD of a circle intersect at P (inside or outside the circle), then PA × PB = PC × PD.
If PAB is a secant to a circle intersecting the circle at A and B, and PT is a tangent to the circle at T, then PA × PB = PT².
If a chord is drawn through the point of contact of a tangent to a circle , then the angles which this chord makes with the given tangent are equal respectively to the angles formed by the chord
in the corresponding alternate segments.
• 3.9 Constructions
–Division of a line segment internally in a given ratio.
–Construction of triangles with given data:
(a) Construction of a triangle with given data : SSS , SAS , ASA , RHS
(b) perimeter and base angles (c) its base , sum and difference of the other two sides and one base angle.(d) its two sides and a median corresponding to one of these sides.
–Construction of parallelograms , rectangles, squares , rhombuses and trapeziums.
–Constructions of quadrilaterals given :
(a) four sides and a diagonal (b) three sides and both diagonals
(c) two adjacent sides and three angles (d) three sides and two included angles
(e) four sides and an angle
–Construction of a triangle equal in area to a given quadrilateral.
–Construction of tangents to a circle from a point
(a) outside it
(b) on it using the centre of the circle .
–Construction of circumcircle and incircle of a triangle.
• 3.10 Coordinate Geometry
Coordinate system. Distance between two points. Section formula (internal division only).
Coordinates of the centroid of a triangle.
Module 4 : Mensuration
Study time : 25 Hours Marks : 10
Scope and Approach : In this module an attempt would be made to answer the following questions arising in our daily life.
–How do you find the length of the barbed wire needed to enclose a rectangular kitchen garden ?
–What is the cost of constructing two perpendicular concrete rectangular paths ?
–What is the area of the four walls of a room with given dimensions ?
–How much plywood is needed to be fixed on the top of a rectangular table ?
–The formulae for areas of plane figures would be taught in the first lesson.
In the second lesson , the surface and volume of the different solids ( three dimensional figures ) would be taken up and formulae given. Their applications to daily life situations would then be
taken up.
• 4.1 Area of Plane Figures
–Rectilinear figures. Perimeter and area of a square , rectangle ,triangle, trapezium , quadrilateral , parallelogram and rhombus.
–Area of a triangle using Hero’s formula. Area of rectangular paths .
–Simple problems based on the above.
–Non rectilinear figures : Circumference and area of a circle.
–Area and perimeter of a sector.
–Area of circular paths. Simple problems based on the above.
• 4.2 Surface Area and Volume of Solids
–Surface area and volume of a cube , cuboid , cylinder , cone , sphere and hemisphere. ( combination of two solids should be avoided ).
–Area of four walls of a room.
Module 5 : Trigonometry
Study time : 20 Hours Marks : 12
Scope and Approach : In astronomy one often encounters the problems of predicting the position and path of various heavenly bodies ,which in turn requires the way of finding the remaining sides and
angles of a triangle provided some of its sides and angles are known. The solutions of these problems has also numerous applications to engineering and geographical surveys ,navigation etc. An
attempt has been made in this module to solve these problems. It is done by using ratios of the sides of a right triangle with respect to its acute angle called trigonometric ratios. The module will
enable the learners to find other trigonometric ratios provided one of them is known. It also enables the learners to establish well known identities and to solve problems based on trigonometric
ratios and identities.
Measurement of accessible lengths and heights (e.g. height of a pillar, height of a house etc.) and inaccessible heights ( e.g. height of a hill top, height of a lamp post on the opposite bank of a
river (without bridge),celestial objects etc. ) is a routine requirement. The learners will be able to distinguish between angles of elevation and depression and use trigonometric ratios for solving
simple real life problems based on heights and distances , which do not involve more than two
right triangles.
• 5.1 Introduction to Trigonometry
Trigonometric ratios of an acute angle of a right triangle.
Relationships between trigonometric ratios.
Trigonometric identities : sin²θ + cos²θ = 1, sec²θ = 1 + tan²θ, cosec²θ = 1 + cot²θ
Problems based on trigonometric ratios and identities.
• 5.2 Trigonometric Ratios of Some Special Angles
Trigonometric ratios of 30°, 45° and 60°.
(Results for trigonometric ratios of 30°, 45° and 60° to be proved geometrically)
Trigonometric ratios of complementary angles.
Application of these trigonometric ratios for solving problems such as heights and distances( problems on heights and distances should not involve more than two right triangles)
Module 6 : Statistics
Study time : 35 Hours Marks :12
Scope and Approach : Since ancient times, it has been the practice by the householders , shopkeepers , individuals etc to keep records of their receipts, expenditures and other resources. To make the
learners acquainted with the methods of recording, condensing and culling out relevant information from the given data, the learners would be exposed to the lesson on Data and their Representation.
Everyday we come across data in the form of tables, graphs, charts etc on various aspects of economy, advertisements which are eye catching. In order to read and understand these, the learners would be introduced to the lesson on Graphical Representation of Data.
Representation of Data.
Sometimes we are required to describe data arithmetically, like the average age of a group, the median score of a group or the modal collar size of a group. To be able to do this, the learners would be introduced
to the lesson on Measures of Central Tendency. They would also be taught characteristics and limitation of these measures.
‘It will rain today’, ‘India will win the match against England’, are statements that involve the chance factor. The learners would be introduced to the study of elementary probability as measure of
uncertainty, through games of chance- tossing a coin, throwing a die , drawing a card at random from a well shuffled pack etc.
• 6.1 Data and their Representation
–Introduction to Statistics. Statistics and statistical data. Primary and secondary data.
–Ungrouped/raw and grouped data. Class marks ,class intervals , class limits and true class limits. Frequency, frequency distribution table. Cumulative frequency.
–Cumulative frequency table.
• 6.2 Graphical Representation of Data
–Drawing of Bar charts, Histograms and frequency polygons.
–Reading and interpretation of Bar charts and Histograms. Reading and construction of graphs related to day to day activities ;temperature – time graph ,pressure – volume graph and velocity –
time graph etc.
• 6.3 Measures of Central Tendency
–Mean of ungrouped (raw ) and grouped data. Mode and median of raw data.
–Properties of mean and median .
• 6.4 Introduction to Probability
–Elementary idea of probability as a measure of chance of occurrence of an event ( for single event only ) Problems based on tossing a coin ,throwing a die, drawing a card from a well shuffled
pack .
my name is sonu sidhewar iam joining nios 10 class in marathi midium I have get study material but marathi medium assignment notes not Available so please sagges me
sonu sidhewar on 05 Jan 2013
how to solve problem for geometry construction of similar triangles for 10th standard
atul jain on 08 Sep 2011
|
{"url":"http://examsindia.net/2009/12/nios-syllabus-class-10-nios-syllabus-mathematics.html","timestamp":"2014-04-18T10:33:43Z","content_type":null,"content_length":"48494","record_id":"<urn:uuid:270201b1-22b9-470c-83e0-448aac2bb210>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00138-ip-10-147-4-33.ec2.internal.warc.gz"}
|
act one
• 1. How many toy cars are in that circle?
• 2. Guess as close as you can.
• 3. Give an answer you know is too high.
• 4. Give an answer you know is too low.
act two
• 5. What information will you need to know to solve the problem?
act three
• 6. What was the percent error of your answer?
• 7. How would your estimates have to change to get a more accurate answer?
• 8. It is estimated that 10,000 different Hot Wheels cars have been produced since 1967. How large of a circle would it take to fit all those cars inside?
• 9. Take a long piece of string. Draw a circle in chalk outside using that string as its radius. Estimate how many cars would fit inside and then calculate it.
• 10. How much did it cost?
|
{"url":"http://threeacts.mrmeyer.com/carcaravan/","timestamp":"2014-04-16T05:07:48Z","content_type":null,"content_length":"4451","record_id":"<urn:uuid:a86827bf-8d6c-4aa4-ae87-748b9a6d83d6>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00114-ip-10-147-4-33.ec2.internal.warc.gz"}
|
A question on axiomatization of geometry
Hello !
Does anyone have an idea if it is possible to prove Dedekind's continuity axiom from the list of Hilbert's axioms, as given in his book Grundlagen der Geometrie ? Especially what is the role of the
last completeness axiom in his list ?
Thanks in advance
|
{"url":"http://www.physicsforums.com/showthread.php?t=410592","timestamp":"2014-04-21T12:12:25Z","content_type":null,"content_length":"19127","record_id":"<urn:uuid:9c1708a6-bd70-43bc-a6df-d897b63f7f9f>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00191-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Physics Course/Non Linear Motion
Non Linear MotionEdit
Non Linear Motion refers to any motion that is not along a straight line, i.e. motion with changing direction.
Any non linear motion can be represented mathematically by its speed as a function of time, v(t). Then
v = v(t)
$a = \frac{d v(t)}{dt}$
$s = \int v(t) dt$
$F = m \frac{d v(t)}{dt}$
$W = F \times s = F \int v(t) dt$
$P = \frac{W}{t} = \frac{F}{t} \int v(t) dt$
Characteristics Symbol Calculus Equation
Speed v $\frac{ds(t)}{dt}$
Accelleration a $\frac{dv(t)}{dt} = \frac{d^2s}{dt^2}$
Distance s $\int v(t) dt$
Force F $m \frac{dv(t)}{dt}$
Work W $F \int v(t) dt$
Power P $\frac{W}{t} = \frac{F}{t} \int v(t) dt$
Impulse F[m] $m t \frac{dv(t)}{dt}$
Momentum m[v] $m \frac{ds(t)}{dt}$
Energy E $F \int v(t) dt$
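A brief numerical illustration of the table above (my addition; the example v(t), mass and step size are arbitrary choices):

```python
# crude numerical check of the formulas for v(t) = 3*t**2, m = 2 kg, over t in [0, 2] s
m, dt = 2.0, 1e-4
ts = [k * dt for k in range(int(2 / dt) + 1)]
v = lambda t: 3 * t * t

s = sum(v(t) * dt for t in ts)              # distance = integral of v dt      (exact: t**3 -> 8)
a_end = (v(2.0) - v(2.0 - dt)) / dt         # acceleration at t = 2 s          (exact: 6*t -> 12)
F_end = m * a_end                           # force at t = 2 s                 (exact: 24)
p_end = m * v(2.0)                          # momentum at t = 2 s              (exact: 24)
print(round(s, 2), round(a_end, 2), round(F_end, 2), round(p_end, 2))   # 8.0 12.0 24.0 24.0
```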
|
{"url":"http://en.m.wikibooks.org/wiki/Physics_Course/Non_Linear_Motion","timestamp":"2014-04-19T12:23:57Z","content_type":null,"content_length":"16715","record_id":"<urn:uuid:936cb378-b27b-4590-8f33-21f366ca6eab>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00093-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Mathematical physics
Mathematical physics is the scientific discipline concerned with the interface of mathematics and physics. There is no real consensus about what does or does not constitute mathematical physics. A
very typical definition is the one given by the Journal of Mathematical Physics: "the application of mathematics to problems in physics and the development of mathematical methods suitable for such
applications and for the formulation of physical theories."
This definition does, however, not cover the situation where results from physics are used to help prove facts in abstract mathematics which themselves have nothing particular to do with physics.
This phenomenon has become increasingly important, with developments from string theory research breaking new ground in mathematics. Eric Zaslow coined the phrase physmatics to describe these
developments, although other people would consider them as part of mathematical physics proper.
Important fields of research in mathematical physics include: functional analysis/quantum physics, geometry/general relativity and combinatorics/probability theory/ statistical physics. More
recently, string theory has managed to make contact with many major branches of mathematics including algebraic geometry, topology, and complex geometry.
Scope of the subject
There are several distinct branches of mathematical physics, and these roughly correspond to particular historical periods. The theory of partial differential equations (and the related areas of
variational calculus, Fourier analysis, potential theory, and vector analysis) are perhaps most closely associated with mathematical physics. These were developed intensively from the second half of
the eighteenth century (by, for example, D'Alembert, Euler, and Lagrange) until the 1930s. Physical applications of these developments include hydrodynamics, celestial mechanics, elasticity theory,
acoustics, thermodynamics, electricity, magnetism, and aerodynamics.
The theory of atomic spectra (and, later, quantum mechanics) developed almost concurrently with the mathematical fields of linear algebra, the spectral theory of operators, and more broadly,
functional analysis. These constitute the mathematical basis of another branch of mathematical physics.
The special and general theories of relativity require a rather different type of mathematics. This was group theory: and it played an important role in both quantum field theory and differential
geometry. This was, however, gradually supplemented by topology in the mathematical description of cosmological as well as quantum field theory phenomena.
Statistical mechanics forms a separate field, which is closely related with the more mathematical ergodic theory and some parts of probability theory.
The usage of the term 'Mathematical physics' is sometimes idiosyncratic. Certain parts of mathematics that initially arose from the development of physics are not considered parts of mathematical
physics, while other closely related fields are. For example, ordinary differential equations and symplectic geometry are generally viewed as purely mathematical disciplines, whereas dynamical
systems and Hamiltonian mechanics belong to mathematical physics.
Prominent mathematical physicists
The great seventeenth century English physicist and mathematician Isaac Newton [1642-1727] developed a wealth of new mathematics (for example, calculus and several numerical methods (most notably
Newton's method)) to solve problems in physics. Other important mathematical physicists of the seventeenth century included the Dutchman Christiaan Huygens [1629-1695] (famous for suggesting the wave
theory of light), and the German Johannes Kepler [1571-1630] ( Tycho Brahe's assistant, and discoverer of the equations for planetary motion/orbit).
In the eighteenth century, two of the great innovators of mathematical physics were Swiss: Daniel Bernoulli [1700-1782] (for contributions to fluid dynamics, and vibrating strings), and, more
especially, Leonhard Euler [1707-1783], (for his work in variational calculus, dynamics, fluid dynamics, and many other things). Another notable contributor was the Italian-born Frenchman,
Joseph-Louis Lagrange [1736-1813] (for his work in mechanics and variational methods).
In the late eighteenth and early nineteenth centuries, important French figures were Pierre-Simon Laplace [1749-1827] (in mathematical astronomy, potential theory, and mechanics) and Siméon Denis
Poisson [1781-1840] (who also worked in mechanics and potential theory). In Germany, both Carl Friedrich Gauss [1777-1855] (in magnetism) and Carl Gustav Jacobi [1804-1851] (in the areas of dynamics
and canonical transformations) made key contributions to the theoretical foundations of electricity, magnetism, mechanics, and fluid dynamics.
Gauss (along with Euler) is considered by many to be one of the three greatest mathematicians of all time. His contributions to non-Euclidean geometry laid the groundwork for the subsequent
development of Riemannian geometry by Bernhard Riemann [1826-1866]. As we shall see later, this work is at the heart of general relativity.
The nineteenth century also saw the Scot, James Clerk Maxwell [1831-1879], win renown for his four equations of electromagnetism, and his countryman, Lord Kelvin [1824-1907] make substantial
discoveries in thermodynamics. Among the English physics community, Lord Rayleigh [1842-1919] worked on sound; and George Gabriel Stokes [1819-1903] was a leader in optics and fluid dynamics; while
the Irishman William Rowan Hamilton [1805-1865] was noted for his work in dynamics. The German Hermann von Helmholtz [1821-1894] is best remembered for his work in the areas of electromagnetism,
waves, fluids, and sound. In the U.S.A., the pioneering work of Josiah Willard Gibbs [1839-1903] became the basis for statistical mechanics. Together, these men laid the foundations of
electromagnetic theory, fluid dynamics and statistical mechanics.
The late nineteenth and the early twentieth centuries saw the birth of special relativity. This had been anticipated in the works of the Dutchman, Hendrik Lorentz [1852-1928], with important insights
from Jules-Henri Poincaré [1854-1912], but which were brought to full clarity by Albert Einstein [1879-1955]. Einstein then developed the invariant approach further to arrive at the remarkable
geometrical approach to gravitational physics embodied in general relativity. This was based on the non-Euclidean geometry created by Gauss and Riemann in the previous century.
Einstein's special relativity replaced the Galilean transformations of space and time with Lorentz transformations in four dimensional Minkowski space-time. His general theory of relativity replaced
the flat Euclidean geometry with that of a Riemannian manifold, whose curvature is determined by the distribution of gravitational matter. This replaced Newton's scalar gravitational force by the
Riemann curvature tensor.
The other great revolutionary development of the twentieth century has been quantum theory, which emerged from the seminal contributions of Max Planck [1856-1947] (on black body radiation) and
Einstein's work on the photoelectric effect. This was, at first, followed by a heuristic framework devised by Arnold Sommerfeld [1868-1951] and Niels Bohr [1885-1962], but this was soon replaced by
the quantum mechanics developed by Max Born [1882-1970], Werner Heisenberg [1901-1976], Paul Dirac [1902-1984], Erwin Schrodinger [1887-1961], and Wolfgang Pauli [1900-1958]. This revolutionary
theoretical framework is based on a probabilistic interpretation of states, and evolution and measurements in terms of self-adjoint operators on an infinite dimensional vector space ( Hilbert space,
introduced by David Hilbert [1862-1943]). Paul Dirac, for example, used algebraic constructions to produce a relativistic model for the electron, predicting its magnetic moment and the existence of
its antiparticle, the positron.
Later important contributors to twentieth century mathematical physics include Satyendra Nath Bose [1894-1974], Julian Schwinger [1918-1994], Sin-Itiro Tomonaga [1906-1979], Richard Feynman
[1918-1988], Freeman Dyson [1923- ], Hideki Yukawa [1907-1981], Roger Penrose [1931- ], Stephen Hawking [1942- ], and Edward Witten [1951- ].
Mathematically rigorous physics
The term 'mathematical' physics is also sometimes used in a special sense, to distinguish research aimed at studying and solving problems inspired by physics within a mathematically rigorous
framework. Mathematical physics in this sense covers a very broad area of topics with the common feature that they blend pure mathematics and physics. Although related to theoretical physics,
'mathematical' physics in this sense emphasizes the mathematical rigour of the same type as found in mathematics. On the other hand, theoretical physics emphasizes the links to observations and
experimental physics which often requires theoretical physicists (and mathematical physicists in the more general sense) to use heuristic, intuitive, and approximate arguments. Such arguments are not
considered rigorous by mathematicians. Arguably, rigorous mathematical physics is closer to mathematics, and theoretical physics is closer to physics.
Such mathematical physicists primarily expand and elucidate physical theories. Because of the required rigor, these researchers often deal with questions that theoretical physicists have considered
to already be solved. However, they can sometimes show (but neither commonly nor easily) that the previous solution was incorrect.
The field has concentrated in three main areas: (1) quantum field theory, especially the precise construction of models; (2) statistical mechanics, especially the theory of phase transitions; and (3)
nonrelativistic quantum mechanics (Schrödinger operators), including the connections to atomic and molecular physics.
The effort to put physical theories on a mathematically rigorous footing has inspired many mathematical developments. For example, the development of quantum mechanics and some aspects of functional
analysis parallel each other in many ways. The mathematical study of quantum statistical mechanics has motivated results in operator algebras. The attempt to construct a rigorous quantum field theory
has brought about progress in fields such as representation theory. Use of geometry and topology plays an important role in string theory. The above are just a few examples. An examination of the
current research literature would undoubtedly give other such instances.
|
{"url":"http://www.pustakalaya.org/wiki/wp/m/Mathematical_physics.htm","timestamp":"2014-04-20T17:00:01Z","content_type":null,"content_length":"23255","record_id":"<urn:uuid:28b66640-6844-4c2e-9b34-addc40222cd6>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00135-ip-10-147-4-33.ec2.internal.warc.gz"}
|
FOM: Uniqueness of axioms ? for Tennant, Detlefsen, et al.
Robert S Tragesser RTragesser at compuserve.com
Mon Dec 22 06:41:26 EST 1997
I've been following the thread on independent
axiomatization from the point of view of the
question of whether or not arithmetic is a
Science in the Aristotelian sense [and under-
standing this question in the light of Mancosu's
history of the question in the late Renaissance,
and the role of the question in the emergence of
"modern" mathematics -- and this in turn was
inspired by Evert Beth's very hard won observation
that we go desperately wrong if we try to understand
FOM against the background of Kant rather than Aristotle].
Detlefsen had observed that in an Aristotelian
science the axioms should be atomic, and one way
of understanding this is in terms of their mutual
independence. Hence the foundational importance
of the question of independent axioms.
One learns from Mancosu that it is permitted
in an Aristotelian science that there are valid
deductions which are not causal or explanatory.
(From a point of view, RAA proofs have this
character.) Thus one must also say which of the
possible deductions count as explanatory. One
then needs to show that any deduction from the
axioms can be transformed into or be replaced by
an explanatory/direct proof. So it must be possible
to choose (mutually indepedent) axioms and logic
so that the theorems of arithmetic can be given
such direct proofs?
There is an interesting complication already
suggested by Harvey Friedman in another context:
that not all the truths of arithmetic are
essential truths. That is to say, some truths
of arithmetic may be accidental truths -- they
cannot be understood on the basis of WHAT-IS alone.
(We could then have the prospect of a system of
arithmetic being complete with respect to essential
truths or elementary/elemental truths, but not
with respect to all truths?)
QUESTION OF THE UNIQUENESS OF THE AXIOMS:
First, it does seem that we do not need strict
logical independence of the axioms, but only that
no axioms be deducible from other axioms BY
EXPLANATORY DEDUCTIONS (e.g., by a normal proof).
The important thing is that the axioms be atoms of
explanation, not logical atoms.
Second, it is important that one be able to
detect the axioms qua atoms of explanation. Is
there then some sense in which the axioms of
PA may be regarded as forced, unique?? Here
is what I have in mind: given axioms which are
independent at least up to explanatory deductions,
is there some strong sense in which any other such
set of axioms are inter-explanatorily deducible?
Recall that Aristotelian axioms must have the
character of being definitions. Thus the number-
theoretic books rather than the geometric books of
Euclid are paradigmatic here. [Seidenberg
argued that Euclid wanted Book I to be based on
Definitions alone, but couldn't resolve all
the Postulates into Definitions.]
That is, the axiom-atoms are to be atomic
elements of species/concepts/ideas. This
suggests that something more like a combinatorial
or lambda calculus might be more appropriate for
resolving arithmetic into its specific/conceptual
elements, rather than a predicate calculus.
I see the fundamental or foundational issue here
is of course not whether we can modernize Aristotle,
but whether or not mathematics can be resolved into
self-sustaining, self-standing spheres, walled
cities, rather than megalopolis sprawl (of the sort
Lakatos envisions)?
robert tragesser
|
{"url":"http://www.cs.nyu.edu/pipermail/fom/1997-December/000617.html","timestamp":"2014-04-19T04:22:24Z","content_type":null,"content_length":"6063","record_id":"<urn:uuid:d4f70b59-0aef-4875-abed-a1bc073ed12c>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00208-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Chemical Biology - Bioinformatics Concentration Curriculum
(1) CS 115 must have been chosen in Freshman year
(2) Requires advisor's approval.
(3) Requires advisor's approval.
(4) Project/Research can be either a project (CH 497) or thesis (CH 499). For American Chemical Society certification, Ch 412 is required.
|
{"url":"http://www.stevens.edu/ses/ccbbme/undergrad/prog_bioinformatics_cur","timestamp":"2014-04-19T22:22:34Z","content_type":null,"content_length":"84929","record_id":"<urn:uuid:bf387f90-bf36-4146-b1a3-fa9fbf4604aa>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00488-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Vector product for higher dimensions than 3?
April 29th 2008, 02:03 AM #1
I know that the vector product (for $\Bbb{R}^3$) is defined in a way so that an orthogonal vector is produced from two original vectors. In $\Bbb{R}^2$ you can't create a vector $\neq\overline{0}$
orthogonal to two linear independent vectors. On the other hand, you can if you only have one vector from the beginning. In $\Bbb{R}^4$, you can if you have 3 linear independent vectors from the
beginning (you'll get a line of possible vectors contrary to if you only have 2 vectors to perform the multiplication with, then you'll get a plane).
Is there some kind of general vector product for $n-1$ vectors in $\Bbb{R}^n$? (this would be almost the same as a method for obtaining a vector orthogonal to the other vectors.)
April 29th 2008, 12:13 PM #2
In some ways, the vector product is a feature unique to three-dimensional space. But there is a construction called the wedge product that generalises some of the properties of the vector product to higher-dimensional spaces.
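To make this concrete, here is a small sketch (not part of the original thread) of one standard way to obtain a vector orthogonal to n-1 given vectors in $\Bbb{R}^n$: cofactor expansion of the formal determinant whose first row is the standard basis, which is essentially the Hodge dual of the wedge product mentioned in the reply. The function name and the NumPy-based implementation are illustrative choices only.

```python
import numpy as np

def generalized_cross(vectors):
    """Return a vector orthogonal to the n-1 given vectors in R^n.

    `vectors` is an (n-1) x n array; component i of the result is the signed
    (n-1)x(n-1) minor obtained by deleting column i (cofactor expansion of the
    formal determinant whose first row is the standard basis)."""
    vectors = np.asarray(vectors, dtype=float)
    k, n = vectors.shape
    if k != n - 1:
        raise ValueError("expected exactly n-1 vectors from R^n")
    return np.array([(-1) ** i * np.linalg.det(np.delete(vectors, i, axis=1))
                     for i in range(n)])

# Quick check in R^4: the result is orthogonal to all three inputs.
v = np.array([[1.0, 2.0, 0.0, 1.0],
              [0.0, 1.0, 1.0, 0.0],
              [1.0, 0.0, 0.0, 3.0]])
w = generalized_cross(v)
print(np.allclose(v @ w, 0))  # True
```

In $\Bbb{R}^3$ this reduces to the ordinary cross product, and the result is the zero vector exactly when the inputs are linearly dependent, matching the behaviour discussed in the question.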
|
{"url":"http://mathhelpforum.com/advanced-algebra/36497-vector-product-higher-dimensions-than-3-a.html","timestamp":"2014-04-16T19:08:27Z","content_type":null,"content_length":"35106","record_id":"<urn:uuid:97bcd577-9bd0-432b-9082-818b5f1cd0a0>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00535-ip-10-147-4-33.ec2.internal.warc.gz"}
|
The Complexity of Robot Motion
- IEEE Trans. Auto. Control, 1993
"... Abstract--In this paper, we investigate methods for steering systems with nonholonomic constraints between arbitrary configurations. Early work by Brockett derives the optimal controls for a set
of canonical systems in which the tangent space to the configuration manifold is spanned by the input vec ..."
Cited by 251 (15 self)
Abstract--In this paper, we investigate methods for steering systems with nonholonomic constraints between arbitrary configurations. Early work by Brockett derives the optimal controls for a set of
canonical systems in which the tangent space to the configuration manifold is spanned by the input vector fields and their first order Lie brackets. Using Brockett’s result as motivation, we derive
suboptimal trajectories for systems which are not in canonical form and consider systems in which it takes more than one level of bracketing to achieve controllability. These trajectories use
sinusoids at integrally related frequencies to achieve motion at a given bracketing level. We define a class of systems which can be steered using sinusoids (chained systems) and give conditions
under which a class of two-input systems can be converted into this form. I.
- Robotics and Autonomous Systems , 1994
"... The problem of synthesizing and analyzing collective autonomous agents has only recently begun to be practically studied by the robotics community. This paper overviews the most prominent
directions of research, defines key terms, and summarizes the main issues. Finally, it briefly describes our app ..."
Cited by 123 (14 self)
The problem of synthesizing and analyzing collective autonomous agents has only recently begun to be practically studied by the robotics community. This paper overviews the most prominent directions
of research, defines key terms, and summarizes the main issues. Finally, it briefly describes our approach to controlling group behavior and its relation to the field as a whole.
, 1992
"... This paper describes a practical path planner for nonholonomic robots in environments with obstacles. The planner is based on building a one-dimensional, maximal clearance skeleton through the
configuration space of the robot. However rather than using the Euclidean metric to determine clearance, a ..."
Cited by 32 (1 self)
This paper describes a practical path planner for nonholonomic robots in environments with obstacles. The planner is based on building a one-dimensional, maximal clearance skeleton through the
configuration space of the robot. However rather than using the Euclidean metric to determine clearance, a special metric which captures information about the nonholonomy of the robot is used. The
robot navigates from start to goal states by loosely following the skeleton; the resulting paths taken by the robot are of low "complexity." We describe how much of the computation can be done
off-line once and for all for a given robot, making for an efficient planner. The focus is on path planning for mobile robots, particularly the planar two-axle car, but the underlying ideas are quite
general and may be applied to planners for other nonholonomic robots.
, 1996
"... In the first part of this paper, we dene approximate polynomial gcds (greatest common divisors) and extended gcds provided that approximations to the zeros of the input polynomials are
available. We relate our novel definition to the older and weaker ones, based on perturbation of the coefficients o ..."
Cited by 24 (8 self)
In the first part of this paper, we define approximate polynomial gcds (greatest common divisors) and extended gcds provided that approximations to the zeros of the input polynomials are available. We
relate our novel definition to the older and weaker ones, based on perturbation of the coefficients of the input polynomials, we demonstrate some deficiency of the latter definitions (which our
definition avoids), and we propose new effective sequential and parallel (RNC and NC) algorithms for computing approximate gcds and extended gcds. Our stronger results are obtained with no increase of
the asymptotic bounds on the computational cost. This is partly due to application of our recent nearly optimal algorithms for approximating polynomial zeros. In the second part of our paper, working
under the older and more customary definition of approximate gcds, we modify and develop an alternative approach, which was previously based on the computation of the Singular Value Decomposition
(SVD) of the associat...
- IEEE Transactions on Robotics and Automation , 1993
"... This work considers the path planning problem for planar revolute manipulators operating in a workspace of polygonal obstacles. This problem is solved by determining the topological
characteristics of obstacles in configuration space, thereby determining where feasible paths can be found. A collisio ..."
Cited by 5 (1 self)
This work considers the path planning problem for planar revolute manipulators operating in a workspace of polygonal obstacles. This problem is solved by determining the topological characteristics
of obstacles in configuration space, thereby determining where feasible paths can be found. A collision-free path is then calculated by using the mathematical description of the boundaries of only
those configuration space obstacles with which collisions are possible. The key to this technique is a simple test for determining whether two disjoint obstacles are connected in configuration space.
This test allows the path planner to restrict its calculations to regions in which collisionfree paths are guaranteed a priori, thus avoiding unnecessary computations and resulting in an efficient
implementation. Typical timing results for environments consisting of four polyhedral obstacles comprised of a total of 27 vertices are on the order of 22 ms on a SPARC-IPC workstation. I.
Introduction The p...
- In Proc. 7th Canad. Conf. Comput. Geom , 1995
"... Exact computation is an important paradigm for the implementation of geometric algorithms. In this paper, we consider for the first time the practically important problem of collision detection
under this aspect. The task is to decide whether a polyhedral object can perform a prescribed sequence of ..."
Cited by 2 (0 self)
Exact computation is an important paradigm for the implementation of geometric algorithms. In this paper, we consider for the first time the practically important problem of collision detection under
this aspect. The task is to decide whether a polyhedral object can perform a prescribed sequence of translations and rotations in the presence of stationary polyhedral obstacles. We present an exact
decision method for this problem which is purely based on integer arithmetic. Our approach guarantees that the required binary length of intermediate numbers is bounded by 14L+ 22, where L denotes
the maximal bit-size of any input value. 1 Introduction Exact computation is widely recognized as one of the key issues in the design of geometric algorithms in the near future. Recent work on exact
algorithms focuses on traditional problems of computational geometry, such as the construction of Voronoi diagrams [9, 1, 6]. However, there are reasons for believing that exact computation will also
be a...
- Information Processing Letters , 1993
"... We present an algebraic algorithm to generate the exact general sweep boundary of a 2D curved object which changes its shape dynamically while moving along a parametric curve trajectory. ..."
We present an algebraic algorithm to generate the exact general sweep boundary of a 2D curved object which changes its shape dynamically while moving along a parametric curve trajectory.
"... Demining and unexploded ordnance (UXO) clearance are extremely tedious and dangerous tasks. The use of robots bypasses the hazards and potentially increases the efficiency of both tasks. A first
crucial step towards robotic mine/UXO clearance is to locate all the targets. This requires a path planne ..."
Demining and unexploded ordnance (UXO) clearance are extremely tedious and dangerous tasks. The use of robots bypasses the hazards and potentially increases the efficiency of both tasks. A first
crucial step towards robotic mine/UXO clearance is to locate all the targets. This requires a path planner that generates a path to pass a detector over all points of a mine/UXO field, i.e., a
planner that is complete. The current state of the art in path planning for mine/UXO clearance is to move a robot randomly or use simple heuristics. These methods do not possess completeness
guarantees which are vital for locating all of the mines/UXOs. Using such random approaches is akin to intentionally using imperfect detectors. In this paper, we first overview our prior complete
coverage algorithm and compare it with randomized approaches. In addition to the provable guarantees, we demonstrate that complete coverage achieves coverage in shorter time than random coverage. We
also show that the use of complete approaches enables the creation of a filter to reject bad sensor readings, which is necessary for successful deployment of robots. We propose a new approach to
handle sensor uncertainty that uses geometrical and topological features rather than sensor uncertainty models. We have verified our results by performing experiments in unstructured indoor
environments. Finally, for scenarios where some a priori information about a minefield is available, we expedite the demining process by introducing a probabilistic method so that a demining robot
does not have to perform exhaustive coverage.
|
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=200723","timestamp":"2014-04-20T07:18:55Z","content_type":null,"content_length":"32182","record_id":"<urn:uuid:aad67c64-5d88-416f-90ba-7e28f21cfce7>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00109-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Summary: Smooth Macro-Elements Based on Powell-Sabin Triangle Splits
Peter Alfeld 1) and Larry L. Schumaker 2)
Abstract. Macro-elements of smoothness $C^r$ on Powell-Sabin triangle splits are constructed for all $r \ge 0$. These new elements are improvements on elements constructed in [13] in that certain unneeded degrees of freedom have been removed.
§1. Introduction
A bivariate macro-element defined on a triangle $T$ consists of a finite dimensional linear space $S$ defined on $T$, and a set $\Lambda$ of linear functionals forming a basis for the dual of $S$.
It is common to choose the space $S$ to be a space of polynomials or a space of piecewise polynomials defined on some subtriangulation of $T$. The members of $\Lambda$, called degrees of freedom, are usually taken to be point evaluations of derivatives.
A macro-element defines a local interpolation scheme. In particular, if $f$ is a sufficiently smooth function, then we can define the corresponding interpolant as the unique function $s \in S$ such that $\lambda s = \lambda f$ for all $\lambda \in \Lambda$. We say that a macro-element has smoothness $C^r$ provided that if the element is used to construct an interpolating function locally on each triangle of a triangulation $\triangle$, then the resulting piecewise function is $C^r$ continuous globally.
|
{"url":"http://www.osti.gov/eprints/topicpages/documents/record/489/3879468.html","timestamp":"2014-04-17T13:12:39Z","content_type":null,"content_length":"8325","record_id":"<urn:uuid:99c57bf7-0178-4dcc-b4d7-060a73997f7d>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00547-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Interactive On-line Exercises of Basic Mathematical Functions -- from Wolfram Library Archive
Interactive On-line Exercises of Basic Mathematical Functions
Organization: Toyota National College of Technology
1999 International Mathematica Symposium
In this paper, a system is described for serving on-line exercises of mathematical functions for high-school students. The system basically depends on WWW technology and uses WWW browsers
running on students' computers for displaying questions, answers, and explanations. An exercise page has a graph of a mathematical function and requests a student to fill the text field with a
mathematical expression appropriate for the graph. When the student clicks on the "evaluate" button, the expression is sent to the system server and compared with the answer expression. If the
expression is not equal to the answer, graphs of both expressions and a comment on how his expression differs from the answer are displayed. Comparing the graphs and reading the comment, the
student is able to continue his guessing work until he gets the correct expression. The exercises have been developed for students who have difficulty understanding mathematical functions in
the author's algebra course and who seem to lack graphical images of the functions they have learned in class. They usually try to remember the written rules for converting a mathematical
function to its graph and back again. Such attempts sometimes succeed in simple cases like linear functions but fail for quadratic or more complicated functions, because the rules
for converting those functions become too complicated just to remember. What they lack is rich experience of handling real graphs. Although the recent development of electronic worksheets and
symbolic computing programs like Mathematica has made it easy and quick for a student to draw graphs, this is only half the way to understanding. Because the drawing is automated and
usually involves little guesswork, the student does not necessarily form ideas about, for example, the effect of a coefficient value on the shape or the position of a graph. The only rigid knowledge
he has is how to operate the program. The on-line exercises provide the other half. The first thing a student does in an exercise is a guess which tries to describe the given graph
correctly. There are no instructions for expressing the graph, nor hints, before the first guess. After the first guess is made, a hint or comment follows which helps him make the next
guess. This makes a kind of experiment on the desktop. A series of these experiments tells him the behavior of graphs in detail and their relation to the expressions. In addition, because the
evaluations are done symbolically using Mathematica functions, a wide variety of expressions is accepted, which is easier for typical students to accept. The exercise provides the students with
opportunities to learn from their mistakes and to build their own theory about mathematical functions.
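As a rough, hypothetical sketch of the comparison step described above (the real system uses Mathematica on the server; the Python/SymPy version below is only an illustration, and the function and variable names are invented for this example), the server-side check can accept any expression that is symbolically equal to the stored answer:

```python
import sympy as sp

x = sp.symbols('x')

def check_answer(student_input: str, answer: str) -> bool:
    """Accept the student's expression if it is symbolically equal to the answer."""
    try:
        student = sp.sympify(student_input)
    except sp.SympifyError:
        return False  # not a parseable expression
    return sp.simplify(student - sp.sympify(answer)) == 0

print(check_answer("2*(x + 1)**2 - 3", "2*x**2 + 4*x - 1"))  # True: algebraically equal
print(check_answer("2*x**2 + 4*x", "2*x**2 + 4*x - 1"))      # False
```

Because the comparison is symbolic rather than string-based, a student can enter the answer in any algebraically equivalent form, which is the behaviour the abstract describes.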
Education > Precollege
interactive on-line exercises, web based education, mathematical functions
|
{"url":"http://library.wolfram.com/infocenter/Conferences/6168/","timestamp":"2014-04-19T07:10:25Z","content_type":null,"content_length":"35497","record_id":"<urn:uuid:4323a292-6817-4a85-bafb-3b9fbfc3e0ae>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00146-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Why Transpose a Matrix?
Date: 03/23/2008 at 16:00:47
From: Rob
Subject: Why Transpose a Matrix
Why is a transpose of a matrix needed? I know how to transpose a
matrix, I just don't know why I have to do it. I have read through my
Adjustment computations book by Ghilani and Wolf as well as consulted
online help without finding any answers.
Date: 03/24/2008 at 12:08:33
From: Doctor Fenton
Subject: Re: Why Transpose a Matrix
Hi Rob,
Thanks for writing to Dr. Math. The principal value of the transpose
arises in connection with scalar or dot products. If
u = <u_1,u_2,...,u_n> and v = <v_1,v_2,...,v_n>
are vectors in R^n, then the dot product of u and v, u.v, is
defined by
u.v = u_1*v_1 + u_2*v_2 + ... + u_n*v_n .
If A is an n x n matrix, then direct computation shows that
(Au).v = u.(A^tv) and u.(Av) = (A^tu).v .
That is, if you have a dot product of two vectors, with a matrix A
applied to one of them, you can "move" the matrix to the other vector
if you transpose it.
That has many consequences, one of which is the following. Note that
the magnitude ||v|| of a vector is related to the dot product by
||v||^2 = v.v .
Suppose that A is an n x n matrix which preserves the dot product, so
that for all vectors u and v,
(Au).(Av) = u.v .
Then taking u = v, we see that
||Av||^2 = ||v||^2 ,
so that ||Av|| = ||v||, which means that A preserves distance. In
addition, A preserves angles, since
(Au).(Av) = ||Au|| ||Av|| cos(@1) ,
where @1 is the angle between Au and Av,
u.v = ||u|| ||v|| cos(@2) where @2 is the angle between u and v.
Then since ||Au|| = ||u|| and ||Av|| = ||v||, we have that
cos(@1) = cos(@2), (and @1 and @2 are in Quadrants 1 or 2)
so @1 = @2, and A preserves angles (the angle between Au and Av is
the same as the angle between u and v).
Also, if (Au).(Av) = u.v for all u and v, then
(A^tAu).v = u.v for all u and v,
((A^tA-I)u).v = 0 for all v,
and therefore (A^tA - I)u = 0 for all u, since the only vector whose dot
product with every vector is 0 is the zero vector. This means that
A^tA = I, that A is invertible, and A^(-1) = A^t.
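Here is a quick numerical illustration of these two facts (this check is my addition, not part of the original answer; the random matrix and the QR-based orthogonal matrix are just convenient examples):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
u = rng.standard_normal(4)
v = rng.standard_normal(4)

# (Au).v == u.(A^t v): the matrix "moves" across the dot product as its transpose.
print(np.isclose((A @ u) @ v, u @ (A.T @ v)))   # True

# A dot-product-preserving (orthogonal) matrix satisfies A^t A = I and A^-1 = A^t.
Q, _ = np.linalg.qr(A)                          # Q is orthogonal
print(np.allclose(Q.T @ Q, np.eye(4)))          # True
print(np.isclose((Q @ u) @ (Q @ v), u @ v))     # True: lengths and angles preserved
```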
You also use this property to show that the eigenvectors belonging
to different eigenvalues of a symmetric matrix in an inner product
space must be orthogonal.
A side benefit is that you can represent vectors with matrices. For
example, if we write vectors as column matrices, then the dot product
becomes a matrix operation:
u.v = (u^t)v .
If you have any questions, please write back, and I will try to
explain further.
- Doctor Fenton, The Math Forum
|
{"url":"http://mathforum.org/library/drmath/view/71949.html","timestamp":"2014-04-19T08:20:42Z","content_type":null,"content_length":"7607","record_id":"<urn:uuid:b1ee8f50-f4f1-4a8b-9459-0ce4e9b7bd02>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00075-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Measurement Science for Complex Information Systems
This project aims to develop and evaluate a coherent set of methods to understand behavior in complex information systems, such as the Internet, computational grids and computing clouds. Such large
distributed systems exhibit global behavior arising from independent decisions made by many simultaneous actors, which adapt their behavior based on local measurements of system state. Actor
adaptations shift the global system state, influencing subsequent measurements, leading to further adaptations. This continuous cycle of measurement and adaptation drives a time-varying global
behavior. For this reason, proposed changes in actor decision algorithms must be examined at large spatiotemporal scale in order to predict system behavior. This presents a challenging problem.
What are complex systems? Large collections of interconnected components whose interactions lead to macroscopic behaviors in:
• Biological systems (e.g., slime molds, ant colonies, embryos)
• Physical systems (e.g., earthquakes, avalanches, forest fires)
• Social systems (e.g., transportation networks, cities, economies)
• Information systems (e.g., Internet and compute clouds)
What is the problem? No one understands how to measure, predict or control macroscopic behavior in complex information systems: (1) threatening our nation’s security and (2) costing billions of dollars.
“[Despite] society’s profound dependence on networks, fundamental knowledge about them is primitive. [G]lobal communication … networks have quite advanced technological implementations but their
behavior under stress still cannot be predicted reliably.… There is no science today that offers the fundamental knowledge necessary to design large complex networks [so] that their behaviors can be
predicted prior to building them.”
above quote from Network Science 2006, a National Research Council report
What is the new idea? Leverage models and mathematics from the physical sciences to define a systematic method to measure, understand, predict and control macroscopic behavior in the Internet and
distributed software systems built on the Internet.
What are the technical objectives? Establish models and analysis methods that (1) are computationally tractable, (2) reveal macroscopic behavior and (3) establish causality. Characterize distributed
control techniques, including: (1) economic mechanisms to elicit desired behaviors and (2) biological mechanisms to organize components.
Why is this hard? Valid computationally tractable models that exhibit macroscopic behavior and reveal causality are difficult to devise. Phase-transitions are difficult to predict and control.
Who would care? All designers and users of networks and distributed systems with a 25-year history of unexpected failures:
• ARPAnet congestion collapse of 1980
• Internet congestion collapse of Oct 1986
• Cascading failure of AT&T long-distance network in Jan 1990
• Collapse of AT&T frame-relay network in April 1998 …
Businesses and customers who rely on today's information systems:
• “Cost of eBay's 22-Hour Outage Put At $2 Million”, Ecommerce, Jun 1999
• “Last Week’s Internet Outages Cost $1.2 Billion”, Dave Murphy, Yankee Group, Feb 2000
• “…the Internet "basically collapsed" Monday”, Samuel Kessler, Symantec, Oct 2003
• “Network crashes…cost medium-sized businesses a full 1% of annual revenues”, Technology News, Mar 2006
• “costs to the U.S. economy…range…from $65.6 M for a 10-day [Internet] outage at an automobile parts plant to $404.76 M for … failure …at an oil refinery”, Dartmouth study, Jun 2006
Designers and users of tomorrow's information systems that will adopt dynamic adaptation as a design principle:
• DoD to spend $13 B over the next 5 yrs on Net-Centric Enterprise Services initiative, Government Computer News, 2005
• Market derived from Web services to reach $34 billion by 2010, IDC
• Grid computing market to exceed $12 billion in revenue by 2007, IDC
• Market for wireless sensor networks to reach $5.3 billion in 2010, ONWorld
• Revenue in mobile networks market will grow to $28 billion in 2011, Global Information, Inc.
• Market for service robots to reach $24 billion by 2010, International Federation of Robotics
Hard Issues & Plausible Approaches
│ Hard Issues │ Plausible Approaches │
│ H1. Model scale │ A1. Scale-reduction techniques │
│ H2. Model validation │ A2. Sensitivity analysis & key comparisons │
│ H3. Tractable analysis │ A3. Cluster analysis and statistical analyses │
│ H4. Causal analysis │ A4. Evaluate analysis techniques │
│ H5. Controlling behavior │ A5. Evaluate distributed control regimes │
Model scale – Systems of interest (e.g., Internet and compute grids) extend over large spatiotemporal extent, have global reach, consist of millions of components, and interact through many adaptive
mechanisms over various timescales. Scale-reduction techniques must be employed. Which computational models can achieve sufficient spatiotemporal scaling properties? Micro-scale models are not
computable at large spatiotemporal scale. Macro-scale models are computable and might exhibit global behavior, but can they reveal causality? Meso-scale models might exhibit global behavior and
reveal causality, but are they computable? One plausible approach is to investigate abstract models from the physical sciences. e.g., fluid flows (from hydrodynamics), lattice automata (from gas
chemistry), Boolean networks (from biology) and agent automata (from geography). We can apply parallel computing to scale to millions of components and days of simulated time. Scale reduction may
also be achieved by adopting n-level experiments coupled for orthogonal fractional factorial (OFF) experiment designs.
Model validation – Scalable models from the physical sciences (e.g., differential equations, cellular automata, nk-Boolean nets) tend to be highly abstract. Can sufficient fidelity be obtained to
convince domain experts of the value of insights gained from such abstract models? We can conduct sensitivity analyses to ensure the model exhibits relationships that match known relationships from
other accepted models and empirical measurements. Sensitivity analysis also enables us to understand relationships between model parameters and responses. We can also conduct key comparisons along
three complementary paths: (1) comparing model data against existing traffic and analysis, (2) comparing results from subsets of macro/meso-scale models against micro-scale models and (3) comparing
simulations of distributed control regimes against results from implementations in test facilities, such as the Global Environment for Network Innovations.
Tractable analysis – The scale of potential measurement data is expected to be very large – O(10**15) – with millions of elements, tens of variables, and millions of seconds of simulated time. How
can measurement data be analyzed tractably? We could use homogeneous models, which allow one (or a few) elements to be sampled as representative of all. This reduces data volume to 10**6 – 10**7,
which is amenable to statistical analyses (e.g., power-spectral density, wavelets, entropy, Kolmogorov complexity) and to visualization. Where homogeneous models are inappropriate, we can use
clustering analysis to view relationships among groups of responses. We can also exploit correlation analysis and principal components analysis to identify and exclude redundant responses from
collected data. Finally, we can construct combinations of statistical tests and multidimensional data visualization techniques tailored to specific experiments and data of interest.
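As one concrete, purely illustrative example of the kind of summary statistic mentioned above (the time series below is synthetic, standing in for a sampled response such as queue length at a representative element), a power spectral density reduces a long trace to a short, interpretable curve:

```python
import numpy as np
from scipy.signal import welch

rng = np.random.default_rng(1)
t = np.arange(10**6)                # ~10**6 per-second samples from one element
series = 50 + 5*np.sin(2*np.pi*t/3600) + rng.standard_normal(t.size)  # hourly cycle + noise

# Welch estimate of the power spectral density: one of the statistics listed above.
freqs, psd = welch(series, fs=1.0, nperseg=4096)
peak = freqs[1 + np.argmax(psd[1:])]  # strongest non-DC frequency
print(peak)                           # close to the injected 1/3600 Hz cycle
```

Similar one-line reductions (wavelet coefficients, entropy estimates) make it feasible to compare thousands of such traces rather than inspecting raw data.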
Causal analysis – Tractable analysis strategies yield coarse data with limited granularity of timescales, variables and spatial extents. Coarseness may reveal macroscopic behavior that is not
explainable from the data. For example, an unexpected collapse in the probability density function of job completion times in a computing grid was unexplainable without more detailed data and
analysis. Multidimensional analysis can represent system state as a multidimensional space and depict system dynamics through various projections (e.g., slicing, aggregation, scaling). State-space
dynamics can segment system dynamics into an attractor-basin field and then monitor trajectories. Markov models providing compact, computationally efficient representations of system behavior can be
subjected to perturbation analyses to identify potential failure modes and their causes.
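To illustrate the flavor of such a perturbation analysis (the three-state model and all transition probabilities below are purely hypothetical toy values, not taken from the project's models), one can compute how the long-run probability of an undesirable state responds when a single transition probability is nudged:

```python
import numpy as np

def stationary(P):
    """Stationary distribution of an irreducible row-stochastic matrix P."""
    vals, vecs = np.linalg.eig(P.T)
    pi = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])
    return pi / pi.sum()

# Toy model of one grid node: states 0=idle, 1=busy, 2=backlogged/failed.
P = np.array([[0.90, 0.09, 0.01],
              [0.20, 0.78, 0.02],
              [0.50, 0.00, 0.50]])

Q = P.copy()                 # perturb one transition: busy -> backlogged
Q[1, 2] += 0.10
Q[1, 1] -= 0.10

print(stationary(P)[2], stationary(Q)[2])   # long-run mass in the bad state grows
```

Scanning many such single-entry perturbations points to the transitions whose degradation most inflates the probability of failure-like states, which is the role perturbation analysis plays in the paragraph above.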
Controlling Behavior – Large distributed systems and networks cannot be subjected to centralized control regimes because the system consists of too many elements, too many parameters, too much
change, and too many policies. Can models and analysis methods be used to determine how well decentralized control regimes stimulate desirable system-wide behaviors? Use price feedback (e.g.,
auctions, present-value analysis or commodity markets) to modulate supply and demand for resources or services. Use biological processes to differentiate function based on environmental feedback,
e.g., morphogen gradients, chemotaxis, local and lateral inhibition, polarity inversion, quorum sensing, energy exchange and reinforcement.
Additional Technical Details:
Related Presentations
Major Accomplishments:
Oct 2013 The project delivered an evaluation of a method combining genetic algorithms and simulation to search for failure scenarios in system models. The method was applied to a case study of the
Koala cloud computing model. The method was able to discover a known failure cause, but in a novel setting, and was also able to discover several unknown failure scenarios. Subsequently, the method
and evaluation were presented at an international workshop on simulation methods, and in two invited lectures, one at Mitre and one at George Mason University.
Dec 2012 In the fall of 2012, Dr. Mills contributed methods from this project to a DoE Office of Science Workshop on Computational Modeling of Big Networks (COMBINE). Dr. Mills also coauthored the
report, which was published in December of 2012.
Nov 2011 In the fall of 2009, this project started investigating large scale behavior in Infrastructure Clouds. The project produced three related papers during 2011, and all three papers were
accepted at the two major IEEE cloud computing conferences held during the year. The rapid success of the project in this new domain illustrates the general applicability of the methods we developed,
as well as the ease with which those methods can be applied.
Nov 2010 Developed and demonstrated Koala, a discrete-event simulator for Infrastructure Clouds. Completed a sensitivity analysis of Koala to identify unique response dimensions and significant
factors driving model behavior. Created multidimensional animations to visualize spatiotemporal variation in resource usage and load for cores, disks, memory and network interfaces in clouds with up
to O(10**5) nodes.
May 2010 NIST Special Publication 500-282: Study of Proposed Internet Congestion Control Mechanisms
Sep 2009 Draft NIST Special Publication: Study of Proposed Internet Congestion-Control Mechanisms
Apr 2009 Demonstrated applicability of Markov model perturbation analysis to communication networks.
Sep 2008 Developed a Markov model for a global, computational grid and demonstrated the feasibility of applying perturbation analysis to predict conditions that could lead to performance degradation.
Currently, perturbation analysis is a theoretical topic for which we show applications to large distributed systems.
Aug 2008 Developed and demonstrated multidimensional visualization software to explore relationships among complex data sets derived from simulations of large distributed systems. Currently, there
are no widely used visualization techniques to explore multidimensional data from simulations of large distributed systems.
Jun 2008 Developed and demonstrated an analytical framework to understand relationships among pricing, admission control and scheduling for resource allocation in computing clusters. Currently,
resource-allocation mechanisms for computing clusters rely on heuristics.
Apr 2008 Developed and validated MesoNetHS, which adds six proposed replacement congestion-control algorithms to MesoNet and allows the behavior of the algorithms to be investigated in a large
topology. Currently, these congestion-control algorithms are explored in simulated and empirical topologies of small size.
Sep 2007 Developed and demonstrated a methodology for sensitivity analysis of models of large distributed systems. Currently, sensitivity analysis of models for large distributed systems is
considered infeasible.
Apr 2007 Developed and verified MesoNet, a mesoscopic scale network simulation model that can be specified with about 20 parameters. Currently, specifying most network simulations requires hundreds
to thousands of parameters.
Internet Autonomous System Graph Circa 2001 - Image by Sandy Ressler
Start Date:
October 2, 2006
End Date:
Lead Organizational Unit:
The NIST Cloud Computing program
Sandy Ressler - Cloud Information Visualization
Open Grid Forum (OGF) Research Group on Grid Reliability and Robustness
Internet Congestion-Control Research Group (ICCRG) of the Internet Research Task Force (IRTF)
Professor Yan Wan, University of North Texas
Professor Jian Yuan, Tsinghua University
James Henriksen, Wolverine Software
Chris Dabrowski
Jim Filliben
Kevin Mills
Sandy Ressler
Former Contributors
Dong Yeon Cho
Faouzi Daoud
Brittany Devine
Daniel Genin
Cedric Houard
Fern Hunt
Michel Laverne
Vladimir Marbukh
Edward Schwartz
Zanin Xu
Jian Yuan
Related Programs and Projects:
Associated Products:
Related Publications
• K. Mills, C. Dabrowski, J. Filliben and S. Ressler, "Combining Genetic Algorithms and Simulation to Search for Failure Scenarios in System Models", Proceedings of the 5th International Conference
on Advances in Simulation, Venice, Italy, October 2013.
• K. Mills, et al., "Workshop Report on Computational Modeling of Big Networks (COMBINE)", C. Dovrolis, D. Nicol, and G. Riley (eds.), Department of Energy, Office of Science, December 2012.
• A. Haines, K. Mills and J. Filliben, "Determining Relative Importance and Best Settings for Genetic Algorithm Control Parameters", NIST Publication # 912472, December 3, 2012.
• K. Mills, C. Dabrowski and D. Santay, "Practical Issues When Implementing a Distributed Population of Cloud-Computing Simulators Controlled by a Genetic Algorithm", NIST Publication # 912474,
November, 28, 2012.
• C. Dabrowski, J. Filliben and K. Mills, "Predicting Global Failure Regimes in Complex Information Systems", NetONets 2012: Networks of Networks: Systemic Risk and Infrastructural
Interdependencies, Northwestern University, June 19, 2012.
• C. Dabrowski and K. Mills, "Extended Version of VM Leakage and Orphan Control in Open-Source Clouds", NIST Publication 909325; an abbreviated version of this paper was published in the Proceedings of IEEE CloudCom 2011, Nov. 29-Dec. 1, Athens, Greece.
• K. Mills, J. Filliben, D-Y. Cho and E. Schwartz, "Predicting Macroscopic Dynamics in Large Distributed Systems", Proceedings of ASME 2011 Conference on Pressure Vessels & Piping, Baltimore, MD, July 17-22, 2011.
• C. Dabrowski, F. Hunt and K. Morrison, Improving the Efficiency of Markov Chain Analysis of Complex Distributed Systems, NIST Inter-Agency Report 7744, November 2010.
• K. Mills, E. Schwartz and J. Yuan, "How to Model a TCP/IP Network using only 20 Parameters", Proceedings of the 2010 Winter Simulation Conference (WSC 2010), Dec. 5-8, Baltimore, MD.
• K. Mills and J. Filliben, "An Efficient Sensitivity Analysis Method for Network Simulation Models", presented at the 2010 Winter Simulation Conference (WSC 2010), Dec. 5-8, Baltimore, MD.
• K. Mills, J. Filliben, D. Cho, E. Schwartz and D. Genin, Study of Proposed Internet Congestion Control Mechanisms, NIST Special Publication 500-282, May 2010, 534 pages.
• D. Genin and V. Marbukh, "Bursty Fluid Approximation of TCP for Modeling Internet Congestion at the Flow Level", Proceedings of the 47th Annual Allerton Conference on Communication, Control, and
Computing, Sept 30-Oct 2, 2009.
• V. Marbukh, “From Network Microeconomics to Network Infrastructure Emergence”, Proceedings of the 1st IEEE International Workshop on Network Science for Communication Networks (NetSciCom 2009),
held in conjunction with IEEE Infocom 2009, April 24, 2009 - Rio de Janeiro, Brazil.
• F. Hunt and V. Marbukh, "Measuring the Utility/Path Diversity Tradeoff in Multipath Protocols", Proceedings of the 4th International Conference on Performance Evaluation Methodologies and Tools,
Pisa, Italy, October 20-22, 2009.
• C. Dabrowski, “Reliability in grid computing systems”, in Concurrency and Computation: Practice and Experience, John Wiley & Sons, 21/8, pp. 927-959, 2009.
• C. Dabrowksi and F. Hunt, “Using Markov Chain Analysis to Study Dynamic Behaviour in Large-Scale Grid Systems”, Proceedings of the 7th Australasian Symposium on Grid Computing and e-Research,
Wellington, New Zealand, Jan. 2009.
• C. Dabrowski and F. Hunt, Markov Chain Analysis for Large-Scale Grid Systems, NIST Inter-Agency Report 7566, January 2009.
• D. Genin and V. Marbukh, "Toward Understanding of Metastability in Cellular CDMA Networks: Emergence and Implications for Performance." GLOBECOM 2008, New Orleans, Nov. 31 - Dec. 4.
• K. Mills and C. Dabrowski, “Can Economics-based Resource Allocation Prove Effective in a Computation Marketplace?", Journal of Grid Computing, 6/3, September 2008, pp. 291-311.
• F. Hunt and V. Marbukh, “Dynamic Routing and Congestion Control Through Random Assignment of Routes”, Proceedings of the 5th International Conference on Cybernetics and Information Technologies,
Systems and Applications: CITSA 2008, Orlando FL, July 2008. (BEST PAPER)
• V. Marbukh, "Can TCP Metastability Explain Cascading Failures and Justify Flow Admission Control in the Internet?", Proceedings of the 15th International Conference on Telecommunications, Saint
Peterbsurg, Russia, June 16-19, 2008.
• V. Marbukh and K. Mills, "Demand Pricing & Resource Allocation in Market-based Compute Grids: A Model and Initial Results", Proceedings of the 7th International Conference on Networking, IEEE,
April 2008, pp. 752-757.
• V. Marbukh and S. Klink, "Decentralized Control of Large-Scale Networks as a Game with Local Interactions: Cross-Layer TCP/IP Optimization", 2nd International Conference on Performance Evaluation
Methodologies and Tools, Nantes, France, October 23-25, 2007.
• V. Marbukh, "Utility Maximization for Resolving Throughput/Reliability Trade-offs in an Unreliable Network with Multipath Routing", 2nd International Conference on Performance Evaluation
Methodologies and Tools, Nantes, France, October 23-25, 2007.
• V. Marbukh, "Fair Bandwidth Sharing under Flows Arrivals/Departures: Effect of Retransmissions on Stability and Performance", ACM Sigmetrics Performance Evaluation Review, Vol. 35, No. 2, pp.
• V. Marbukh, "Metastability of fair bandwidth sharing under fluctuating demand and necessity of admission control", IEE Electronics Letters, Vol. 43, No. 19. pp. 1051-1053.
• V. Marbukh and K. Mills, "On Maximizing Provider Revenue in Market-Based Compute Grids", Proceedings of the 3rd International Conference on Networking and Services, Athens, Greece, June 19-25,
• K. Mills, "A Brief Survey of Self-Organization in Wireless Sensor Networks", Wireless Communications and Mobile Computing, Wiley Interscience, 7/7, September 2007, pp. 823-834.
• K. Mills and C. Dabrowski, "Investigating Global Behavior in Computing Grids", Self-Organizing Systems, Lecture Notes in Computer Science, Volume 4124 ISBN 978-3-540-37658-3, pp. 120-136.
• K. Sriram, D. Montgomery, O. Borchert, O. Kim and D. R. Kuhn, "Study of BGP Peering Session Attacks and Their Impacts on Routing Performance", IEEE Journal on Selected Areas in Communications, 24
/10, October 2006, pp. 1901-1915.
• J. Yuan and K. Mills, "Simulating Timescale Dynamics of Network Traffic Using Homogeneous Modeling", The NIST Journal of Research, 111/3, May-June 2006, pp. 227-242.
• J. Yuan and K. Mills, "Monitoring the Macroscopic Effect of DDoS Flooding Attacks", IEEE Transactions on Dependable and Secure Computing, 2/4, October-December 2005, pp. 324-335.
• J. Yuan and K. Mills, "A Cross-Correlation Based Method for Spatial-Temporal Traffic Analysis", Performance Evaluation, 61/2-3, pp 163-180.
• J. Yuan and K. Mills, "Macroscopic Dynamics in Large-Scale Data Networks", chapter 8 in Complex Dynamics in Communication Networks, edited by Ljupco Kocarev and Gabor Vattay, published by
Springer, 2005, ISBN 3-540-24305-4, pp. 191-212.
• J. Yuan and K. Mills, "Exploring Collective Dynamics in Communication Networks", The NIST Journal of Research, 107/2, March-April 2002, pp. 179-191.
• J. Heidemann, K. Mills and S. Kumar, "Expanding Confidence in Network Simulation", IEEE Network Magazine, 15/5, September/October 2001, pp. 58-63.
Related Software Tools
Related Demonstrations
• Animation (176 Mbyte Quicktime Movie) of vCore, Memory and Disk Space usage and pCore, Disk Count and NIC Count load from a Koala Simulation (Oct. 22, 2010) of a 20 cluster x 200 node (i.e.,
4,000 node) Infrastructure Cloud evolving over 1200 hours.
• Visualization (10 Mbyte .avi) from a Simulation (May 23, 2007) of an Abilene-style Network
• Visualization (14.4 Mbyte .avi) from a Simulation (July 31, 2007) of a Network Running CTCP
Other Information
|
{"url":"http://nist.gov/itl/antd/emergent_behavior.cfm","timestamp":"2014-04-21T09:38:37Z","content_type":null,"content_length":"65744","record_id":"<urn:uuid:d5f1de4f-d53c-4bbd-a142-d64901351b26>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00410-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Complex numbers questions
August 31st 2010, 06:09 AM #1
Aug 2010
I have a whole load of questions that I have attempted but could not do. Hopefully someone here can help me out!
1. Express z^4 + z^3 + z^2 + z + 1 as a product of two real quadratic factors.
2. Find the zeros of z^5 - 1, giving your answers in the form r(cos theta + i sin theta), where r > 0 and -pi < theta < pi.
3. Z1 and Z2 are complex numbers on the Argand diagram relative to the origin. If |Z1 + Z2| = |Z1 - Z2| where | | denotes the moduli, show that arg Z1 and arg Z2 differ by pi/2
If you can do any of these, that would be great. I've been stuck on these for a while now.
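A brief sketch for question 2, added here only as an illustration (it is not a reply from the original thread): the zeros of $z^5 - 1$ all have modulus $r = 1$, and by de Moivre's theorem they are $z_k = \cos\frac{2k\pi}{5} + i\sin\frac{2k\pi}{5}$ for $k = 0, \pm 1, \pm 2$, i.e. $\theta \in \{0, \pm\frac{2\pi}{5}, \pm\frac{4\pi}{5}\}$. Pairing each non-real root with its conjugate also answers question 1, since $(z - e^{i\theta})(z - e^{-i\theta}) = z^2 - 2\cos\theta\, z + 1$ gives
$$z^4 + z^3 + z^2 + z + 1 = \left(z^2 - 2\cos\tfrac{2\pi}{5}\, z + 1\right)\left(z^2 - 2\cos\tfrac{4\pi}{5}\, z + 1\right).$$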
|
{"url":"http://mathhelpforum.com/calculus/154850-complex-numbers-questions.html","timestamp":"2014-04-17T02:57:07Z","content_type":null,"content_length":"29393","record_id":"<urn:uuid:fcc43ea8-7317-47fb-aaeb-21c2eaf8cbd5>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00198-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Competitive Analysis
When you cannot achieve the optimum solution of a problem, how do you measure the performance of an algorithm? If you knew the distribution of instances, you could see how well the algorithm performs on average. But most theoretical computer scientists prefer a worst-case analysis that tries to minimize the ratio of the optimal solution to the algorithmic solution. But many algorithms achieve seemingly large ratios that don't seem practical.
Vijay Vazirani defends this competitive analysis of algorithms in the preface of his 2001 book Approximation Algorithms
With practitioners looking for high performance algorithms having error within 2% or 5% of the optimal, what good are algorithms that come within a factor of 2, or even worse, O(log n) of the
optimal? Further, by this token, what is the usefulness of improving the approximation guarantee from, say, factor 2 to 3/2?
Let us address both issues and point out some fallacies in these assertions. The approximation guarantee only reflects the performance of the algorithm on the most pathological instances. Perhaps
it is more appropriate to view the approximation guarantee as a measure that forces us to explore deeper into the combinatorial structure of the problem and discover more powerful tools for
exploiting this structure. It has been observed that the difficulty of constructing tight examples increases considerably as one obtains algorithms with better guarantees. Indeed, for some recent
algorithms, obtaining a tight example has been a paper in itself. Experiments have confirmed that these and other sophisticated algorithms do have error bounds of the desired magnitude, 2% to 5%,
on typical instances, even though their worst case error bounds are much higher. Additionally, the theoretically proven algorithm should be viewed as a core algorithmic idea that needs to be fine
tuned to the types of instances arising in specific applications.
But still why should a practitioner prefer such an algorithm to a heuristic that does as well on "typical" instances but doesn't have a worst case bound? Our usual argument says that we don't really
know what "typical" means and we can promise you something no matter what happens.
Besides approximation algorithms, theoreticians have taken competitive analysis into other arenas, like comparing on-line versus off-line job requests and auctions that make a constant factor of the
optimal revenue where achieving a competitive factor of 2 can mean a serious loss of income.
Sometimes these algorithms will allow themselves to do poorly when the optimum is bad in order to achieve the best ratio. If we truly want to sell these algorithms to the practitioners, should we
focus on doing well on the situations they care about and then, only secondarily, worry about the performance in the worst case?
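For a concrete picture of the kind of worst-case guarantee being discussed, here is a small sketch (my illustration, not part of the original post) of the textbook factor-2 approximation for minimum vertex cover, which takes both endpoints of a greedily built maximal matching; the path graph at the bottom is just an example where the factor of 2 is tight.

```python
def vertex_cover_2approx(edges):
    """Classic 2-approximation: take both endpoints of a maximal matching.

    Every edge touches the matching (else it could be added), so this is a valid
    cover; any cover must hit each matched edge, so the optimum is at least half
    the size of the returned set."""
    cover, matched = set(), set()
    for u, v in edges:
        if u not in matched and v not in matched:
            matched.update((u, v))
            cover.update((u, v))
    return cover

edges = [(1, 2), (2, 3), (3, 4), (4, 5)]      # path 1-2-3-4-5, optimal cover {2, 4}
print(vertex_cover_2approx(edges))            # {1, 2, 3, 4}: exactly twice optimal
```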
23 comments:
1. If we truly want to sell these algorithms to the practitioners, should we focus on doing well on the situations they care about and then, only secondarily, worry about the performance in the
worst case?
More often than not, practitioners don't quite know which situations they care about. When they do, and can make some attempt at defining it, one can often find a specialized subproblem that has
its own provable guarantees.
Other times, practitioners' insistence on specific algorithms that "work well on all inputs" actually reveals hidden structure that can be elucidated formally. Two examples of this are the
smoothed simplex analysis by Teng and Spielman, where they provide one explanation for the phenomenal success of the simplex method, and the various analyses of EM and the k-means method, where
again one can determine the specific instances where the general approach tends to work well.
In either scenario, the formal approach has great strengths: it either gives a new method with a formal guarantee, or provides a rigorous explanation (independent of benchmark data) as to why a
particular heuristic works well.
2. It is quite hard to define what a "typical" input is -- hence an algorithm that works well on typical inputs seems to be not a well-defined object. For example, practitioners (some of them anyway) seem to think that so-called SAT-solvers work wonderfully "in practice" on "typical inputs." They rave about solving instances on thousands of variables. But we all "know" that SAT is not in polynomial time. The last point in Suresh's comment is particularly relevant here. I think the SAT solvers use heuristics that, consciously or unconsciously, exploit structures that are rather special. Even if they work well on millions of examples in an application domain, that is an infinitesimal fraction of all possible inputs, especially on 1000's of variables. Hence the claim that they work on "typical" inputs is not really justified. Most of these SAT solvers also seem to fail badly on randomly generated instances. On the other hand, little research seems to have been done to offer rigorous explanations of what kinds of special structures are exploited by these SAT solvers that make them so efficient on real-world instances. The contrast between the
theoreticians' view that SAT is the prototypical intractable problem and the practitioners' view that SAT-solvers routinely solve problems of 1000's of variables is rather amusing.
3. I think(?) the last two posts miss Lance's point. His point seems to be that, on the one hand, Vazirani argues that even though the best provable approximation ratio for a given algorithm is
something like 2, this is ok since "in practice" the algorithm achieves an approximation ratio more like 1.05. But then on the other hand, Vazirani rejects "heuristic" algorithms that achieve approximation ratio 1.05 "in practice", since there is no proof for their worst-case performance. So it seems that Vazirani wants it both ways...
At least, that's what I took away from Vazirani's quote.
4. Chip Klostermeyer 8:05 AM, November 02, 2005
Recently had a query from someone who wanted a "genetic optimization" (his words) algorithm for an NP-complete problem. I sense some practitioners think that such heuristics can do wonders in all
situations and don't understand worst-case performance. I think the people who "sell" these heuristics do a better job of salesmanship than the folks who are selling algorithms with a bounded
performance (or competitive) ratio!
5. Vazirani's quote does not suggest that heuristics that have good performance in practice are not worth studying or valuing. The relationships between algorithms, complexity and their practical utility have many facets and it is not fruitful to summarize this in one or two soundbites. The truth is that theoreticians study algorithms often for their mathematical and intuitive appeal and not for practical merit. This has led to a number of successes and for the same reason has not had a uniform impact on practice. I don't view this as a negative. The only question really is one of economics. How many people should be supported to do this kind of theoretical work. Here I believe that the market forces will play a role. A number of algorithms people write grants with applied CS people and interact with them so I think the system is working overall. Of course we should also evaluate from time to time our impact and research directions. We have been reasonably successful at not splitting into pure theory vs applied theory, unlike mathematics. However there is always the danger that this can happen and it is in the interest of the community to have a healthy mix.
6. There are many ways to measure quality in theoretical computer science, including:
(1) Mathematical beauty.
(2) Resulting insight into the nature of computation.
(3) Practical importance and impact.
I think there are many cases where approximation algorithms have (1) and (2) covered, but of course not always. I often find approximation algorithms papers now have a "game" feel about them --
we can beat the last paper's number -- rather than any exciting insight or beauty.
Having worked on heuristic algorithms, I would say the case for (3) is vastly overstated by the TCS community, to the point where it makes us look bad and reinforces negative stereotypes about
TCS. As a corollary, I think there's too many people, and too much work, on approximation algorithms.
I sense some practitioners think that such heuristics can do wonders in all situations and don't understand worst-case performance.
Not at all. They just often don't care about worst case performance; they care about solving problems they want solved. If you can code up your solution and show that along with a good worst-case
performance it's even reasonably competitive with heuristics they use, they'll be quite happy. How many people actually code up their approximation algorithms and see how they work? It's hard to
make the claim that you're having a practical impact when you don't code up your algorithm and compare against heuristics on standard benchmarks...
The only question really is one of economics. How many people should be supported to do this kind of theoretical work. Here I believe that the market forces will play a role.
I agree, but I think as a community we shape the market forces quite a bit, and we are susceptible to groupthink. I think in the future we will look back and see a lot of exciting ideas that have
come out of the research on approximation algorithms, and even some cases of powerful practical impact. But we will also see a lot of incremental papers that were shallow and unimportant -- and
that, in the end, perhaps far more resources were devoted to these problems than they merited.
7. Of course, if you are a practitioner who has a heuristic that solves the problem you're dealing with to within a 2% guarantee, then you shouldn't care about worst-case guarantees.
I think what Vazirani tried to say is that
1) sometimes practitioners don't already have such a heuristic
(not all the world's computational problems are already solved)
2) sometimes the heuristic is based on ideas developed for worst-case algorithms for the same or different problems.
Provable performance is a constraint that forces you to think hard about the problem. Hopefully some interesting insight will come out of it.
8. Claire Kenyon 9:17 AM, November 02, 2005
There are many ways to measure quality in theoretical computer science, including:
(1) Mathematical beauty.
(2) Resulting insight into the nature of computation.
(3) Practical importance and impact.
I think that (1) is correlated with simplicity, which is correlated with (3). The simplest ideas seem to be the ones which catch the attention of practitioners when I give talks; and those ideas
are sometimes (not always, because sometimes they are mere observations without much depth) the ones with mathematical beauty.
9. The contrast between the theoreticians' view that SAT is the prototypical intractable problem and the practitioners' view that SAT-solvers routinely solve problems of 1000's of variables is
rather amusing.
Current SAT solvers fail on DES circuit with probability 1. (DES CNF has less than 2000 variables)
10. Mitzenmacher says,
I think there are many cases where approximation algorithms have (1) and (2) covered, but of course not always. I often find approximation algorithms papers now have a "game" feel about them --
we can beat the last paper's number -- rather than any exciting insight or beauty.
That might be the case but this is true in any kind of research as we grope for the ambitious goal of beauty, simplicity, and utility. I see the same phenomenon in complexity (see the parameters for extractor constructions), learning theory, coding theory, etc.
As a corollary, I think there's too many people, and too much work, on approximation algorithms.
It is fine to have this opinion but it is preferable to let the corrective forces work naturally instead of being dictated by a few people.
One reason for the activity in approximation algorithms is the fact that we are making substantial progress on problems both from upper and lower bounds. Along the way we are discovering connections to such things as embeddings and Fourier analysis. This is exciting and attracts more people, and in the process it could be the case that a number of shallow or incremental papers might be written. However that is true for all growth areas. Take for example the junk that is written in the name of wireless networks. Kuhn has described this process in his famous book on scientific progress. My take is that the approximation algorithms bubble has a few legs left.
11. To add my 3 cents on this important topic:
- I think that the practitioners' knowledge of "the situations they care about" heavily varies from area to area, so it is really hard to make any generalizations. On one hand you have
communications people, who seem in most cases happy with modelling noise as additive or multiplicative Gaussian noise, or its variants (bursts etc).
On the other hand, you have the examples mentioned earlier in this thread; I would also add data analysis, where the structure of the data is often so complex (is your data Gaussian, clustered,
taken from a manifold, all/none of the above?) that it is hard to converge on a well-accepted model.
And then you have also crypto/security, where malicious adversaries actually do exist.
- In my opinion, perhaps the main advantage of worst-case analysis is composability: you can combine algorithms in various unexpected ways, and still have some understanding of their behavior.
This is not directly beneficial for applications, but it enables discovering (often surprising) connections, which often (although perhaps not as often as one would like) have practical impact.
E.g., in the aforementioned context of communications, the worst-case approach of Guruswami-Sudan for Reed-Solomon resulted in algorithms which (after lots of massaging) work great for Gaussian noise.
- I do not know how much practical impact approximation algorithms (for NP-hard problems) have had; I am sure other people can comment on that. But there are quite a few examples of an impact of
(1+eps)-approximation algorithms for poly-time problems.
For example, approximate algorithms for various streaming problems, such as counting the number of distinct elements, had a quite significant impact on databases and networking.
There are many ways to measure quality in theoretical computer science, including:
(1) Mathematical beauty.
(2) Resulting insight into the nature of computation.
(3) Practical importance and impact.
I think there are many cases where approximation algorithms have (1) and (2) covered, but of course not always. I often find approximation algorithms papers now have a "game" feel about them --
we can beat the last paper's number -- rather than any exciting insight or beauty.
I agree. It is usually (1) and (2) that motivate (most) people working in the area. But it seems to me that people well-versed in the area are able to be useful when they work on, or are queried
on questions of practical importance. Opinion may be divided on whether the training/experience plays any role.
Having worked on heuristic algorithms, I would say the case for (3) is vastly overstated by the TCS community, to the point where it makes us look bad and reinforces negative stereotypes about
TCS. As a corollary, I think there's too many people, and too much work, on approximation algorithms.
Too much relative to what? Papers in networking, or in information theory, outnumber those in approximation algorithms by a huge constant factor. Do they have a proportionately larger impact? Not
clear to me.
Any field of research, as it matures will have incremental papers. If the fraction approaches one, that would be a problem. But that is far from true for approximation algorithms. The area is
about developing new techniques for designing algorithms; the number of "incremental" papers is evidence of the generality of the toolbox we continue to enrich.
Of overstating the case for (3), I think it is regrettable, but is true for almost all of science. It is more a statement on our society than on our community.
The only question really is one of economics. How many people should be supported to do this kind of theoretical work. Here I believe that the market forces will play a role.
I agree, but I think as a community we shape the market forces quite a bit, and we are susceptible to groupthink. I think in the future we will look back and see a lot of exciting ideas that have
come out of the research on approximation algorithms, and even some cases of powerful practical impact. But we will also see a lot of incremental papers that were shallow and unimportant -- and
that, in the end, perhaps far more resources were devoted to these problems than they merited.
Once again, you could replace approximation algorithms by any area of science and the first part of your claim would be true---that is the nature of science. Some of the shallow/unimportant
papers result from not-entirely-successful attempts to understand deep questions and are a part of science. Others result from people seeing an easy opportunity to write such a paper, which I agree
is regrettable, but not unique to any one area. Far more resources again seems more generally true in hindsight. But I think there are substantially worse inefficiencies in our society that we
shouldn't be worrying about this one.
13. Since STOC deadline is tomorrow and Michael is on the PC, are papers on approximation algorithms going to take a hit :)?
14. We are in the middle of a grand classification effort for approximation problems that was restarted with the PCP theorem and the GW Maxcut SDP algorithm after it hit an impasse in the late
1970's. It has been an exciting time to work on either the hardness of approximation or on approximation algorithms, and the many successes on long-standing problems have been justification enough
for this effort. Fundamental open problems such as the approximability of VertexCover still remain, and there are extensions of these results to economic domains that are interesting as well and
will keep researchers working for quite a while.
It seems foolish to view the results of this classification effort primarily using the lens of the practicality of the guarantees that they yield. That simply is beside the point.
The issue of heuristic algorithms is quite separate: many times we do not (yet) possess the analytical tools to understand the behavior of these algorithms; it is not because we don't care about
them. I have spent a fair amount of time working with practical SAT solvers and related algorithms for symbolic model checking and the like. Despite the fact that these algorithms routinely solve
large problems, they are typically also extraordinarily brittle. Often one has to massage the instances significantly so that the methods work, or settle for something that isn't exactly the problem
that one began with. Everyone celebrates their successes but tends to downplay the many practical problems where the methods fail completely. Practitioners are generally very happy when, with
theoretical analysis, one can begin to predict the behavior of these algorithms (or how to use them).
Many of the most efficient SAT solvers used in practice employ algorithms that have close parallels in those that we can analyze, so this has been a fruitful connection. Analyzing other algorithms
used (e.g. random walk-related algorithms) seems to be quite a bit beyond current technology.
Paul Beame
15. I appear to have been greatly misunderstood. To be clear, I think there is great work going on in approximation algorithms, especially right now, as many people have commented and as I tried to
say clearly in my first post. This work is mathematically beautiful and gives fundamental insight into computation; while practical importance might not be immediate, given it has these other
properties, it's certainly very worthwhile and it's likely it will be practically important in the future.
I am thinking more of papers (or chains of papers) I have seen take some supposed practical problem -- usually a dramatic simplification -- and get some large constant factor approximation
or worse, often for a problem where there are already heuristics that work well, and are already used in practice or written about. This seems quite clearly to be the type of work Lance was
discussing in his original post, and was the type of work I was referring to in my comment. SAT algorithms, for example, do not fit into this paradigm, for reasons alluded to in other comments.
There has been a lot of this type of work in our community, that we sometimes, or even often, accept into our top conferences. Lance brought up the question of how we should think of this type of
work, and I gave my opinion, which I stand by.
The defense multiple people seem to have given is that yes, perhaps some of this work is not so good after all, but that is true everywhere. Of course. (When a friend told me "95% of all theory
papers are crap", I immediately responded, "Well, then we're about 4% better than systems!") But when I see what I think are poor networking or information theory papers, or even worse bad
directions in networking or information theory papers, I feel free to let my opinion of the work be known. (At least now that I have tenure. :) ) Should I not do the same here?
Since STOC deadline is tomorrow and Michael is on the PC, are papers on approximation algorithms going to take a hit :)?
Well, if your STOC paper has a competitive ratio of 2 or higher, but you've claimed it gives insight into some practical problem, you can go ahead and blame me if it doesn't get in. :)
16. Mike,
Now that you've clarified your position, let me be the first to say that this is a sentiment many of us share. People have various aesthetics---some people are searching for solutions that are
useful, some for solutions that are beautiful, and others for solutions that are very difficult. And this diversity in goals and visions is something that tends to benefit us all in the long run.
Beauty is hard to fake, and although it's certainly possible to make your proofs look more difficult than they actually are, this cheap trick can only work so many times. Usefulness, on the other
hand, seems way too easy to sell to theoreticians.
One sees a plethora of papers which are easy and ugly, but get by on the merits of "working on an actual problem that arises in the real world." Such a claim is often bogus, and almost always
unjustified. In fact, there is a whole culture surrounding this notion of fake usefulness.
I have seen comments from referees that say "The ideas in this paper are very nice and the proofs are ingenious, but their techniques only apply to the case when \epsilon is very small, and these
do not often arise in practice."
Forget the fact that the case of vanishing \epsilon is where all the interesting mathematics lie, where there was a hole in our previous understanding, and which relates to a host of other open
problems. Just know that the "large \epsilon" domain wasn't actually applied to anything anyway--in this way, fake usefulness begins to trump creativity and beauty.
There are people working brilliantly at the interface between theory and applications, and they rock. But fake usefulness degrades both the purity and the utility of our field, in addition to
making us all look foolish.
17. I am thinking more of papers (or chains of papers) I have seen take some supposed practical problem -- usually a dramatic simplification -- and get some large constant factor approximation
or worse, often for a problem where there are already heuristics that work well, and are already used in practice or written about.
I don't think there is anything wrong with these chains so long as (a) they do not appear in top conferences and (b) they are honest about the fact that these are early efforts in a simplified
version of the model. In particular they should list among the open questions not just "lower the competitive ratio" but also "make the model realistic".
Why do I believe these chains are important? Because science is built on the shoulders of giants... and of average folks.
A lot of this exploratory work lays down the foundation for a "giant" to formalize the questions and give new direction to an incipient field. Complexity and Analysis of Algorithms existed before
their acknowledged founders started them. What made them the founders was their ability to formalize and solidify the research program started by others.
Alex Lopez-Ortiz
18. What about correlation clustering? This topic had a chain of results, the first with some large constant factor approximation ratios. But the work that ultimately resulted was pretty
interesting. What does Monsieur Mitzenmacher say about that?
19. First, might I suggest it might be nice if some of these anonymous disagreers made themselves nonymous. Then we could take discussions off-line, and I'd at least have the courtesy of knowing who
disagreed with me.
What about correlation clustering? This topic had a chain of results, the first with some large constant factor approximation ratios. But the work that ultimately resulted was pretty
interesting. What does Monsieur Mitzenmacher say about that?
I say, I'm not French. Does Mitzenmacher sound French to you? :)
Interesting is in the eye of the beholder. I perhaps don't know enough to say very specific things on this problem, but general thoughts: The basic problem is certainly very nice and well
motivated. Results like that it's APX-hard in general seem important, in that they motivate why people should look at and use heuristic algorithms.
For competitive analysis, I have the same basic questions I've had in my other comments, that apply to any problem. Do any of these papers code up and check the performance of these algorithms?
Try them on somebody's real or even fake data? Do any of these papers refer to heuristics people actually use to solve these problems? Do they compare with these heuristics? I don't know what the
best current approximation constants are, but if you told me that in O(n^4) time we could get within a factor of 2, I guess I wouldn't be very excited.
I can understand arguments for studying the complexity of specific problems (though I reserve the right to disagree :) ). The argument Lance's original post mentions in favor of looking at
algorithms with "large" competitive ratios is that it forces us to look at/ tells us something about the combinatorial structure of the problem which would lead to better practical algorithms. Is
there evidence that that is happening here? If so, that's great -- that would make a believer out of me --but if not, is it worth the effort? Why is everyone so willing to just say that it is?
20. Claire Kenyon 5:59 PM, November 03, 2005
Today, in preparation for my upcoming undergrad Algorithms course, I scanned the non-theory undergraduate courses in my department to see for what topics I would be able to claim applications.
Here is what I found:
B-trees in the database management course
Suffix trees in the computational biology course
Greedy scheduling algorithms and LRU cache algorithm in the operating systems course
Viterbi's algorithm and some other dynamic programming stuff for Markov decision processes in intro to AI
Random walk algorithm for 3SAT in intro to AI
Distributed hash tables
Spectral clustering in learning
Matrix algorithms (SVD, PCA,...) in intro to vision
I think this means that these are all high-impact problems and algorithms. And some are approximation algorithms.
I can understand arguments for studying the complexity of specific problems (though I reserve the right to disagree :) ). The argument Lance's original post mentions in favor of looking at
algorithms with "large" competitive ratios is that it forces us to look at/ tells us something about the combinatorial structure of the problem which would lead to better practical algorithms. Is
there evidence that that is happening here? If so, that's great -- that would make a believer out of me --but if not, is it worth the effort? Why is everyone so willing to just say that it is?
Because there are plenty of examples where the initial stuff that seemed flaky or not so interesting from practice turned out to have a lot of impact. Take the k-server problem or metrical task systems. Tree embeddings came out of this and that had a huge impact in understanding many problems. Take property testing. One could question the usefulness of it but again it had nice math. Not everything we do should be driven by practice. We explore things because our current techniques have limitations and that leads to new things and expands our toolkit.
22. Claire's list indeed contains examples of high impact mathematical and algorithmic notions. However, Mike's question was more specific: he was asking about the practical impact of a (specific)
notion originating in TCS. While we (TCS) can claim credit for some examples (e.g., fast algorithms for suffix trees), I don't think we can do that for others (PCA was invented before World
War II, Viterbi's algorithm was invented in the context of digital communication, etc).
23. (PCA was invented before World War II, Viterbi's algorithm was invented in the context of digital communication, etc).
If those algorithms are created with theory tools and justified with theory techniques, we can lay claim to them; there is no need to carry a theory "union card" to create a theoretical result.
|
{"url":"http://blog.computationalcomplexity.org/2005/11/competitive-analysis.html","timestamp":"2014-04-18T18:37:52Z","content_type":null,"content_length":"244870","record_id":"<urn:uuid:86029bbe-0578-4ef8-b594-894cc653df12>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00128-ip-10-147-4-33.ec2.internal.warc.gz"}
|
The eye of the umpire
How accurate are umpires in calling the strike zone? How well can
they locate a ball flashing towards them at 95 mph? Or unexpectedly
swooping down and, perhaps, nicking the lower outside extremity of
the strike zone? Two inches? One? One-tenth of an inch? Here’s what
Ted Williams wrote about his ability to judge where a pitched ball
actually goes, from his book The Science of Hitting:
It’s very likely that once you’ve made yourself sensitive to the
strike zone, you’ll be a little more conscious of what you think are
bad calls by the umpire … I would say umpires are capable of calling
a ball within an inch of where it is. As a hitter, I felt I could
tell within a half-inch.
Well, I’m skeptical by nature, and those estimates seem a trifle too
good to me. But Williams was a very smart guy and he wasn’t one to
throw a lot of bullshit around, so I wouldn’t dismiss his claims
outright. And it turns out that we can shed some light on the subject
by looking at MLB's fabulous pitch data, the so-called pitch-f/x data.
Today I'm going to build on some work I did last time (Strike zone: fact vs. fiction, http://www.hardballtimes.com/main/article/strike-zone-fact-vs-fiction/)
on determining
the size of the strike zone using pitch data. As we’ll see in a
few moments, we can infer from that data how well an umpire can
locate the incoming pitch. First, though, I want to go back
and make some small improvements to the measurements of the strike
zone that I did last time.
That was a ball?!?
One of the loose ends of that analysis was some question about the
quality of the data. Here’s a snippet from that article:
I’ve already mentioned the fact that the ball fraction for pitches
right down the middle of the plate is not zero, in fact it’s about
5-6%. Can umpires be missing these easy calls so frequently? It seems
hard to believe. The alternative explanation is that there is some
problem with the data.
I also mentioned that one of the pitches that supposedly was right
down the middle of the strike zone was actually an intentional ball,
thrown two feet off the plate, as verified by checking the pitch on video.
After viewing some other pitches on video, it became clear that the
MLB system for tracking pitches was just getting some pitches
wrong. Of course, this shouldn’t be surprising. This is a very complex
system that is still in the course of being rolled out in all major
league parks, so we should not expect the data to be perfect. But we do need
to understand its limitations and see how it affects what we are
trying to do with the data.
So, I have tried to determine how often the system mis-tracks a
pitch. First, let’s recall the ball
fraction graphic I produced last time. This graph shows the fraction
of balls called by the umpire as you move across the strike zone.
The edges of the strike zone are defined as the position the ball
fraction (blue curve) crosses the one-half mark (horizontal green
line). Whereas last time I focused on measuring the width of the
zone, I now want to understand the features of this plot more
generally. As already noted, the ball fraction does not go to zero at
the center, as one would expect it should. Also, the transition from
zero to one at the edges of the strike zone is not perfectly sharp,
as it would be for a perfect pitch-tracking system and
infallible umpires.
In fact, the sharpness of the ball-strike transition is a direct
measure of the accuracy of the system, although it should be kept in
mind that I’m referring to the pitch-tracking system and umpire
pitch-locating ability combined. The graphic below shows how the ball
fraction curve is modified for different accuracies. I generated these
curves analytically using a simple model (see the Resource section for details).
As you can see, the
less accurate the system, the more the curves get “smeared” out. Note
how the edges of the strike zone are the same for all values of
accuracy. In other words, the measured width is independent of accuracy.
Do any of these colored curves look like the real data shown above?
Not really: the green or cyan curves seem to have the right shape in the
transition region, but they do not show the non-zero ball
fraction at the center. It turns out that no value of the accuracy
number can reproduce what we see in the data. However, if I modify my
model a bit, I can get this plot:
Here I show the same data I showed above (dark blue curve), but now
I’ve superimposed the curve I get from my calculation (in cyan). As
you can see, the match to the data, while not perfect, is actually
pretty good: the transition sharpness looks about right and we see a
ball fraction of around 7-8% right in the middle of the plate. To get
this shape, I had to assume about 5% of pitches are completely
mis-tracked by the system, i.e. for those 5% of pitches the
location as determined by the system was wildly off. Note that the
measured width of the strike zone is not affected significantly. (I
have assumed a strike zone that goes from -1 to +1 foot, to match the
observed data.)
A big word of caution: I am not claiming that 5% of the
pitches gathered thus far are mis-measured. Mine is just one
hypothesis that happens to qualitatively describe the data, but it
doesn’t mean it’s correct. My little model does not rule out
other possibilities; it simply shows that one hypothesis is indeed consistent with the data.
The main point here is that, while there is some small level of noise in the data, its presence doesn’t affect our ability
to measure the strike zone.
Calling the high strike, or not
After my previous article appeared there were lively discussions on
the results both on Ballhype (http://ballhype.com/story/strike_zone_fact_vs_fiction/)
and over at The Book Blog (http://www.insidethebook.com/ee/index.php/site/article/where_is_the_strike_). Sabermetrician
Mitchel Lichtman was fairly (OK, very)
certain that there was something wrong with my estimation of the
vertical strike zone for right-handed batters. I had found that the
umps were calling the high strike correctly, as shown in this plot (taken directly from my previous article):
Here’s what Mitchel
thought about that:
In any case, there is NO WAY IN HECK that the average umpire calls a
rule book strike at the top of the zone for RHB!!!!!!!!!!
…Something is wrong. I have watched 300 games a year for 20 years.
The average top of the strike zone is well below the rule book.
This is almost unequivocal.
Hey, when Mitchel speaks, especially this forcefully, well, I
listen. The guy knows his stuff. And indeed, I found two problems, one
was a trivial mistake on my part, the other was another data quality issue.
My mistake was in reporting the size of the rulebook strike zone. I
did not add in the radius of the ball to either end of the vertical strike zone
as I had for the horizontal dimension. OK, that’s easy to fix, but the
second problem was more difficult to solve. It has to do with the
MLB-supplied limits of the vertical strike zone.
The height of Jeter’s knee
While the horizontal size of the strike zone is defined by the width
of the plate and is the same for everybody, the vertical dimension of
the zone is tied to each individual batter. A nice feature of the
MLB pitch data is that they include, for each pitch, their estimate of
the lower and upper limits of the strike zone, based on the batter’s
stance. The operator of the pitch-f/x system sets those limits on a
video screen as the batter assumes the hitting position.
This data, then, allows us to know if a pitch was actually in the strike zone. However,
I have found some problems with these strike zone limits that come
with the pitch data, namely, they seem to vary quite a bit, even for
the same batter on different days. As an example, here are the lower
and upper limits of the strike zone for Derek Jeter on three different days:
Limits of Jeter's strike zone (inches)
Game Low High Diff
Tex, 5/3 23.6 53.0 29.4
Sea, 5/12 23.3 46.4 23.0
Chi, 5/16 20.4 40.5 20.1
Diff: High minus Low; the vertical size of the strike zone
Now, I suppose a batter can tweak his stance a little from one game to
the next, but I seriously
doubt that Jeter’s vertical strike zone is changing by nine inches
from game to game. I did not single Jeter out as a particularly bad
case; just about all batters in the sample have this problem.
Let me say that I don’t think this is particularly surprising. As I
mentioned above, this is a complicated system that has just begun
operating. There is surely a learning curve for the system’s operators
and I'm confident that the strike zone data will improve as time goes on.
But in the meantime, what shall we do? Do we abandon our idea of
measuring the vertical strike zone using the pitch data? Actually, I
don’t think we have to do that. What we can do is assume that on
average the system’s operators are getting it right. So, for each
batter, I calculate his average strike zone lower and upper limits,
based on the pitch data. Then I apply each batter’s average strike
zone for all pitches thrown to him, instead of the pitch-by-pitch
values that come with the data. Make sense?
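A minimal Python sketch of that averaging step (the column names and toy values are illustrative assumptions, not the actual pitch-f/x schema):

    import pandas as pd

    # Toy stand-in for per-pitch data; 'sz_bot'/'sz_top' are assumed column names (inches).
    pitches = pd.DataFrame({
        "batter": ["Jeter", "Jeter", "Jeter", "BatterB", "BatterB"],
        "sz_bot": [23.6, 23.3, 20.4, 22.0, 21.0],
        "sz_top": [53.0, 46.4, 40.5, 45.0, 44.0],
    })

    # Replace the pitch-by-pitch zone limits with each batter's average limits.
    pitches[["sz_bot", "sz_top"]] = (
        pitches.groupby("batter")[["sz_bot", "sz_top"]].transform("mean")
    )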
The results for both right-handed and left-handed batters are shown in
the graph below:
These definitely look better than the
previous plot: the bottom is flatter and the ball-strike transition is
sharper. In fact, these plots now resemble the plots for the
horizontal dimension, where the strike zone limits are not
batter-dependent, so that’s good. Note that in these plots, I’ve also
corrected my error on the rulebook strike zone—it’s been
widened compared to the plot above.
From these plots, it now appears that umpires are not really calling
the vertical strike zone as they should, although they are doing just
as poorly on the low strike as they are on the high strike. Here are
updated versions of a plot and table I ran last time:
Actual vs. Rulebook Strike Zone Dimensions (inches)
Left Right Lower* Upper Total Area+
RHB -12.0 12.1 21.6 42.0 492
LHB -14.6 9.9 21.5 40.8 475
Rulebook -9.9 9.9 17.7 44.2 527
* vertical strike zone mapped to average
+ total area in square inches
So, our conclusions from last time change a bit. Right-handed batters
still have to defend a slightly larger strike zone than lefties, but
in both cases the total area of the measured zone is less than the
rulebook strike zone. The difference between the measured upper limit
and the rulebook strike zone is only 2.2 inches for right-handed
batters, which doesn’t seem like much, certainly not as much as what
we see on TV, where pitches that are just a shade above the belt are
routinely called balls.
It’s hard to judge the height of a pitch on TV
But are we seeing what we think we’re seeing? I’m not sure we
are. When we watch a pitch on television, we generally see it from the
center field camera, so we have no depth perception along a line from
the pitcher’s mound to home plate. We necessarily judge the location of a pitch
from where it hits the catcher’s glove. However, since the pitch is
moving at a downward angle and the catcher is positioned well back of
home plate, the pitch drops significantly from the point it passes
through the strike zone to the point where the catcher receives it.
The amount of drop will depend on the speed and the type of pitch, it
can be a foot or more for a slow curve, but even hard fastballs will
drop 3-4 inches between home plate and catcher’s glove. As I
mentioned, watching on TV we cannot discern this drop, we can’t tell
how high the pitch was when it crossed the plate.
Note that this same illusion is present even when viewing a pitch from
the side, which is the view on some replays. In that case, we tend to
judge the pitch as it passes the batter, but almost all batters take
their stance well back in the batter’s box and the distance from the
front of home plate to the batter (middle of chest, let’s say) can
easily be two feet. Again, many pitches will drop several inches over that
distance, and we will think the pitch is lower than it actually was.
In other words, it is virtually impossible to judge the vertical position of where a pitch crosses the strike zone
by watching on TV.
Final thoughts
So what about Ted Williams and his claim that umpires can call pitches
to an accuracy of one inch, what does my study say about that? Well,
the nice curve I calculated for the third graphic in this article
assumed an accuracy of 2.5 inches. Now that number represents a
combination of the average accuracy of the umpires and the accuracy of
the pitch-f/x system. The latter is reported to have an accuracy of
one inch, but keeping with my skeptical nature, I will assume that this is the best-case scenario.
This would imply that the contribution of the umps to the
overall accuracy is, at most, a little over two inches (see the Resources section
if you’re curious about how I get this number). Two inches is not as
good as Williams’ estimate, but I think it’s pretty darn good.
References & Resources
For those few that want the gory details:
• Analytical ball fraction curves—I used a simple simulation to generate these curves. The first step is to choose a random number between -2 and two feet. This is the true position of a pitch. To
that I add a small number, the uncertainty, the result being the apparent position of the pitch. The uncertainty is normally distributed with mean zero and sigma set to one, two or three inches,
etc. The pitch is a strike if its apparent position is within the strike zone. I generate thousands of pitches this way, and the ball fraction as a function of the true position gives the curves
shown above.
To reproduce the actual data, I had to add about 5% of pitches where the uncertainty is very large (around two feet) instead of two or three inches.
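A minimal Python sketch of this kind of simulation (not the author's actual code; the strike-zone edges, bin width, and default mis-track fraction are assumptions chosen to mirror the description above):

    import numpy as np

    def ball_fraction_curve(sigma_inches=2.5, mistrack_frac=0.05, n=200_000):
        """Simulated called-ball fraction versus true horizontal pitch location (feet)."""
        rng = np.random.default_rng(0)
        true_x = rng.uniform(-2.0, 2.0, n)               # true location, feet
        noise = rng.normal(0.0, sigma_inches / 12.0, n)  # locating error, feet
        bad = rng.random(n) < mistrack_frac              # wildly mis-tracked pitches
        noise[bad] = rng.normal(0.0, 2.0, bad.sum())     # ~2 feet of uncertainty
        strike = np.abs(true_x + noise) <= 1.0           # assumed zone: -1 to +1 ft
        bins = np.linspace(-2.0, 2.0, 41)
        idx = np.digitize(true_x, bins)
        return [(bins[i - 1], 1.0 - strike[idx == i].mean())
                for i in range(1, len(bins)) if np.any(idx == i)]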
• Accuracy of umpire’s eye — Our measured accuracy is a combination of the accuracy of the pitch-tracking system and umpire accuracy. When there are multiple contributions to an uncertainty, the
total uncertainty is not the sum of the individual contributions, but rather the square of the total is the sum of the squares. Thus, given total uncertainty (s_tot) and pitch-tracking
uncertainty (s_track), the umpire uncertainty (s_ump) can be estimated as s_ump = sqrt(s_tot^2 – s_track^2).
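Plugging in the numbers used above (s_tot ≈ 2.5 inches, s_track ≈ 1 inch) gives s_ump = sqrt(2.5^2 - 1.0^2) = sqrt(5.25) ≈ 2.3 inches, which is the "little over two inches" quoted in the text.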
|
{"url":"http://www.hardballtimes.com/the-eye-of-the-umpire/","timestamp":"2014-04-16T04:18:35Z","content_type":null,"content_length":"56090","record_id":"<urn:uuid:3614add4-922a-48a9-b637-e1ce8d2283a8>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00057-ip-10-147-4-33.ec2.internal.warc.gz"}
|
A gymnast jumps straight up, with her center of mass moving at...
Introduction: falling objects
More Details: A gymnast jumps straight up, with her center of mass moving at 3.72 m/s as she leaves the ground. How high above this point is her center of mass at the following
times? (Ignore the effects of air resistance, and assume the initial height of her center of mass is at y = 0.)
t (s) y (m)
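The problem leaves the specific times blank; the relation needed to fill in the table is standard constant-acceleration kinematics (a hedged sketch, taking g ≈ 9.8 m/s^2 with upward positive):

    y(t) = v_0 t - (1/2) g t^2 = 3.72 t - 4.9 t^2   (meters, while she is in the air)

The peak occurs at t = v_0 / g ≈ 0.38 s, at a height of v_0^2 / (2g) ≈ 0.71 m.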
|
{"url":"http://www.thephysics.org/26478/a-gymnast-jumps-straight-up-with-her-center-of-mass-moving-at","timestamp":"2014-04-17T07:29:45Z","content_type":null,"content_length":"105918","record_id":"<urn:uuid:61cffacb-58b6-4e4c-8d0a-2ad49d8a556e>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00212-ip-10-147-4-33.ec2.internal.warc.gz"}
|
3.1: Introduction
Created by: CK-12
This algebra module has been designed to introduce grade 1 students to ten key concepts of algebra and to enhance their problem solving skills. Each section begins with a brief description of the
problem set and the concepts and skills developed. This is followed by the solutions to problems in the problem set. The first problem in each set is the “teaching problem.” It is completed following
the five-step problem solving model and is designed to be used by teachers as the centerpiece of the instructional program. The teaching problem, to be completed by students with the teacher’s
guidance, is followed by problems to be completed by the students working on their own or in pairs.
This module may be used as an algebra unit to complement the existing instructional program. It also may be used to show connections between algebra and the strands of number and measurement, to
provide practice of number computational algorithms, and to reinforce problem solving skills.
The Extra for Experts provides additional opportunities for students to apply newly learned concepts and skills to the solution of problems like those developed in this Algebra module.
The Key Algebraic Concepts
In this module, students explore equal and unequal relationships by interpreting and reasoning about pictures of pan balances. Their job is to figure out which boxes to place in an empty pan to
balance the weight in the other pan. This is preparation for the study of variables as unknowns in equations, and reinforcing the concept that there are often multiple solutions to a problem.
Variables as unknowns:
Variables may be letters, geometric shapes, or objects that stand for a number of things. When used to represent an unknown, the variable has only one value. For example, in the equation $t + 5 = 7$, the variable $t$ stands for just one number: $t$ must be 2, since $2 + 5 = 7$.
Variables as varying quantities:
In some equations, variables can take on more than one value. For example, in the equation $q = 2 + r$, both $q$ and $r$ can vary: each value chosen for $r$ gives a corresponding value of $q$, so as $r$ changes, $q$ changes with it. Another equation of this type is $y = 2z$.
Proportional Reasoning:
A major method for solving algebraic problems is by reasoning proportionally. Proportional reasoning is sometimes called “multiplicative reasoning,” because it requires application of multiplication
or its inverse, division. In this module, students reason proportionally when, given the price of one silly sticker, they compute the cost of multiple sets of the stickers.
Interpret Representations:
Mathematical relationships can be displayed in a variety of ways including with text, tables, graphs, diagrams and with symbols. Having students interpret these types of displays and use the data in
the displays to solve problems is critical to success with the study of algebra. In this module, students interpret pan balances, circle and arrow grid diagrams, tables of values, and weight scales.
Write Equations:
Although an equation is a symbolic representation of a mathematical relationship, the writing of an equation is a key algebraic skill and one that requires separate attention and instruction. In this
module, students learn to write letter and number equations for the various collections of weights that can balance the pans.
The Problem Solving Five-Step Model
The model that we recommend to help students move through the solution problems has five steps:
Describe focuses students’ attention on the information in the problem display. In some cases, the display is a diagram. Other times it is a table, graph, equation, or model, or a combination of any
of these. Having students tell what they see will help them interpret the problem and identify key facts needed to proceed with the solution method.
My Job helps students focus on the task by having them tell what they have to do, that is, rephrase the problem in their own words.
Plan requires identification of the steps to follow to solve the problem and helps students focus on the first step. Knowing where to start is often the most difficult part of the solution process.
Solve is putting the plan to work and showing the steps.
Check is used to verify the answer.
We recommend that you “model” these steps in your instruction with the first problem in each problem set and that you encourage your students to follow the steps when solving the problems and when
relating their solution processes to others.
Note: Although the instructional pages show only one solution plan, many of the problems have more than one correct solution path. These problems provide excellent opportunities for engaging your
students in algebraic conversations about how their solutions are the same, how they differ, and perhaps, which solution method is "most elegant."
|
{"url":"http://www.ck12.org/book/Algebra-Explorations-Pre-K-through-Grade-7/r4/section/3.1/","timestamp":"2014-04-17T01:49:36Z","content_type":null,"content_length":"108747","record_id":"<urn:uuid:2af2bf05-e194-4a9f-873b-ffe6d3514ff6>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00132-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Rounding keyframe transformations - Graphics Programming and Theory
I have successfully implemented an exporter to load FBX models and export to my own custom format. The downside is there are a lot of redundant key-frame transformations. Rather, I end up with
transformations that were the same before the conversion, but after the conversion (due to round-off error and various technical reasons) become almost the same.
So I want to create some optimization functions to reduce the number of key-frames stored in memory. (Note, it doesn't matter so much if this optimizer is fast since it is only an exporter, it isn't
executed during the game)
The general idea is that if two transformations, are 'close enough' together, then they should be replaced with the same transformation.
For example, the two vectors A =(100,0,0) and B=(100,0.00002, 7.0e-18) are essentially the same and therefore we should delete B from memory and put A in place of B. (or a pointer to A in place of B,
you get the idea)
So my question is how does one determine if two elements (say two vectors) can be treated as identical?
My idea goes something like this: When comparing two vectors A and B, take the norm (or length) of A and multiply by some fixed small value (call it epsilon) then if all entries in A and B differ by
less than Norm(A)*epsilon then they are considered identical.
For my setup it appears that an epsilon of 10^(-5) is as small as I can go to avoid round-off error creating redundant key-frames. Though I suppose I could go lower, and just let a few extra
key-frames sneak in. Shouldn't be that big of a deal.
What do you think of this approach?
And what of quaternions? In this case I have the luxury that all vectors have the same norm. So should I just check to see if all elements differ by less than some fixed epsilon?
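A hedged Python sketch of the relative-epsilon test described above (the function names and the zero-norm guard are illustrative additions, not from the original exporter):

    import numpy as np

    def nearly_equal(a, b, eps=1e-5):
        """True if every component of b is within eps * |a| of the matching component of a."""
        a = np.asarray(a, dtype=float)
        b = np.asarray(b, dtype=float)
        tol = max(np.linalg.norm(a), 1e-12) * eps   # guard against zero-length vectors
        return bool(np.all(np.abs(a - b) < tol))

    def dedupe(vectors, eps=1e-5):
        """Replace near-duplicate vectors with a shared representative (O(n*k), fine for an offline exporter)."""
        reps, out = [], []
        for v in vectors:
            for r in reps:
                if nearly_equal(r, v, eps):
                    out.append(r)
                    break
            else:
                r = np.asarray(v, dtype=float)
                reps.append(r)
                out.append(r)
        return out

For unit quaternions the norms are all 1, so a fixed component-wise epsilon is a reasonable choice, though note that q and -q encode the same rotation and may need to be brought to a canonical sign before comparing.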
|
{"url":"http://www.gamedev.net/topic/636134-rounding-keyframe-transformations/","timestamp":"2014-04-19T04:21:00Z","content_type":null,"content_length":"113150","record_id":"<urn:uuid:44724eff-976e-44ee-b3b6-caa8be610739>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00138-ip-10-147-4-33.ec2.internal.warc.gz"}
|
MathGroup Archive: September 2002 [00293]
RE: Arrow pointers rather than Legend
• To: mathgroup at smc.vnet.net
• Subject: [mg36617] RE: [mg36589] Arrow pointers rather than Legend
• From: "David Park" <djmp at earthlink.net>
• Date: Fri, 13 Sep 2002 23:33:20 -0400 (EDT)
• Sender: owner-wri-mathgroup at wolfram.com
You are certainly correct in wanting to make a tidier plot! Legends are
often poor because they actually distract from the message of the data.
However, the best solution will depend on the particular nature of your
data. If there are not too many curves you could perhaps put Text labels
right on top of each curve. If there are many curves, or some of them are
close together, use arrows for some of them. If you have a really large
number of curves, then maybe a different approach is needed.
It is probably not possible to make a useful general routine for labeled
arrows because the best placement would depend upon the particular nature of
the graph. So, to make a nice graphic you will have to do some "hand" work,
specifying each Arrow and Text label. You can actually click the coordinates
off the graph to put into the Arrow and Text statements.
If you want to actually show me your plot, I could try to make some suggestions.
David Park
djmp at earthlink.net
From: JM [mailto:j_m_1967 at hotmail.com]
To: mathgroup at smc.vnet.net
Instead of using Legend in plots with multiple series is there a way
to have arrows with a text at the end identifying the different
series? The reason I'd like this is that the legend is a bit 'bulky'
looking and I'd want something tidier.
I know there is an Arrow package but I think you'd need to manually
enter each start and end point of each arrow - is there a simple
command to do this?
Also - is there a way to adjust the legend font size? I.e. make it
small so that it doesn't interfere with data series.
|
{"url":"http://forums.wolfram.com/mathgroup/archive/2002/Sep/msg00293.html","timestamp":"2014-04-20T00:57:33Z","content_type":null,"content_length":"35825","record_id":"<urn:uuid:dbba067d-fd0d-454a-b8c5-727a1cd9d954>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00102-ip-10-147-4-33.ec2.internal.warc.gz"}
|
[Neuroscience] Re: Equation that explains the behaviour of a
circuit in voltage clamp
[Neuroscience] Re: Equation that explains the behaviour of a circuit in voltage clamp
Bill.Connelly via neur-sci%40net.bio.net (by connelly.bill from gmail.com)
Sat Apr 4 04:34:37 EST 2009
On Apr 4, 3:32 am, r norman <r_s_nor... from comcast.net> wrote:
> If you use non-step clamp voltages, you just calculate the dV/dt term
> and subtract the capacitative component from the total current to get
> the pure ionic component.
Well this is exactly where I am stuck, how do I calculate the current
that "crosses" the capacitor, I know it's somehow proportional to the
series resistor, the membrane capacitance and dV/dt, but exactly how,
I'm not sure.
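A hedged sketch of the standard relation for the usual single-compartment voltage-clamp circuit (pipette series resistance R_s, membrane capacitance C_m, membrane resistance R_m); this is textbook circuit reasoning, not a claim about what the thread eventually concluded:

    I_C = C_m * dV_m/dt, and since V_m = V_cmd - I_total * R_s,
    I_C = C_m * (dV_cmd/dt - R_s * dI_total/dt).

For a step in V_cmd with finite R_s, the capacitive transient decays roughly exponentially with time constant tau ≈ R_s * C_m (assuming R_s << R_m).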
More information about the Neur-sci mailing list
|
{"url":"http://www.bio.net/bionet/mm/neur-sci/2009-April/062431.html","timestamp":"2014-04-16T22:36:46Z","content_type":null,"content_length":"3343","record_id":"<urn:uuid:81a90e08-f148-4cb1-9557-c102c158ba10>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00614-ip-10-147-4-33.ec2.internal.warc.gz"}
|
helpp pleasee
|
{"url":"http://openstudy.com/updates/50d11b02e4b0091849d7b4e2","timestamp":"2014-04-21T04:37:56Z","content_type":null,"content_length":"101535","record_id":"<urn:uuid:9aa6a31d-27c3-44f9-a751-8ccec6846bdb>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00218-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Mean Value Theorem Explained, Part 1 of 2
1731 views, 3 ratings - 00:04:02
Produced by Kent Murdick
Instructor of Mathematics
University of South Alabama
Part 1 of an explanation on the Mean Value Theorem.
• What is the Mean Value Theorem?
• What does the Mean Value Theorem mean?
• Why does the Mean Value Theorem work?
• Why is the Mean Value Theorem important?
This video explains the Mean Value Theorem in a way that makes sense. It breaks down the language of the definition so any student can understand what it means, see why it is true, and understand
why it is useful. This is a very good explanation of a very important theorem in Calculus.
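For reference, the standard statement of the theorem the video walks through: if f is continuous on [a, b] and differentiable on (a, b), then there is some c in (a, b) with

    f'(c) = (f(b) - f(a)) / (b - a),

that is, somewhere on the interval the instantaneous rate of change equals the average rate of change.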
Well done! It was a nice refresher and easy to understand.
|
{"url":"http://mathvids.com/topic/5-calculus/major_subtopic/147/lesson/529-mean-value-theorem-explained-part-1-of-2/mathhelp","timestamp":"2014-04-19T22:07:10Z","content_type":null,"content_length":"81105","record_id":"<urn:uuid:fd47be5b-c62d-4143-a281-c3d40b43542f>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00075-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Earth and space science
Physical sciences
Now showing results 1-10 of 12
In this problem set, learners will analyze a table on the reflectivity of various substances to three kinds of wavelengths in order to answer a series of questions. Answer key is provided. This is part of Earth Math: A Brief Mathematical Guide to Earth Science and Climate Change.
In this problem set, learners will analyze a figure of solar irradiance, derived from ACRIMSAT satellite data, and sunspot number from 1978 to 2003. Answer key is provided. This is part of Earth Math: A Brief Mathematical Guide to Earth Science and Climate Change.
In this problem set, learners will analyze a graph of the reflectivity of soil and two kinds of vegetation to understand how scientists use these measures to identify different materials. Answer key is provided. This is part of Earth Math: A Brief Mathematical Guide to Earth Science and Climate Change.
In this problem set, learners will analyze a map of Earth's gravity field derived from satellite data to answer questions on gravity averages and anomalies in certain regions. Answer key is provided. This is part of Earth Math: A Brief Mathematical Guide to Earth Science and Climate Change.
In this problem set, learners will determine the scale of a false-color infrared satellite image of Paris and measure several of the features depicted in it. Answer key is provided. This is part of Earth Math: A Brief Mathematical Guide to Earth Science and Climate Change.
In this problem set, learners will apply the concepts of reflectivity and absorption to derive the likely composition of the materials described in different scenarios. A table with the reflectivity of common materials and the answer key are provided. This is part of Earth Math: A Brief Mathematical Guide to Earth Science and Climate Change.
In this problem set, learners will practice fractions by working with the ratios of various molecules or atoms in different compounds to answer a series of questions. Answer key is provided. This is part of Earth Math: A Brief Mathematical Guide to Earth Science and Climate Change.
In this problem set, learners will answer a series of questions about the complex molecule, Propanal. Answer key is provided. This is part of Earth Math: A Brief Mathematical Guide to Earth Science and Climate Change.
This interactive, online module reviews the basics of the electromagnetic spectrum and makes the connection between radiation theory and the images we get from weather satellites. Students will learn about: the electromagnetic spectrum; electromagnetic waves; the electromagnetic spectrum and radiation theory; and how satellite radiometers "see" different sections of the spectrum. The module is part of an online course for grades 7-12 in satellite meteorology, which includes 10 interactive modules. The site also includes lesson plans developed by teachers and links to related resources. Each module is designed to serve as a stand-alone lesson; however, a sequential approach is recommended. Designed to challenge students through the end of 12th grade, middle school teachers and students may choose to skim or skip a few sections.
In this interactive, online module, students learn about satellite orbits (geostationary and polar), remote-sensing satellite instruments (radiometers and sounders), satellite images, and the math and physics behind satellite technology. The module is part of an online course for grades 7-12 in satellite meteorology, which includes 10 interactive modules. The site also includes lesson plans developed by teachers and links to related resources. Each module is designed to serve as a stand-alone lesson; however, a sequential approach is recommended. Designed to challenge students through the end of 12th grade, middle school teachers and students may choose to skim or skip a few sections.
|
{"url":"http://nasawavelength.org/resource-search?facetSort=1&topicsSubjects=Physical+sciences&smdForumPrimary=Earth+Science&resourceType%5B%5D=Instructional+materials%3ACurriculum&resourceType%5B%5D=Instructional+materials%3AProblem+set","timestamp":"2014-04-19T11:00:36Z","content_type":null,"content_length":"66904","record_id":"<urn:uuid:07bc0baf-6d88-4261-8125-664c733288cc>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00030-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Use the definition of E[g(Y)] to derive E[Y^2]
You can derive E(Y^2) easily:
Let f(x) = C(m,x)*q^x*(1-q)^(m-x) be the binomial pmf, so
E(X^2) = ƩC(m,x)*x^2*q^x*(1-q)^(m-x)
Observe that C(m,x)*x^2 = C(m-1,x-1)*(m/x)*x^2 = m*x*C(m-1,x-1), so the sum is equivalent to
Ʃ m*x*C(m-1,x-1)*q^x*(1-q)^(m-x)
Let y = x-1, so x = y+1. This equals
m*ƩC(m-1,y)*(y+1)*q^(y+1)*(1-q)^(m-1-y) = mq*ƩC(m-1,y)*q^y*(1-q)^(m-1-y)*(y+1)
Expand via the factor (y+1):
mq*ƩC(m-1,y)*q^y*(1-q)^(m-1-y)*(y) + mq*ƩC(m-1,y)*q^y*(1-q)^(m-1-y)*(1)
The term on the right is simply mq, because the remaining sum is the binomial pmf summed over y, which equals 1.
The term on the left can be handled the same way: since y*C(m-1,y) = (m-1)*C(m-2,y-1), let z = y-1 to get
mq*ƩC(m-1,y)*q^y*(1-q)^(m-1-y)*(y) = mq*(m-1)*q*ƩC(m-2,z)*q^z*(1-q)^(m-2-z) = (m-1)*m*q^2
So we have
E(X^2) = ƩC(m,x)*x^2*q^x*(1-q)^(m-x) = (m-1)*m*q^2 + mq
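A quick numerical check of the identity in Python (illustrative values only):

    from math import comb

    m, q = 10, 0.3
    lhs = sum(comb(m, x) * x**2 * q**x * (1 - q)**(m - x) for x in range(m + 1))
    rhs = (m - 1) * m * q**2 + m * q
    print(lhs, rhs)   # both equal 11.1, i.e. m*q*(1-q) + (m*q)**2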
|
{"url":"http://www.physicsforums.com/showthread.php?p=3783493","timestamp":"2014-04-20T01:05:50Z","content_type":null,"content_length":"61462","record_id":"<urn:uuid:26313d19-b956-4f5b-a5a1-133f03afedf1>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00039-ip-10-147-4-33.ec2.internal.warc.gz"}
|
The Planted Tank Forum - View Single Post - Newb with lighting question!
Hi everyone,
Thank you in advance for understanding if this is a stupid question! I bought a Fluval Edge II 12 gallon (46l) with the 42 Led lights which basically only equals 6 watts. I have spent NUMEROUS hours
trying to find a lighting mod that won't break the bank or require a DIY as I am not technically experienced in trying to do a lot of the DIY mods as you will see on my question I am about to ask!
Until I figure out how to upgrade the lighting, I purchased a stainless steel flexible gooseneck desk lamp and put a GE Energy Smart Daylight 6500K 13 watt spiral cfl with 825 lumens. I placed it on
top of the top box and bent the neck down over the front top of my tank. It actually looks very stylish and has really increased the light inside the tank a lot!
Okay...here goes..I would like to know how many watts per gallon I have now? The cfl is 13 watts which states is equal to 60 watts incandescent lighting. I know that the 13 watts is what I am
actually using energy wise as compared to 60 watts BUT does this mean I now have a total of 66 watts of lighting over my tank (60w cfl + 6w stock led?) or 19 watts (13wcfl+6w led?) I am trying to
figure out how many watts per gallon I have now. I would like to add more plant varieties like some type of carpet plants but I'm not sure my lighting is sufficient. Y'all are probably cracking up
laughing right now but I seriously cannot understand all the wpg/lumens/par ratios especially with the newer energy saving technology of lighting like cfl and led!
Current plants and livestock:
1 Betta
2 Albino Cory cats (small now but may be going back due to learning more)
5 Galaxy Rasbora/Celestial Danio arriving end of week
Amazon Swords
Java Fern
Jungle Val
Water Trumpet (Wendii Green)
Mopani Driftwood
Flourite Dark substrate
Thanks so much for any help you can give me!
|
{"url":"http://www.plantedtank.net/forums/showpost.php?p=2007206&postcount=1","timestamp":"2014-04-18T19:04:49Z","content_type":null,"content_length":"19561","record_id":"<urn:uuid:4d29c360-9c57-4fbd-a824-0c8cb8b86c2d>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00316-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Divided Square
The diagram below shows how you could split a unit square up into nine non-overlapping squares.
Prove that the only numbers of smaller non-overlapping squares into which a unit square cannot be split are 2, 3, and 5.
Problem ID: 339 (18 Jun 2008) Difficulty: 2 Star
|
{"url":"http://mathschallenge.net/view/divided_square","timestamp":"2014-04-17T07:51:09Z","content_type":null,"content_length":"4269","record_id":"<urn:uuid:c9356d40-21f8-4248-8aa3-56572c3cecb5>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00452-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Find Curve Given Slope, Point
Date: 8/29/96 at 23:8:33
From: Anonymous
Subject: Find Curve Given Slope, Point
Find the curve whose slope at the point (x,y) is 3x^2 if the curve
passes through the point (1,-1).
Have no idea where to begin!
Date: 8/30/96 at 11:22:43
From: Doctor Mike
Subject: Re: Find Curve Given Slope, Point
Hi Bill,
This is a Calculus problem. One thing you learn how to do in calculus
is a process for finding the slope to the graph of a function. If
f(x) is a function you are graphing, then the "derivative of f", which
is often written f'(x) , gives the slope of the original function.
That is, the slope of the tangent line to the graph of f(x) at point
(a,f(a)) is exactly f'(a).
To get from f(x) to f'(x) is called finding the derivative, or
differentiation. To get from f'(x) back to f(x) is called finding the
anti-derivative, or integration.
For your example y = f(x) is unknown, but f'(x)=3x^2 is given.
Because I have had calculus, I can easily work out in my head that
f(x) = x^3 + C where C is any arbitrary number. Since you must have
(x,y) = (1,-1) on the graph of f , f(1) = -1 must be true, which
requires that C be -2. So, f(x) = x^3 - 2 or y = x^3 - 2 .
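If you want to double-check an answer like this with software, a computer algebra system will do it; here is a small sketch using Python's sympy library (assuming it is installed):

```python
# Verify that f(x) = x^3 - 2 has slope 3x^2 and passes through (1, -1).
import sympy as sp

x = sp.symbols('x')
f = x**3 - 2

print(sp.diff(f, x))   # 3*x**2  -> the required slope
print(f.subs(x, 1))    # -1      -> the curve passes through (1, -1)
```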
I hope this helps. If you have not had Calculus and all this sounds
interesting, why not have a bash at it!
-Doctor Mike, The Math Forum
Check out our web site! http://mathforum.org/dr.math/
|
{"url":"http://mathforum.org/library/drmath/view/53539.html","timestamp":"2014-04-20T21:40:56Z","content_type":null,"content_length":"6251","record_id":"<urn:uuid:1a6d0731-3e36-45e7-bbab-add9e476e3b0>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00362-ip-10-147-4-33.ec2.internal.warc.gz"}
|
West Newton, MA Trigonometry Tutor
Find a West Newton, MA Trigonometry Tutor
...I have been a part-time online business, economics, history, law, science, social sciences, and writing professor and tutor for graduate students in more than 60 countries around the world
during the last nine years. One of my tutees won the Texty award for writing the best new physics textbook ...
55 Subjects: including trigonometry, reading, English, writing
...Regardless of a student’s capacity or subject matter, my attention is absolute! I experienced "Real Life" Trigonometry, for more than fifteen years, as an Industry Physicist and Electrical
Engineer. If taught properly, Trigonometry is easy to understand and apply.
6 Subjects: including trigonometry, physics, algebra 1, precalculus
...I am licensed to teach math (8-12) and the topics on the SATs are covered in the licensure. I have tutored in SAT math since 2010. I have been using American Sign Language since 2009.
9 Subjects: including trigonometry, geometry, algebra 2, SAT math
...I have been teaching physics as an adjunct faculty at several universities for the last few years and very much look forward to the opportunity of offering personalized support to those seeking
it, so please don't hesitate to contact me! My schedule is extremely flexible and am willing to meet y...
9 Subjects: including trigonometry, calculus, physics, geometry
...Because of this wide range of grades, there are 3 levels of tests: Lower, for grades 5 and 6, Middle, for grades 7 and 8, and Upper, for grades 9 through 12. Because your child's results are
scored and compared only with others entering the same grade, your child should not worry about encounter...
33 Subjects: including trigonometry, chemistry, calculus, physics
|
{"url":"http://www.purplemath.com/west_newton_ma_trigonometry_tutors.php","timestamp":"2014-04-19T10:08:41Z","content_type":null,"content_length":"24354","record_id":"<urn:uuid:5414f087-40df-4a09-8e98-3063766500a4>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00192-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Both Betty and Wilma earn annual salaries of more than $50,000
Author Message
Both Betty and Wilma earn annual salaries of more than $50,000 [#permalink] 09 Jun 2012, 13:42

Both Betty and Wilma earn annual salaries of more than $50,000. Is Wilma's annual salary greater than Betty's?

(1) Betty's annual salary is closer to $50,000 than is Wilma's.
(2) Betty's annual salary is closer to $35,000 than it is to Wilma's annual salary.

_________________
Best Regards,
MGMAT 1 --> 530
MGMAT 2 --> 640
MGMAT 3 ---> 610

Last edited on 28 Jan 2013, 07:19, edited 2 times in total.
Edited the OA
Bunuel (Math Expert) — Re: Both Betty and Wilma earn annual salaries of more than [#permalink] 09 Jun 2012, 13:52
Expert's post

Both Betty and Wilma earn annual salaries of more than $50,000. Is Wilma's annual salary greater than Betty's?

Notice that we are told that both Betty and Wilma earn annual salaries of more than $50,000.

(1) Betty's annual salary is closer to $50,000 than is Wilma's.
----$50,000---(Betty)----(Wilma)---- So, as you can see, Wilma's annual salary is greater than Betty's. Sufficient.

(2) Betty's annual salary is closer to $35,000 than it is to Wilma's annual salary.
$35,000----$50,000---(Betty)----(Wilma)---- Again, Wilma's annual salary is greater than Betty's. Sufficient.

Answer: D.

Hope it's clear.
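For readers who like to sanity-check this kind of reasoning numerically, here is a small brute-force sketch in Python (the salary grid and step size are arbitrary choices, not part of the problem):

```python
# With both salaries above $50,000, statement (1) -- "Betty is closer to
# $50,000 than Wilma is" -- forces Wilma > Betty in every sampled case.
import itertools

salaries = range(50_001, 120_001, 1_000)   # arbitrary grid above $50,000
for betty, wilma in itertools.product(salaries, repeat=2):
    if abs(betty - 50_000) < abs(wilma - 50_000):   # statement (1)
        assert wilma > betty
print("No counterexample found on this grid.")
```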
manulath — Re: Both Betty and Wilma earn annual salaries of more than [#permalink] 10 Jun 2012, 00:34

Bunuel wrote:
[the solution quoted above]

I believe that OA is wrong.
It should be D.
Kindly check the OA and edit it.
Bunuel - Your explanation is spot on.
ethnix — Re: Both Betty and Wilma earn annual salaries of more than [#permalink] 28 Jan 2013, 07:11

I am sorry, but the official answer does not make any sense for Statement 1. It is simply mathematically wrong.
Mathematically speaking, the statement says |Betty - 50,000| < |Wilma - 50,000|, not Betty - 50,000 < Wilma - 50,000.
Let me make a numerical example. Betty earns 49,999 and Wilma earns 70,000. Obviously Betty's salary is closer to 50,000, though Wilma earns more. And the other way around: let Betty earn 50,001 and Wilma 40,000; now Betty's wage is still closer to 50,000, though she earns more than Wilma.
Stating that 1) is sufficient is simply wrong, and I'm actually quite astonished people get away with such an answer so easily.
p.s.: The same argumentation holds for 2), so the correct answer must be C, as you can deduce from both statements that both wages must lie above 50,000; something you can't predict earlier.
Bunuel (Math Expert) — Re: Both Betty and Wilma earn annual salaries of more than [#permalink] 28 Jan 2013, 07:24
Expert's post

ethnix wrote:
[the objection above]

Welcome to GMAT Club.
Your examples are not correct because we are told that "both Betty and Wilma earn annual salaries of more than $50,000".
Hope it's clear.
ethnix — Re: Both Betty and Wilma earn annual salaries of more than [#permalink] 28 Jan 2013, 07:27

Bunuel wrote:
[the reply above]

OMG, thanks. I suppose reading the question would avoid most of my wrong answers :D
|
{"url":"http://gmatclub.com/forum/both-betty-and-wilma-earn-annual-salaries-of-more-than-134229.html","timestamp":"2014-04-19T09:29:40Z","content_type":null,"content_length":"186462","record_id":"<urn:uuid:86474f1e-18e1-4938-85b7-8920665d9f21>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00630-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Efficient computation of $E\left[\frac{1}{1+\sum_iX_i}\right]$ where $X_i$ is RV with Bernoulli distribution with different probabilities
Suppose we have the random variables $X_1, \ldots, X_n$ that have Bernoulli distributions with the (possibly different) probabilities $p_1, \ldots, p_n$. For example, $X_1$ = 1 with probability $p_1$
and 0 with probability $1-p_1$. Is there an efficient way to compute $E\left[\frac{1}{1+\sum_iX_i}\right]$ in polynomial time in $n$? If not, is there an approximate solution?
1 Cross post: math.stackexchange.com/questions/38662/… . It's usually good form to wait a day or two before cross posting. – dorkusmonkey May 12 '11 at 10:23
2 I suppose you mean that the $X_i$'s are independent. Can't you just inductively on the number of variables find completely the probability distribution of $\sum_iX_i$? Sort of like Pascal's
triangle. – Johan Wästlund May 12 '11 at 10:56
i.e. dynamic programming. – Ori Gurel-Gurevich May 13 '11 at 5:57
2 Answers
An approach is through generating functions. For every nonnegative random variable $S$, $$ E\left(\frac1{1+S}\right)=\int_0^1E(t^S)\mathrm{d}t. $$ If $S=X_1+\cdots+X_n$ and the random
variables $X_i$ are independent, $E(t^S)$ is the product of the $E(t^{X_i})$. If furthermore $X_i$ is Bernoulli $p_i$, $$ E\left(\frac1{1+S}\right)=\int_0^1\prod_{i=1}^n(1-p_i+p_it)\mathrm
{d}t. $$ This is an exact formula. I do not know how best to use it to compute the LHS efficiently. Of course one can develop the integrand in the RHS, getting a sum of $2^n$ terms indexed
by the subsets $I$ of $\{1,2,\ldots,n\}$ as $$ E\left(\frac1{1+S}\right)=\sum_I\frac1{|I|+1}\prod_{i\in I}p_i\cdot\prod_{j\notin I}(1-p_j). $$
But it might be more useful to notice that $$ \prod_{i=1}^n(1-p_i+p_it)=\sum_{k=0}^n(-1)^k\sigma_k(\mathbf{p})(1-t)^k, $$ where $\sigma_0(\mathbf{p})=1$ and $(\sigma_k(\mathbf{p}))_{1\le k\le n}$ are the symmetric polynomials of the family $\mathbf{p}=\{p_i\}$. Integrating with respect to $t$, one gets $$ E\left(\frac1{1+S}\right)=\sum_{k=0}^n(-1)^k\frac{\sigma_k(\mathbf{p})}{k+1}. $$ The computational burden is reduced to the determination of the sequence $(\sigma_k(\mathbf{p}))_{1\le k\le n}$.
Note 1: The last formula is an integrated version of the algebraic identity stating that, for every family $\mathbf{x}=\{x_i\}_i$ of zeroes and ones, $$ \frac1{1+\sigma_1(\mathbf{x})}=\sum_{k\ge0}(-1)^k\frac{\sigma_k(\mathbf{x})}{k+1}, $$ truncated at $k=n$ since, when at most $n$ values of $x_i$ are non zero, $\sigma_k(\mathbf{x})=0$ for every $k\ge n+1$. To prove the algebraic identity, note that, for every $k\ge0$, $$ \sigma_1(\mathbf{x})\sigma_k(\mathbf{x})=k\sigma_k(\mathbf{x})+(k+1)\sigma_{k+1}(\mathbf{x}), $$ and compute the product of $1+\sigma_1(\mathbf{x})$ by the series in the RHS. To apply this identity to our setting, introduce $\mathbf{X}=\{X_i\}_i$ and note that, for every $k\ge0$, $$ E(\sigma_k(\mathbf{X}))=\sigma_k(\mathbf{p}). $$
Note 2 More generally, for every suitable complex number $z$, $$ \frac1{z+\sigma_1(\mathbf{x})}=\sum_{k\ge0}(-1)^ka_k(z)\sigma_k(\mathbf{x}),\qquad a_k(z)=\frac{\Gamma(k+1)\Gamma(z)}{\Gamma
(k+1+z)}. $$
Note 3 When $p_i=p$ for every $i$, $$ \frac1{1+pn}< E\left(\frac1{1+S}\right)=\frac{1-(1-p)^{n+1}}{p(n+1)}< \frac1{p(n+1)}. $$
The result is indeed computable in polynomial time. Didier has shown in his answer that $$E\left(\frac1{1+\sum_{i=1}^nX_i}\right)=\sum_{k=0}^n\frac{(-1)^k}{k+1}\sigma_k(p_1,\dots,p_n),$$ where $\sigma_k$ are the elementary symmetric polynomials. In order to finish this argument, it thus suffices to compute in polynomial time the numbers $\sigma_k(p_1,\dots,p_n)$. This can be done easily using the recurrence $$\sigma_0(p_1,\dots,p_m)=1,\qquad \sigma_{k+1}(p_1,\dots,p_m)=\sum_{i=k+1}^m p_i\,\sigma_k(p_1,\dots,p_{i-1}).$$ We compute all the numbers $\sigma_k(p_1,\dots,p_m)$ for $k\le m\le n$ inductively: if we already know the sequence of values $\sigma_k(p_1,\dots,p_m)$ for all $m=k,\dots,n$, we use the recurrence to compute $\sigma_{k+1}(p_1,\dots,p_m)$ for all $m=k+1,\dots,n$. Thus the whole computation takes $O(n^2)$ evaluations of the sum above, hence $O(n^3)$ arithmetical operations.
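Here is a short sketch of the polynomial-time computation (Python 3.8+, for math.prod; it builds the elementary symmetric polynomials as the coefficients of $\prod_i(1+p_it)$, which is the same $O(n^2)$ idea, and cross-checks against brute-force enumeration on a small example with arbitrary probabilities):

```python
# E[1/(1 + X_1 + ... + X_n)] for independent X_i ~ Bernoulli(p_i),
# using E = sum_{k=0}^n (-1)^k * sigma_k(p) / (k+1).
import itertools
import math

def expectation(p):
    # sigma[k] = k-th elementary symmetric polynomial of p, i.e. the k-th
    # coefficient of prod_i (1 + p_i * t), built incrementally in O(n^2).
    sigma = [1.0] + [0.0] * len(p)
    for i, pi in enumerate(p, start=1):
        for k in range(i, 0, -1):
            sigma[k] += pi * sigma[k - 1]
    return sum((-1) ** k * s / (k + 1) for k, s in enumerate(sigma))

def brute_force(p):
    # Enumerate all 2^n outcomes; only feasible for small n, used as a check.
    total = 0.0
    for xs in itertools.product([0, 1], repeat=len(p)):
        prob = math.prod(pi if x else 1 - pi for pi, x in zip(p, xs))
        total += prob / (1 + sum(xs))
    return total

p = [0.2, 0.5, 0.9, 0.3]
print(expectation(p), brute_force(p))   # the two values agree
```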
|
{"url":"http://mathoverflow.net/questions/64764/efficient-computation-of-e-left-frac11-sum-ix-i-right-where-x-i-is-rv","timestamp":"2014-04-17T04:25:02Z","content_type":null,"content_length":"60116","record_id":"<urn:uuid:f9618a23-11be-4f4f-b14a-73d35ea49dae>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00421-ip-10-147-4-33.ec2.internal.warc.gz"}
|
[SciPy-user] Vector mean calculation?
Fernando.Perez at colorado.edu Fernando.Perez at colorado.edu
Fri Apr 22 05:04:00 CDT 2005
Lance Boyle wrote:
> On Apr 21, 2005, at 5:41 PM, Fernando.Perez at colorado.edu wrote:
>>Quoting Thomas Davidoff <davidoff at haas.berkeley.edu>:
>>>I have a vector with min equal to 412. Why am I getting a negative
>>Without more details, it's quite difficult to be sure. Most likely,
>>it's very long and of integer type (ints are 32 bit objects, so you are
>>effectively doing mod 2^31 arithmetic with negative wraparound).
> Why does Python allow modular arithmetic when the user doesn't request
> modular arithmetic?
Well, it's just what happens in C always: regular ints are 32 bit objects, hence
with a range of [-2**31,2**31-1]. If you start adding and go beyond the right
endpoint of the interval, it will wrap around the left. Pure python shields
you from this with silent promotion to python longs, which are
arbitrary-length integers. But in Numeric/numarray, you are using raw C
objects (for speed reasons), so you can't escape some of their limitations
(which are in many cases precisely what gives you speed). Life is full of
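For a concrete illustration of the wraparound, here is a small sketch using a recent numpy rather than the old Numeric package discussed above (the exact warning behaviour and default accumulators depend on the numpy version; the wraparound is forced below by requesting an int32 accumulator):

```python
# 32-bit integer arithmetic wraps around instead of promoting to a wider type.
import numpy as np

a = np.array([2**30, 2**30, 2**30], dtype=np.int32)   # all entries positive
print(a.sum(dtype=np.int32))   # negative: 3*2**30 overflows the int32 range
print(a.sum(dtype=np.int64))   # 3221225472, the mathematically correct sum
print(a.mean())                # fine: mean() accumulates in float64 by default
```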
More information about the SciPy-user mailing list
|
{"url":"http://mail.scipy.org/pipermail/scipy-user/2005-April/004355.html","timestamp":"2014-04-16T23:02:37Z","content_type":null,"content_length":"3923","record_id":"<urn:uuid:024250e5-ba36-4d21-a90f-1ce056aa9f21>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00607-ip-10-147-4-33.ec2.internal.warc.gz"}
|
How To Tutorial: Perl Functions for Real Arrays
"Find Out How You Can Use The Perl Functions For Real @ARRAYs In The Most Frequent Situations You’re Confronted With When You Write A Perl Script … See In Minutes A Lot Of Complete Code Solutions
From Which You Can Copy Pieces of Code And Paste Them Directly In Your Own Script"
The "Perl Functions for Real @ARRAYs - How To Tutorial" eBook answers the most frequently questions regarding the use of the array functions in the Perl language. You’ll find interesting and complete
examples about why, how and where you can use these functions in a Perl script.
The functions described in this eBook include: pop, push, shift, splice and unshift.
For every function, I presented the syntax forms and a brief description. This was necessary to help you better understand how to use the topics. If you want a complete description of these
functions, you should go for additional information to the Perl official site by following the link perldoc.
If you take your time to skim through the Table of Contents of this eBook, you’ll see what you should expect to find here. From using these functions with simple arrays or with more complex data
structures as arrays of arrays, arrays of hashes or hashes of arrays, you’ll find here a lot of commented examples to help you find a quick solution to your problem.
The examples included in this eBook are not only about the array functions, but also show you how to use these functions together with the operators, special variables, statements, regular
expressions and with simple or complex data structures. In the same time are provided interesting examples about how to use these functions with @_, @ARGV, @$_, %{}, ||, &&, =>, @{}, [], $_.
Complete solutions are provided for almost all the examples. Each example can be tested, modified and run individually. Every example shows you a way to resolve a specific problem, allowing you to
find out a quick answer to your problem or question.
Now You Can Save Time And Find In Minutes The Answer To Your Question
If you want to improve your programming skill regarding the use of the arrays functions in different contexts, you’ll find real help in this eBook. I chose for you the most frequent questions and I
illustrated them with the appropriate examples.
Of course you can continue searching through the forums to find the answer of your question but in this case you need to choose, test and interpret yourself the information provided in the forums.
In this eBook I give you a lot of working code that you can implement in minutes in your script. And not only that, all the provided code snippets are fully functional, i.e can be copied and tested
in your command line window. Next, they can be integrated in a script and uploaded to your site.
In order to make the scripts very easy to understand, I commented in detail each code snippet so you will not have any problem to understand and implement it.
Table Of Contents
I think you can use this eBook to find out very fast how to use the Perl array functions in different situations.
Please take a look at the table of contents to see what I mean:
1. Copyright
2. Introduction
2.1. How to run the examples included in this eBook
2.1.1. How to run the script in Windows
2.1.2. How to run the script in Linux
3. Functions for real @ARRAYs
3.1. The pop function
3.1.1. The syntax forms
3.1.2. How to use pop with a stack array
3.1.3. How to use pop with a queue array
3.1.4. How to emulate a circular list with pop
3.1.5. How to print an array using pop within a while loop
3.1.6. Which is the difference between pop, chop and chomp
3.1.7. How to use pop with array references
3.1.8. How to use pop with a list
3.1.9. How to use pop with split
3.1.10. How to reverse an array using pop and push
3.1.11. How to use pop with an AoA (array of arrays)
3.1.11.1. How to delete the last column from a matrix
3.1.11.2. How to delete the last row from a matrix
3.1.12. How to use pop with an AoH (array of hashes)
3.1.13. How to use pop with an HoA (hash of arrays)
3.1.14. How to use pop with @_, @ARGV, ||
3.2. The push function
3.2.1. The syntax form
3.2.2. How to use push to append a list to an array
3.2.3. How to use push to append an array to an array
3.2.4. How to concatenate two arrays by alternating their elements
3.2.5. Using push, chomp and while to create an array from a file
3.2.6. How to use push with a stack array
3.2.7. How to use push with a queue array
3.2.8. How to use push to implement a simple circular list
3.2.9. How to use push with array references
3.2.10. How to use push with an AoA (array of arrays)
3.2.10.1. How to read a matrix from a file
3.2.10.2. How to append a column to a matrix
3.2.10.3. How to append a row to a matrix
3.2.10.4. How to read a text from a file and push into columns
3.2.11. How to use push with an AoH (array of hashes)
3.2.12. How to use push with an HoA (hash of arrays)
3.2.13. How to use push with $_, @{ }, =>, [ ], ||
3.3. The shift function
3.3.1. The syntax forms
3.3.2. How to use shift with a stack array
3.3.3. How to use shift with a queue array
3.3.4. How to emulate a circular list with shift
3.3.5. How to use shift within the body of a subroutine
3.3.6. How to use shift with an AoA (array of arrays)
3.3.6.1. How to delete the first column from a matrix
3.3.6.2. How to delete the first row from a matrix
3.3.7. How to use shift with an AoH (array of hashes)
3.3.8. How to use shift with an HoA (hash of arrays)
3.3.9. How to use shift with @_, @ARGV, @$_, %{}, ||, &&
3.4. The splice function
3.4.1. The syntax forms
3.4.1.1. The general syntax form
3.4.1.2. The second syntax form
3.4.1.3. The third syntax form
3.4.1.4. The fourth syntax form
3.4.2. splice versus shift
3.4.3. splice versus pop
3.4.4. splice versus unshift
3.4.5. splice versus push
3.4.6. How to use splice with an AoA (array of arrays)
3.4.7. How to use splice with an AoH (array of hashes)
3.4.8. How to use splice with an HoA (hash of arrays)
3.4.9. How to use splice with @_
3.5. The unshift function
3.5.1. The syntax form
3.5.2. How to use unshift to insert a list in front of an array
3.5.3. Using unshift to insert an array in front of another array
3.5.4. How to use unshift with array references
3.5.5. How to use unshift with a stack array
3.5.6. How to use unshift with a queue array
3.5.7. How to emulate a circular list with unshift
3.5.8. How to use unshift with an AoA (array of arrays)
3.5.8.1. How to use unshift to insert a column to a matrix
3.5.8.2. How to use unshift to insert a row to a matrix
3.5.9. How to use unshift with an AoH (array of hashes)
3.5.10. How to use unshift with an HoA (hash of arrays)
3.5.11 How to use unshift with @$_
And as a bonus, I give you a "Perl Glossary" eBook to help you better understand the topics included in my tutorial.
A Sample From My eBook
To have a first look about what you could expect to find inside, I invite you to download for free a
sample from my "Perl Functions for Real Arrays - How to Tutorial" and "Perl Glossary" eBooks
What You’ll Get
• A 70 page "Perl Functions for Real Arrays - How to Tutorial" eBook (PDF format)
• I prepared a bonus for you – an additional glossary eBook (PDF format) meant to help you understand the topics included in this eBook tutorial
• After I’ll modify or extend the eBook (maybe at your suggestion), the download link provided will let you download for free the upgraded version - please note that the download link will expire
after 9 attempts
I intend to enlarge this eBook with a lot of other useful examples, but I am going to increase the price, too.
Buy this eBook now!
And you can download it for maximum 9 times, having free access to the following upgrades.
If you consider that this book is valuable or you have some suggestions to improve its content, please send me some feedback through my contact page.
Read what others have to say about my eBooks Collection!
To order your personal copy, click here to receive instant download instructions:
Regular price: 9.95 US$
7.45 US$ only (limited time offer)
You Are Secure!
By clicking the "Buy Now" button, you will be taken to the ultra secure online payment method using PayPal.
I will not sell or pass on your details to anyone. Your information and the fact that you contacted me are confidential.
I think this price must be very convenient to you. I try to keep the price as low as I can in order for many people to afford it. But I can’t guarantee this low price for a long time - especially in
the case I'll extend the eBook, so do yourself a favor and grab it now!
30-Day, 100% unconditional money-back guarantee
If for any reason you don’t think this eBook is helpful to you in any way, I offer an unconditional, no-questions-asked, 30-Day 100% refund. You can’t lose on this deal.
Special offer
I'm gonna make you an amazing offer you can't refuse - get all my eBooks for the incredible price of 14.99 US$ only:
and save 78% ($52.81) of the real price.
These eBooks contain more than
• 700 pages and
• 1000 commented script examples.
To order the eBooks package, click here to receive instant download instructions:
14.99 US$ only (limited time offer)
30-Day, 100% unconditional money-back guarantee
If for any reason you don’t think these eBooks are helpful to you in any way, I offer an unconditional, no-questions-asked, 30-Day 100% refund. You can’t lose on this deal.
The eBooks were archived into a zip file. After you download the archive file, you need to extract the PDF files from it using WinZip or any other unzipping program.
The eBooks were converted in a PDF format, so you need Adobe Acrobat Reader to read them. Download the free version here:
return from "Perl Functions for Real Arrays - How to Tutorial" to "Perl How To Tutorials"
|
{"url":"http://www.misc-perl-info.com/perl-func-real-arrays-howto.html","timestamp":"2014-04-18T13:08:29Z","content_type":null,"content_length":"24265","record_id":"<urn:uuid:df2df6d2-7a97-4005-a4b3-ec85a6c1fca9>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00254-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Parity Arguments
If two integers are either both even or both odd, they are said to have the same parity; otherwise they have different parity. Determining the parity of two quantities is often a simple and useful
way to prove that the quantities can never be equal. That result, in turn, can be used to demonstrate that a particular situation is impossible.
Parity is just a special case of divisibility. Although we do not have special words for "divisible by 5" or "leaves a remainder when divided by 7," issues of divisibility arise frequently. For example, a fourth grader was investigating which n by m rectangular regions could be tiled by the pentomino below. The 4 by 10 rectangle shown is one such possibility.
A pentomino and a 4 by 10 rectangular tiling
The student conjectured that at least one of the dimensions of the rectangle had to be divisible by 5 for a tiling to be possible. She explained that the area of the pentomino had to divide evenly into the area of the rectangle. Since 5 is prime, the only way the rectangle's area, mn, could be divisible by 5 was for either m or n to be so. (You can prove an additional condition if you color the rectangle with a checkerboard pattern and think about the parity of the colors covered by each tile.)
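The divisibility part of her argument is easy to check by brute force; the short Python sketch below tests only the necessary area condition (it says nothing about whether an actual tiling exists):

```python
# Necessary condition for tiling an m-by-n rectangle with 5-square pentominoes:
# 5 must divide the area m*n, and since 5 is prime, 5 must divide m or n.
for m in range(1, 13):
    for n in range(1, 13):
        if (m * n) % 5 == 0:
            assert m % 5 == 0 or n % 5 == 0   # never fails: 5 is prime
print("Every rectangle with area divisible by 5 has a side divisible by 5.")
```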
For practice with proofs that involve parity arguments, see Parity Problems and Parity Problems 2. For practice with, and further information about, divisibility, see Divisibility and Modular Arithmetic.
|
{"url":"http://www2.edc.org/makingmath/mathtools/parity/parity.asp","timestamp":"2014-04-17T18:35:18Z","content_type":null,"content_length":"8965","record_id":"<urn:uuid:3191555c-c2cf-447b-8725-900ae0259e0b>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00398-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Rounding to the Nearest Hundred
Students round two-, three-, and four-digit numbers to the nearest hundred.
Subject(s): CCSS: Mathematics
Grade Level(s): 3
Keywords: MFAS, CCSS, rounding, place value, hundreds
|
{"url":"http://www.cpalms.org/Public/PreviewResource/Preview/42728","timestamp":"2014-04-17T21:40:41Z","content_type":null,"content_length":"73413","record_id":"<urn:uuid:0701dc56-19cd-4453-8f7c-d30fe7d0aea7>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00084-ip-10-147-4-33.ec2.internal.warc.gz"}
|
ALEX Lesson Plans
Subject: Mathematics (9 - 12)
Title: Vectors Drive the Boat
Description: Using a combination of online exploration and teacher instruction, students will discover that vectors have both magnitude and direction. They will be able to use vectors to represent problem situations. Students will also be able to perform basic operations on vectors.
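A minimal illustration of those basic operations (a Python sketch; the velocity vectors used here are arbitrary examples, not taken from the lesson):

```python
# Vectors have both magnitude and direction; basic operations on 2-D vectors.
import math

def magnitude(v):
    return math.hypot(v[0], v[1])

def direction_degrees(v):
    return math.degrees(math.atan2(v[1], v[0]))

def add(u, v):
    return (u[0] + v[0], u[1] + v[1])

boat = (3.0, 4.0)       # velocity vector of a boat
current = (1.0, -2.0)   # velocity vector of the water current
resultant = add(boat, current)

print(magnitude(boat), direction_degrees(boat))   # 5.0, ~53.13 degrees
print(resultant, magnitude(resultant))            # (4.0, 2.0), ~4.47
```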
Thinkfinity Lesson Plans
Subject: Mathematics
Title: Vector Investigation: Boat to the Island
Description: In this student interactive, from Illuminations, students "drive" a boat by adjusting the magnitude and direction of a velocity vector. The goal is to land the boat on the island
without hitting the walls.
Thinkfinity Partner: Illuminations
Grade Span: 6,7,8,9,10,11,12
Subject: Mathematics
Title: Learning about Properties of Vectors and Vector Sums Using Dynamic Software
Description: In this two-lesson unit, from Illuminations, students manipulate a velocity vector to control the movement of an object in a gamelike setting. They develop an understanding that vectors
are composed of both magnitude and direction, and extend their knowledge of number systems to the system of vectors.
Thinkfinity Partner: Illuminations
Grade Span: 9,10,11,12
Subject: Mathematics
Title: Sums of Vectors and Their Properties
Description: In this lesson, one of a multi-part unit from Illuminations, students manipulate a velocity vector to control the movement of an object in a gamelike setting. In the process, they extend
their knowledge of number systems to the system of vectors.
Thinkfinity Partner: Illuminations
Grade Span: 9,10,11,12
Subject: Mathematics,Science
Title: Learning about Properties of Vectors and Vector Sums Using Dynamic Software: Components of a Vector
Description: This e-example from Illuminations illustrates how using a dynamic geometrical representation can help students develop an understanding of vectors and their properties. Students
manipulate a velocity vector to control the movement of an object in a gamelike setting. In this part, Components of a Vector, students will develop an understanding that vectors are composed of both
magnitude and direction. In the second part, Sums of Vectors and Their Properties, students extend their knowledge of number systems to the system of vectors. e-Math Investigations are selected
e-examples from the electronic version of the Principles and Standards of School Mathematics (PSSM). The e-examples are part of the electronic version of the PSSM document. Given their interactive
nature and focused discussion tied to the PSSM document, the e-examples are natural companions to the i-Math investigations.
Thinkfinity Partner: Illuminations
Grade Span: 9,10,11,12
|
{"url":"http://alex.state.al.us/plans2.php?std_id=54527","timestamp":"2014-04-16T13:15:32Z","content_type":null,"content_length":"25197","record_id":"<urn:uuid:7437eba1-d7d9-4ccb-b6b6-cb71d8bc0a62>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00120-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Acid Base Buffer Problems: Question & answer with explanation
Buffer problems can be quite challenging for many students. It is the purpose of this handout to help guide you through simple buffer problems you might encounter in this course. The best way to
solve these problems, and many other problems for that matter, is to follow a step-by-step pattern. I will show you the pattern needed for buffer problems in this document.
Here are a few questions and answers on acid-base buffer calculations, with explanations.
1. Find the pH of 1.25 M acetic acid and 0.75 M potassium acetate.
Acetic acid kA = 1.74 E-5 pKA = 4.76.
This is a genuine buffer problem. Added to the water are a weak acid and a salt containing the anion of the acid.
There are two good ways to work buffer problems, with the Henderson- Hasselbach equation or with the ionization equilibrium expression of the weak acid or base. I personally have a mental block
against the H-H equation because I can never remember whether it uses a positive or negative log and which concentration goes on top. You can use it if you wish. Particularly if you need to calculate
buffers often, you should engrave it upon your gray matter. If you really need it and can’t remember it, you can derive it from the ionization equilibrium expression.
There are three cautions you need to observe with either equation: (1) Make sure you are using the correct concentration for each variable, (2) check to see if the numbers you propose to use are
going to be within the 5% rule for simplification, and (3) estimate the answer from what you know and make sure your final answer is reasonable.
Before actually doing the problem, estimate the answer from your own reasoning. In this case, the pkA of acetic acid is 4.76. The rule is that an equimolar buffer has a pH equal to the pkA and in
this problem there is less potassium acetate than acetic acid, so the pH must be lower (more acid) than the pkA within a pH unit or so. If the acetic acid were the only solute, the hydrogen ion concentration estimate would be the square root of (acid concentration times kA), which corresponds to a pH of about 2.3.
The answer should be somewhere between pH of 2.3 and 4.8
The majority of the acetate ion will be from the potassium acetate. Is it right that the total acetate ion concentration will be equal to the concentration of the potassium acetate? Or will the
acetate ion concentration from the ionization of the acetic acid contribute more than 5%? The potassium acetate concentration is 0.75 M. The acetate ion concentration from acetic acid would be
0.00466 M, less than 5% of 0.75 M even without the common ion effect. We can safely use 0.75 as the concentration of acetate ion.
Will the concentration of unionized acid be a problem? The measured concentration is 1.25 M and the ionized amount is 0.00466 M, far less than 5% of 1.25 M.
As threatened, we can use the ionization equilibrium expression of acetic acid for the main equation for this problem, substituting for the kA, substituting the concentration of potassium acetate for
the concentration of acetate, substituting the concentration of acetic acid, and solving for the hydrogen ion concentration to get the pH.
The answer of pH = 4.5 is a reasonable one by our estimation because it is more acid than the pkA of 4.76.
It is a little easier to do this problem by the Henderson- Hasselbach equation, if you are sure you know it. You must still make sure you are substituting correctly and that your assumptions for
simplification are valid (within 5%). The H-H equation is not much good for solutions in which either the acid or ion concentrations are more than ten times one another or in which the concentration
of either material is less than one hundred times the kA because it doesn’t easily adapt to a quadratic form.
2. Find the pH of 0.788 M lactic acid and 1.27 M calcium lactate.
Lactic acid kA = 8.32 E-4 pKA = 3.08.
Here is another acid – conjugate base buffer pair. This time there is more conjugate anion than acid concentration, so we expect the pH to be somewhat higher (more alkali) than the pKA. As in the
previous problem, there seems to be no complication with either of the components being of too small a concentration or the concentrations being too close to the KA, so there should be no need for a
quadratic equation.
The concentration of calcium lactate needs to be doubled (!) to represent the lactate ion concentration because the calcium is divalent and has two lactate ions per formula of calcium lactate. The
concenration of acid is more than 100 times the KA, so the concentration of acid is close enough to the concentration of unionized species.
By the ionization equilibrium equation:
Or by the H-H equation, you get the same answer.
3. Find the pH of 0.590 M ammonium hydroxide and 1.57 M ammonium chloride.
ammonium hydroxide kB = 1.78 E-5 pKB = 4.75.
Here we have a weak base and its conjugate cation. We can use the ionization equilibrium expression, but it is different from the acid ionization expression. The ammonium hydroxide ionizes into
hydroxide ion and ammonium ion, so it would be best to find the concentration of the hydroxide ion.
The ionization equilibrium expression must have the kB rather than a kA, or the Henderson- Hasselbach equation has to have all its components adapted to alkali, but it is completely analogous to the acid calculation. In either way of doing the problem, you will have to convert the answer (a pOH) to the pH.
Will we be able to use our standard shortcuts? The concentration of base is more than 100 times the kB, so the measured amount of ammonium hydroxide in solution is a good enough number for the
concentration of unionized species. The concentration of weak base and conjugate ion will be within 1:10 of each other, so the amount of conjugate ion can be adequately estimated by the concentration
of ammonium chloride. There is high enough concentration of the base so that the ionization of water does not significantly change the hydroxide concentration.
Or by the H-H equation, you get the same answer.
Does the answer make sense? The combination is a base buffer and the pH is slightly base. There is almost three times the concentration of ammonium chloride than ammonium hydroxide, so the pH of the
mixture is more acidic than it would be if the buffer had been equimolar (an equimolar buffer would sit at pH = 9.25).
4. Explain how to make 5 L of 0.15 M acetic acid-sodium acetate buffer at pH 5.00 if you have 1.00 Molar acetic acid and crystalline sodium acetate.
Here is a problem you may have to actually use one day. In biochemistry some enzymes need to be at a particular pH to work at maximum. You would choose a weak acid with a pkA close to the pH you
need. (The pkA of acetic acid is 4.76.) The osmolarity (the total molar amount of dissolved materials) may be specified. (It is here. The total of acetic acid and sodium acetate should be at 0.15 M.)
It is most convenient to use the Henderson – Hasselbach equation for this, as it has a term that can be the ratio of the two materials. The form of the H-H equation does not matter, but the
concentration of the conjugate ion will have to be greater than the concentration of the acid because the pH is greater than the pkA of the weak acid.
What we get from the H-H equation is the ratio of the two constituents. We can use that ratio as one of the equations in a two – equation – two – unknown setup to substitute one into the other and
calculate the concentration of acetic acid, [HA], and the concentration of sodium acetate, [A-].
But we still have not answered the question, “Explain how to make 5 L of pH 5, 0.15 M acetic acid-sodium acetate buffer.” We have a 1.00 Molar solution of acetic acid and crystals of (solid) sodium
acetate. The way we have to measure the acetic acid is by measuring the volume of the more concentrated solution. The way to measure the sodium acetate is to weigh it. We would need (54.7885 x 5 =
273.9425) ml of acetic acid and (82.04 x 0.0952115 x 5 = 39.055757) grams of sodium acetate.
The real answer is that you need to weigh 39.1 g of sodium acetate, measure 274 ml of the 1.00 Molar acetic acid and put them into a 5 liter volumetric flask with enough water to dissolve the sodium
acetate. Then fill the volumetric flask to the line with distilled water and mix the solution.
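The arithmetic in these problems is easy to script; the Python sketch below redoes the buffer-pH calculations for problems 1-3 using the concentrations and constants given above (an illustrative check of the method, using the H-H relation):

```python
# Buffer pH via pH = pKa + log10([conjugate base] / [weak acid]).
import math

def buffer_pH(pKa, base_conc, acid_conc):
    return pKa + math.log10(base_conc / acid_conc)

# Problem 1: 1.25 M acetic acid, 0.75 M acetate (pKa 4.76)
print(round(buffer_pH(4.76, 0.75, 1.25), 2))        # ~4.54

# Problem 2: 0.788 M lactic acid, 2 * 1.27 M lactate (pKa 3.08)
print(round(buffer_pH(3.08, 2 * 1.27, 0.788), 2))   # ~3.59

# Problem 3 is a base buffer: pOH = pKb + log10([NH4+]/[NH4OH]), pH = 14 - pOH
pOH = 4.75 + math.log10(1.57 / 0.590)
print(round(14 - pOH, 2))                           # ~8.82
```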
|
{"url":"http://www.bioentranceexam.com/acid-base-buffer-problems-question-answer-with-explanation/","timestamp":"2014-04-20T08:14:56Z","content_type":null,"content_length":"33672","record_id":"<urn:uuid:9038aba7-1068-4cbe-9da6-207de92c8df7>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00310-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Post a reply
Discussion about math, puzzles, games and fun. Useful symbols: ÷ × ½ √ ∞ ≠ ≤ ≥ ≈ ⇒ ± ∈ Δ θ ∴ ∑ ∫ • π ƒ -¹ ² ³ °
Laplace Transform of sin^2t
Topic review (newest first)
2013-05-13 08:58:08
Helsaint wrote:
Sin^2(t) = 1/2 - cos2t
That is not correct.
2013-05-13 08:41:54
Shouldn't this be
Sin^2(t) = 1/2 - cos2t
L{sin^2(t)} = 1/2s - s/(s^2+4)
2012-10-26 15:38:46
Welcome to the forum. That is not correct.
2012-10-26 09:18:26
to be clear I am talking about (sin(t))^2
2012-10-26 09:17:22
I get that the laplace transform of sin^2t = -(sin^2te^-st)/s + 2/s^3+4s evaluated from 0...infinity.
when I evaluate the limit from 0..infinity I get that the transform to equal 0. Did I evaluate that right?
2011-08-10 22:52:21
Wolfram is going to put it in terms of the Dirac delta function, which I think is a step function.
There are diiferent definitions for a fourier transform, that page will partly explain that.
2011-08-10 22:50:01
I just tried the Fourier transform of f(x) = 1 and got
... is that correct? I'll check on Wolfram.
2011-08-10 22:43:50
There are FFT and DFT's. Wikipedia can be a horror story at times. To me that is exactly what that is saying.
I have never seen their notation. They are using small f with a cap ( borrowed from statistics)
2011-08-10 22:43:40
Also what is the notation for a Fourier transform? For Laplace it's a fancy L, is it a fancy F for Fourier transforms?
2011-08-10 22:38:37
Sorry if it's a bother but do you know how to compute Fourier transforms? I'm trying to learn how, I've seen the Wikipedia article and saw this:
for every real number ξ.
Does this mean that if I put in some function of x, such as sin(x), I'll get f(ξ) where ξ is a real number? Not sure, I'll post my working in a second. Sorry if I sound stupid...
2011-08-10 22:21:32
Zeroing the LHS will leave you with just the Laplace term. That should be your answer.
I was just asking to see what you thought about it. Since t approaches infinity it will drown out s no matter how small as long as s > 0.
That is nice, spotting the Laplace Transform there.
In addition zetafunc, welcome to the forum! Why not consider becoming a member here?
2011-08-10 22:20:29
I wasn't given an interval for s, sorry. I am just waiting for my GCSE results (I turn 16 in August) and I'm just trying to extend my knowledge of calculus. I want to learn about
Fourier transforms too hopefully but I need some practice with that.
2011-08-10 22:18:46
What I meant is that I get
Then evaluate RHS at 0 and subtract that from the evaluation at infinity. I got 0... so then we have
Therefore assuming s > 0.
I also tried the Laplace transformation for sin^a(t) and got .
2011-08-10 22:11:21
I am glad to help but we are not done yet.
The LHS has to be evaluated at infinity and then you subtract the evaluation of it at 0. The RHS is untouched.
How are you getting 0 for the LHS?
If s is very small then the LHS is not zero. Were you given some interval for s?
2011-08-10 22:03:07
Thanks for the response again and confirming that my IBP was correct -- I think I get it now - subtract the rightmost term from both sides to get y(s) - 2/(s^3 + 4s), evaluate
the RHS at 0 and infinity to get 0 (0 - 0 = 0), then add 2/(s^3 + 4s) to both sides to get the completed Laplace transform? Is that correct? Phew, thanks for your help.
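For anyone who wants to verify the final result symbolically, a computer algebra system can compute the transform directly; here is a sketch using Python's sympy (assuming it is installed -- the printed form may differ slightly between sympy versions, but it is equivalent to 2/(s*(s^2+4))):

```python
# Laplace transform of sin(t)^2: expect 2/(s*(s**2 + 4)),
# i.e. 1/(2s) - s/(2*(s**2 + 4)).
import sympy as sp

t, s = sp.symbols('t s', positive=True)
F = sp.laplace_transform(sp.sin(t)**2, t, s, noconds=True)
print(sp.simplify(F))                                             # 2/(s*(s**2 + 4))
print(sp.simplify(F - (sp.Rational(1, 2)/s - s/(2*(s**2 + 4)))))  # 0
```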
|
{"url":"http://www.mathisfunforum.com/post.php?tid=16166&qid=237166","timestamp":"2014-04-20T06:17:12Z","content_type":null,"content_length":"25174","record_id":"<urn:uuid:18f46bd1-95c0-44f1-a16c-95946f8f4440>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00353-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Publications by Jungsang Kim.
Papers Published
1. Kim, J. and Benson, O. and Kan, H. and Yamamoto, Y., Single-photon turnstile device, Nature, vol. 397 no. 6719 (1999), pp. 500 - 503 [17295] .
(last updated on 2007/11/03)
Quantum-mechanical interference between indistinguishable quantum particles profoundly affects their arrival time and counting statistics. Photons from a thermal source tend to arrive together
(bunching) and their counting distribution is broader than the classical Poisson limit. Electrons from a thermal source, on the other hand, tend to arrive separately (anti-bunching) and their
counting distribution is narrower than the classical Poisson limit. Manipulation of quantum-statistical properties of photons with various non-classical sources is at the heart of quantum optics:
features normally characteristic of fermions - such as anti-bunching, sub-poissonian and squeezing (sub-shot-noise) behaviours - have now been demonstrated. A single-photon turnstile device was
proposed to realize an effect similar to conductance quantization. Only one electron can occupy a single state owing to the Pauli exclusion principle and, for an electron waveguide that supports
only one propagating transverse mode, this leads to the quantization of electrical conductance: the conductance of each propagating mode is then given by G[Q] = e^2/h (where e is the charge of
the electron and h is Planck's constant; ref. 9). Here we report experimental progress towards generation of a similar flow of single photons with a well regulated time interval.
|
{"url":"http://fds.duke.edu/db/pratt/ECE/faculty/jungsang/publications/71382","timestamp":"2014-04-19T22:38:35Z","content_type":null,"content_length":"15951","record_id":"<urn:uuid:5efb41b2-c355-4b72-a2c8-47ade3a60577>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00005-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Pietro Abate homepage
Recently I've discovered a subtle consequence of the fact that the order in which dependencies are specified in debian actually matters. While re-factoring the code of dose3, I changed the order in which dependencies are considered by our sat solver (of edos fame). I witnessed a twofold performance loss just by randomizing how variables were presented to our sat solver. This highlights, on one hand, how our solver is strongly dependent on the structure of the problem and, on the other hand, the standard practice of debian maintainers to assign an implicit priority in disjunctive dependencies, where the first alternative is the most preferred package (and maybe the most tested, at least dependency-wise).
The basic idea of distcheck is to encode the dependencies information contained in a Packages file in CNF format and then to feed them to a sat solver to find out if a package has broken dependencies
or if its dependencies are such that no matter what, it would be impossible to install this package on a user machine.
Conflicts are encoded as binary clauses. So if package A conflicts with package B, I add a constraint that says "not (A and B)", that is, A and B cannot be considered together. The dependencies encoding associates to each disjunction of the depends field a clause that says "A implies B". If a package foo depends on A, B | C, D, I'll add the clauses "foo implies A", "foo implies B or C" and "foo implies D". This encoding is pretty standard and it is easy to understand.
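To make the encoding concrete, here is a tiny self-contained sketch (Python, with a brute-force search standing in for the real sat solver; the package names and the toy repository are made up for illustration):

```python
# Toy installability check: dependencies become implications, conflicts become
# binary "not both" constraints; brute force looks for a satisfying assignment.
import itertools

packages = ["foo", "A", "B", "C", "D"]
depends = {"foo": [["A"], ["B", "C"]]}   # foo depends on A and on (B | C)
conflicts = [("B", "D")]                  # B conflicts with D

def installable(target):
    for bits in itertools.product([False, True], repeat=len(packages)):
        installed = dict(zip(packages, bits))
        if not installed[target]:
            continue
        deps_ok = all(
            not installed[pkg] or any(installed[alt] for alt in disj)
            for pkg, disjs in depends.items() for disj in disjs
        )
        confl_ok = all(not (installed[a] and installed[b]) for a, b in conflicts)
        if deps_ok and confl_ok:
            return True
    return False

print(installable("foo"))   # True: e.g. {foo, A, C} satisfies every clause
```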
The problem is how the sat solver will search for a solution to the question "Is it possible to install package foo in an empty repository?". The solver we use is very efficient and can easily deal with 100K packages or more, but in general it is not very good at dealing with random CNF instances. The reason edos-debcheck is so efficient lies in the way it exploits the structure of the sat problem.
The goal of a sat solver is to find a model (that is a variable assignment list) that is compatible with the given set of constraints. So if my encoding of the debian repository is a set of
constraints R, the installability problem boils down to adding an additional constraint to R imposing that the variable associated with the package foo must be true, and then asking the solver to find a
model to make this possible. This installation, in sat terms, would be just an array of variables that must be true in order to satisfy the given set of constraints.
If you look at the logic problem as a truth table, the idea is to find a row in this table. This is the solution of your problem. Brute force of course is not an option and modern sat solvers use a
number of strategies and heuristics to guide the search in the most intelligent way possible. Some of them try to learn from previous attempts; some of them, when they are lost, pick a random variable to proceed.
If we consider problems that have a lot of structure, award-winning sat solvers do not backtrack very much. By exploiting the structure of the problem, their algorithms can considerably
narrow down the search only to those variables that are really important to find a solution.
All this long introduction was to talk about the solver that is currently used in edos-debcheck and distcheck (that is a rewrite of the edos-debcheck).
So why does dependency order matter? Even though the policy does not specify any order for the dependencies of a package, it's common practice to write disjunctive dependencies with the most probable and tested alternative first and all other, more esoteric choices later. Moreover, real packages are considered *before* virtual packages. Since every developer seems to be doing the same, some kind of structure might be hidden in the order in which dependencies are specified.
Part of the efficiency of the solver used in our tools is actually due to the fact that its search strategy is strongly dependent on the order in which literals are specified in each clause. Saying that package foo depends on A and B is "different" than saying it depends on B and A, even if the two are semantically equivalent.
In my tests, I found about a twofold performance loss if the order of literals is either randomized or inverted. This is clearly a problem specific to our solver; other solvers might not be susceptible to such small structural changes. Sat competitions often employ some form of obfuscation of well-known problems with well-known structure, precisely to make it useless to encode a search strategy that exploits the specific structure of a problem.
Since here we're not trying to win a sat competition, but to provide tools that solve a specific problem, we are of course very happy to exploit this structure.
|
{"url":"http://mancoosi.org/~abate/comment/reply/395","timestamp":"2014-04-20T00:43:43Z","content_type":null,"content_length":"20790","record_id":"<urn:uuid:41aee961-a72c-49f7-a01b-0686b2409fd8>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00655-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Hedwig Village, TX SAT Math Tutor
Find a Hedwig Village, TX SAT Math Tutor
...Here is a list of the subjects I've have taught or am capable of teaching: Math- Pre-Algebra High school, Linear and College Algebra, Geometry, Pre-Calculus, Trigonometry, AP Calculus AB/BC
Calculus 1, 2 and 3 Science- AP and College Biology, AP (B or C) a...
38 Subjects: including SAT math, reading, physics, chemistry
...The skills I’ve acquired while teaching also enable me to better understand the dynamics of the student/teacher/parent relationship. I can use my expertise to not only help with improving math
comprehension, but with anything else associated with math class. I have been teaching/tutoring Algebra ...
6 Subjects: including SAT math, geometry, algebra 1, precalculus
...While tutoring with Spring Branch, I worked with 12th grade students who had failed Algebra II and who were in summer school. At Kumon I worked with elementary age students going over
fractions and getting comfortable working with three digit and large numbers. I strongly believe that it is imp...
15 Subjects: including SAT math, calculus, geometry, algebra 1
...While doing tutoring on the side in a local high school, I discovered my love for teaching. I then received my Texas teaching certification in science for grades 8-12. I taught high school
chemistry and biology for 4 years, but I have experience tutoring other subjects in math and science.
13 Subjects: including SAT math, chemistry, biology, algebra 1
...References are available. I have dynamic experience with all phases of P_Alg whether trying to bring student's grade up or trying to advance them in honors classes. My method advances
familiarity at all levels.
36 Subjects: including SAT math, English, physics, chemistry
|
{"url":"http://www.purplemath.com/hedwig_village_tx_sat_math_tutors.php","timestamp":"2014-04-21T15:02:04Z","content_type":null,"content_length":"24546","record_id":"<urn:uuid:5603ffbf-5510-49b1-aad5-3d685c030d72>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00478-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Locating interesting parts of an image
While developing a website that displays user-uploaded images in fixed-size regions, we encountered some interesting problems, which led to some questions:
• Should we resize the image to fit the fixed size region ?
• Should we crop the image ?
• Maybe we should mix the two ?
After some tests we found that resizing the image is not the optimal solution, since the fixed-size region is quite small. Then I tested the second and the third solutions by cropping/resizing the images manually. The tests were convincing!
Needless to say, we are not going to crop all the images manually! We should automate this task. The no-brainer solution is to crop the image by focusing on the center. This naïve solution gave amazingly acceptable results, but sometimes it fails badly if the interesting region lies outside the cropped zone. Our algorithm should be able to locate the interesting part of the image automatically.
Information Theoretic solution
The interestingness of an image is subjective and may vary from one person to another. One way to measure interestingness quantitatively is to measure the information contained in that image. I thought that an interesting region of an image is a zone that carries a lot of information. We need to be able to calculate the information at each individual pixel of the image to find out the information of a particular region.
One could calculate the information of the pixel based on information theoretic definition:
I = -log(p(i)), where i is the pixel, I is the self information and p(i) is the probability of occurrence of the pixel i in our image.
The probability of occurrence is simply the frequency of that particular pixel. An efficient way to calculate the probability is using a normalized histogram. The histogram stores the frequency of an
intensity measure of the pixel. In our case we convert the RGB image to the CIELAB color space. A color space invented by the CIE (Commission internationale de l'éclairage). It describes all the
colors visible to the human eye.
The problem is reduced to maximizing the total information in a region R(h,w). Or equivalently to find a region of width w and height h with max information. In order to find that region we compute
the information per line (i.e. the sum of the info of the pixels in that line) and the information per column.
For this reason you need only to know how to find the maximum sum subsequence for the lines and the columns. Fortunately this is a well known problem that can be solved in linear time O(n).
I used the GO programming language to implement this algorithm:
// The histogram of the image
var H [256]float64
// The self-information of lines/columns
Hx := make([]float64, width)
Hy := make([]float64, height)

// Compute the histogram of the image
for x := 0; x < width; x++ {
    for y := 0; y < height; y++ {
        // I is the intensity of the pixel (an index in 0..255),
        // measured in the CIELAB color space
        I := CalulatePixelIntensity(m, x, y)
        // Increment the histogram
        H[I] += 1
    }
}

// Normalize the histogram so that H[I] is a probability
sum := 0.0
for y := 0; y < 256; y++ {
    sum += H[y]
}
for y := 0; y < 256; y++ {
    H[y] = H[y] / sum
}

// Compute the self-information for lines/columns
for x := 0; x < width; x++ {
    for y := 0; y < height; y++ {
        I := CalulatePixelIntensity(m, x, y)
        // H[I] = the probability of the pixel
        // -log(p) = the information in the pixel
        Hx[x] += -math.Log(H[I])
        Hy[y] += -math.Log(H[I])
    }
}

// The x-coordinate of the optimal cropping region
Tx, preservedInfoX := FindMaxSubInterval(Hx, rectX)
// The y-coordinate of the optimal cropping region
Ty, preservedInfoY := FindMaxSubInterval(Hy, rectY)
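FindMaxSubInterval itself is not shown in the post; since the crop window has a fixed width, a plain sliding-window sum is enough. A minimal sketch (my own guess at the helper, not the original code):

// FindMaxSubInterval returns the start index of the length-w window whose
// values sum to the maximum, together with that sum. It runs in O(n):
// slide the window and update the sum incrementally.
func FindMaxSubInterval(values []float64, w int) (int, float64) {
    if w > len(values) {
        w = len(values)
    }
    sum := 0.0
    for i := 0; i < w; i++ {
        sum += values[i]
    }
    best, bestStart := sum, 0
    for i := w; i < len(values); i++ {
        sum += values[i] - values[i-w] // add the new line/column, drop the old one
        if sum > best {
            best, bestStart = sum, i-w+1
        }
    }
    return bestStart, best
}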
This solution worked quite well most of the time; occasionally, on some images, it fails to capture the interesting part. This problem shows up especially when the image has gradient colors. The gradient background in those cases is not important but has a lot of unique colors, hence its information measure is quite high.
The following images were rendered using the described algorithm: a heat map representing the self-information at each pixel. "Hot" pixels contain a lot of information.
Minimizing The gradient
Another possible measure of the interestingness of a pixel is simply the gradient norm at that pixel. The gradient is a measure of the horizontal and vertical change of the intensity (we use here the
CIELAB lightness as the intensity) of the pixel. If the neighboring pixels have the same intensity the gradient norm is zero. We define the interestingness of an image as the sum of the gradients of
each individual pixel. The formula of the gradient is:
$\nabla I = \frac{\partial I}{\partial x} e_x + \frac{\partial I}{\partial y} e_y$
We then want the crop window that maximizes the function (equivalently, that minimizes the gradient energy cropped away):
$\phi(u,v) = \sum_{i=u}^{u+w} \sum_{j=v}^{v+h} \|\nabla I(i,j)\|^2$
We can use the previous algorithm to maximize this function in a similar fashion (lines/columns). The following images were rendered using the described algorithm: a heat map representing the gradient magnitude at each pixel. "Hot" pixels are the most interesting ones.
State of the art visual saliency
Visual salience (or visual saliency) is the distinct subjective perceptual quality which makes some items in the world stand out from their neighbors and immediately grab our attention. [1]
There are several methods used to estimate the saliency of an image. For example, some researchers proposed an algorithm based on biological models [2]. This is an interesting approach that combines neuroscience and computer vision. We have chosen to implement another interesting approach, which is based on determining the local contrast of a pixel at different scales [3].
At each pixel we compute the local contrast of a region and its neighborhood at different scales. The local contrast of a pixel and a surrounding region is the Euclidean distance between the pixel and the mean of the region. The pixel values lie in the CIELAB space. Using this method we obtain several saliency values at different scales. Those values are summed (pixel-wise) to obtain the final saliency value.
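A rough sketch of that multi-scale contrast, as I understand it (my own simplification of [3]: a precomputed CIELAB image, square neighbourhoods, arbitrary radii, no integral-image optimization; it assumes the math import and Go 1.21's built-in min/max):

// saliency sums, for each pixel, the Euclidean Lab-distance between the
// pixel and the mean of square neighbourhoods of a few different radii.
func saliency(lab [][][3]float64) [][]float64 {
    h, w := len(lab), len(lab[0])
    radii := []int{2, 8, 32} // the "scales", picked arbitrarily for this sketch
    out := make([][]float64, h)
    for y := range out {
        out[y] = make([]float64, w)
    }
    for y := 0; y < h; y++ {
        for x := 0; x < w; x++ {
            for _, r := range radii {
                var mean [3]float64
                n := 0.0
                for j := max(0, y-r); j <= min(h-1, y+r); j++ {
                    for i := max(0, x-r); i <= min(w-1, x+r); i++ {
                        for k := 0; k < 3; k++ {
                            mean[k] += lab[j][i][k]
                        }
                        n++
                    }
                }
                d := 0.0
                for k := 0; k < 3; k++ {
                    mean[k] /= n
                    diff := lab[y][x][k] - mean[k]
                    d += diff * diff
                }
                out[y][x] += math.Sqrt(d)
            }
        }
    }
    return out
}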
[1] http://www.scholarpedia.org/article/Visual_salience
[2] L. Itti, C. Koch, & E. Niebur (1998). A Model of Saliency-Based Visual Attention for Rapid Scene Analysis. IEEE Transactions on Pattern Analysis and Machine Intelligence 20(11):1254-1259.
[3] Salient Region Detection and Segmentation, Radhakrishna Achanta, Francisco Estrada, Patricia Wils, and Sabine Süsstrunk
Submitted on 26 December 2012 - 17:34
Unfortunately, most Tunisian companies and engineers do not have this culture of sharing expertise; IP-TECH is clearly an exception!
Submitted on 2 January 2013 - 16:54
It seems the pictures are chosen in a way that puts objects in the foreground with the background defocused, so such images don't support your findings and any other random images will invalidate your algorithm.
|
{"url":"http://www.iptech-group.com/node/492","timestamp":"2014-04-21T14:41:47Z","content_type":null,"content_length":"43876","record_id":"<urn:uuid:36492b6f-ff70-40f1-b41e-f5cdc26224a3>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00245-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Wolfram Demonstrations Project
Automatically Selecting Histogram Bins
Choosing the bin sizes for a histogram can be surprisingly tricky. If there are too few bins, it is hard to pick out the underlying distribution of the data. If there are too many bins, the result
is either unpleasant to look at because the bins have deteriorated into sticks or noise in the data is not sufficiently averaged out, also making it hard to see the underlying distribution. Here we
present several methods for selecting (uniform-width) bins for a histogram.
Fixed number of bins: always use the same number of bins, regardless of the data.
Sturges: the number of bins grows with the log of the size of the data.
Scott: the bin width is proportional to the standard deviation of the values divided by the cube root of the size of the data.
Freedman–Diaconis: the bin width is proportional to the interquartile range of the data divided by the cube root of the size of the data.
Wand: the bin width is chosen to minimize the mean integrated squared error.
By default, bin widths are rounded to "nice" values, minimizing some of the differences between the various binning methods.
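For reference, the first few rules are easy to state in code. A sketch in Go (my own, using the usual textbook constants; Wand's plug-in rule is omitted because it needs an estimate of the density's derivative):

// Sturges: the number of bins grows with log2 of the sample size.
func sturgesBins(n int) int {
    return int(math.Ceil(math.Log2(float64(n)))) + 1
}

// Scott: bin width proportional to 3.49 * stddev * n^(-1/3).
func scottWidth(sd float64, n int) float64 {
    return 3.49 * sd / math.Cbrt(float64(n))
}

// Freedman–Diaconis: bin width proportional to 2 * IQR * n^(-1/3).
// sorted must be in ascending order; the quartiles here are crude
// index-based approximations.
func freedmanDiaconisWidth(sorted []float64) float64 {
    n := len(sorted)
    iqr := sorted[(3*n)/4] - sorted[n/4]
    return 2 * iqr / math.Cbrt(float64(n))
}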
D. Freedman and P. Diaconis, "On the Histogram as a Density Estimator: L₂ Theory," Zeitschrift für Wahrscheinlichkeitstheorie und verwandte Gebiete, 57, 1981, pp. 453–476.
D. W. Scott, "On Optimal and Data-Based Histograms," Biometrika, 66(3), 1979, pp. 605–610.
H. A. Sturges, "The Choice of a Class Interval," Journal of the American Statistical Association, 21(153), 1926, pp. 65–66.
M. P. Wand, "Data-Based Choice of Histogram Bin Width," The American Statistician, 51(1), 1997, pp. 59–64.
|
{"url":"http://demonstrations.wolfram.com/AutomaticallySelectingHistogramBins/","timestamp":"2014-04-18T20:44:06Z","content_type":null,"content_length":"44675","record_id":"<urn:uuid:c9231137-bbc0-40d7-bb28-2d745ea1763e>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00219-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Robert Osburn
School of Mathematical Sciences
University College Dublin
Ph.D., 2001, Louisiana State University
Contact Information:
School of Mathematical Sciences
University College Dublin
Dublin 4
Office: Engineering Building, 201D
Phone: +353 1 716 2548
Fax: +353 1 716 1196
robert dot osburn at ucd dot ie
Seminar and Teaching:
Algebra and Number Theory Seminar
Past seminars: here and here
Second Semester 2014:
MST 10020 (Calculus II)
Math 20310 (Groups, Rings and Fields)
WSPCA 12 hour endurance run, Arklow, November 22, 2013
Analytic Theory of Automorphic Forms, Chennai, December 9-13, 2013
Séminaire de théorie des nombres de I'IMJ-PRG, Paris, March 24, 2014
Applications of Automorphic Forms in Number Theory and Combinatorics, LSU, April 12-15, 2014
NUI Galway MSAM Seminar, May 8, 2014
The 28th Automorphic Forms Workshop, Moab, May 12-16, 2014
Warwick Number Theory Seminar, June 2, 2014
Building Bridges: 2nd EU/US Workshop on Automorphic Forms and Related Topics, Bristol, July 7-11, 2014.
IHÉS, January 5 to March 5, 2015
MPIM, March 6 to May 31, 2015
research statement
teaching statement
student comments
30. L. Long, R. Osburn and H. Swisher, "On a conjecture of Kimoto and Wakayama",
29. R. Osburn, B. Sahu and A. Straub, "Supercongruences for sporadic sequences",
28. G.E. Andrews, S.H. Chan, B. Kim and R. Osburn, "The first positive rank and crank moments for overpartitions",
27. J. Lovejoy and R. Osburn, "On two 10th order mock theta identities",
The Ramanujan Journal, accepted for publication.
26. J. Lovejoy and R. Osburn, "Mixed mock modular q-series",
Journal of the Indian Mathematical Society, New Series. Special Issue, (2013) 45-61.
25. J. Lovejoy and R. Osburn, "q-hypergeometric double sums as mock theta functions",
Pacific Journal of Mathematics, 264, no. 1, (2013) 151-162.
24. R. Osburn and B. Sahu, "A supercongruence for generalized Domb numbers",
Functiones et Approximatio Commentarii Mathematici, 48, no. 1, (2013) 29-36.
23. J. Lovejoy and R. Osburn, "The Bailey chain and mock theta functions",
Advances in Mathematics, 238 (2013) 442-458.
22. D. Brink, P. Moree, and R. Osburn, "Principal forms X^2 + nY^2 representing many integers",
Abhandlungen aus dem Mathematischen Seminar der Universität Hamburg, 81, no. 2, (2011) 129-139.
21. R. Osburn and B. Sahu, "Supercongruences for Apéry-like numbers",
Advances in Applied Mathematics, 47, no. 3, (2011) 631-638.
20. R. Osburn and B. Sahu, "Congruences via modular forms",
Proceedings of the American Mathematical Society, 139, no. 7, (2011) 2375-2381.
19. J. Lovejoy and R. Osburn, "Quadratic forms and four partition functions modulo 3",
Integers, 11, no. 1, (2011) 47-53.
18. H. Chan, A. Kontogeorgis, C. Krattenthaler and R. Osburn, "Supercongruences satisfied by coefficients of ₂F₁ hypergeometric series",
Annales des sciences mathématiques du Québec, 34, no. 1, (2010) 25-36.
17. J. Lovejoy and R. Osburn, "M₂-rank differences for overpartitions",
Acta Arithmetica, 144, no. 2, (2010) 193-212.
16. K. Bringmann, J. Lovejoy, and R. Osburn, "Automorphic properties of generating functions for generalized rank moments and Durfee symbols",
International Mathematics Research Notices, no. 2, (2010) 238-260.
15. K. Bringmann, J. Lovejoy, and R. Osburn, "Rank and crank moments for overpartitions",
Journal of Number Theory, 129, (2009) 1758-1772.
14. J. Lovejoy and R. Osburn, "M₂-rank differences for partitions without repeated odd parts",
Journal de Théorie des Nombres de Bordeaux, 21, no. 2, (2009) 313-334.
13. R. Osburn and C. Schneider, "Gaussian hypergeometric series and supercongruences",
Mathematics of Computation, 78, no. 265, (2009) 275-292.
12. D. McCarthy and R. Osburn, "A p-adic analogue of a formula of Ramanujan",
Archiv der Mathematik, 91, no. 6, (2008) 492-504.
11. J. Lovejoy and R. Osburn, "Rank differences for overpartitions",
The Quarterly Journal of Mathematics, 59, no. 2, (2008) 257-273.
10. R. Osburn, "Congruences for traces of singular moduli",
The Ramanujan Journal, 14, no. 3, (2007) 411-419.
9. R. Murty and R. Osburn, "Representations of integers by certain positive definite binary quadratic forms",
The Ramanujan Journal, 14, no. 3, (2007) 351-359.
8. S. de Wannemacker, T. Laffey, and R. Osburn, "On a conjecture of Wilf",
Journal of Combinatorial Theory, Series A, 114, no. 7, (2007) 1332-1349.
7. P. Moree and R. Osburn, "Two-dimensional lattices with few distances",
L'Enseignement Mathématique, 52, (2006) 361-380.
6. S. K. K. Choi, A. Kumchev, and R. Osburn, "On sums of three squares",
International Journal of Number Theory, 1, no. 2, (2005) 161-173.
5. R. Osburn, "A remark on a conjecture of Borwein and Choi",
Proceedings of the American Mathematical Society, 133, no. 10, (2005) 2903-2909.
4. R. Osburn, "Vanishing of eigenspaces and cyclotomic fields",
International Mathematics Research Notices, no. 20, (2005) 1195-1202.
3. R. Osburn, "A note on 4-rank densities",
Canadian Mathematical Bulletin, 47, (2004) 431-438.
2. R. Osburn and B. Murray, "Tame kernels and further 4-rank densities",
Journal of Number Theory, 98, (2003) 390-406.
1. R. Osburn, "Densities of 4-ranks of K_2(O)",
Acta Arithmetica, 102, (2002) 45-54.
Graduate Students:
Stefan de Wannemacker, PhD 2006, UCD
(co-advisor: David Lewis).
Senior Researcher
iTec, K.U. Leuven, Kortrijk, Belgium
Dermot McCarthy, PhD, June 2010, UCD
Ad Astra Research Scholarship.
Visiting Assistant Professor
Texas A&M, starting September 1, 2010.
Tenure-track assistant professor
Texas Tech, starting September 2013.
Conor Manning, PhD student, UCD
starting September 2013.
Postdoctoral Fellows:
Marilyn Reece Myers, Dec. 2007-Dec. 2009, UCD
IRCSET Postdoctoral Fellowship.
Tenure-track Assistant Professor
Calvin College, Michigan
Brundaban Sahu, Nov. 2008-Oct. 2010, UCD
SFI Postdoctoral Fellowship.
Tenure-track Assistant Professor
National Institute of Science Education and Research
Bhubaneswar, India
David Brink, August 2009-August 2010, UCD
Danish Research Fellowship
Conferences Organized:
27th Automorphic Forms Workshop, UCD, March 11-14, 2013
Algebraic K-theory and Arithmetic, Banach Center, Bedlewo, Poland, July 22-28, 2012
Prospects in q-series and modular forms, UCD, July 14-16, 2010
UCD/Nottingham Number Theory Day, UCD, November 1, 2008
AMS Special Session on Number Theory and applications in other fields, LSU, March 28-30, 2008
AMS Special Session on Algebraic Number Theory and K-theory, LSU, March 14-16, 2003
SFI Conference and Workshop Programme, March 2013
"27th Automorphic Forms Workshop"
Number Theory Foundation, March 2013
"27th Automorphic Forms Workshop"
UCD Seed Funding Scheme, May 2012
"The Bailey chain and mock theta functions"
SFI Investigator Award Programme, March 2012
"The modularity of q-series"
€282,776 (requested budget, "administratively withdrawn by SFI")*
*For more on this controversy in Ireland, see this, this, this,
this, this, or this.
SFI Research Frontiers Programme, April 2008 - October 2011
"Arithmetic properties of coefficients of modular forms"
Number Theory Foundation, January 2010
"Prospects in q-series and modular forms"
IRCSET France-Ireland Ulysses Programme, December 2007
"Ramanujan-type congruences for overpartitions and overpartition pairs"
UCD Seed Funding Scheme, December 2007
"Dyson's rank and overpartitions"
IRCSET Postdoctoral Fellowship Scheme (for Marilyn Myers), June 2007 - June 2009
"Variations of Kummer's conjecture"
UCD Ad Astra Scholarship Programme (for Dermot McCarthy), September 2006 - August 2010
"Number theory, modular forms and combinatorics"
UCD Seed Funding Scheme, August 2006
"Two-dimensional lattices with few distances"
Math Links:
Algebra and Number Theory Group at UCD
UCD School of Mathematical Sciences
Number Theory Web
Math ArXiv
Non-math Links:
David Lynch
Dublin City Gallery
Irish Writers Centre
Science Gallery
Dublin Marathon
Prague Marathon
Wicklow Gaol Break
Connemara Marathon
Dingle Marathon
Waterford Half-Marathon
Tralee Marathon
Strawberry Half-Marathon
Run Ireland
Polar Night Half-Marathon
Wicklow Way Trail and Ultra
Dungarvan Brewing Company
The Beer Nut
|
{"url":"http://maths.ucd.ie/~osburn/","timestamp":"2014-04-18T02:58:37Z","content_type":null,"content_length":"32719","record_id":"<urn:uuid:998489fd-bb6d-4846-a5bd-f654205b13b3>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00165-ip-10-147-4-33.ec2.internal.warc.gz"}
|
In a parallelogram,
opposite sides are parallel, and equal; opposite angles are equal, and the diagonals (lines inside that intersect) bisect each other.
When you know the lengths of the diagonals, their halves form two sides of a triangle together with one of the sides of the parallelogram.
Use the theorems that (i) when two lines intersect each other, the vertically opposite angles are equal and (ii) the sum of the angles around the intersection point is equal to 360 degrees. This way, all the angles can be found.
Now, use the Cosine Theorem
a² = b² + c² - 2bc·cos A
(where A, B, C are the three angles of a triangle and a, b, c are the three sides opposite to angles A, B, C respectively)
for knowing the third side of the triangle, which forms a side of the parallelogram. Following this method, the adjacent side too can be found! Since opposite sides of a parallelogram are equal, we
know all the four sides!
I know this reply is long, and may not be of much help; that's because some Mathematical problems are difficult to explain without a diagram!
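A quick numeric sketch of the same steps (the diagonal lengths and the angle between them below are made up just for illustration):

// Sides of a parallelogram from its diagonals p and q and the angle theta
// (in radians) between them. The diagonals bisect each other, so each side
// is the third side of a triangle with sides p/2 and q/2 and included angle
// theta (or pi - theta), by the Cosine Theorem.
func sidesFromDiagonals(p, q, theta float64) (float64, float64) {
    hp, hq := p/2, q/2
    a := math.Sqrt(hp*hp + hq*hq - 2*hp*hq*math.Cos(theta))
    b := math.Sqrt(hp*hp + hq*hq - 2*hp*hq*math.Cos(math.Pi-theta))
    return a, b
}

// Example: diagonals 10 and 14 meeting at 60 degrees give sides of about
// 6.24 and 10.44.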
Character is who you are when no one is looking.
|
{"url":"http://www.mathisfunforum.com/viewtopic.php?pid=13566","timestamp":"2014-04-20T00:50:46Z","content_type":null,"content_length":"13977","record_id":"<urn:uuid:998164ee-aeec-4db5-a8ed-75bfd46599ce>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00662-ip-10-147-4-33.ec2.internal.warc.gz"}
|
LIGHT – REFLECTION & REFRACTION Ppt Presentation
LIGHT – REFLECTION & REFRACTION
H. SATHISH KUMAR, CLASS X, RICE MMS.
LAWS OF REFLECTION:
PLANE MIRROR:
SPHERICAL MIRROR:
CONCAVE MIRROR:
CONVEX MIRROR:
Key Terminologies: 1. Pole: The centre of the reflecting surface of a spherical mirror is called the pole. It is represented by 'P'. 2. Centre of Curvature: The centre of the sphere is called the
centre of curvature. The spherical mirror is part of a big sphere. The centre of curvature lies outside the mirror. In case of concave mirror it lies in front of the reflective surface. In case of
convex mirror it lies behind the reflective surface. 3. Radius of Curvature: The radius of the sphere is called the radius of curvature. It is represented by 'R'. 4. Principal Axis: The line joining
the pole and the center of curvature is called the principal axis. 5. Principal Focus: In mirrors with small aperture (diameter) roughly half of the radius of curvature is equal to the focus point.
At focus point all the light coming from infinity converge, in case of concave mirrors. The light seem to diverge from f, in case of convex mirrors.
Image Formed by Concave Mirror (S here stands for the distance between object and mirror):
1. When S < F, the image is: Virtual, Upright, Magnified (larger).
2. When S = F, the image is formed at infinity. In this case the reflected light rays are parallel and do not meet each other, so no image is formed or, more properly, the image is formed at infinity.
3. When F < S < 2F, the image is: Real, Inverted (vertically), Magnified (larger).
4. When S = 2F, the image is: Real, Inverted (vertically), Same size.
5. When S > 2F, the image is: Real, Inverted (vertically), Diminished (smaller).
USE OF CONCAVE MIRRORS:
IMAGE FORMED BY CONVEX MIRROR:
USE OF CONVEX MIRRORS:
Sign Convention for Reflection by Spherical Mirrors:
While dealing with the reflection of light by spherical mirrors, we shall follow a set of sign conventions called the New Cartesian Sign
Convention. In this convention, the pole (P) of the mirror is taken as the origin. The principal axis of the mirror is taken as the x-axis (X’X) of the coordinate system. The conventions are as
follows: ( i ) The object is always placed to the left of the mirror. This implies that the light from the object falls on the mirror from the left-hand side. (ii) All distances parallel to the
principal axis are measured from the pole of the mirror. (iii) All the distances measured to the right of the origin (along + x-axis) are taken as positive while those measured to the left of the
origin (along – x-axis) are taken as negative. (iv) Distances measured perpendicular to and above the principal axis (along + y-axis) are taken as positive. (v) Distances measured perpendicular to and below the principal axis (along – y-axis) are taken as negative.
Mirror Formula and Magnification:
In a spherical mirror, the distance of the object from its pole is called the object distance (u). The distance of the image from the pole of the mirror is called the
image distance (v). You already know that the distance of the principal focus from the pole is called the focal length (f). There is a relationship between these three quantities given by the mirror
formula which is expressed as 1/v + 1/u = 1/f
MAGNIFICATION: Magnification produced by a spherical mirror gives the relative extent to which the image of an object is magnified with respect to the object size. It is expressed as the ratio of the
height of the image to the height of the object. It is usually represented by the letter m. If h is the height of the object and h′ is the height of the image, then the magnification m produced by a spherical mirror is given by m = Height of Image (h′) / Height of Object (h) = h′/h. The magnification m is also related to the object distance (u) and the image distance (v). It can be expressed as:
Magnification (m) = h'/h = -v/u
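A small sketch putting the two formulas together (sign convention as above; the numbers in the example comment are made up):

// Image distance and magnification for a spherical mirror, from
// 1/v + 1/u = 1/f and m = -v/u. With the New Cartesian convention,
// u and f are negative for a real object in front of a concave mirror.
func mirrorImage(u, f float64) (v, m float64) {
    v = 1 / (1/f - 1/u)
    m = -v / u
    return v, m
}

// Example: an object 30 cm in front of a concave mirror of focal length
// 10 cm (u = -30, f = -10) gives v = -15 (a real image 15 cm in front of
// the mirror) and m = -0.5 (inverted, half the size).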
REFRACTION OF LIGHT:
LAWS OF REFRACTION OF LIGHT: ( i ) The incident ray, the refracted ray and the normal to the interface of two transparent media at the point of incidence, all lie in the same plane. (ii) The ratio of
sine of angle of incidence to the sine of angle of refraction is a constant, for the light of a given colour and for the given pair of media. This law is also known as Snell’s law of refraction. If i
is the angle of incidence and r is the angle of refraction, then, sin i / sin r = constant This constant value is called the refractive index of the second medium with respect to the first.
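As a quick numerical illustration of Snell's law (the refractive indices in the example comment are just typical textbook values):

// Angle of refraction from Snell's law, n1*sin(i) = n2*sin(r).
// Angles are in radians; math.Asin returns NaN when total internal
// reflection occurs (i.e. when n1*sin(i)/n2 > 1).
func refractionAngle(n1, n2, i float64) float64 {
    return math.Asin(n1 / n2 * math.Sin(i))
}

// Example: light entering water (n about 1.33) from air (n about 1.00) at
// 45 degrees bends towards the normal: r = asin(sin 45° / 1.33), about 32 degrees.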
Refractive Index of Some Media:
REFRACTION BY SPHERICAL LENSES: A transparent material bound by two surfaces, of which one or both surfaces are spherical, forms a lens. This means that a lens is bound by at least one spherical
surface. In such lenses, the other surface would be plane. A lens may have two spherical surfaces, bulging outwards. Such a lens is called a double convex lens. It is simply called a convex lens. It
is thicker at the middle as compared to the edges. Convex lens converges light rays, hence convex lenses are called converging lenses. Similarly, a double concave lens is bounded by two spherical
surfaces, curved inwards. It is thicker at the edges than at the middle. Such lenses diverge light rays as shown and are called diverging lenses. A double concave lens is simply called a concave
lens. A lens, either a convex lens or a concave lens, has two spherical surfaces. Each of these surfaces forms a part of a sphere. The centres of these spheres are called the centres of curvature of the lens.
IMAGE FORMED BY CONCAVE LENS:
IMAGE FORMED BY CONVEX LENS:
Lens Formula and Magnification:
POWER OF LENS: The degree of convergence or divergence of light rays achieved by a lens is expressed in terms of its power. The power of a lens is defined as the reciprocal of its focal length. It is
represented by the letter P. The power P of a lens of focal length f is given by: P =1/f
POWER OF LENS: The SI unit of power of a lens is ‘dioptre’. It is denoted by the letter D. If f is expressed in metres, then, power is expressed in dioptres. Thus, 1 dioptre is the power of a lens
whose focal length is 1 metre: 1 D = 1 m⁻¹. Power of a convex lens is positive and that of a concave lens is negative. Opticians prescribe corrective lenses indicating their powers. Let us say the lens
prescribed has power equal to + 2.0 D. This means the lens prescribed is convex. The focal length of the lens is + 0.50 m. Similarly, a lens of power – 2.5 D has a focal length of – 0.40 m. The lens
is concave. Many optical instruments consist of a number of lenses. They are combined to increase the magnification and sharpness of the image. The net power (P) of the lenses placed in contact is
given by the algebraic sum of the individual powers P1, P2, P3, … as P = P1 + P2 + P3 +…
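The dioptre arithmetic in code form (a trivial sketch; focal lengths are in metres):

// Power of a single lens, and of thin lenses placed in contact.
func power(focalLength float64) float64 { return 1 / focalLength }

func combinedPower(powers ...float64) float64 {
    total := 0.0
    for _, p := range powers {
        total += p // P = P1 + P2 + P3 + ...
    }
    return total
}

// power(0.50) = +2.0 D (convex), power(-0.40) = -2.5 D (concave); a +2.0 D
// lens in contact with a -0.5 D lens behaves like a single +1.5 D lens,
// i.e. one of focal length 1/1.5, about 0.67 m.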
Your comments are most welcome.
Contact us on: sathishkumar.kumar642@gmail.com
|
{"url":"http://www.authorstream.com/Presentation/hsk.-1605266-light-reflection-refraction/","timestamp":"2014-04-17T21:23:36Z","content_type":null,"content_length":"135781","record_id":"<urn:uuid:440d8209-cae8-4c4f-b5e3-1bade8325ffb>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00303-ip-10-147-4-33.ec2.internal.warc.gz"}
|
$1.3 billion Thirty Meter Telescope (TMT) gets approved
I once heard a senior astronomer claim that anything beyond 20 meters is technologically not feasible. But I hope they'll overcome all the difficulties, because this is just too exciting!
Hence, "senior" - Just like the Victorian minds (18-19th century) told Einstein that he was full of crap. Although this is merely one example. In my opinion, life will perpetually continue to advance
technology into more complex and astounding measures.
There are only the limits we set and make real, the universe knows of no such limits.
|
{"url":"http://www.physicsforums.com/showthread.php?t=685699","timestamp":"2014-04-19T19:48:14Z","content_type":null,"content_length":"26606","record_id":"<urn:uuid:d894b0ce-9a18-4dce-b82a-cb0e06e4261d>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00247-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Universal property of blowups
Can anyone help me with a proof of the following claim (see for example the book Higher algebraic geometry of Olivier Debarre, proof of Proposition 1.43, page 31):
Let X be a complex manifold, and let W be a complex submanifold of X, with codimension $\geq 2$. Let $\pi :Y \rightarrow X$ be a bimeromorphic morphism, which is not an isomorphism, with the
exceptional set $E$ so that $\pi (E)=W$ and $E$ is irreducible. Then there is a factorization $Y\rightarrow B_W(X)\rightarrow X$, where $B_W(X)$ is the blowup of $X$ at $W$.
There is also a statement for a universal property of blowup in Griffiths - Harris "Principles of algebraic geometry" but without proof as well. I also would like to know a proof of that fact.
Thank you very much.
ag.algebraic-geometry cv.complex-variables
Is there a particular part where you are stuck? – Yemon Choi Aug 23 '11 at 1:32
I understand the statements of those universal properties, and they seem plausible, but can not figure out how to prove them. By the way, the universal property of blowup in Griffiths-Harris that I
mention is in the part "Blowup of manifold" in their book. – anonymous Aug 23 '11 at 3:17
See the reference to Hartshorne mentioned by roy smith below and note that when Debarre applies the universal property of blowups he has reduced to the case that $Y$ is smooth. – ulrich Aug 23 '11
at 6:45
Although I accepted the answer by Roy Smith, I would like to see whether there is an analytic proof of this universal property. One reason is to know that the map $Y\rightarrow B_W(Y)$ is
holomorphic. Also, I have question of whether this universal property imply the universal property stated in Griffiths-Harris: If the fiber of any point on W is the projective space, then $Y=B_W(X)
$. – anonymous Aug 23 '11 at 13:23
3 Answers
I am kind of a rookie at this, but what if Y is a small resolution of a double point on a threefold X, with one dimensional exceptional locus. Then it seems false to expect a
factorization through the blowup since the curve exceptional locus could not map onto the two dimensional exceptional locus of the blowup of X. what am i missing?
As pointed out by Anton, I am missing that the target space X is smooth. In that case the proof of Zariski's main theorem in Mumford Cx Proj Vars, p.49 shows that the exceptional locus
E contains a cartier divisor through every point. Hence when E is irreducible it is cartier. Then the universal property of blowing up in Hartshorne implies the factorization exists in
the algebraic category. The proof probably also works in the analytic category.
As to an analytic argument for the G-H universal property, it seems the hypothesis there is that you have a holomorphic map of manifolds Y-->X, the inverse image E of a certain
submanifold W is also a submanifold of codimension one, and the fiber over every point of W is a projective space of dimension equal to the difference in the dimensions of the two
submanifolds. Is that right?
Then you want a factorization through the blowup of X along W and you want it to be an isomorphism. Assume for simplicity the manifolds are compact. Then think what a blow up means. You
are replacing the target submanifold W by its projectivized normal bundle in X. Hence the natural factorization would be via the derivative of the original map f. In fact an examination
of Hartshorne's factorization will show it is simply the derivative.
Your hypotheses imply that at each point of E, the kernel of the derivative equals the tangent space to the fiber. Hence the tangent space to E surjects onto the tangent space space to
W, and the image of the full tangent space of Y is of dimension one larger than that of W. Thus the derivative defines an injection from the normal line bundle of E, into the normal
bundle of W in X. That is precisely an induced map from E to the exceptional locus of the blowup of X along W.
This map is holomorphic on E, since it is the derivative of a holomorphic map. One needs only check that it glues in as a continuous, hence holomorphic, extension of f, and this needs
be done only in the normal direction to E, where it is essentially the definition of a derivative as a limit of difference quotients.
To see that the factorization is an isomorphism, it suffices to check it is bijective, which need only be checked on E. There we have on each fiber of f, a surjective holomorphic map of
projective spaces of the same dimension. By the dimension counts above a regular value of this map is also regular for the factorization, hence each holomorphic map of projective spaces
also has degree one, hence is an isomorphism.
Here is the universal property in a nutshell:
1) Blowing up an ideal is a functor.
I.e. if f:Y-->X is a map, and (g1,...,gr) is an ideal of functions on X, then f lifts to a morphism from the blowup of Y along the ideal (g1∘f,...,gr∘f), to the blowup of X along (g1,...,gr).
Hence, if the pull back of an ideal is a principal ideal, i.e.defines a cartier divisor, then the original map factors through the blowup of the target space.
This universal property is essentially trivial. I.e. if you get away from all the proj's and gr's the blowup of the subvariety of X defined by {g0,...,gn} is just the closure in XxP^n
of the graph of the meromorphic function g:X-->P^n, defined by the {gj}.
hence if f:Y-->X is holomorphic then so is (fx1):YxP^n-->XxP^n, and it takes the closure of the graph of (gof) into the closure of the graph of g. Moreover if n = 0, nothing happens.
Done. [The gr, proj stuff comes in to show this is all independent of choice of generators of the ideals.]
the reference below to Fischer seems excellent. the access i have through Amazon only gives the special case of a one point blowup, but by implication, that case is crucial. We can see
this is true by observing that our definition of the local blowup agrees with the pull back by the map g, of the blowup of the point 0 in C^(n+1).
I.e. if we blowup the point 0 on C^n+1, by taking the closure in C^(n+1)xP^n of the graph of the map defined by the coordinate functions on C^(n+1), and then map X into C^n+1 by the map
g, the induced map of XxP^(n) into C^(n+1)xP^n pulls back the blowup of 0 in C^(n+1) to the blowup of the zero scheme of g in X.
I was going to say the same thing, but figured "complex manifold" probably means that it has to be smooth. Is the exceptional locus over a smooth point always a divisor? – Anton
Geraschenko Aug 23 '11 at 3:51
ok let me be more careful. from page 104 of the 1st ed. of shafarevich and page 48 of mumford's cx proj varieties, it seems the inverse image of W is of codimension one. from page 164
of hartshorne it seems the factorization exists iff the pullback ideal of W to Y is an invertible sheaf of ideals. now it seems that the sheaf of ideals of an irreducible subvariety
of codimension one is invertible on a smooth variety, so I need still to know that Y is smooth along the pullback of W. ??? – roy smith Aug 23 '11 at 4:27
In the proof of Debarre mentioned in the question, $Y$ is smooth when the universal property of blowups is used, so I think what you say is correct. – ulrich Aug 23 '11 at 6:45
Thank you. I forgot to say that Y is smooth. The reference in Hartshorne solves my question. However, do you have an analytic proof of this fact? Also, does this universal property
implies the following universal property in Griffiths-Harris: Let X, Y, E, W and $\pi$ be are as above, and the fiber at any point on W is the projective space. Then Y is the blowup
$B_W(X)$. – anonymous Aug 23 '11 at 13:01
By analytic method, I mean some kind of Hartogs principle, which I guess can be used to prove the result. @Roy Smith: Under the conditions in my question, then in the book of Debarre
it is shown that E is a hypersurface. – anonymous Aug 23 '11 at 13:07
The book by Fischer, "Complex analytic geometry" has a nice treatment of the blow-up and its universal property in chapter 4. The proofs are given in the analytic category.
Thank you Sylvain. It seems interesting. I will look at the book. Do you know if there is an electronic file of that book? The book in my school's library has been checked out. – anonymous
Aug 25 '11 at 3:10
You can email me (you'll find my address on math.utoronto.ca) – Sylvain Bonnot Aug 25 '11 at 5:18
Thank you for your offer. I asked for interlibrary loan, and am waiting for the hardcopy. – anonymous Aug 26 '11 at 12:49
I got the book. It is very good. It is not very thick, it is balance between not going very detail into proofs but it gives a lot of helpful examples. It is kind of helps people
strengthen understanding after having some knowledge in the subject. – anonymous Sep 2 '11 at 3:45
Thank you Roy Smith for your excellent answer.
So as I understand, in the smooth case, the universal property goes as follows:
Let $f:Y\rightarrow X$ be a surjective holomorphic map between complex manifold, let W be a submanifold of X so that its inverse image is a submanifold E of Y. Assume moreover that both E
and W are irreducible. We can work locally, so can assume that both E and W have good tubular neighborhoods NE and NW, which are isomorphic to their normal bundles. Now the derivative of f
will give a lifting map
$F: B_EY\rightarrow B_WX$. In case $E$ is a hypersurface then $B_EY=Y$ and we obtain the universal property referred to by Debarre.
Now for the universal property in Griffiths-Harris:
Now if f is moreover biholomrphic from $Y-E$ to $X-W$ then the map $F$ must be surjective? (If instead we just ask that $f$ is finite to one on $Y-E$, do we still have this property? And in
general, if we just ask $f$ to be surjective, is the map F surjective?)
Now if moreover E is a hypersurface, then F maps each fiber on E (which is a $P^k$) to each fiber of the exceptional divisor of the blowup $B_WX$ (which is also a $P^k$). This map $F$
restricts to $P^k$ is holomorphic surjective to itself, and thus must be finite- to-one (since $P^k$ is Kahler). Thus the map $F$ restricted to $E$ is finite-to-one. Then since for points
in a neighborhood of $E$ but not on $E$, the degree of $F$ is $1$, it must be so on $E$ as well; hence an isomorphism.
surjectivity follows from properness. – roy smith Aug 25 '11 at 13:33
i.e. assume the original map is proper, as occurs for compact manifolds, or any restriction of a map of compact manifolds over any subset of the target. then the induced map of blowups is
also proper, hence closed. Thus if the original map was surjective, the induced map is dense and closed, hence surjective. – roy smith Aug 26 '11 at 2:33
Thanks. I see. In the case under the conditions of G-H universal property, the properness of the map f is easy to check, or am I wrong? – anonymous Aug 26 '11 at 12:51
i have not checked their hypotheses, and that is why i assumed compactness to make properness automatic, but properness is usually assumed. – roy smith Aug 28 '11 at 3:00
look at mumford's lemma 3.11 page 44 of his cx alg geom 1, and see if you can crank up the proof to a proof that a map with compact inverse image of a compact set is proper on some nbhd,
but i am not sure it is true. – roy smith Aug 28 '11 at 3:04
|
{"url":"http://mathoverflow.net/questions/73454/universal-property-of-blowups","timestamp":"2014-04-17T15:42:22Z","content_type":null,"content_length":"83114","record_id":"<urn:uuid:3f9c555e-a07d-4a74-9831-18b897e9eeb3>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00311-ip-10-147-4-33.ec2.internal.warc.gz"}
|
More on MBH98 Figure 7
There’s an interesting knock-on effect from the collapse of MBH98 Figure 7 (see here and here).
We’ve spent a lot of time arguing about RE statistics versus r2 statistics. Now think about this dispute in the context of Figure 7. Mann "verifies" his reconstruction by claiming that it has a high
RE statistic. In his case, this is calculated based on a 1902-1980 calibration period and a 1854-1901 verification period. The solar coefficients in Figure 7 were an implicit further vindication in
the sense that the correlations of the Mann index to solar were shown to be positive with a particularly high correlation in the 19th century, so that this knit tightly to the verification periods.
But when you re-examine Mann’s solar coefficients, shown again below, in a 100-year window, a period sized more closely to the calibration and verification periods, the
19th century solar coefficient collapses and we have a negative correlation between solar and the Mann index. If there’s a strong negative correlation between solar and the Mann index in the
verification period, then maybe there’s something wrong with the Mann index in the verification period. I don’t view this as an incidental problem. A process of statistical “verification” is at the
heart of Mann’s methodology and a figure showing negative correlations would have called that verification process into question.
There’s another interesting point when one re-examines the solar forcing graphic on the right. I’ve marked the average post-1950 solar level and the average pre-1900 solar level. Levitus and Hansen
have been getting excited about a build-up of 0.2 wm-2 in the oceans going on for many years and attributed this to CO2. Multiply this by 4 to deal with sphere factors and you need 0.8 wm-2 radiance
equivalent. Looks to me like 0.8 wm-2 is there with plenty to spare.
I know that there are lots of issues and much else. Here I’m really just reacting to information published by Mann in Nature and used to draw conclusions about forcing. I haven’t re-read Levitus or
Hansen to see how they attribute the 0.2 wm-2 build-up to CO2 rather than solar, but simply looking at the forcing data used by Mann, I would have thought that it would be extremely difficult to
exclude high late 20th century solar leading to a build-up in the oceans as a driving mechanism in late 20th century warmth. In a sense, the build-up in the ocean is more favorable to this view as
opposed to less favorable.
None of this "matters" to Figure 7. It’s toast regardless. I’m just musing about solar because it’s a blog and the solar correlations are on the table.
UC adds
With window length of 201 I got bit-true emulation of Fig 7 correlations. Code in here. Seems to be OLS with everything standardized (is there a name for this?), not partial correlations. These can
quite easily be larger than one.
The code includes a non-Monte Carlo way to compute the ’90%, 95%, 99% significance levels’. The scaling part still needs help from CA statisticians, but I suspect that the MBH98 statement ‘The associated confidence limits are approximately constant between sliding 200-year windows’ is there to add some HS-ness to the CO2 in the bottom panel:
(larger image)
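For readers who want to reproduce the flavour of the calculation, here is a sketch of a sliding-window "standardized OLS" coefficient (not UC's actual code, which is linked above; with a single regressor it reduces to the windowed Pearson correlation):

// slidingStdOLS returns, for each full window of length w, the OLS slope of
// y on x after both series are standardized within the window. With one
// regressor this is just the Pearson correlation over the window; with
// several standardized regressors the coefficients can indeed exceed one.
func slidingStdOLS(x, y []float64, w int) []float64 {
    out := make([]float64, 0, len(x)-w+1)
    for s := 0; s+w <= len(x); s++ {
        var sx, sy, sxx, syy, sxy float64
        for i := s; i < s+w; i++ {
            sx += x[i]
            sy += y[i]
            sxx += x[i] * x[i]
            syy += y[i] * y[i]
            sxy += x[i] * y[i]
        }
        n := float64(w)
        cov := sxy - sx*sy/n
        vx, vy := sxx-sx*sx/n, syy-sy*sy/n
        out = append(out, cov/math.Sqrt(vx*vy))
    }
    return out
}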
This might be an outdated topic (nostalgia isn’t what it used to be!), but in this kind of statistical attribution exercise I see a large gap between the attributions (natural factors cannot explain the recent warming!) and the ability to predict the future:
181 Comments
1. Leaving aside the issue of Mann’s wordings and claims wrt a 100 year window, the 200 year window makes more sense than a 100 year one since it includes more data.
2. Leaving aside the issues of Mann’s incorrect methodology, cherry-picking, false claims of robustness, misleading descriptions, bad data, abysmal statistical control, his paragraphing and
punctuation are excellent.
[sarcasm OFF]
The data is bad, TCO. It doesn’t matter how wide the windows are.
3. Sure. But that is a seperate criticism. If you read Chefen (or McK, can’t remember), he makes the specific point that this is an error of methodology. Don’t do that weaving thing that Steve does
sometimes when I want to examine the criticisms one by one.
4. Changing from a 200 to 100 year window should not cause the correlations to collapse like that though. If the correlations are real then the change should be minor, at least until you get down to
the different signal regime below 10 years-ish. That the correlations are so dependent on window choice indicates that either the correlation method is wrong or the signals are in fact not
You can say “it adds more data”, but that is neither a positive nor negative act without a good justification. After all, why not just correlate the *entire* data set then?
Here’s a question, what sort of signal will correlate randomly well or badly with a given other signal depending on the choice of window length?
5. #1, #3, TCO…
the simple linear logic of your statement in #1 then makes an argument for a 500 year window, or a 1000 year window, or 10,000…simply because it “contains more data” does NOT mean it is optimal.
robustness may fall apart at 10 or 20 year windows (decadal fluctuations, anyone?), but 100 years should be sufficient if the data are any good.
6. This is posted for Chefen who’s having trouble with the filter:
Changing from a 200 to 100 year window should not cause the correlations to collapse like that though. If the correlations are real then the change should be minor, at least until you get down to
the different signal regime below 10 years-ish. That the correlations are so dependent on window choice indicates that either the correlation method is wrong or the signals are in fact not
You can say "it adds more data", but that is neither a positive nor negative act without a good justification. After all, why not just correlate the *entire* data set then?
Here’s a question, what sort of signal will correlate randomly well or badly with a given other signal depending on the choice of window length?
7. TCO, I don’t think that the main issue is even whether a 100-year window or 200-year window is better. That’s something that properly informed readers could have decided. But he can’t say that
the conclusions are “robust” to the window selection, if they aren’t. Maybe people would have looked at why there was a different relationship in different windows. Imagine doing this stuff in a
It’s the same as verification statisitcs. Wahl and Ammann argue now that verification r2 is not a good measure for low-frequency reconstructions. I think their argument is a pile of junk and any
real statistician would laugh at it. But they’re entitled to argue it. Tell the readers the bad news in the first place and let them decide. Don’t withhold the information.
8. The “window” is a moving average filter, right?
9. #7: Steve, read the first clause of my comment. I’m amazed how people here think that I am excusing something, when I didn’t say that I was excusing it. Even without the caveat, my point stands
as an item for discussion on its own merits. But WITH IT, there is a need to remind me of what I’ve already noted?
I make a point saying, “leaving aside A, B has such and such properties”, and the response is “but wait, A, A, A!”. How can you have a sophisticated discussion? RC does this to an extreme extent.
But people here do it too. It’s wrong. It shows a tendancy to view discussion in terms of advocacy rather then curiosity and truth seeking.
10. Chefen, Mark and Dave: Thanks for responding to the issue raised. Although I want to think a bit more about this. I still think the 200 years is better then the 100 (and 500 even better, but I’m
open to being convinced of the opposite, for instance there are issues of seeing a change rapidly enough…actually this whole thing is very similar to the m,l, windows in VS06 (the real VS06, not
what Steve calls VS06). (For instance look at all the criticism we’ve had of the too short verification/calibration periods–it’s amusing to see someone making the opposite point now.) As far as
how much of a difference in window causes a difference in result and what that tells us, I think we need to think about different shapes of curves and to have some quantititive reason for saying
that 200 to 100 needs to have similar behavior, but 200 to 10 different is ok–how do we better express this?
Steve and John: boo, hiss.
11. TCO Nobody cares which window you think is better, the MBH claim was that window size does not matter (“robust”), but it does matter. Therefore the claim does not hold.
12. I disagree TCO, because it is a “moving window”, I don’t think that the size of the window is nearly as important as the congruence between results from different window sizes. Correct me if I am
wrong, but you are viewing the data in the window over certain time periods, and any significant variance between the results in, say, the 200 and 100 year windows does require explanation. The
variance we apparently see suggests very strongly that either different processes are at work over the different time scales, or that no real correlation exists.
I know you are not supporting the 200 year window exclusively, but I do think you are de-emphasising the startling divergence of the results.
Now where is Tim Lambert when you want him, I hope he observes this issue and adds it to the list of errors (being polite) made by Mann et al. His post on this is going to get seriously large !
13. Ed: I’m just struggling to understand this intuitively. Is there a window size that makes more than another one? How much should 100-200 year windows agree, numerically? Is there perhaps a better
way to describe the fragility of the conclusions than by susceptibility to changed window size? Some other metric or test? Intuitively, I think that Mann’s attribution efforts are very weak and
sketchy and rest on some tissue paper. Yet, also intutitively, I support larger data sets for correlation studies than smaller ones. The only case in which this does not make sense is where one
wants to really narrow into correlation as a function of time (really you want the instantaneous correlation in that case and the less averaging the better). But it’s a trade-off between accuracy
of looking at the system changing wrt time, versus having more data. I’m just thinking here…
I guess in a perfect multiple correlation model, we would expect to have an equation that describes the behavior over time and that the correlation coefficients would not themselves be functions
of time. That’s more what I’m used to seeing in response mapping in manufacturing. Might need a couple squared terms. But there are some good DOS programs that let you play with making a good
model that minimizes remaining variance and you can play with how many parameters to use and how many df are left. Really, this is a very classic six sigma/DOE/sociology type problem. I wonder
what one of them would make of the correlation studies here.
14. Why in the hell do you want Tim Lambert’s opinion. He doesn’t even understand this thread!
15. RE: #14. Well it is entertaining to watch him put his foot in it. :-)
16. Re: 13. I would vary it from 1 to 300 and plot the change in correlations. How do you know 100 is not cherry picking?
17. TCO, regardless of exactly what features you’re trying to zoom in on the fact remains that the correlations should be fairly stable at similar time scales *until* you run into a feature of one of
the signals. All the features of sigificance lie around the 10-year mark, there is the 11-year sunspot cycle in the solar data and the 10-year-ish switch over in power law behaviour of the
temperature data. There isn’t really anything else of significance, particularly on the 100+ year level. So while the correlations may alter somewhat in going from a 100 to 200 to 300 year
correlation, they definitely shouldn’t dramatically switch sign if the signals *truly* are correlated that well at 200 years. If they are actually uncorrelated then the behaviour would be
expected to vary arbitrarily with window size, without any further knowledge.
You can reasonably suggest that the true dependence is non-linear and that is stuffing things up. But then where does the paper’s claim of “robustness” with window size come from?
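To make that concrete, one way to check the “robustness” claim directly is to sweep the window length and watch how the moving correlations behave. A minimal R sketch (assuming yearly vectors nh and co2 on a common grid; the names are placeholders, not Mann’s actual files):

moving_cor <- function(y, x, width) {
  n <- length(y)
  out <- rep(NA_real_, n)
  for (i in width:n) {
    idx <- (i - width + 1):i
    out[i] <- cor(y[idx], x[idx])   # correlation over the trailing window
  }
  out
}
widths <- c(50, 100, 150, 200, 300)
curves <- sapply(widths, function(w) moving_cor(nh, co2, w))
matplot(curves, type = "l", lty = 1, xlab = "year index",
        ylab = "moving correlation, NH temp vs CO2")
legend("bottomright", legend = paste(widths, "yr"), col = 1:5, lty = 1)

If the correlations really were robust to window size, the curves should track each other; wild swings or sign changes between 100 and 200 years are exactly the problem being described.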
18. TCO, as Chefen’s post makes clear, the “optimal” window size depends on the theory you are trying to test. I don’t think it is something that can be decided on statistical grounds alone. For
example, if you theorized that the “true” relationship was linear with constant coefficients you would want as long a data set as possible to get the best fix on the values of the coefficients.
On the other hand, if you thought the coefficients varied over time intervals as short as 10 years you would want narrow windows to pick up that variation. If the theory implies the relationship
is non-linear, you shouldn’t be estimating linear models in the first place, and so on. The point here, however, is that these results were presented as “robust” evidence that (1) the temperature
reconstruction makes sense and (2) that the relative importance of CO2 and solar, in particular, for explaining that temperature time series varied over time in a way that supported the AGW
hypothesis. We have seen from the earlier analysis that the purported temperature reconstruction cannot be supported as such — the supposed evidence that it tracked temperature well in the
verification period does not hold up, the index is dominated by bristlecones although the evidence is that they are not a good temperature proxy, there were simple mistakes in data collection and
collation that affected the reconstruction etc. Now we also find that the supposed final piece of “robust evidence” linking the reconstruction to CO2 and solar fluctuations is also bogus. One can
get many alternative correlations that do not fit any known theory depending on how the statistical analysis is done, and no reason is given for the optimality of the 200-year window. Just a
bogus claim that the window size does not matter — one gets the same “comforting” results for smaller window sizes too.
19. The thing is this looks like the tip of the attribution iceberg. Just taking one major attribution study, Crowley, Science 2000, “Causes of Climate Change over the Past 2000 Years” where CO2 is
attributed by removing other effects, you find Mann’s reconstruction. So much for it ‘doesn’t matter’.
It’s been argued that greater variance in the past 1000 years leads to higher CO2 sensitivity. But that is only if the variance is attributed to CO2. If the variance is attributed to Solar, then that explains more of the present variance, and CO2 less.
20. The end of Peter Hartley’s post (#18) is such a nice summary of these results that I want to give it a little more emphasis. So anyone struggling to understand the meaning of these latest results, and their relation to Steve’s earlier findings, I suggest you start from this:
The point here, however, is that these results were presented as “robust” evidence that (1) the temperature reconstruction makes sense and (2) that the relative importance of CO2 and solar,
in particular, for explaining that temperature time series varied over time in a way that supported the AGW hypothesis. We have seen from the earlier analysis that the purported temperature reconstruction cannot be supported as such -- the supposed evidence that it tracked temperature well in the verification period does not hold up, the index is dominated by bristlecones
although the evidence is that they are not a good temperature proxy, there were simple mistakes in data collection and collation that affected the reconstruction etc. Now we also find that
the supposed final piece of “robust evidence” linking the reconstruction to CO2 and solar fluctuations is also bogus. One can get many alternative correlations that do not fit any known theory depending on how the statistical analysis is done, and no reason is given for the optimality of the 200-year window. Just a bogus claim that the window size does not matter -- one
gets the same “comforting” results for smaller window sizes too.
21. #19. Take a look at my previous note on Hegerl et al 2003, one of the cornerstones. I haven’t explored this in detail, but noticed that it was hard to reconcile the claimed explained variance
with the illustrated residuals. We discussed the confidence interval calculations in Hegerl et al Nature 2006 recently.
Now that we’ve diagnosed one attribution study, the decoding of the next one should be simpler. The trick is always that you have to look for elementary methods under inflated language.
We didn’t really unpack Hegerl et al 2006 yet – it would be worth checking out.
22. #21. Yes the Hegerl study started me on a few regressions. I had some interesting initial observations. Like very high solar correlation and low to non-existent GHG correlations with their
reconstructions. One hurdle to replication is expertise with these complex models they use even though they are probably governed by elementary assumptions and relationships. Another thing you
have to wade through is the (mis)representation. In Hegerl, because the pdf’s are skewed, there is a huge difference between the climate sensitivity as determined by the mode, the mean and the 95th percentile. The mode for climate sensitivity, i.e. the most likely value for 2XCO2, is very low, but not much is made of it.
23. Any comments on the implications of Michael Mann and Kerry Emanuel’s upcoming article in the American Geophysical Union’s EOS? Science Daily has an article about it at http://
Some key quotes include:
Anthropogenic factors are likely responsible for long-term trends in tropical Atlantic warmth and tropical cyclone activity
To determine the contributions of sea surface warming, the AMO and any other factors to increased hurricane activity, the researchers used a statistical method that allows them to subtract
the effect of variables they know have influence to see what is left.
When Mann and Emanuel use both global temperature trends and the enhanced regional cooling impact of the pollutants, they are able to explain the observed trends in both tropical Atlantic
temperatures and hurricane numbers, without any need to invoke the role of a natural oscillation such as the AMO.
Absent the mitigating cooling trend, tropical sea surface temperatures are rising. If the AMO, a regional effect, is not contributing significantly to the increase, then the increase must come from general global warming, which most researchers attribute to human actions.
24. #22. The skew was one reason for plotting the volcanics shown above. That distribution hardly meets regression normality assumptions.
25. #24. Yes, and the quantification of warmth attributed to GHGs arises from a small remainder after subtracting out much larger quantities, over a relatively short time frame.
26. Re #23: With all eyes here focused on hockey sticks, occasionally in the distance a leviathan is seen to breach. Pay no attention.
27. In addition to the CO2-solar-temperature discussion here, Mann referred to Gerber e.a. in reaction to my remarks on RC about the range of variation (0.2-0.8 K) in the different millennium reconstructions. The Gerber climate-carbon model calculates changes in CO2 in reaction to temperature changes.
From different Antarctic ice cores, the change in CO2 level between MWP and LIA was some 10 ppmv. This should correspond to a variation in temperature over the same time span of ~0.8-1 K. The
“standard” model Gerber used (2.5 K for 2xCO2, low solar forcing) underestimates the temperature variation over the last century, which points to either too low climate sensitivity (in general)
or too high forcing for aerosols (according to the authors). Or too low sensitivity and/or forcing for solar (IMHO).
The MBH99 reconstruction fits well with the temperature trend of the standard model. But… all model runs (with low, standard, high climate sensitivity) show (very) low CO2 changes. Other
experiments where solar forcing is increased (to 2.6 times minimum, that is the range of different estimates from solar reconstructions), do fit the CO2 change quite well (8.0-10.6 ppmv) for
standard and high climate sensitivity of the model. But that also implies that the change in temperature between MWP and LIA (according to the model) is 0.74-1.0 K and the Esper (and later
reconstructions: Moberg, Huang) fit the model results quite well, but MBH98 (and Jones ’98) trends show (too) low variation…
As there was some variation in CO2 levels between different ice cores, there is a range of possible good results. The range was chosen by the authors to include every result at a 4 sigma level
(12 ppmv) of CO2. That excludes an experiment with 5 times solar, but includes MBH99 (marginally!).
Further remarks:
The Taylor Dome ice core was not included; its CO2 change over the MWP-LIA is ~9 ppmv, which should reduce the sigma level further. That makes MBH99 and Jones98 outliers…
Of course everything within the constraints of the model and the accuracy of ice core CO2 data (stomata data show much larger CO2 variations…).
Final note: all comments on the Gerber e.a. paper were deleted as “nonsense” by the RC moderator (whoever that may be)…
Final note: all comments on the Gerber e.a. paper were deleted as “nonsense” by the RC moderator (whoever that may be)…
This leaves what left?
29. Re #23
Jason, according to the article:
Because of prevailing winds and air currents, pollutants from North American and Europe move into the area above the tropical Atlantic. The impact is greatest during the late summer when the
reflection of sunlight by these pollutants is greatest, exactly at the time of highest hurricane activity.
Have a look at the IPCC graphs d) and h) for the areas of direct and indirect effect of sulfate aerosols from Europe and North America. The area of hurricane birth is marginally affected by North
American aerosols and not by the European aerosols at all, due to prevailing Southwestern winds…
Further, as also Doug Hoyt pointed out before, the places with the highest ocean warming (not only the surface) are the places with the highest insolation, which increased by 2 W/m2 in only 15
years, due to changes in cloud cover. That points to natural (solar induced?) variations which are an order of magnitude larger than any changes caused by GHGs in the same period. See the comment
of Wielicki and Chen and following pages…
30. Re #27,
Comments about the (global) temperature – ice core CO2 feedback were nicely discussed by Raypierre.
31. Steve,
MBH98 has been carved up piece by piece here. In fact, there have been so many comments regarding its errors that it’s nigh on impossible to get a feel for the number and extent of the problems.
Ed’s comments to the effect that Tim Lambert’s post on the list of errors must be getting substantial got me thinking – wouldn’t it be nice to have a post with a two-column table: in one column, place the MBH claim, and in the other, place a link to the Climate Audit post(s) refuting it. That would be a really powerful way to summarise the body of work on this blog relating to that paper.
You could call it, “Is MBH98 Dead Yet?”, or maybe, “Come back here and take what’s coming to you! I’ll bite your legs off!”
32. #31. I guess you’re referring to the famous Monty Python scene where the Black Knight has all his arms and legs cut off and still keeps threatening. David Pannell wrote about a year ago and suggested the simile for the then status of this engagement. I guess it’s in the eye of the beholder because I recall William Connolley or some such using the same comparison but reversing the roles.
There’s a lot of things I should do.
33. Yep, that’s the one. Did William Connolley really do that? Honest? ;)
34. 1. If the model shows correlation coefficients of factors varying as a function of time, then we’re not doing a good job of specifying the forcings in the system.
2. I still want some numerical feel for why and how much 100-200 may not differ and the reverse for 200-10. If you can’t give one, then maybe the differences are not significant.
3. Hartley: but A! but A!
35. TCO, as regards 1. and 2. you need to look back at comments #17 and #18. The answers are right there and can’t be made much simpler. You need to consider what exactly correlation coefficients are
and think about how you’d expect them to behave given the properties of the data you have.
36. Here is a question for those with the technical ability, software, time and inclination (of which I have one of the four – I’ll leave you to guess which).
This problem appears to be asking for the application of the Johansen technique.
We have multiple time series and we have an unknown number of cointegrating relationships. Johansen would help us discover which ones matter, if any, and with what sort of causality. At the same
time it was specifically designed to be used with non-stationary time series so will easily stride over the problems of finding correlated relationships in stochastically trended series such as these.
Steve, I am sure Ross would be familiar with Johansen and could do some analysis.
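For anyone who wants to try it, the Johansen test is available in the R package urca; a hedged sketch only, with dat a hypothetical data frame of the levels series (not a worked analysis):

library(urca)
dat <- data.frame(nh = nh, co2 = co2, solar = solar)  # assumed yearly series, placeholder names
jo  <- ca.jo(dat, type = "trace", ecdet = "const", K = 2)
summary(jo)   # compare trace statistics to critical values to count cointegrating relations

The point of the exercise would be to see whether any cointegrating relationship survives at all once the stochastic trends are handled properly.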
37. Chefen: Disagree. Answer to my question 2 is non-numeric. Give me something better. Something more Kelvin-like. More Box, Hunter, Hunter like. On 1, think about it for a second. If I have a jackhammer feeding into a harmonic oscillator of springs and dampers, does that change the relationship F=ma over time? No. If you understand the system, you can give a relationship that describes the behavior. Sure the solar cycle as a FORCING may be changing over time. But if you have modeled system behavior properly, that just feeds through your equation into output behavior.
38. If CO2 is more highly correlated with a 200 year window than a 100 year window, then that could suggest that the climatic response to CO2 is slow, that is, there is a time lag. However, the slower the response of the climate to CO2, the less information there is to estimate that correlation. The rise in the correlation for the 200 year window towards the end of the millennium is, I think, problematic because it suggests that the response to CO2 might not be linear with the log of the CO2. However, this is all conjecture; Mann’s results have been shown to be weak.
39. Something confuses me about the CO2 graph. It says it is plotting the log of the CO2 but the curve looks exponential. If CO2 is rising exponentially, shouldn’t the log of it give a straight line? If that is the log of CO2 then CO2 must be rising by e^((t-to)^4) or so. Has anyone verified this data?
40. re #39: John C, it says indeed in the figure that it’s “log CO2″, but it is actually the direct CO2 concentration (ppm), which is used (without logging) for the correlation calculations as Steve notes here. Don’t get confused so easily, the whole MBH98 is full of these “small” differences between what is said and what is actually done. :)
41. I don’t know if anyone is interested in this but with all the criticisms of Mann’s work I have been thinking about whether I could devise a more robust method of determining the correlation coefficients. My method starts by assuming that the signal is entirely noise. I use the estimate of the noise to whiten the regression problem (a.k.a. weighted least squares).
I then get an estimate of the regression coefficients and an estimate of the covariance of those regression coefficients. I then try to estimate the autocorrelation of the noise by assuming the error in the residual is independent of the error in the temperature due to a random error in the regression parameters. With this assumption I estimate the correlation.
This gives me a new estimate of the error autocorrelation which I can use in the next iteration of the algorithm. This gave me the correlation coefficients:
0.3522 //co2
0.2262 //solar
-0.0901 //volcanic
With the covariance matrix
0.0431 -0.0247 0.0008
-0.0247 0.0363 -0.0002
0.0008 -0.0002 0.0045
where the correlation coefficients describe how a one standard deviation change in the parameter changes the temperature, in standard deviations, with the standard deviations measured over the instrumental records.
The algorithm took about 3 iterations to converge. Using the normal distribution the 99% confidence intervals are given by:
0.2413 0.4631 //co2
0.1326 0.3197 //solar
-0.1017 -0.0784 //volcanic
Which is about:
0.1732 standard deviations
I am not sure how this method relates to standard statistical techniques. If anyone knows of a standard technique that is similar or better please tell me. One point of concern is that I was not able to use the MATLAB function xcorr to calculate the autocorrelation as it resulted in a non positive definite estimate of the covariance matrix. I am not sure if my method of calculating the correlation is more numerically robust but I know it is not as computationally efficient as the MATLAB method since the MATLAB method probably uses the fft.
Also I found most of my noise was very high frequency. I would have expected more low frequency noise. I plan to mathematically derive an expression for the confidence intervals for signal and noise with identical poles which are a single complex pair. I think this is a good worst case consideration and it is convenient because the bandwidth of both the signal and noise is well defined.
The bandwidth is important because it is related to the correlation length which tells us how many independent verifications of the signal we have. I could also do some numerical tests to verify this observation.
function RX=JW_xcorr(a,b,varargin)
function y=normalConf(mu,P,conf)
if sum(size(P)>1)==1; P=diag(P); end
for i=1:length(mu)
y(i,:)=[(mu(i)-abs(delta)) (mu(i)+abs(delta))]
%%%%%%%%%%%%%Script to process the data bellow%%%%%%%%%%%
%%%%%%%%%%%%%Part I load the data%%%%%%%%%%%%%%%%%%%%%%%%
clear all;
%%%%%%%%%%%%%%%%Part One load The Data%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
nhmean; %Load the mean temperature data
fig7_co2; %Load the Co2 data.
m_fig7_solar; %load the solar data
m_fig7_volcanic; %load the volcanic data
tspan=[1611 1980]; %The range of overlapping temperature
%%%%%%%%%Part II Normilize and center the data%%%%%%%%%%%%%%%%%%%%%
[I1 J1]=find((m_co2>=tspan(1))&(m_co2=tspan(1))&(mtemp=tspan(1))&(m_solar=tspan(1))&(m_volcanic
42. Hmmm…..All my code wasn’t pasted. The rest of the script:
%%%%%%%%%%%%%%%%Part II Normilize and center the data%%%%%%%%%%%%%%%%%%%%%
[I1 J1]=find((m_co2>=tspan(1))&(m_co2=tspan(1))&(mtemp=tspan(1))&(m_solar=tspan(1))&(m_volcanic
43. grrr…….looks like I can’t paste my code. I guess you’ll have to wait untill I do it on gocities.
44. John C , I think that WordPress considers the slash sign as an operator and gets mixed up with the code.
45. My code is now posted here:
I’ll upload the figures in a second.
46. Okay, the figures are now added. The figures were generated by MATLAB
47. John A, fix the sidebar.
48. The problem is not people posting long links, the problem is your sidebar. Fix the computer program, not the user behavior.
49. This is the only blog with this problem, that I have seen.
50. It’s not Microsoft’s fault either and Internet Explorer is NOT an obscure application.
51. Originally I was wondering why Michael Mann was plotting the correlation vs time, since if the system is stationary the correlation should be constant with time and you should get a better estimate using more data. I know Michael Mann’s argument is that for most of the data there is too much noise to show any significant correlation, but I don’t buy this argument. What Steve has shown is that Mann’s correlation estimate is not robust to low frequency noise.
I think that it is an interesting question if the correlation changes with time but I think that a more important question is why. If we are just interested in the correlation between temperature
and carbon dioxide we may have an expression like:
t temperature
c carbon dioxide
s solar
v volcanic
where a(i) is the linear correlation coefficient and (1+s(i)+v(i)+t(i))a(i) is our nonlinear coefficient which we use to estimate the relationship of the correlation with time. Once we have a
suitable estimate of correlation with time along with the error bounds we can make better claims as to how reasonable the proposition of a tipping point is.
52. John C., IMHO the real problem in MBH98 is not how the correlation is estimated, the real problem is that the correlations are more or less spurious (MBH98 is not representative of the
temperature history). So I don’t fully get where you are heading to?
In any case, it might be useful (if you have some spare time) to calculate the correlations with respect to different “spaghetti reconstructions”… it would be interesting to see if they share anything in common! Another interesting test would be to correlate the individual proxies with those forcings. This might give some indication to what degree they serve as whatever type of proxy.
53. I pointed this out a while ago. Correlations changing with time imply that we don’t have a good understanding of the system. There is some other factor driving the changes. I don’t really think it’s acceptable or meaningful to have correlations change over time.
Correlations changing with time imply that we don’t have a good understanding of the system.
Not at all. It implies that the statistics are really that: non-stationary. I.e. the forcings change over time. We can understand a system just fine in the face of non-stationary statistics, but
that does not mean we can track it.
55. re 52
if they share anything in common! Another interesting test would be to correlate the individual proxies with those forcings. This might give some indication to what degree they serve as whatever type of proxy.
Exactly. This has bugged me for over a year now. If tree rings (or any other proxy series) are to be used as “thermometers,” then EACH damn series should correlate with temperatures IN THE SAME
GRIDCELL. When only one or two selected groups of trees correlate only with “global temperature” (e.g., certain bristlecone trees), then something is really amiss. How can these guys keep a
straight face?
56. The rest of the world is not capable of understanding the implications of a lack of correlation to local temperatures vs. global temperatures.
57. 52. Which means something else is changing in the system. There is some variable that is not being accounted for. A non-stationary system becomes stationary once we specify all of the things that
drive the behavior.
58. Like, a whole bunch of variables, in the case of tree ring data. Precipitation, sunlight/shade, nutrients, CO2, to name a few. Can you ever claim that living things will exhibit stationarity?
59. I think you need to do things like age-normalizing and the like. Think that there are a lot of confounding variables. In the long term, I would not lose hope that we can find ways to get the information by combining different proxies mathematically (more equations helps solve more unknowns) or by finding new proxies (better forensics in a sense). Sometimes, I get the impression that people here don’t want to know what the behavior was in the past. Just because Mannian methods are not sufficient or have been overplayed, I would not lose hope that we can figure something out. Look at all the advances that science has made in the last 100 years: isotopes, thermoluminescence, computer analysis and statistical methods, DNA, etc. etc. We may find a way to answer the question.
60. TCO: I agree. I like the treeline approach.
61. #52 (Jean) I think we can resolve spurious correlations and once we are able to do that, either with some standard method or one that we derive, then we can go further and try to answer the question of whether the statistics are stationary and whether we can identify the nonlinearities that result in non-stationary statistics.
I am not sure if standard statistical methods exist but I think it is a good learning exercise for me to derive my own. In the link I posted above I calculated the autocorrelation wrong but it was a good learning experience. I have just hypothesized that complex poles and non positive poles in the autocorrelation indicate fewer degrees of freedom in the measurements.
Or in other words, given a time series Y of length N that has an autocorrelation with non-positive real poles, there exists no linear transformation T such that:
W = T Y
and W is a white noise time series of length N.
Thus to whiten the signal we must find a T that reduces the dimensionality of Y. Intuitively this makes sense as the more deterministic the signal is, the narrower in bandwidth it will be. Or equivalently, the closer the poles will be to the unit circle in the discrete case and the positive real axis in the continuous case.
Statistics cannot be used to prove how likely a deterministic model fits a signal; it can only be used to estimate a likelihood region where the poles of the signal can be found. This is because only random independent measurements give information and there is no way to separate a purely deterministic signal into random independent measurements.
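A minimal sketch of the whitening step itself, in R, assuming for illustration that the noise covariance P is known and AR(1)-shaped (a toy case, not a claim about the real error structure):

n   <- 100
rho <- 0.7
P   <- rho^abs(outer(1:n, 1:n, "-"))    # AR(1) correlation matrix
L   <- t(chol(P))                       # lower-triangular factor, L %*% t(L) = P
Tw  <- solve(L)                         # whitening transform
y   <- as.numeric(arima.sim(list(ar = rho), n = n))
w   <- Tw %*% y                         # should look white (up to a scale factor)
acf(as.numeric(w))

When P is only estimated from the data, as in the discussion above, T is no longer exact and the dimensionality issue raised here starts to bite.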
62. re # 57:
Which means something else is changing in the system. There is some variable that is not being accounted for. A non-stationary system becomes stationary once we specify all of the things that
drive the behavior.
Uh, not necessarily true. If we can actually back out all the variables and remove their impact, then perhaps you can create stationary data out of non-stationary data. I suppose that’s what you mean when you say “There is some variable that is not being accounted for.”? Unfortunately, this cannot work if there are non-linearities, i.e. even if you know what all the variables are, you can’t back out non-linear effects and the data remain non-stationary.
I think it is hampered even further by interdependence between variables as well (which I think will manifest as a non-linearity).
I think you need to do things like age-normalizing and the like.
That’s a good point. Also, what happens if a tree gets damaged for a while, say during an extreme drought, how long does it take before the tree is again “normal?”
64. Interdependence is handled by joint factors.
65. #52 good points. At first I wondered if what you said was always true but then I realized that what is of primary importance is not if our measurements have stationary statistics but rather what
is important is if the model parameters we are trying to estimate have stationary statistics. If the statistics of our measurement are stationary that just makes the job easier and I guess
implies the system is likely linear.
If we assume the model is linear then non stationary regression coefficients implies the model is different at different times. The less stationary the regression coefficients are the less the
model at one time has to do with the model at another point in time. This has many implications including:
-there is less information to identify the model since it is only valid over a short period of time
-identification of model parameters at one time may have nothing to do with model parameters at another time. For example in the case of tree rings identification of proxies in the instrumental
period may have nothing to do with temperatures 1000 years ago.
Interdependence is handled by joint factors.
TCO, joint factors are not separable using PCA, MCA or ICA.
67. So what? Did he (or does he need to) do PCA for the 3 forcing analysis? Just do basic multiple correlation. Joint factors (x1 times x2) can be handled just like an x3 or an x28. It’s a
confounding factor either way. Go read Box, Hunter, Hunter for guidance on when introduction of additional factors is warranted (it is a trade-off between degrees of freedom and fitting all the
variance). This stuff is like super-basic. Minitab will do this in a jiffy.
Someone here (Steve M?) made some remark about Mann not having a polynomial least squares ability when I talked about looking at the higher order factors. I was like, duh. Just put x in one
column of excel, y in the other, make a column for x squared. Do the damn linear regression of xsq versus y. Any moron with excel can do that.
This stuff is like college undergraduate design of experiments stuff. Oh…and if you had modeled a system of forcings in any basic engineering class or in a manufacturing plant that does six sigma
and gone ahead and blithely and unconcernedly just made the correlations for all the forcings be functions of time, people would laugh at you! That that was your result. That it didn’t even
bother you. You don’t know the system then! You don’t even know it phenomenologically.
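For what it’s worth, the same thing takes one line in R as well; y, x1 and x2 here are just placeholder vectors:

fit <- lm(y ~ x1 + x2 + I(x1 * x2) + I(x1^2))   # interaction and curvature terms as extra regressors
summary(fit)   # compare against lm(y ~ x1 + x2) and weigh the extra terms against the df spent

The joint factor x1*x2 and the squared term are treated exactly like any other column in the design matrix.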
68. 63. If you read the tree lit, there are individual studies where they look at previous year effects on this year (this moves towards the damaging understanding). It’s not a new thought at all. I
think part of the issue comes though if you use data in a Mannian approach that just mushes everything together. However, if you use individual reconstructions (that have already taken this effect into consideration), rather than more basic data, this may resolve some of that issue.
So what? Did he (or does he need to) do PCA for the 3 forcing analysis?
My point, TCO, is that you cannot separate them if you do not have a-priori information. I.e. you can pull out the joint (dependent) vars, but you cannot discern the difference between them.
Of course, PCA does not tell you which source is which anyway. FWIW, I’m finding ICA to be more interesting. Higher order statistics are cool.
70. You can look at joint var as well as each independent factor. Adding x1x2 into the modeling (and leaving x1 and x2 there) is handled the same way as if you had added in an x3. This is a very normal process in DOE.
71. I’ve been banging my head over the estimation problem trying to derive an expression for the probability function that includes both the error in the estimate of the mean and the error in the estimate of the autocorrelation values.
Then it occurred to me that they are not independent. You can map N estimates of the error in the mean to N estimates of error in the higher order statistics. Consequently you can get a probability distribution in terms of the error in the mean or the error in the higher order statistics, but not both simultaneously.
Since we generally assume the error in the mean is Gaussian, we express the probability distribution in terms of the error in the mean, as opposed to the error in the higher order statistics, since it is a much more mathematically tractable estimation problem. This brings me back to the Gaussian likelihood function:
P(x|y,r(k delta_t)) = (1/((2*pi)^(n/2) * det(P)^(1/2))) * exp(-(1/2)*(y-Ax)' inv(P) (y-Ax))
where P is the covariance matrix of the error in the mean, with entries P(i,j) = r(delta_t*(i-j)), the discrete autocorrelation function of that error
Notice I wrote the likelihood function in terms of a conditional probability function. The actual probability function is given by the chain rule:
P(x,y, r(k delta_t))=P(x|y,r(k delta_t))P(y,r(k delta_t))
The idea of maximum likelihood is that the actual solution
P(y,r(k delta_t)) is near the maximum of the probability distribution. The likelihood function is an estimate of the probability distribution. Given no knowledge of the mean and autocorrelation, a reasonable assumption (prior information) is that P(y,r(k delta_t)) is a uniform distribution, in which case the maximum of the probability distribution is given by the maximum of P(x|y,r(k delta_t)). The use of prior information P(y,r(k delta_t)) in an estimation problem is known as Bayesian estimation.
An interesting question is how robust maximum likelihood estimation is to changes in P(y,r(k delta_t)). It is my opinion that if P(x|y,r(k delta_t)) is narrow and the estimate using P(x|y,r(k delta_t)) lies within a few standard deviations of the maximum of P(y,r(k delta_t)), then the estimate will be robust. I think that most reasonable assumptions about P(y,r(k delta_t)) will yield a robust maximum likelihood estimator of P(x,y,r(k delta_t)).
The only likely problematic case I can think of is a probability distribution P(y,r(k delta_t)) that has two maxima which are a much greater distance apart than the width of the likelihood function. However I am still not sure if this case is problematic because if the conditional probability function is sufficiently narrow then the weighting of P(y,r(k delta_t)) may not affect the maximum. I am going to try out maximum likelihood and see if my computer can handle it. I am then going to see if I can use it to compute the confidence intervals of the correlation coefficients of temperature drivers over the period of instrumental data. To compute the confidence intervals I am going to assume that P(y,r(k delta_t)) is a uniform distribution. I may later try to refine this knowing the distribution properties of the higher order statistics.
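For concreteness, the likelihood above can be evaluated directly once a form for r(k delta_t) is assumed. A hedged R sketch with an AR(1) autocorrelation r(k) = rho^|k| (purely an assumption for illustration; y, A and x are placeholders):

loglik <- function(x, rho, y, A) {
  n  <- length(y)
  P  <- rho^abs(outer(1:n, 1:n, "-"))   # covariance built from the assumed autocorrelation
  r  <- y - A %*% x                     # residual
  cf <- chol(P)                         # upper-triangular factor, t(cf) %*% cf = P
  # -0.5 * ( n*log(2*pi) + log det P + r' inv(P) r )
  -0.5 * (n * log(2 * pi) + 2 * sum(log(diag(cf))) +
          sum(backsolve(cf, r, transpose = TRUE)^2))
}

Maximizing this jointly over x and rho is one concrete version of estimating the regression parameters and the error statistics at the same time.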
72. CO2 Negative Correlation w/ Temp! I tried two different techniques on Mann’s data for MBH98. With regular least squares I got positive correlation coefficients for CO2 and solar:
0.3187 //CO2
0.2676 //Solar
-0.1131 //volcanic
I then used an iterative technique where I iteratively estimated the error covariance of the fit from the autocorrelation of the residual. When it converged, the power spectrum of the previous iteration was the same as the iteration before it. I got the following correlation coefficients:
0.4667 //CO2
-0.1077 //Solar
-0.0797 //volcanic
Clearly this latter result is wrong, but why should it give a different answer than the previous technique? If you look at the plots of solar and CO2, the CO2 curve looks like a smoothed version of the solar curve. Since the CO2 curve is less noisy and the higher frequency parts of the solar curve don’t appear to match the temperature curve well, the regression prefers to fit the low frequency “hockey stick like shape” of the CO2 rather than the low frequency hockey stick like shape of the solar forcing curve. If the CO2 gives too much of a hockey stick and the solar gives not enough of a hockey stick, the regression can fit the hockey stick by making the solar correlation negative and the CO2 correlation extra positive.
What I think is wrong is that since the low frequency noise is easily fit by both the CO2 curve and the low frequency parts of the solar curve, the residual underestimates the error in the low frequency part of the spectrum. As a consequence, in the next iteration the regression algorithm overweights the importance of the low frequency parts of the signal in distinguishing between the two curves. Since the low frequency parts of the CO2 curve and solar curve are nearly collinear, the estimation is particularly sensitive to low frequency noise. I think this shows some of the pitfalls of trying to estimate the error from the residual and perhaps gives a clue as to why the MBH98 estimate of the correlation is not robust.
73. re 55
Proxies are more accurate than thermometers. That’s the only explanation for 2-sigmas of MBH99. Maybe it is possible to use the same proxies to obtain all the other unknowns (other than temperature) for 1000-1980.
74. Estimating w/ Unknown Noise
So I don’t get too ad hoc and incoherent I decided to search the web for estimation techniques that make no assumptions about the noise. This link is the best I’ve found:
It looks promising but I don’t fully understand it. I think when possible these techniques should be tried.
75. Negative Power Spectrum :0
I’ve been trying to estimate the autocorrelation of the residual and I noticed that it leads to a negative power spectrum. I don’t understand why but there seems to be a known solution to the problem:
The idea is to maximize the entropy of the signal and it doesn’t even assume Gaussian noise. I noticed Steve has discussed the power spectrum of the residual with respect to some of Mann’s papers. I wonder if Steve or the opposing Hockey stick team have tried using these techniques to estimate the power spectrum.
“Abstract: We formulate generalized maximum entropy estimators for the general linear
model and the censored regression model when there is first order spatial autoregression
in the dependent variable and residuals. Monte Carlo experiments are provided to
compare the performance of spatial entropy estimators in small and medium sized
samples relative to classical estimators. Finally, the estimators are applied to a model
allocating agricultural disaster payments across regions.”
I have a hunch that this paper will tell me what I need to know to properly apply regression to the case where the statistics of the error are not known in advance. I’ll say more once I am done reading it.
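In R the maximum entropy spectrum is already available as the Burg/AR estimate, so it is easy to compare against the raw periodogram. A sketch, with resid_ts a placeholder for the residual series:

sp_me  <- spec.ar(resid_ts, method = "burg")   # maximum entropy (Burg) spectrum, never negative
sp_raw <- spectrum(resid_ts, plot = FALSE)     # raw periodogram for comparison

Whether this is what Mann actually did is a separate question; the point is only that non-negative spectral estimates are standard fare.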
77. Are Solar and DVI based on proxies as well? And NH is proxy + instrumental. Lots of issues, indeed..
78. #77 I suppose all the non-temperature components of the proxy are supposed to cancel out when you fit the data to the proxies. Since many of the drivers are correlated I don’t think this is guaranteed to happen.
79. 45 :
Your method is a bit hard to follow, maybe because people tend to have different definitions for signal and noise. Is the MBH98 ‘multivariate regression method’ the LS solution for the model
mtemp = a*CO2 + b*solar + c*volcanic + noise ?
i.e. your ‘no noise assumption fit’. And what is the time-dependent correlation of MBH98? Sometimes the Earth responds to Solar and sometimes to CO2?
80. #79 Oh, gosh, given the lack of responses on this thread I didn’t think anyone was following it. Anyway, you have the right definition of the temperature signal plus noise. Sometimes, people try to fit deterministic signals to a best fit and use a least squares criterion. This is effectively the same as fitting a stochastic signal but assuming white noise with a standard deviation of one. There are several methods of deriving weighted least squares and they all give the same solution. The standard methods are: finding the Cramer-Rao lower bound, maximizing the likelihood, or whitening the least squares error equation and then applying the normal least squares solution to the whitened equation.
I haven’t updated that link since I posted it because I haven’t been satisfied with the results. I’ll take a look at it now and respond further.
81. Later today, I’ll clean up the code some in the link; there will be new figures and this will be my introduction:
“The climate scientists that make headlines often use ad hoc statistical methods in order to justify their presupposed views without doing the necessary statistical analysis to either test their hypotheses or justify their methods. One assumption used by Michael Mann in one of his papers is that the error can be estimated from the residual. In the following we will iteratively use this assumption to try to improve the estimator and show how it leads to an erroneous result. By contradiction, we will conclude that Mann’s assumption is invalid.
There have been several techniques in statistics that attempt to find solutions to estimation without knowledge of prior statistics. They even point out that estimation of the true autocorrelation via the autocorrelation of the measurement data is inefficient and can lead to biased results. Worse than that, the estimation of the power spectrum can lead to negative values for the power spectrum as a result of windowing. This is a consequence of the window not exceeding the coherence length of the noise.
I will later address these techniques once I read more, but I believe that entropy may be a neglected constraint. I conjecture that if we fix the entropy based on the total noise power and then try to find the optimal estimate of the noise based on this constraint, then absurd conclusions like a negative power spectrum will be resolved. Entropy is a measure of how random a system is. Truly random systems will have a flat power spectrum and therefore a high entropy relative to the power.”
82. Thinking more about coherence length I recalled that the coherence length is inversely proportional to the width of the power spectrum peak, thus narrow peaks cannot be estimated over a limited window. This tells me that a good estimate of the power spectrum will not have overly narrow peaks. Thus for the iterative technique to work, it must constrain the power spectrum to certain smoothness properties based on the window length.
83. Re 81: Right on! Though perhaps a bit too diplomatic. Many Climate Scientists, brilliant though they may be, inhabit the closed form world of Physics. In this world, if the equations say it
happened, it happened and there is no need for empirical verification. Statistics also are not required; “We’re beyond the need for that” seems to be the attitude of some modelers at least. End
of discussion. This sort of mindset does not tend to produce good researchers. (And the good ones doing the solid research don’t produce headlines.)
Will be very busy at work but will try to revisit this blog and thread as much as I can. Thanks for your comment, John.
84. #81. John Cr, I haven’t been keeping up for obvious reasons, but intend to do so.
85. Re #81: Good stuff John. However, you might want to address the spelling errors, word omissions etc if you don’t want to be exposed to the related credibility risk.
86. #85 I’m not worried. I’m far from ready to publish. I am just at the brainstorming stage and have much to learn about estimation when the statistics of the noise aren’t known. I am curious though about how Mann estimates the power spectrum of the residual, as I am finding out it is not a trivial problem.
I guess I won’t be cleaning up the link I posted above with corrections to the code and figures and a new introduction today. I have been instead fiddling a bit with the power spectrum and autocorrelation estimation. I want to see if I can do better at estimating the true autocorrelation than just autocorrelating the data.
My current approach involves autocorrelating the data, taking the fft, taking the magnitude of each frequency value, taking the inverse fft, then smoothing in the frequency domain by multiplying by a sinc function in the time domain. It sounds ad hoc so I want to compare the result with just autocorrelating the data.
I know iteratively trying to improve your estimate by estimating the noise from the autocorrelation of the data yields erroneous results. I am not sure if you can improve your estimate by using the smoothing feature I proposed above in the iterative loop. I want to compare the results out of curiosity. Whether I take it to the next level of statistical testing through Monte Carlo analysis will depend on whether I believe it is a better use of my time to test and validate my ad hoc technique or read about established techniques as I posted above.
87. #86. John Cr, I’ve not attempted to replicate Mann’s power spectrum of residuals, but he has written on spectra outside of MBH98 in earlier work and I presume that he uses such methods – see Mann
and Park 1995. I would guess that he calculates 6 spectra using orthogonal windows (6 different Slepian tapers); then he calculates eigenspectra.
In the calibration period, there is already a very high degree of overfitting – his method is close to being a regression of temperature on 22-112 near-orthogonal series – and the starting series are highly autocorrelated. Thus simple tests for white noise are not necessarily very powerful.
88. One way to test the algorithm is to simulate the NH_temp data, e.g.:
and use the s_temp instead of NH_temp in the algorithm. MBH98 algorithm seems to show changing correlation even in this case. Normalize/standardize problem.
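A hedged sketch of that test in R: replace the NH series with red noise of comparable persistence and rerun the moving-window correlation against a forcing. The AR(1) coefficient of 0.9 is just a placeholder, as is the co2 vector:

set.seed(1)
n      <- length(co2)
s_temp <- as.numeric(arima.sim(list(ar = 0.9), n))      # AR(1) stand-in for NH temperature
w      <- 200
mc     <- sapply(w:n, function(i) cor(s_temp[(i - w + 1):i], co2[(i - w + 1):i]))
plot(mc, type = "l", ylab = "moving correlation, simulated temp vs CO2")

If the simulated series produces “time-dependent correlations” that look just as structured as the published ones, that is a strong hint the published pattern is not telling us much.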
89. #88 I agree simulation is a good test method. What kind of noise did you assume in your simulation? Anyway, it occurred to me today that in weighted least squares there is a tradeoff between the bias introduced by the non-modeled parameters (e.g. un-modeled red noise) and estimator efficiency (via a whitening filter). The correlation length of the noise tells us how many independent measurements we have. We can try to remove the correlation via a whitening filter to improve the efficiency, but there is an uncertainty in the initial states of the noise dynamics. The longer the correlation length due to those states (time constant), the more measurements are biased by this initial estimate. Bias errors are bad because they will add up linearly in the whitening filter instead of in quadrature. We can try to remove these bias errors by modeling them as noise states, but this also reduces our degrees of freedom and thus may not improve our effective number of independent measurements.
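The “effective number of independent measurements” point can be made concrete with the usual AR(1) rule of thumb; r here is a placeholder residual vector:

rho   <- acf(r, plot = FALSE)$acf[2]              # lag-1 autocorrelation
n_eff <- length(r) * (1 - rho) / (1 + rho)        # approximate effective sample size

This is only an approximation for AR(1)-like noise, but it shows how quickly long correlation lengths eat into the nominal sample size.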
90. JC,
It’s not that people don’t want to follow this thread. It’s just that the language is sufficiently opaque that you’re not going to get a lot of bites. I understand the need to be brief. But if
you’re too brief, the writing can end up so dense as to be impenetrable to anyone but the narrowest of audiences. If the purpose of these posts is simply to document a brainstorm, where you are
more or less your own audience (at least for awhile), maybe label it as such, and be patient in waiting for a reply? Just a thought. Otherwise … keep at it!
Will try to follow, but it will have to be in pulses, as time permits. Meanwhile, maximum clarity will ensure maximum breadth of response, maximum information exchange, and a rapid rate of progress.
Consider posting an R script, where the imprecision of English language packaging is stripped away, leaving only unambiguous mathematical expressions. A turnkey script is something anyone can
contribute to. It is something that can be readily audited. And if we can’t improve upon it, at least we might be able to learn something from it.
91. Hopefully, it will be clearer soon. I am making progress on estimating the correlation of CO2. My iterative estimation of the noise seems to be working because the peaks in the power spectrum stay in the same location after several iterations. It takes about 5 iterations to converge. The regression coefficient of the solar seems to decrease to half of what it was under non-weighted regression, while the CO2 coefficient seems to stay about the same. I expected the solar regression coefficient to catch up to the CO2. However, the power spectrum has three dominant peaks and if I model those with an autoregressive model the situation may change. I also haven’t tried working out the confidence intervals.
Keep in mind I am using Mann’s data and my techniques can’t correct for errors in the data.
92. I just tried fitting the temperature data to an ARMA model. Since there were 3 pole pairs, I used 6 delays for the autoregressive part. I used 2 delays for each input. My thinking was to keep the transfer function proper in the case that each complex pole pair was due to each input. More than likely though the poles are due to the sunspot cycles or weather patterns.
The matrix was near singular, so in my initial iteration I added a constant to each diagonal element so that the condition number didn’t exceed 100. This first iteration gave me a nearly exact fit. The power of the peaks in the power spectrum of the error was less than 10^-3. As a comparison, non-weighted least squares without modeling the noise dynamics gives a power for the peaks of 30.
Therefore the ARMA model significantly reduced the error. Unfortunately when I tried to use this new estimate of the error in the next iteration the algorithm diverged, giving peaks in the error power spectrum of 10^5. I think I can solve this problem by using an optimization loop to find the optimal value to add to the diagonal of the matrix I invert in my computation of the pseudo inverse.
For those curious, the pseudo inverse is computed as follows:
inv(A' A) A'
where ' denotes transpose and A is the data matrix. For instance, one column is the value of carbon dioxide and another column represents the solar drivers; other columns represent the autoregressive parts and other columns represent the moving average part.
Adding to the diagonal effectively treats the columns of the data matrix as signal plus white noise, and the diagonal matrix added to A'A represents the white noise in the columns of A. Although the values of the columns of A may be known precisely, the noise could represent uncertainty in our model. The effect of adding a diagonal matrix to A'A is to smooth the response. This technique of adding a constant to the diagonal has many applications. For instance it is used in iterative inverse solvers such as the program PSpice. It is also a common technique used in model predictive control.
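The diagonal-loading step described here is easy to write down explicitly; a sketch in R, with A and y as placeholder forcing matrix and temperature vector:

ridge_fit <- function(A, y, lambda) {
  AtA <- crossprod(A)                                    # A'A
  solve(AtA + lambda * diag(ncol(A)), crossprod(A, y))   # (A'A + lambda*I)^-1 A'y
}
# e.g. choose the smallest lambda for which kappa(crossprod(A) + lambda * diag(ncol(A))) stays below ~100

The choice of lambda is exactly the bias-versus-conditioning trade-off being described: larger values tame the condition number but pull the coefficients toward zero.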
93. cont. #79:
It seems that inv(A’*A)*A’*nh replicates the MBH98 results more accurately than the http://data.climateaudit.org/scripts/chefen.jeans.txt code (no divergence at the end of the data). Tried to remove CO2 using this model and the whole data set; the solar data does not explain the residual very well. Climate science is fun. I think I’ll join the Team.
86: just take fft and multiply by conjugate, there’s the PSD. Gets too complex otherwise.
94. At some point, this topic needs to be pulled together in 500 words and submitted to Nature.
95. Yeah and when you do that, take a careful Sean Connery look to make sure that you don’t go a bridge too far. Chefen already did. But had the cojones to admit it.
96. Here is the code if someone is interested:
97. Are the input files also posted somewhere? i.e.:
98. See the scripts linked here
99. What?! You openly allow people to access data, and you share your scripts?! Don’t you know that that’s a threat to your monopoly on the scientific process? You’re mad! You’re setting a dangerous
precedent! :)
But seriously. I am surprised how much good material there is buried here in these threads. Do you have a system for collating or scanning through them all? I’ve used the search tool; I’m
wondering if there’s some other more systematic way to view what all is here. e.g. I have a hard time remembering under which thread a particular comment was posted. And once a thread is off the
‘hot list’ (happens often under heavy posting), then I have a hard time sifting through the older threads. Any advice?
(You may delete 2nd para, unless you think other readers will find the answer useful.)
100. The categories are useful and I use them.
If you google climateaudit+kenya or something like that, you can usually find a relevant post. I’ve been lazy about including hyperlinks as I go, and it’s now frustrating that I didn’t do it at the time.
As an admin, there are search functions for posts and comments, which I often use to track things down. But that doesn’t seem to be available to readers or at least I don’t know how to
accommodate it.
There are probably some ways of improving the indexing, but I’ll have to check with John A on that.
101. Steve
Setting up an FTP site would probably be the best/easiest.
Usually if you point the browser to the directory of the FTP site it will list everything letting people find it themselves.
I’m sure you are aware of this, but probably didn’t make the connection here. The necesarry neurons are probably taken up with the memory of a Lil Kim video.
I am not sure if standard statistical methods exist but I think it is a good learning exercise for me to derive my own.
JC, I definitely do not want to discourage you from carrying on with your innovative work. In fact, I think I will start paying a little closer attention to what you are doing. It is starting to
look interesting (as I slowly wade my way though this thread).
I just want to point out that this attitude can be somewhat dangerous. It validates my earlier remark when I described the Mannomatic as an ad hoc invention (and MBH98 as the unlikely product of a blind watch-maker). Innovating is great fun. Until you get burned. So, if you can, be careful.
In general, CA must be careful that a “good learning exercise” for one does not turn out to be a hard lesson for all. We have seen how innovation by a novice statistician can lead to a serious
credibility loss. Let THAT be the lesson we learn from.
That being said, JC is obviously a MATLAB whiz, and that is a very good thing. Keep it up JC. (Can we start him a scratch-pad thread off to the side where he can feel free to brainstorm?)
103. Sure.
John Creighton, if you want to collate some of your posts and email it to me, I’ll post it up as a topic, so that people can keep better track.
104. Steve, I might check tonight to see what I might want to move.
Bender, not to worry, I think my thought process is converging on something standard.
In the link above the error and the regression parameters are simultaneously estimated. How do you do this? Well, here is my guess. (When I read it I’ll see whether I have the same method, a better method or a worse method.) Treat each error as another measurement in the regression problem:
[y]=[A I][ x]
[0]=[0 I][e]
With an expected residual covariance of:
[I 0]
[0 I]
This can be solved using weighted least mean squares
Once x is estimated, we estimate
P=E[(Y-AX) (Y-AX)']
And compute the cholesky factorization of P
S S’=P
This gives us a new regression problem becomes:
[y]=[A S][ x]
[0]=[0 S][e2]
With an expected covariance in the residual of:
[P 0]
[0 P]
Subsequent iterations go as follows:
P(n)=E[(Y - A X(n)) (Y - A X(n))']
[y]=[A S(n)][ x(n+1)]
[0]=[0 S(n)][e2(n+1)]
With an expected error in the residual of:
[P(n) 0 ]
[0 P(n)]
We keep repeating using weighted least squares until P(n) and x(n) converge. Hopefully, by this weekend I can show some results if I don’t play too much Tony Hawk Pro Skater. Lol. Well, I’ve got to go roller blading now.
Welcome to the team, UC :) You can estimate the power spectrum the way you suggest but it will give a noisier estimate unless you average several windows. It can also give a biased estimate. In the data Mann used for estimating the correlations, the number of measurements isn’t so large that I can’t compute the autocorrelation directly by definition, but the optimization you suggest should be used when computer time is more critical.
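A stripped-down sketch of the iteration described above, in R rather than MATLAB, with the noise modeled as AR(1) purely to keep the example short (A is a placeholder forcing matrix, y the temperature vector):

fgls_ar1 <- function(A, y, iters = 10) {
  n <- length(y)
  x <- solve(crossprod(A), crossprod(A, y))       # ordinary least squares start
  for (k in 1:iters) {
    r   <- as.numeric(y - A %*% x)                # residual from current fit
    rho <- sum(r[-1] * r[-n]) / sum(r^2)          # lag-1 autocorrelation of residual
    P   <- rho^abs(outer(1:n, 1:n, "-"))          # implied AR(1) covariance (unit variance)
    W   <- solve(t(chol(P)))                      # whitening transform
    x   <- solve(crossprod(W %*% A), crossprod(W %*% A, W %*% y))   # GLS refit
  }
  x
}

A real version would need a richer noise model than AR(1) and a convergence check on P and x, which is where the divergence problems mentioned earlier can show up.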
105. Re #104 multiple regression with autocorrelated error
In the link above the error and the regression parameters are simultaneously estimated. How do you do this?
JC, you’re precious. And I’m not kidding. But there is a difference between “how you do this” and “how a statistician would write the code in MATLAB so that a normal human could do this”.
Your link suggests you want to do multiple regression with autocorrelated error. If that’s all you want to do, you don’t need to re-invent the wheel. Use the arima() function in R. The xreg parameter in arima() is used to specify the matrix of regressors.
What we are after are turnkey scripts that TCO could run, preferably using software that anybody can download for free, and whose routines are scrupulously verified to the point that they don’t
raise an eyebrow in peer review. That’s R.
I still have not digested all you’ve written, so be patient. My comment pertains only to the link you provided. (I do enjoy seeing your approach as it develops.)
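A minimal example of the arima()-with-xreg route, with the series names as placeholders for whatever versions of the forcings are actually used:

X   <- cbind(co2, solar, volcanic)
fit <- arima(nh, order = c(1, 0, 0), xreg = X)   # regression with AR(1) errors
fit                                              # one coefficient per forcing plus the AR term
sqrt(diag(fit$var.coef))                         # standard errors

The AR(1) error choice is itself an assumption that would need checking against the residuals.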
106. #105 I typed help arima in my version of MATLAB and I couldn’t find any documentation about this routine. I searched arima+MATLAB on google and I didn’t find any code right away. Maybe I’m reinventing the wheel but I still think it is a good learning exercise. I learn best when I can anticipate what I am about to learn. If I get it working in MATLAB I can try writing a Visual Basic macro for it so people can use it in Excel. Or better yet maybe I can find some free code someone has already written.
107. # 104: IMO, if the noise part is strongly correlated over time, we need to change the model. And the simple-fft method in my code shows that the residuals have strong low-freq components (you
are right, it can be noisy, but fits for this purpose). That is, linear model with i.i.d driving noise can be rejected. Maybe there is additional unknown forcing, or responses to CO2 and Solar
are non-linear and delayed. My guess is that nh reconstruction has large low-frequency errors.
108. #106 Google arima+R
#107 You already know there are other forcings, intermediary regulatory processes, lag effects, threshold effects & possible nonlinearities. So you know a priori your model is mis-specified.
[This model does not include heat stored in the oceans. I'm no climatologist, but I presume this could be a significant part of the equation.] So you know there is going to be low-frequency
signal that is going to be lumped in with the residuals and assumed to be part of the noise. So it’s a fair bet your final statement is correct.
Re #41
My method starts by assuming that the signal is entirely noise
Given above, why would you start by assuming THAT? (I know: because it’s helpful to your learning process. … Carry on.)
109. #108 : My final statement is independent of the former statements ;) Other issues imply that NH reconstruction has no ‘statistical skill’ and it usually leads to low-frequency errors.
It seems to me that fig-7corrs are estimates of linear regression parameters (LS), not partial correlations. And that explains the divergence when http://data.climateaudit.org/scripts/chefen.jeans.txt is used.
110. re #109, and others: Thanks for the hard work, UC, JC & bender! I'll check those once I'm back from the holidays. If you feel the need to keep busy on research, I suggest you take a look at the MBH99 confidence intervals ;) I'm sure it is (again) something very simple, but what exactly?!??! The key would be to understand how those "ignore these columns" are obtained… try reading Mann's descriptions to see if they give you some ideas… Notice also that they have a third (!!!) set of "confidence intervals" for the same reconstruction (post 1400) given (with another description) in:
Gerber, S., Joos, F., Bruegger, P.P., Stocker, T.F., Mann, M.E., Sitch, S., Constraining Temperature Variations over the last Millennium by Comparing Simulated and Observed Atmospheric CO2,
Climate Dynamics, 20, 281-299, 2003.
It seems to me that fig-7corrs are estimates of linear regression parameters (LS), not partial correlations.
You may well be right on this issue. I thought that they were partial correlations, as those gave a pretty good fit compared to ordinary correlations. Based on Mann's description, they may well be regression parameters, or actually almost anything :) :
We estimate the response of the climate to the three forcings based on an evolving multivariate regression method (Fig. 7). This time-dependent correlation approach generalizes on previous
studies of (fixed) correlations between long-term Northern Hemisphere temperature records and possible forcing agents. Normalized regression (that is, correlation) coefficients r are
simultaneously estimated between each of the three forcing series and the NH series from 1610 to 1995 in a 200-year moving window.
111. A brief side comment on how uncertainty is portrayed in EVERY ONE of these reconstructions. The graphs are deceptive, possibly intentionally so. And many, many good people are being deceived, whether they know it or not. I think most people are looking at the thread of a curve in the centre of the interval, thinking that IS the reconstruction. It is NOT. It is only one part of it. What you should be looking at are the margins of the confidence envelope, thinking about the myriad possibilities that could fit inside that envelope.
An honest way of portraying this reality would be to have the mean curve (the one running through the centre of the envelope) getting fainter and fainter as the confidence interval widens the
further you go back in time. That way your eye is drawn, as it should be, to the envelope margins.
North specifically made a passing reference to this problem in the proceedings, when he pointed out that the background color on one of the reconstruction graphics was intentionally made darker and darker as you go back in time. (I don't recall the name of the report.) That addresses the visual problem, but in a non-quantitative way. I think the darkness of the centreline should be
proportional to the width of the confidence envelope at each point in time. THEN you will see the convergent truth that is represented at the intersection of all these reconstructions: a great
fat band of uncertainty.
112. #110: Yep, hard to track what is going on; the story continues:
The partial correlation with CO2 indeed dominates over that of solar irradiance for the most recent 200-year interval, as increases in temperature and CO2 simultaneously accelerate through to
the end of 1995, while solar irradiance levels off after the mid-twentieth century.
113. Now, as we have the model, we can use the years 1610-1760 to calibrate the system and reconstruct the whole 1610-present period (CO2, Solar and Volcanic as proxies). Calibration residuals look OK. http://www.geocities.com/uc_edit/rc1.jpg :)
114. “‘My method starts by assuming that the signal is entirely noise’
Given above, why would you start by assuming THAT? (I know: because it’s helpful to your learning process. … Carry on.) ”
With the algorithm I am envisioning
I don’t think that is a necessary assumption. I expect it to converge to the same value regardless of the initial assumption of the noise. But for a robustness I can try initializing it with the
assumption that the signal is all noise and with the assumption that the noise is white with a standard deviation of one and see if I get the same results.
115. I see your point.
See, this is the problem with posting every detailed thought. I thought you were suggesting it was somehow a helpful assumption to start with. Maybe not “necessary”, but somehow “better” in a
substantive way. My attitude is: if it’s “not a necessary assumption”, why tell us about it? I guess because you write stream-of-consciousness style? Fine, but it makes it hard for us to separate
wheat from chaff.
Never mind. It’s really a minor issue. Use the blog as you will. Looking forward to the result.
116. #115, I am not sure a useful algorithm can't be derived on the basis of that assumption. However, I abandoned that approach. Anyway, I was thinking about error bars today and what they mean if you are simultaneously estimating the natural response, the forced response due to error, and the forced response due to known forcing effects.
Joint Gausian’s are such that you have hyper ellipses with a constant value in the pdf. The shape of the ellipse is defined by the eign vectors and eign values of the covariance matrix. The N
standard deviation ellipse in the coordinates space defined by linear combinations of the eignvectors centered about the mean value of the estimate vector (the estimate vector is made up of the
estimates of the parameters and errors) is defined as follows:
Where vn is the amount we deviate from the mean (the mean is the estimate vector) in the direction of the nth eigenvector and lambdan is the nth eigen value.
There exists a transformation that transforms our space of possible estimates (which includes estimates of the parameters and errors) to our eigen coordinate space. There exists another transformation that transforms our space of possible estimates to our measurement space (the actual measurements augmented with the expected value of the error, which is zero).
We define the N standard deviation error bars for a given coordinate in our measurement space as the maximum and minimum possible estimates of that coordinate that lie on the N standard deviation hyper-ellipse given above.
I suspect none of the following are always at the center of these error bars: the fit including initial conditions, known forcing and estimated error forcing; the fit including only known forcing, centered vertically within the error bars as best as possible; the actual temperature measurements.
Another question is: are these error bars smoother than the temperature measurements and any of the fits described in the last paragraph? I conjecture they are.
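A minimal MATLAB sketch of such an N-standard-deviation ellipse, built from the eigendecomposition of a covariance matrix (illustrative only; a 2-by-2 covariance C and the mean vector mu are assumed given, with two dimensions used just so it can be plotted):

% Illustrative sketch: N-sigma ellipse from the eigendecomposition of a 2x2 covariance C.
N = 2;                                    % number of standard deviations
[V, D] = eig(C);                          % columns of V are eigenvectors, diag(D) the eigenvalues
t = linspace(0, 2*pi, 200);
circle  = [cos(t); sin(t)];               % unit circle in the eigen-coordinate space
ellipse = V * (N * sqrt(D)) * circle;     % scale by N*sqrt(lambda_n) along each eigenvector
plot(mu(1) + ellipse(1,:), mu(2) + ellipse(2,:));   % points satisfying sum_n v_n^2/lambda_n = N^2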
117. On the subject of gain. The correlation coefficients are only a really good measure of gain if the output equals a constant multiplied by the input. Other types of gain include the DC gain, the
maximum peak frequency gain, the matched input gain, and the describing function gain.
I will define the describing function gain as follows:
Average of (E[y y_bar]/sigma_x) over the time interval of interest
Where y is the actual output, y_bar is the output response due to input x, and sigma_x is the standard deviation of input x. I may try to come up with a better definition later.
118. #116 Sorry, again hard to follow, maybe it’s just me. But I’d like to know as well what those error bars actually mean. Multinormal distributions are out, try to plot CO2 vs. NH or Solar vs. NH.
and you’ll see why. IMO ‘forcing’ refers to functional relationship, not to statistical relationship.
forcing’ refers to functional relationship, not to statistical relationship
Re #118. Yes. Forcing by factors A, B, C on process X just means the effects a, b, c are independent. When modeling the effects in a statistical model, one would start by assuming the forcings
are additive: X=A+B+C, but obviously that is the simplest possible model, no lags, nonlinearities, interactions, non-stationarities, etc.
Not sure how your comment relates to #116, but then again not sure where #116 is going either. (But you gotta admit: it is going.)
120. #118 The error bars represent the range of values that can be obtained with plausible model fits (fits to parameters end errors). That is we consider a confidence region of fits based on the
statistics and we plot the supremum (least upper bound) and infimum (greatest lower bound) of all these fits.
This gives us an idea of how much of our model prediction we can rely on and how much is due to chance.
121. # 119 , # 120 Don’t worry, we’ll get in tune soon. One more missing link: how are the fig7-corrs.dat confidence intervals computed? Is there a script somewhere?
122. You mean how did Mann do it?
Steve has a few posts on that. Click on the link on the sidebar that says Statistics. You should see a few topics on it as you go through the pages.
123. I’ve discussed MBH confidence intervals for the reconstruction, try the sidebar MBH – Replication and scroll down. So far, Jean S and I have been unable to decode MBH99 confidence intervals nor
has von Storch nor anyone else. We didn’t look at confidence intervals for the regression. Given the collinearity of the solar and CO2 forcing and the instability of correlation by window size,
one would assume that the confidence intervals for Mann’s correlation coefficients should be enormous – but no one’s checked this point yet to my knowledge. Why don’t you see what you turn up?
124. # 123
Sure, I can try. Not exactly the numbers in MBH98 but close:
125. I am not sure if I am on the way to anything good yet. In the ARMA model I used a four-pole model and three zeros for each input. In the figure below, the solid line is the fit using my method, the dotted line is the fit using regression on a simple linear model (no lags), and the dots are the temperature measurements. My fit is better, but it is not a fair comparison because I am using a higher order model.
Another interesting plot is a plot I did of the DC gain:
The model seems to say that the equilibrium response of the climate to a DC input (zero frequency, a constant signal) of either solar or CO2 of one standard deviation would change the temperature by an amount that is only about 10% of the amount we have seen the climate vary over the last 1000 years.
This is only very preliminary work and I need to spend a lot of time looking for bugs and analyzing the code, trying to better understand the theory, and hopefully eventually doing some Monte Carlo testing. I'll post the code in a sec. One thing that doesn't quite look right is that I was expecting the error parameters to have a standard deviation of one.
126. The code can be found here:
To get it to work in MATLAB, copy all the files into your working directory and change the .txt extension to a .m extension. Then run script2. In the yes/no question it asks, type 'y' your first time through. Don't forget to put in the single quotes. On your second time through you can type 'n'. This user input was added because MATLAB seems to take a while to initialize the memory it needs for arrays. This can really slow a person down if they are working on code.
127. Bump
John Creighton, if you put a full link in the first few lines of your post, it messes up the front page for Internet Explorer users. If you use the link button, you can put a link text instead of
the URL, and the problem does not arise.
128. And again …
129. Avast ye, landlubbers …
130. The devil damn thee black, thou cream faced URL …
131. where got thou that goose look …
132. # 124 cont.
How should we interpret these confidence intervals? Here's my suggestion: it is very unlikely that the NH reconstruction is a realization of an AR1 process with p…
??? In an assignment A(:,matrix) = B, the number of elements in the subscript of A and the number of columns in B must be the same.
Error in ==> Z:\script2.m On line 148 ==> Ry(:,k-1)=Rx3(LE:end);%We keep track of the auto corelation at each itteration
133. UC, I don’t get that error. To help me find the bug once you get the error message type:
I suspect that Ry might be initialized to the wrong size.
134. Here is my suggestion. On line 85 change
I think that the version of MATLAB I am using (version 7) initializes Ry in the assignment statement where you got the error. I’ll go make this change in my code now.
135. Thanks, seems to work now. I’m still not quite sure what it does, though ;) I have a fear of overfitting.
136. Glad it is working. I've been thinking about where to go with it, and it bothered me that the noise parameters were not white. Well, they looked white except for the standard deviation not being one. I think this is caused by me weighting too much the estimated error in the estimation of the error autocorrelation and not weighting enough the residual in the estimate of the error autocorrelation.
I am going to add some adaptation into the algorithm as follows. The initial assumption is that the measurement part of the residual is white noise. My first estimate of the error autocorrelation will be done by taking the sum of the residual error plus a weight times the estimated error and then autocorrelating that weighted sum. I will choose the weight so that after I do my frequency domain smoothing the value of the autocorrelation at zero is equal to the expected standard deviation of the residual.
This gives me an estimate of the error covariance which I use in the regression, and I get a new estimate of the error parameters. To get my new estimate of the measurement residual standard deviation, I will multiply what I previously thought it was by the standard deviation of the error parameters, the idea being to force the standard deviation of the error parameters to one. Then hopefully at convergence all of the residual covariance error can be explained by the mapping of the error parameters onto the measurements via the square root of the covariance matrix. (fingers crossed :))
137. #136
So your final goal is to explain the NH-temp series using only forcings and linear functions of them, in such a way that the residual will be std 1 and white? Sorry if I'm completely lost here!
138. Forcings include the error and the known inputs. The estimation of the error should help make up for model uncertainty. The optimal solution, in a statistical sense, of the linear equation
y = A X + e
is equal to the weighted (generalized) least squares solution, where:
E[e e^T] = P = R' * R
P is the covariance matrix of the noise (the residual e = y - A X)
R is the Cholesky factorization of P
So the optimal solution is:
X~ = inv(A' inv(P) A) A' inv(P) y
The problem is we don’t know the noise so how do we find the most efficient estimator? Consider the equation:
[y]=[A S][X]
[0]=[0 I][e]
Where e is a white noise vector. We want to estimate e and x and we can do this by weighted regression but if we want the most efficient estimate we have to whiten our regression problem as we
did above.
This can be done by multiplying the top part of the matrix equation by R
To get:
[Ry]=[RA I][X]
[0] = [0 I][e]
And the residual error is:
[e]=[Ry]-[RA I][X]
[e]=[0 ]- [0 I][e]
Which is white noise:
Thus although we don’t know R or S we know what the residual should look like if we choose S correctly. One of my conjectures is that is if we choose an S that gives a residual error that looks
white then we have a good estimate. My second conjecture is that if we pick a well conditioned S and estimate the residual based on that, we can get successively better estimates of S until we
are close enough to S to get a good estimate. I conjecture we will know we have a good estimate when the error looks like white noise (flat spectrum standard deviation of one)
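A minimal MATLAB sketch of that whiteness check, assuming the noise covariance P is known (illustrative only, not the thread's script; A and y are assumed given, and the autocorrelation is computed directly by definition):

% Whitened least squares given a known noise covariance P, plus a check that
% the whitened residual looks white.
R  = chol(P, 'lower');              % P = R*R'
Aw = R \ A;   yw = R \ y;           % whiten: multiply through by inv(R)
x  = Aw \ yw;                       % ordinary LS on the whitened problem = GLS
ew = yw - Aw*x;                     % whitened residual: should be roughly N(0, I)
n  = length(ew);
fprintf('std of whitened residual: %.3f\n', std(ew));
acf = zeros(21,1);                  % sample autocorrelation by definition, lags 0..20
for k = 0:20
    acf(k+1) = (ew(1:n-k)' * ew(1+k:n)) / (ew' * ew);
end
stem(0:20, acf);                    % near zero away from lag 0 if the noise model is right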
139. Let’s see if I got it now. So, you assume y=Ax+S*e, where A is the [co2 solar volc] data, and e is white noise. Due to matrix S, we have correlated noise process (S should be lower diagonal to
keep the system causal, I think). If we know S, we can use weighted least squares to solve x. But you are trying to solve x and S? If this is the case, it would be good see how the algorithm
works with simulated measurements, i.e. choose x and S, generate e using Matlab randn-command, and use y_s=A*x+S*e as an input to your program.
140. #139 (UC) you got it exactly :). Thanx for that lower triangular idea. I am not sure if it is necessary in the algorithm for S to be lower triangular, but for simulation purposes I think it is a good idea. You can have non-causal filters (smoothing), so I am not sure if non-causality is that bad. For simulation purposes I am not sure what S you should choose, as I think some choices may be difficult to solve for. The choices that I think would be difficult are where S results in non-stationary error statistics.
On another note, I was thinking about the collinearity of proxies and how best to handle this in a regression problem. It occurred to me that we can add extra error terms to introduce error into the cost function if the regression proxies are not collinear. We do this by introducing an error (residual) measurement that is the difference between two proxies.
141. A physical model should be causal; the present is independent of the future (just my opinion..). And actually, in this case, weighted least squares does the smoothing anyway, since it uses all the data at once. S for AR1 is easy; that's a good way to start.
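A minimal MATLAB sketch of that simulation test with an AR1 S (illustrative only: the sizes, phi and x_true are made up, and A is a stand-in rather than the actual forcing data):

% Illustrative simulation test along the lines suggested above.
n      = 400;  phi = 0.8;
A      = [ones(n,1), randn(n,2)];          % stand-in regressor matrix
x_true = [0.5; 1.0; -0.3];
S      = tril(toeplitz(phi.^(0:n-1)));     % lower triangular S: S*e is AR(1) noise
e      = randn(n,1);                       % white driving noise
y_s    = A*x_true + S*e;                   % simulated measurements
x_ols  = A \ y_s;                          % ordinary LS, ignores the correlation
x_gls  = (S\A) \ (S\y_s);                  % whitened LS using the known S
disp([x_true, x_ols, x_gls]);              % GLS should be the more efficient estimate

Running this a few times with different seeds gives a feel for how much the correlated noise inflates the scatter of the ordinary LS estimates relative to GLS.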
142. It’s been a while since I posted in this thread. I had some thinking to do about my results and as a result I made some code changes. The ideas are the same but I thought of a much better way
numerically to converge on the results I had in mind. Is the algorithm an efficient and accurate way to get the fit? I am not sure but:
1. The fit looks really good.
2. The noise parameters look like white noise with standard deviation one, as I expected.
3. The estimate of the error attributes more error to low frequency noise.
4. All the parameters appear to converge and all the measures of the error appear to converge.
6. The predicted DC gain is not that far off from a basic regression fit with a simple linear model (no autoregressive or moving average lags).
The figures in my post 125 are replaced by my new figures, and all the code in the link given in that post is replaced. Unfortunately at this time I have no expression for the uncertainty. On the plus side, the algorithm estimates the noise model as well as the system model, so several simulations can be done with different noise to see the effect the noise has on the estimation procedure and on the temperature trends. The algorithm is interesting because, unlike most fits that use a large number of parameters, only a few of the parameters are attributed to the system and the majority of the parameters are attributed to the error.
Thus although I do not have a measure of uncertainty, my procedure tells me that 60% of the variance in the signal is attributed to the noise model. While this does not give us the error bars, it tells us that, from the perspective of an ARMA model with 3 zeros and 4 poles and a noise model of similar order, the majority of the signal is due to noise and not the external drivers CO2, solar and volcanism.
143. #89 I think a good part of temperature variation cannot be explained with simple linear models as a result of the combined forcing agents, solar, volcanic and CO2. I believe the dynamics of the
earth induce randomness in the earth climate system. I’ve tried doing regression with a simultaneous identification of the noise here:
I don’t think the results are that different from stand multiple regression for the estimates of the deterministic coefficients but it does show that the system can be fit very well, to an ARMA
model plus a colored noise input. Regardless of what regression technique you use a large part of the temperature variance is not explained by the standard three forcing agents alone. Possible
other forcing agents (sources of noise) could be convection, evaporation, clouds, jet streams and ocean currents.
144. opps, did my html wrong:
145. Please ignore the above two posts, I meant to post here:
Regardless of what regression technique you use a large part of the temperature variance is not explained by the standard three forcing agents alone
That is true, but I would put it like this: ‘large part of MBH98-reconstructed-NH temperature variance is not explained by the given three forcing agents alone’. I don’t think that MBH98 NH-recon
is exact.
147. I made some changes to the file script2.txt
The algorithm wasn't stable for a 20-parameter fit; now I know it is stable for at least up to a 20-parameter fit. I enhanced figure 2 so you could compare it to a standard regression fit of the same order. I will do the same for the plot of the DC gains and the Bode plot.
Oh yeah, I added a Bode plot for the parameters identified by my algorithm. Not too interesting. The biggest observation is that the earth seems to have a cutoff frequency of about 0.2 radians per year, or equivalently 0.06 cycles per year. Or in other words, inputs with a period less than 16 years are attenuated by about 2 dB (50%).
148. Re #143, you say:
I think a good part of temperature variation cannot be explained with simple linear models as a result of the combined forcing agents, solar, volcanic and CO2.
Let me suggest that you might be looking at the wrong forcing agents. You might therefore be interested in LONG TERM VARIATIONS IN SOLAR MAGNETIC FIELD, GEOMAGNETIC FIELD AND CLIMATE, Silvia
Duhau, Physics Department, Buenos Aires University.
In it, she (he?) shows that the temperature can be explained quite well using the geomagnetic index (SSC), total solar irradiation (TSI) and excess of length of day (LOD). Look at the October 29,
2005, entry at http://www.nuclear.com/environment/climate_policy/default.html for more info.
149. Willis, those are some very interesting articles at the site you gave. The mechanisms sound very plausible and it is disappointing that the figures don't seem to contain much high frequency
information. Of course perhaps a lot of the high frequency components of global average temperatures are due to the limited number of weather stations over the globe.
150. Here is an interesting section from the link Willis gave:
…On the sources of climate changes during the last century
Besides CMEs, which impact in the Earth’s environment may be measured by the SSC index introduced by Duhau (2003a), other sources of climate changes of natural origin are variations in solar
radiative output which strength is measured by solar total irradiation index (TSI) (see e.g. Lean and Rind, 1999) and changes in the Earth’s rotation rate which is measured by the excess of
length of day variations (LOD) (Lambek and Cazenave, 1976; Duhau, 2005).
The best fitting to the long-term trend in NH surface anomaly that includes the superposition of the effect of the above three variables is given by the equation (Duhau, 2003b):
NHT(t) - NHT(to) = 0.0157[SSC(t) - SSC(to)] + 0.103[TSI(t) - TSI(to)] - 0.022[LOD(t) - LOD(to)],
where XLT(to) is the long-term trend in the X variable at to = 1900 yr. This particular time was chosen prior to the industrialization process. Since data to compute SSC start at 1868 and end
at 1993, the three terms of the above equation (figure 7) and its superposition (figure 8) has been computed during this period.
I’ll see if I can find some data.
151. hmmm more driver data sets (ACI)
George Vangengeim, the founder of ACI, is a well-known Russian climatologist. The Vangengeim-Girs classification is the basis of the modern Russian climatological school of thought. According
to this system, all observable variation in atmospheric circulation is classified into three basic types by direction of the air mass transfer: Meridional (C); Western (W), and Eastern (E).
Each of the above-mentioned forms is calculated from the daily atmospheric pressure charts over northern Atlantic-Eurasian region. General direction of the transfer of cyclonic and
anticylonic air masses is known to depend on the distribution of atmospheric pressure over the Atlantic-Eurasian region (the atmosphere topography).
152. Another interesting climate factor:
153. I found some data on the earth's rotation:
It seems to have a hockey stick shape from 1700 to the year 2000. From the link I posted earlier, the slowing of the earth's rotation would have a cooling effect, so this produces an anti-hockey-stick effect. This makes sense because it would increase the standard deviation of the temperature on earth, thereby increasing the cooling, since heat flux is proportional to the fourth power of the temperature.
154. This attribution stuff is quite interesting. I used to think that the methods ‘they’ use are fairly simple, but maybe they aren’t.
I’ve been following the topical RC discussion, where ‘mike’ refers to Waple et al Appendix A, which is a bit confusing. Specially Eq. 9 is tricky. I would put it this way:
$\hat{s}_f=s_f+\frac{\langle FN \rangle}{\langle F^2\rangle}$
i.e. the last term is the error in the sensitivity estimate. This shows right away that high noise level (other forcings, reconstruction error, ‘internal climate noise’) makes the sensitivity
estimate noisy as well. Same applies if the amplitude of the forcing is low during the chosen time-window. That is one explanation for #113 (CO2 does not vary much during 1610-1760).
But maybe I’m simplifying too much.. See these:
For the sake of our foregoing discussions, we will make the conventional assumption that the large-scale response of the climate system to radiative forcing can reasonably be approximated, at
least to first order, by the behavior of the linearized system.
(Waple et al) I thought first order approx. is the linear model..
These issues just aren’t as simple as non-experts often like to think they are. -mike
155. I see, the eq. should be 'estimate of s equals s + E(FN)/E(F*F)'. Hope you'll get the idea.
156. UC, is my LaTeX edit in 154 OK?
157. Yes, thanks. The original paper uses prime, and that didn’t get through. I think they mean ‘estimate of s’. But on the other hand, they say s (without prime) is ‘true estimate of sensitivity’.
This is amazing, it takes only 4 simple looking equations and I’m confused. These issues just aren’t as simple as non-experts like me like to think.
I thought first order approx. is the linear model..
Obviously you don’t have a proper training in Mann School of Higher Undertanding ;)
159. # 158
I see ;) Robustly estimated medians, true estimates etc. I need some practice.. Seriously, I think MBH98 fig 7 and MBH99 fig 2 are the ones people should really look into.
But when I really think about it, this is my favourite
All of the extra CO2 in the atmosphere is anthropogenic
160. I know people here don’t like RC, but I think this topic is a good introduction to MBH98 fig 7 (and even more general) problems.
Some highlights:
None of your analyses take account of the internal variability which is an important factor in the earlier years and would show that most of the differences in trends over short periods are
not significant. Of course, if you have a scheme to detect the difference between intrinsic decadal variability and forced variability, we’d love to see it. -gavin (#94)
gavin started to read CA? If there is no scheme to detect difference between ‘internal variability’ and forcings, then we don’t know how large is the last term in #154 eq, right?
The big problem with Moberg (or with any scheme that separates out the low frequency component) is that it is very very difficult to calibrate that component against observed instrumental
data while still holding enough data back to allow for validation of that low frequency component. – gavin (#94)
Hey, how about MBH99 then? Or is this different issue?
Following up Gavin’s comment, it has indeed already been shown- based on experiments with synthetic proxy data derived from a long climate model simulation (see Figure 5 herein)- -that the
calibration method used by Moberg et al is prone to artificially inflating low-frequency variability. -mike (#94)
experiments, synthetic, simulation, mike hasn’t started to read CA.
161. The discussion between Rasmus and Scafetta is a hoot over at RC. Rasmus persists in complaining about methods to get a linear relationship based on delta(y)/delta(x). He has the bizarre stance of saying that because the method is in danger with low differentials (which only Rasmus tries, not Scafetta), the method is inherently wrong. I guess Rasmus thinks Ohm's Law is impossible to prove as well! Poor Scafetta, he just doesn't know what a moron he is dealing with.
Gavin, on the other hand, is more artful in his obfuscation. But he doesn't bother putting Rasmus out of his misery (nor does he support his silly failure to understand). All that said, I think Moberg is a very suspect recon, so examinations based on it are less interesting. Scafetta does make the cogent point that D&A studies and modeling studies vary widely based on what they mimic (Moberg or Mann), so that given the uncertainty in the recon, there is uncertainty in the models.
162. I think Gavin has good points, but they are in conflict with MBH9x. Dialogue between Scafetta and Gavin might lead to interesting conclusions.
163. The distribution of the last term in #154 eq is very important. And not so hard to estimate, it is just a simple linear regression. Thus, if terms in N are mutually uncorrelated (does not hold,
but let’s make the assumption here), then the variance of that error term is
$\sigma ^2(X^TX)^{-1}$
and X is simply a vector that contains CO2 (or any other forcing in question) values, and $\sigma ^2$ is the variance of N. So, the variance of N is important, but so is the variance (and
amplitude) of the forcing! I think we should think about MBH98 figure 7 from this point of view, and forget those significance levels they use..
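A quick MATLAB Monte Carlo sketch of this point (made-up numbers, a single forcing, no intercept; purely illustrative):

% Error in the estimated sensitivity grows when the noise is large or when the
% forcing varies little within the window.
nwin = 200;  s_true = 1.0;  nrep = 2000;
for sigmaF = [0.1 1.0]                       % small vs. large forcing variability in the window
    F = sigmaF * randn(nwin, nrep);          % forcing within the window
    N = 1.0    * randn(nwin, nrep);          % everything else, treated as noise
    y = s_true * F + N;
    s_hat = sum(F.*y) ./ sum(F.^2);          % per-column LS slope estimates
    fprintf('std(F)=%.1f  ->  std of s_hat = %.3f\n', sigmaF, std(s_hat));
end
% Expected roughly sigma_N/(sigmaF*sqrt(nwin)): about 0.7 for sigmaF=0.1 and 0.07 for sigmaF=1.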
164. I think that the error variance, as shown in # 163 explains a lot. Shorter window, more error. More noise (N), more error. Variability of the forcings within the window does matter. And finally,
if (for example) solar and CO2 are positively correlated within the window, the errors will be negatively correlated. I tried this with MBH98 fig 7 supplementary material, makes sense to me.
And I think these are the issues Ferdinand Engelbeen points out in that RC topic (# 71). And Mike says that it is nonsense.
165. Here’s the Matlab script, if someone wants try/correct/simulate/etc
166. UC, could you make the script turn-key so that the figures load correctly from the ftp site?
167. bender, don’t know how to force Matlab to load directly from ftp (or was that the question?)
http://ftp.ncdc.noaa.gov is down, but http://www.nature.com/nature/journal/v430/n6995/extref/nature02478-s1.htm seems to work. 'Save as' to the working directory and then it should work (my version-5 Matlab changes fig7-co2.txt to fig7_co2 when loaded, not sure what the new versions do)
168. re #167: worked fine on my ver 6.5. :) The command to use is mget, but I think it was introduced in ver 7 :(
169. Re #167
That was precisely the question. Thx.
170. Re #168
But you have the files already in a local directory, correct? I want it to run without having to download anything.
171. re #170: Yes, and the only solution I know for Matlab to do that is with the wget-command, which, I think, was introduced in ver 7…
But, hey, check the Hegerl et al paper I linked… it has an attribution section … ;)
172. Yicks! I’m looking back at what I did and I’m scratching my head. I think I need to document it a little better. No wonder their wasn’t more discussion.
173. Looking back I see what I've done and I am wondering why I didn't use a state space model. The temperature, carbon dioxide and volcanic dust should all be states. Perhaps to see the contribution each driver has on temperature you don't need a state space model, but a state space model will help to identify the relationship between CO2 and temperature. It will also allow predictions and perhaps even the identification of a continuous model.
174. John, you may find this page useful.
175. #174 that could be of interest. I have, though, been developing my own Kalman filter software. I lost interest in it for a while, but this could reinvigorate my interest. The software I developed before can deal with discontinuous nonlinearities. What was causing me trouble was trying to simplify it.
176. Michael Mann tried to fit the northern hemisphere mean temperature to a number of drivers using linear regression in MBH98. To the untrained eye the northern hemisphere temperatures looked quite random. As a consequence it was hard to tell how good the fit was. One can apply various statistical tests to determine the goodness of fit. However, many statistical tests rely on some knowledge of the actual noise in the process.
I had become frustrated that I could not determine what a good fit was for the given data. I decided to try to break the signal into a bunch of frequency bands because I was curious how the regression coefficients of the drivers depended on the frequency band. I decided to graph each band first before trying any fitting technique.
The three bands were: a low frequency band, which results from about a 20-year smoothing; a mid frequency band, covering periods between 5 and 20 years; and a high frequency band, where the period is less than 5 years.
Each filter is almost linear phase. The filters produce signals that are quite orthogonal and extremely linearly independent. In figure 1-A the solid line is the original northern hemisphere temperature signal and the dotted line is the sum of the three temperature signals produced by the filters. As you can see, the sum of the three bands nearly perfectly adds up to the original signal.
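A minimal MATLAB sketch of this kind of three-band split, using simple moving-average smoothers as stand-ins for the actual filters (illustrative only; T is the temperature series, assumed given as a column vector, and filtfilt comes from the Signal Processing Toolbox):

% Complementary band split: the three bands add back to the original exactly.
smooth20 = @(v) filtfilt(ones(20,1)/20, 1, v);   % ~20-year zero-phase smoother
smooth5  = @(v) filtfilt(ones(5,1)/5,  1, v);    % ~5-year zero-phase smoother
low  = smooth20(T);                  % periods longer than roughly 20 years
mid  = smooth5(T) - low;             % periods roughly 5 to 20 years
high = T - smooth5(T);               % periods shorter than roughly 5 years
plot([T, low + mid + high]);         % the reconstruction overlays the original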
What is clear from the graphs is that the low frequency signal looks much more like the graph of the length of day than the carbon dioxide signal. If carbon dioxide were the principal driver, then the dip in temperature between 1750 and 1920 would be very difficult to explain. Conversely, the length of day fell between 1750 and 1820 and then started to rise again after 1900. One wonders why the length of day would impact the earth much more than carbon dioxide, but the evidence is hard to ignore.
The mid frequency band looks like it has frequency components similar to the sunspot cycle, but the relationship does not appear to be linear. The high frequency band looks difficult to explain. My best bet is cloud cover.
177. The code to make the above graphs can be found here:
178. My new suspicion is that volcanoes are the cause of global warming. In Mann's graph, if you look at the rise in temperature from 1920 to 1950 and then look down below at his measure of volcanic activity, you will notice that before 1920 volcanoes were quite frequent but afterwards they almost stopped in terms of intensity and frequency. Additionally, if CO2 were the cause you would expect the temperatures to continue to rise, but the rise in temperature between 1920 and 1940 far surpasses any temperature rise after that. Mann's data for the northern temperature mean looks like a first-order response to a step change. It does not look like a response to an exponentially increasing input driver.
179. Re #176 John it’s intriguing that the low-frequency variation in NH temperature seems to match better (visually at least) with length of day variation than with CO2.
Length of day on a decadal scale, per this article , seems to be related to activity in Earth’s core, which in turn could be forced by internal dynamics and/or solar system activity and/or
surface events. How that ties to surface temperature, though, is unclear.
Perhaps an alternate factor affecting length of day is that warming oceans expand, which would move them farther from Earth’s center, which would slow the planet’s rotation.
Something to ponder.
180. I don’t know if this article is mainstream or not, but it may be of interest to those with a solar / climate interest.
If Earth’s rotation (length of day) is affected by solar factors, and if John’s work above shows a correlation between length of day and NH temperature reconstructions for recent centuries, then
maybe there’s a clue somewhere in this about a solar / climate relationship.
181. With a window length of 201 I got a bit-true emulation of the Fig 7 correlations. Code is here. It seems to be OLS with everything standardized (is there a name for this?), not partial correlations. These can quite easily be larger than one.
The code includes a non-Monte Carlo way to compute the '90%, 95%, 99% significance levels'. The scaling part still needs help from CA statisticians, but I suspect that the MBH98 statement 'The associated confidence limits are approximately constant between sliding 200-year windows' is there to add some HS-ness to the CO2 in the bottom panel:
(larger image)
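A minimal MATLAB sketch of what 'OLS with everything standardized' in a 201-year sliding window could look like (my own reconstruction of the idea, not the emulation script; nh and X = [co2 solar volc] are assumed given and aligned in time, and the standardization line relies on implicit expansion, so it needs a reasonably recent MATLAB):

% Normalized regression coefficients in a 201-year sliding window.
win = 201;  half = (win-1)/2;  n = length(nh);
beta = nan(n, size(X,2));
zs = @(v) (v - mean(v)) ./ std(v);                   % standardize columns within the window
for c = 1+half : n-half
    idx = c-half : c+half;                           % window centred on year c
    Xw  = zs(X(idx,:));   yw = zs(nh(idx));          % everything standardized
    beta(c,:) = (Xw \ yw)';                          % multiple OLS coefficients
end
plot(beta);     % 'normalized regression coefficients'; nothing forces them to stay in [-1,1]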
This might be an outdated topic (nostalgia isn't what it used to be!). But in these kinds of statistical attribution exercises I see a large gap between the attributions (natural factors cannot explain the recent warming!) and the ability to predict the future:
One Trackback
1. […] week, through Chefen, Jean S and myself, here here here and here , we showed that MBH98 contained questionable statistical methodology for assessing the relative […]
Dynamic Programming and Optimal Control, Vol. II, 4th Edition: Approximate Dynamic Programming
Dimitri P. Bertsekas
The fourth edition of Vol. II of the two-volume DP textbook was published in June 2012. This is a major revision of Vol. II and contains a substantial amount of new material, as well as a
reorganization of old material. The length has increased by more than 60% from the third edition, and most of the old material has been restructured and/or revised. Volume II now numbers more than
700 pages and is larger in size than Vol. I. It can arguably be viewed as a new book!
Approximate DP has become the central focal point of this volume, and occupies more than half of the book (the last two chapters, and large parts of Chapters 1-3). Thus one may also view this new
edition as a followup of the author's 1996 book "Neuro-Dynamic Programming" (coauthored with John Tsitsiklis). A lot of new material, the outgrowth of research conducted in the six years since the
previous edition, has been included.
Dynamic Programming and Optimal Control, Vol. II: Approximate Dynamic Programming, ISBN-13: 978-1-886529-44-1, 712 pp., hardcover, 2012
Click here for preface and detailed information.
Click here to order at Amazon.com
Click here to download lecture slides for a 7-lecture short course on Approximate Dynamic Programming, Caradache, France, 2012.
Click here to download lecture slides for the MIT course "Dynamic Programming and Stochastic Control" (6.231), Dec. 2011. The last six lectures cover a lot of the approximate dynamic programming material.
Click here to download research papers and other material on Dynamic Programming and Approximate Dynamic Programming.
1. Discounted Problems - Theory
1. Minimization of Total Cost - Introduction
1. The Finite-Horizon DP Algorithm
2. Shorthand Notation and Monotonicity
3. A Preview of Infinite Horizon Results
4. Randomized and History-Dependent Policies
2. Discounted Problems - Bounded Cost per Stage
3. Scheduling and Multiarmed Bandit Problems
4. Discounted Continuous-Time Problems
5. The Role of Contraction Mappings
1. Sup-Norm Contractions
2. Discounted Problems - Unbounded Cost per Stage
6. General Forms of Discounted Dynamic Programming
1. Basic Results Under Contraction and Monotonicity
2. Discounted Dynamic Games
7. Notes, Sources, and Exercises
2. Discounted Problems - Computational Methods
1. Markovian Decision Problems
2. Value Iteration
1. Monotonic Error Bounds for Value Iteration
2. Variants of Value Iteration
3. Q-Learning
3. Policy Iteration
1. Policy Iteration for Costs
2. Policy Iteration for Q-Factors
3. Optimistic Policy Iteration
4. Limited Lookahead Policies and Rollout
4. Linear Programming Methods
5. Methods for General Discounted Problems
1. Limited Lookahead Policies and Approximations
2. Generalized Value Iteration
3. Approximate Value Iteration
4. Generalized Policy Iteration
5. Generalized Optimistic Policy Iteration
6. Approximate Policy Iteration
7. Mathematical Programming
6. Asynchronous Algorithms
1. Asynchronous Value Iteration
2. Asynchronous Policy Iteration
3. Policy Iteration with a Uniform Fixed Point
7. Notes, Sources, and Exercises
3. Stochastic Shortest Path Problems
1. Problem Formulation
2. Main Results
3. Underlying Contraction Properties
4. Value Iteration
1. Conditions for Finite Termination
2. Asynchronous Value Iteration
5. Policy Iteration
1. Optimistic Policy Iteration
2. Approximate Policy Iteration
3. Policy Iteration with Improper Policies
4. Policy Iteration with a Uniform Fixed Point
6. Countable State Spaces
7. Notes, Sources, and Exercises
4. Undiscounted Problems
1. Unbounded Costs per Stage
1. Main Results
2. Value Iteration
3. Other Computational Methods
2. Linear Systems and Quadratic Cost
3. Inventory Control
4. Optimal Stopping
5. Optimal Gambling Strategies
6. Nonstationary and Periodic Problems
7. Notes, Sources, and Exercises
5. Average Cost per Stage Problems
1. Finite-Spaces Average Cost Models
1. Relation with the Discounted Cost Problem
2. Blackwell Optimal Policies
3. Optimality Equations
2. Conditions for Equal Average Cost for all Initial States
3. Value Iteration
1. Single-Chain Value Iteration
2. Multi-Chain Value Iteration
4. Policy Iteration
1. Single-Chain Policy Iteration
2. Multi-Chain Policy Iteration
5. Linear Programming
6. Infinite-Spaces Problems
1. A Sufficient Condition for Optimality
2. Finite State Space and Infinite Control Space
3. Countable States -- Vanishing Discount Approach
4. Countable States -- Contraction Approach
5. Linear Systems with Quadratic Cost
7. Notes, Sources, and Exercises
6. Approximate Dynamic Programming - Discounted Models
1. General Issues of Simulation-Based Cost Approximation
1. Approximation Architectures
2. Simulation-Based Approximate Policy Iteration
3. Direct and Indirect Approximation
4. Monte Carlo Simulation
5. Simplifications
2. Direct Policy Evaluation - Gradient Methods
3. Projected Equation Methods for Policy Evaluation
1. The Projected Bellman Equation
2. The Matrix Form of the Projected Equation
3. Simulation-Based Methods
4. LSTD, LSPE, and TD(0) Methods
5. Optimistic Versions
6. Multistep Simulation-Based Methods
7. A Synopsis
4. Policy Iteration Issues
1. Exploration Enhancement by Geometric Sampling
2. Exploration Enhancement by Off-Policy Methods
3. Policy Oscillations - Chattering
5. Aggregation Methods
1. Cost Approximation via the Aggregate Problem
2. Cost Approximation via the Enlarged Problem
3. Multistep Aggregation
4. Asynchronous Distributed Aggregation
6. Q-Learning
1. Q-Learning: A Stochastic VI Algorithm
2. Q-Learning and Policy Iteration
3. Q-Factor Approximation and Projected Equations
4. Q-Learning for Optimal Stopping Problems
5. Q-Learning and Aggregation
6. Finite Horizon Q-Learning
7. Notes, Sources, and Exercises
7. Approximate Dynamic Programming - Nondiscounted Models and Generalizations
1. Stochastic Shortest Path Problems
2. Average Cost Problems
1. Approximate Policy Evaluation
2. Approximate Policy Iteration
3. Q-Learning for Average Cost Problems
3. General Problems and Monte Carlo Linear Algebra
1. Projected Equations
2. Matrix Inversion and Iterative Methods
3. Multistep Methods
4. Extension of Q-Learning for Optimal Stopping
5. Equation Error Methods
6. Oblique Projections
7. Generalized Aggregation
8. Deterministic Methods for Singular Linear Systems
9. Stochastic Methods for Singular Linear Systems
4. Approximation in Policy Space
1. The Gradient Formula
2. Computing the Gradient by Simulation
3. Essential Features for Gradient Evaluation
4. Approximations in Policy and Value Space
5. Notes, Sources, and Exercises
8. Appendix A: Measure-Theoretic Issues in Dynamic Programming
1. A Two-Stage Example
2. Resolution of the Measurability Issues
Physics Forums - View Single Post - Why is the void open?
Consider a statement of the form "for all elements of set S, property P is true".
If S is empty, this statement is true "vacuously".
This answers your question, say in a metric space: openness is defined by a "universal" quantifier, "for all p in S, there is an open ball around p also contained in S".
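In symbols (a restatement of the same point, not part of the original post):

\[
S \subseteq X \text{ is open} \iff \forall p \in S \;\; \exists \varepsilon > 0 : B_\varepsilon(p) \subseteq S .
\]

For $S = \emptyset$ the quantifier ranges over no points at all, so no counterexample can exist and the condition holds vacuously: the empty set is open.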
PIRSA - Perimeter Institute Recorded Seminar Archive
Black Hole Entropy from Loop Quantum Gravity
Abstract: There is strong theoretical evidence that black holes have a finite thermodynamic entropy equal to one quarter the area A of the horizon. Providing a microscopic derivation of the entropy
of the horizon is a major task for a candidate theory of quantum gravity. Loop quantum gravity has been shown to provide a geometric explanation of the finiteness of the entropy and of the
proportionality to the area of the horizon. The microstates are quantum geometries of the horizon. What has been missing until recently is the identification of the near-horizon quantum dynamics and
a derivation of the universal form of the Bekenstein-Hawking entropy with its 1/4 prefactor. I report recent progress in this direction. In particular, I discuss the covariant spin foam dynamics and show that the entropy of the quantum horizon reproduces the Bekenstein-Hawking entropy S=A/4 with the proper one-fourth coefficient for all values of the Immirzi parameter.
Date: 30/05/2012 - 2:00 pm
Estimating error using differentials
Use differentials to estimate the amount of paint needed to apply a coat of paint 0.05 cm thick to a hemispherical dome with diameter 50 cm.
The amount of paint is a volume, so work with the hemisphere's volume rather than its surface area:
V = (2/3) pi r^3
dV = 2 pi r^2 dr
r = 50/2 = 25 cm
dr = 0.05 cm
So dV = 2 pi (25)^2 (0.05) = 62.5 pi, or about 196 cm^3.
(Equivalently, surface area times thickness, 2 pi r^2 dr, gives the same estimate.)
Exploring Pentation - Base e
12/18/2007, 02:57 PM
(This post was last modified: 12/18/2007 03:01 PM by jaydfox.)
Post: #1
jaydfox Posts: 367
Long Time Fellow Joined: Aug 2007
Exploring Pentation - Base e
I've somewhat reached a natural stopping point in my experimenting with the natural slog for base e. There's more to do, but I'm at a point of diminishing returns and want to do something else,
hoping to get inspiration.
I've decided to move on to extending the continuous tetration solution to a continuous pentation solution. The first thing we need to know is what the fixed points are. Hyperbolic fixed points tell
us where logarithmic singularities will be in the inverse function (the penta-logarithm, or whatever it's called). The location of the closest such fixed point tells us what the radius of convergence
of the power series will be, which we can use as a rough validation tool for any power series we might try to derive, e.g., by an Abel matrix solution.
For base e, the first fixed point I've identified is at about -1.85. This can be seen trivially to exist by looking at the graph of tetration for base e. Without looking at the graph, we know that
$\exp_e^{\circ {\small -2}}(1) = -\infty$
$\exp_e^{\circ {\small-1}}(1) = 0$
Therefore, somewhere in that interval, we must have a crossing. And we can also tell that the fixed point will be repelling under tetration, because the slope at the crossing will be greater than 1.
The quick and dirty way to find the fixed point is to take iterated superlogarithms. As it turns out, this is also how we can extend pentation to negative iterations. I'll use a triple arrow to
notate pentation, though I suppose that
$\mathrm{sexp}_e^{\circ n}(1)$
would work as well.
We know that $e\uparrow\uparrow\uparrow0=1$, and $e\uparrow\uparrow\uparrow-1=0$. But we can find $e\uparrow\uparrow\uparrow-2$ by finding $\mathrm{slog}_e(0)$, which is -1. Then we can find $e\uparrow\uparrow\uparrow-3$ by finding $\mathrm{slog}_e(-1)$. This will quickly take us outside the radius of convergence, so in order to get maximum accuracy, we'll find
Using my 1200-term accelerated solution, the first few iterations give us the following:
$e\uparrow\uparrow\uparrow0=1$
$e\uparrow\uparrow\uparrow-1=\mathrm{slog}_e(1)=0$
$e\uparrow\uparrow\uparrow-2=\mathrm{slog}_e(0)=-1$
$e\uparrow\uparrow\uparrow-3=\mathrm{slog}_e(-1)=-1.636358354286028979629049436$
$e\uparrow\uparrow\uparrow-4=\mathrm{slog}_e(-1.636358354286028979629049436)=-1.813170483098635639971748853$
And so on... Taken to similar precision, the fixed point is -1.850354529027181418483437788.
Going in the forward direction for iteration:
$e\uparrow\uparrow\uparrow1=\mathrm{sexp}_e(1)=2.718281828459045235360287471$
$e\uparrow\uparrow\uparrow2=\mathrm{sexp}_e(2.718281828459045235360287471)=2075.968335058065833574141757$
And so on... Obviously, the next iteration is beyond the scope of scientific notation.
In table form, the integer pentations of e, from -20 to 2:
n | e penta n
2 | 2075.968335058065833574141757
1 | 2.718281828459045235360287471
0 | 1.000000000000000000000000000
-1 | 0.000000000000000000000000000
-2 | -1.000000000000000000000000000
-3 | -1.636358354286028979629049436
-4 | -1.813170483098635639971748853
-5 | -1.844484246898395061868430374
-6 | -1.849443081393375287759562240
-7 | -1.850213384630118386703548774
-8 | -1.850332680687076371299817524
-9 | -1.850351147243492593015231122
-10 | -1.850354005584711078364293582
-11 | -1.850354448007332020493809851
-12 | -1.850354516486711680128769074
-13 | -1.850354527086133925479340624
-14 | -1.850354528726740890531493457
-15 | -1.850354528980678429204206706
-16 | -1.850354529019983561302333809
-17 | -1.850354529026067314878454466
-18 | -1.850354529027008974544720674
-19 | -1.850354529027154727148927025
-20 | -1.850354529027177287127222746
Plotted, we get the following for integer pentations, noting that the second pentation is at about 2,076, well off the top of this graph:
Note that if we flip this graph about the line y=x, we'll see the pentalog. There will be a logarithmic singularity at about x=1.850354529. We can calculate the base of the logarithm by dividing the
differences of two consecutive pairs of integer pentates. Going out to -100 iterations, This yields a value of about 6.460671295681839390208370083.
We can also get the value by considering the slog and the reciprocal of its derivative at -1.850354529... This is outside the radius of convergence, so we can't simply take the derivative of the
power series I developed at 0. However, we can get there using the Abel functional definition of the slog:
$\mathrm{slog}(z) = \mathrm{slog}\left(\exp(z)\right)-1$
\begin{eqnarray}
D_z \left[\mathrm{slog}(z)\right] & = & D_z \left[\mathrm{slog}\left(\exp(z)\right)-1\right] \\
\mathrm{slog}'(z) & = & \mathrm{slog}'\left(\exp(z)\right)\exp(z)
\end{eqnarray}
This evaluates to 0.1547826772534266617145246066. The reciprocal is 6.460671295681839390208370083, which matches the value I previously calculated by comparing successive negative iterates.
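For context (my own gloss, not part of the original post), the linearization at the fixed point shows why the reciprocal of the slog derivative is the base of the logarithm:

\[
\sigma + \delta_{n+1} \;=\; \mathrm{slog}_e(\sigma + \delta_n)
\;\approx\; \sigma + \mathrm{slog}_e'(\sigma)\,\delta_n
\quad\Longrightarrow\quad
\delta_n \;\approx\; \mathrm{slog}_e'(\sigma)^{\,n}\,\delta_0 ,
\]

where $\sigma \approx -1.850354529$ is the fixed point. The negative pentates approach $\sigma$ geometrically with ratio $\mathrm{slog}_e'(\sigma) \approx 0.15478$, so the inverse function (the penta-logarithm) has a logarithmic singularity at $\sigma$ with base $1/\mathrm{slog}_e'(\sigma) \approx 6.46067$.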
We now have the location and base of the logarithmic singularity. The only potential problem is if there are closer singularities in the complex plane, meaning there are other fixed points of the
slog near the origin (which at a glance I doubt). But I'll cross that bridge if and when I get there.
~ Jay Daniel Fox
12/18/2007, 04:45 PM
Post: #2
andydude Posts: 467
Long Time Fellow Joined: Aug 2007
RE: Exploring Pentation - Base e
I came to a similar conclusion in this thread, only using a linear approximation, and our results seem to agree, especially on the specific value:
$x{\uparrow}{\uparrow}{\uparrow}-2 = -1$
Andrew Robbins
12/19/2007, 06:01 AM
Post: #3
jaydfox Posts: 367
Long Time Fellow Joined: Aug 2007
RE: Exploring Pentation - Base e
Yeah, as I was thinking about extending pentation to hexation, I began to visualize the alternative horizontal and vertical asymptotes, but I wasn't sure.
Note that, with a real fixed point of tetration, we can find a continuous pentation with either an Abel solution or a regular solution. Hopefully they'll agree with each other, but I think that
depends on whether the real fixed point at -1.85 is the closest to the origin.
~ Jay Daniel Fox
01/27/2008, 11:44 PM
(This post was last modified: 02/02/2008 10:59 PM by GFR.)
Post: #4
GFR Posts: 166
Member Joined: Aug 2007
RE: Exploring Pentation - Base e
Dear Jaydfox!
jaydfox Wrote:We know that $e\uparrow\uparrow\uparrow0=1$, and $e\uparrow\uparrow\uparrow-1=0$. But we can find $e\uparrow\uparrow\uparrow-2$ by finding $\mathrm{slog}_e(0)$, which is -1. Then we
can find $e\uparrow\uparrow\uparrow-3$ by finding $\mathrm{slog}_e(-1)$. This will quickly take us outside the radius of convergence, so in order to get maximum accuracy, we'll find
Using my 1200-term accelerated solution, the first few iterations give us the following:
And so on... Taken to similar precision, the fixed point is -1.850354529027181418483437788.
Going in the forward direction for iteration:
And so on... Obviously, the next iteration is beyond the scope of scientific notation.
I am very happy to see that you are approaching this problem exactly like we (KAR and myself) did (perhaps with less precision) in a progress report we posted to the NKS Forum on 25-07-2006, copy
attached (NKS Forum III - Final).
In fact, in the case of pentation (to the base b), we may indicate it as:
y = b-penta-x, or y = b § x, or y = b [5] x (GFR-KAR conventions), or:
y = b ||| x, according to your conventions (sorry for the arrows, they are .... up!).
As a matter of fact, on that occasion and for the particular case of b = e, we have shown that pentation is definable also for (integer) hyperexponents x < 0 and that this generates an asymptotic
behaviour for x -> -oo. See the attachment, Section 4, pages 8 and 9, formulas 13 to 15.
We noticed that two near successive points of the plot in that area must always be linked by y(x-1) = sln y(x), where sln is the "natural" slog (base e), and by y(x+1) = sexpn y(x), where sexpn is
the "natural" tetration operator (base e). We then concluded that the asymptotic value of y could be immediately found by putting:
sexpn(y) = slog(y)
This means that we can find the asymptotic value of y, for x -> -oo, at the intersection of sln(x) with sexpn(x), and we called this numerical value "Sigma". We got, with our first approximations:
Sigma = -1.84140566043697..
But, very probably, your numerical value is better.
I understand that this was also your conjecture, which was verified through your calculations. Could I have confirmation of the most precise value of Sigma obtainable via more formal and precise
calculations, for example using the Andrew's excellent slog approximation? Or... else? I (together with KAR) would be very happy if you could kindly produce that.
Thank you very much in advance.
(Annex attached on 2-02-2008. Previous attachment missing. Sorry)
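As a quick numerical cross-check of the Sigma value above: under only the crude linear approximation of tetration (the approach Andrew Robbins mentioned earlier in the thread), the asymptote of pentation base e is the fixed point y with sexp(y) = y below -1. The Python sketch below is not the accelerated-slog computation jaydfox used; the function and helper names are merely illustrative, and the linear approximation is exactly why it reproduces GFR's Sigma of about -1.8414 rather than the more accurate -1.85035...

import math

def sexp_linear(x):
    # Tetration base e under the simple linear approximation:
    # sexp(x) = 1 + x on -1 <= x <= 0, extended by the functional equation.
    if x > 0:
        return math.exp(sexp_linear(x - 1))
    if x < -1:
        return math.log(sexp_linear(x + 1))
    return 1.0 + x

def fixed_point(f, lo, hi, iters=200):
    # Bisection on g(x) = f(x) - x, assuming g changes sign on [lo, hi].
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if (f(lo) - lo) * (f(mid) - mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# Asymptote of pentation base e (GFR's "Sigma") under this approximation:
print(fixed_point(sexp_linear, -1.99, -1.01))   # approx -1.84140566...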
01/28/2008, 11:01 AM
(This post was last modified: 01/28/2008 11:03 AM by Ivars.)
Post: #5
Ivars Posts: 366
Long Time Fellow Joined: Oct 2007
RE: Exploring Pentation - Base e
This has a fascinating intuitive appeal, especially the appearance of even negative integers - just like the trivial zeros of the Riemann zeta function.
What are the next few values on the other axis (-3, ..., -5, ...) and how accurate do they seem to be? Meaning: is e.g. -1.85.. really close to the asymptotic value, or could it be -1.9..?
Are there any analytical means to get these values, for any base - does there exist such a base? Hoping it will be e^(pi/2), of course, but maybe some other nice value.
02/02/2008, 05:05 PM
Post: #6
Ivars Posts: 366
Long Time Fellow Joined: Oct 2007
RE: Exploring Pentation - Base e
Ivars Wrote:Hello,
This has a fascinating intuitive appeal, especially the appearance of even negative integers - just like the trivial zeros of the Riemann zeta function.
What are the next few values on the other axis (-3, ..., -5, ...) and how accurate do they seem to be? Meaning: is e.g. -1.85.. really close to the asymptotic value, or could it be -1.9..?
Are there any analytical means to get these values, for any base - does there exist such a base? Hoping it will be e^(pi/2), of course, but maybe some other nice value.
Just repeating the question...
02/02/2008, 10:50 PM
Post: #7
Ivars Posts: 366
Long Time Fellow Joined: Oct 2007
RE: Exploring Pentation - Base e
I am asking because of this sum:
which makes sense to me.
Ivars Fabriciuss
02/02/2008, 11:01 PM
Post: #8
GFR Posts: 166
Member Joined: Aug 2007
RE: Exploring Pentation - Base e
Please see the NKS Forum III attachment of my previous posting. Sorry. It was missing.
02/04/2008, 08:07 PM
Post: #9
Ivars Posts: 366
Long Time Fellow Joined: Oct 2007
RE: Exploring Pentation - Base e
Dear GFR,
That is a great paper. Visionary. Also easy to understand, as it omits non-essential things.
Could you send me all the papers you have available which you think may give insights into hyperoperations (insights, not definitions based on limits or sets)? I feel fine plugging infinities
and h-s/g-s directly into my head.
I would like to learn to work with them directly, starting from exponentiation, tetration, pentation.
I would like to "integrate" (or differentiate) e.g. pentation to obtain a "slower" operation (or a faster one). I would like to do hyperoperation math with objects that they can deal with without focusing on
rigor but developing intuition already at that level. I think it is more important. Luckily, Euler did not have time to write down his intuitive results in this direction (he must have had).
Thank You in advance.
02/04/2008, 09:19 PM
Post: #10
GFR Posts: 166
Member Joined: Aug 2007
RE: Exploring Pentation - Base e
All the papers I wrote jointly with KAR were prepared as "Progress Reports" and only "published" on the Web. They are available at the Wolfram Research Institute Forum, at:
Please, click the Thread Starters column head and look for G. F. Romerio. You'll find them all, ... in the attachments. Thank you for your interest. If you don't succeed, just tell me.
|
{"url":"http://math.eretrandre.org/tetrationforum/showthread.php?tid=103","timestamp":"2014-04-18T08:11:53Z","content_type":null,"content_length":"57939","record_id":"<urn:uuid:00662d4e-5a7b-49d5-a8ba-d83862d1affa>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00300-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Rockledge, PA Math Tutor
Find a Rockledge, PA Math Tutor
...Math can also require work but it should never be so hard as to make a student give up or cry. I am passionate about Math in the early years, from Pre-Algebra through Pre-Calculus. Middle
school and early High School are the ages when most children develop crazy ideas about their abilities regarding math.
9 Subjects: including geometry, Microsoft Outlook, algebra 1, algebra 2
...Regular homework, which includes review problems, is a cornerstone of effective learning. I strive to help my students to progress with each session, and appreciate any constructive feedback
towards this end. In order to be best prepared I do appreciate knowing any specific trouble topics, or a being given a copy of the course syllabus ahead of time.
3 Subjects: including calculus, chemistry, physics
...As a lifelong learner I've found the resources offered on the Internet a powerful tool in helping to explain topics presented in the classroom and I expect to use it while helping students.
The number of different ways to teach a topic can be as varied as the number of learners and I expect to c...
16 Subjects: including geometry, ASVAB, algebra 1, algebra 2
I am currently a freshman at Rutgers University and wish to tutor students on the SAT. I performed well on the SAT registering a 2130 and scoring a perfect on the essay portion. I missed one
question on the reading and two on the math section.
16 Subjects: including algebra 2, chemistry, European history, geometry
...Even completing this cycle once will improve your student's score; I recommend taking 2 or more practice test to maximize your student's score. The writing and reading tests on the SAT can be
quite challenging, but we can work together to understand the tests, problems, and strategies to get your best score. I specialize in helping high school student "cram" for the SATs.
20 Subjects: including ACT Math, geometry, SAT math, algebra 1
Related Rockledge, PA Tutors
Rockledge, PA Accounting Tutors
Rockledge, PA ACT Tutors
Rockledge, PA Algebra Tutors
Rockledge, PA Algebra 2 Tutors
Rockledge, PA Calculus Tutors
Rockledge, PA Geometry Tutors
Rockledge, PA Math Tutors
Rockledge, PA Prealgebra Tutors
Rockledge, PA Precalculus Tutors
Rockledge, PA SAT Tutors
Rockledge, PA SAT Math Tutors
Rockledge, PA Science Tutors
Rockledge, PA Statistics Tutors
Rockledge, PA Trigonometry Tutors
|
{"url":"http://www.purplemath.com/rockledge_pa_math_tutors.php","timestamp":"2014-04-21T13:04:43Z","content_type":null,"content_length":"23935","record_id":"<urn:uuid:1836eee4-3ef7-464c-aa3d-8aa5c5e29347>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00411-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Math 160
Test 2
October 22, 2013 Name: Key
You must SHOW ALL WORK to receive credit!
Be sure to include units in answers wherever appropriate.
Questions that ask you to interpret should be answered in complete English sentences.
Find the derivative of each of the following functions. You do not need to simplify your answers.
5. a. Find by implicit differentiation:
b. Find the slope of the line tangent to the curve at the point (1,1)
6. The value of an investment is after t years. Compute and interpret A’(4)
4 years after this investment is made, its value is growing at a rate of $2754.26 per year.
Testing this: After 4 years, the value is
While this is not exactly the same as what we estimated, it is in the same ballpark, which is encouraging. If we looked at a smaller time interval, the estimate would be much better. For example,
one day (1/365 of a year) later, the derivative indicates that the account should grow by $2754.26/365 = $7.55. In reality, it grows to
7. ACME’s cost of producing x widgets is C(x) = 1000+8x - 0.01x^2 dollars.
a. Evaluate C(100) and interpret what this value means to ACME.
C(100) = 1000 + 8(100) - 0.01(100)^2 = $1700. This is ACME's total cost if it produces 100 widgets.
b. Evaluate C’(100), and interpret what this value means to ACME.
C'(x) = 8 - 0.02x, so C'(100) = $6 per widget. This is ACME's marginal cost. If ACME is currently producing 100 widgets, each additional widget would cost an additional $6 to produce.
8. ACME (continuing from the previous problem) has price demand function p(x) = 20-.04x
a. Find ACME's revenue function, R(x).
R(x) = x · p(x) = x(20 - .04x) = 20x - .04x^2
b. How many widgets should ACME produce to maximize its profit? Recall that profit = revenue – cost
P(x) = R(x) - C(x) = (20x - .04x^2) - (1000 + 8x - .01x^2) = 12x - .03x^2 - 1000, so P'(x) = 12 - .06x.
When P'(x) is positive, i.e. when x<200, the profit increases when production increases. So if ACME is making fewer than 200 widgets, it should increase production.
When P'(x) is negative, i.e. when x>200, the profit decreases when production increases. So if ACME is making more than 200 widgets, it should decrease production.
Putting these two statements together, the profit must be maximized if ACME produces exactly 200 widgets.
At this production level, its profit is P(200) = 12(200) - .03(200)^2 - 1000 = $200.
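As a sanity check of problems 7 and 8 (not part of the original answer key), the computations above can be reproduced with a few lines of sympy; the variable names are arbitrary and the coefficients are taken directly from the stated C(x) and p(x).

from sympy import symbols, diff, solve

x = symbols('x')
C = 1000 + 8*x - 0.01*x**2        # cost function from problem 7
p = 20 - 0.04*x                   # price-demand function from problem 8
R = x * p                         # revenue
P = R - C                         # profit

print(C.subs(x, 100))             # 1700.0 -> C(100)
print(diff(C, x).subs(x, 100))    # 6.0    -> marginal cost C'(100)
x_best = solve(diff(P, x), x)[0]  # 200    -> profit-maximizing production level
print(x_best, P.subs(x, x_best))  # profit of 200.0 at x = 200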
9. If $3000 is invested at 5% interest, find the value of the investment at the end of 5 years if
a. the interest is compounded quarterly? A = 3000(1 + .05/4)^(4·5) = 3000(1.0125)^20 ≈ $3846.11
b. the interest is compounded continuously? A = 3000e^(.05·5) = 3000e^.25 ≈ $3852.08
10. The half-life of radioactive cesius-137 is 30 years. Suppose you have a 100-mg sample.
a. Write a formula that gives the mass that remains after t years.
After 30 years, 50 mg remains, so the mass remaining is m(t) = 100(1/2)^(t/30) mg.
b. After how long will only 1 mg remain?
Solve 100(1/2)^(t/30) = 1: (1/2)^(t/30) = .01, so t = 30·log2(100) ≈ 199.3 years.
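As a sanity check of problems 9 and 10 (again not part of the original key), the figures above can be verified numerically; the names below are arbitrary.

import math

P0, r, t = 3000, 0.05, 5
print(P0 * (1 + r/4)**(4*t))        # about 3846.11 (quarterly compounding)
print(P0 * math.exp(r*t))           # about 3852.08 (continuous compounding)

# Cesium-137: m(t) = 100*(1/2)**(t/30) mg, so 1 mg remains when t = 30*log2(100)
print(30 * math.log(100, 2))        # about 199.3 years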
|
{"url":"http://www.montgomerycollege.edu/~rpenn/201420/160/160t2a.htm","timestamp":"2014-04-18T08:16:39Z","content_type":null,"content_length":"162323","record_id":"<urn:uuid:5b0c371e-a4de-4bc9-a410-2366cb42ca88>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00312-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Minimum-weight two-connected spanning networks
Results 1 - 10 of 32
, 1993
"... We consider the survivable network design problem-- the problem of designing, at minimum cost, a network with edge-connectivity requirements. As special cases, this problem encompasses the
Steiner tree problem, the traveling salesman problem and the k-edge-connected network design problem. We establ ..."
Cited by 44 (12 self)
We consider the survivable network design problem-- the problem of designing, at minimum cost, a network with edge-connectivity requirements. As special cases, this problem encompasses the Steiner
tree problem, the traveling salesman problem and the k-edge-connected network design problem. We establish a property, referred to as the parsimonious property, of the linear programming (LP)
relaxation of a classical formulation for the problem. The parsimonious property has numerous consequences. For example, we derive various structural properties of these LP relaxations, we present
some algorithmic improvements and we perform tight worst-case analyses of two heuristics for the survivable network design problem.
- MATH. PROG , 1995
"... We consider most of the known classes of valid inequalities for the graphical travelling salesman polyhedron and compute the worst-case improvement resulting from their addition to the subtour
polyhedron. For example, we show that the comb inequalities cannot improve the subtour bound by a factor gr ..."
Cited by 25 (1 self)
We consider most of the known classes of valid inequalities for the graphical travelling salesman polyhedron and compute the worst-case improvement resulting from their addition to the subtour
polyhedron. For example, we show that the comb inequalities cannot improve the subtour bound by a factor greater than 10/9. The corresponding factor for the class of clique tree inequalities is 8/7,
while it is 4/3 for the path configuration inequalities.
, 1992
"... We consider the important practical and theoretical problem of designing a low-cost ..."
, 2000
"... This dissertation is the result of a project funded by Belgacom, the Belgian telecommunication operator, dealing with the development of new models and optimization techniques for the long-term
planning of the backbone network. The minimum-cost two-connected spanning network problem consists in find ..."
Cited by 14 (4 self)
This dissertation is the result of a project funded by Belgacom, the Belgian telecommunication operator, dealing with the development of new models and optimization techniques for the long-term
planning of the backbone network. The minimum-cost two-connected spanning network problem consists in finding a network with minimal total cost for which there exist two node-disjoint paths between
every pair of nodes. This problem, arising from the need to obtain survivable communication and transportation networks, has been widely studied. In our model, the following constraint is added in
order to increase the reliability of the network : each edge must belong to a cycle of length less than or equal to a given threshold value K. This condition ensures that when traffic between two
nodes has to be re-directed (e.g. in case of failure of an edge), we can limit the increase of the distance between these nodes. We investigate valid inequalities for this problem and provide
numerical results obtai...
- In Networks , 2005
"... For the past few decades, combinatorial optimization techniques have been shown to be powerful tools for formulating and solving optimization problems arising from practical situations. In
particular, many network design problems have been formulated as combinatorial optimization problems. With the ..."
Cited by 14 (0 self)
For the past few decades, combinatorial optimization techniques have been shown to be powerful tools for formulating and solving optimization problems arising from practical situations. In
particular, many network design problems have been formulated as combinatorial optimization problems. With the advances of optical technologies and the explosive growth of the Internet,
telecommunication networks have seen an important evolution and therefore, designing survivable networks has become a major objective for telecommunication operators. Over the past years, a big
amount of research has then been done for devising efficient methods for survivable network models, and particularly cutting plane based algorithms. In this paper, we attempt to survey some of these
models and the optimization methods used for solving them.
- in Proceedings of EURO/INFORMS Meeting , 1997
"... Designing low-cost networks that survive certain failure situations is one of the prime tasks in the telecommunication industry. In this paper we survey the development of models for network
survivability used in practice in the last ten years. We show how algorithms integrating polyhedral combinato ..."
Cited by 13 (1 self)
Designing low-cost networks that survive certain failure situations is one of the prime tasks in the telecommunication industry. In this paper we survey the development of models for network
survivability used in practice in the last ten years. We show how algorithms integrating polyhedral combinatorics, linear programming, and various heuristic ideas can help solve real-world network
dimensioning instances to optimality or within reasonable quality guarantees in acceptable running times. The most general problem type we address is the following. Let a communication demand between
each pair of nodes of a telecommunication network be given. We consider the problem of choosing, among a discrete set of possible capacities, which capacity to install on each of the possible edges
of the network in order to (i) satisfy all demands, (ii) minimize the building cost of the network. In addition to determining the network topology and the edge capacities we have to provide, for
each demand, a routing such that (iii) no path can carry more than a given percentage of the demand, (iv) no path in the routing exceeds a given length. We also have to make sure that (v) for every
single node or edge failure, a certain percentage of the demand is reroutable. Moreover, for all failure situations feasible routings must be computed. The model described above has been developed in
cooperation with a German mobile phone provider. We present a mixed-integer programming formulation of this model and computational results with data from practice.
, 1999
"... We give a 17/12-approximation algorithm for the following NP-hard problem: Given an undirected graph, find a 2-edge connected spanning subgraph that has the minimum number of edges. The best
previous approximation guarantee was 3/2. We conjecture that there is a 4/3-approximation algorithm. Thus ..."
Cited by 12 (1 self)
We give a 17/12-approximation algorithm for the following NP-hard problem: Given an undirected graph, find a 2-edge connected spanning subgraph that has the minimum number of edges. The best previous
approximation guarantee was 3/2. We conjecture that there is a 4/3-approximation algorithm. Thus our main result gets half-way to this target.
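Since the abstract above turns on the notion of a 2-edge-connected spanning subgraph, a small illustrative check may help: a graph is 2-edge-connected exactly when it is connected and contains no bridge. The sketch below (hypothetical helper names, plain DFS bridge detection, not taken from any of the cited papers) tests that property.

def is_two_edge_connected(n, edges):
    # The graph on vertices 0..n-1 must be connected and contain no bridge
    # (an edge whose removal disconnects it).
    adj = [[] for _ in range(n)]
    for i, (u, v) in enumerate(edges):
        adj[u].append((v, i))
        adj[v].append((u, i))

    disc = [-1] * n          # DFS discovery times
    low = [0] * n            # low-link values
    timer = [0]
    bridge = [False]

    def dfs(u, parent_edge):
        disc[u] = low[u] = timer[0]; timer[0] += 1
        for v, eid in adj[u]:
            if eid == parent_edge:
                continue
            if disc[v] == -1:
                dfs(v, eid)
                low[u] = min(low[u], low[v])
                if low[v] > disc[u]:
                    bridge[0] = True    # edge (u, v) is a bridge
            else:
                low[u] = min(low[u], disc[v])

    dfs(0, -1)
    return all(d != -1 for d in disc) and not bridge[0]

# A 4-cycle is 2-edge-connected; removing one edge leaves bridges.
cycle = [(0, 1), (1, 2), (2, 3), (3, 0)]
print(is_two_edge_connected(4, cycle))        # True
print(is_two_edge_connected(4, cycle[:-1]))   # False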
- In 13th Annual ACM-SIAM Symposium on Discrete Algorithms , 2002
"... We study the approximability of dense and sparse instances of the following problems: the minimum 2-edge-connected (2-EC) and 2-vertex-connected (2-VC) spanning subgraph, metric TSP with
distances 1 and 2 (TSP(1,2)), maximum path packing, and the longest path (cycle) problems. The approximability of ..."
Cited by 11 (0 self)
We study the approximability of dense and sparse instances of the following problems: the minimum 2-edge-connected (2-EC) and 2-vertex-connected (2-VC) spanning subgraph, metric TSP with distances 1
and 2 (TSP(1,2)), maximum path packing, and the longest path (cycle) problems. The approximability of dense instances of these problems was left open in Arora et al. [3]. We characterize the
approximability of all these problems by proving tight upper (approximation algorithms) and lower bounds (inapproximability). We prove that 2-EC, 2-VC and TSP(1,2) are Max SNP-hard even on 3-regular
graphs, and provide explicit hardness constants, under P ≠ NP. We also improve the approximation ratio for 2-EC and 2-VC on graphs with maximum degree 3. These are the first explicit hardness results
on sparse and dense graphs for these problems. We apply our results to prove bounds on the integrality gaps of LP relaxations for dense and sparse 2-EC and TSP(1,2) problems, related to the famous
metric TSP conjecture, due to Goemans [18].
|
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=53330","timestamp":"2014-04-16T15:00:47Z","content_type":null,"content_length":"35316","record_id":"<urn:uuid:1b802418-9f4e-4dc8-b160-0a83daa544a5>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00275-ip-10-147-4-33.ec2.internal.warc.gz"}
|
z390 Mainframe Assembler Coding Contest for Computer Programmers
Welcome to the z390 Mainframe Assembler Coding Contest. The primary objective of this contest is to have some fun and learn more about mainframe assembler. This contest is open to anyone interested
in learning about mainframe assembler and/or sharing their knowledge about mainframe assembler. You can submit new problems or solutions to problems already posted that you think are better. The
top ranked solutions are posted on this site along with the author's name. Below you will find links to all the coding problems posted to date, plus the top ranked solutions submitted with source
code and generated output. Solutions can be submitted using a shared macro ZMFACC which is portable across z390, Hercules MVS 3.8, z/OS, z/VM CMS, and VSE. More.
The z390 open source Portable Mainframe Assembler project encourages all developers working with IBM mainframe systems to learn High Level mainframe assembler (HLASM for short). To that end the z390
project is sponsoring the z390 Mainframe Assembler Coding Contest open to everyone. For a limited time volunteers are being solicited to serve as members of a 3 member judges panel to rank submitted
solutions. The top three ranked solutions for each posted problem will be listed here along with the name of the programmer and their institution of choice. Rankings will be based on the specific
requirements such as speed, storage, or best practices. New problems and new rankings will be updated as soon as judges have reviewed current pending submittals.
Hope you enjoy the contest! All questions and suggestions welcome.
Don Higgins, President
Automated Software Tools
Current z390 Mainframe Assembler Coding Contest Problems and Solutions:
1. Swap two 20 byte fields optimized for speed.
Submitted by Don Higgins University of South Florida
2. Swap general purpose register 0 and 1 without using any other register or storage areas.
Submitted by Don Higgins for University of South Florida
1. P2MD1.MLC/LOG by Mark Dixon University of Western Australia - 3 XR'S (could also be XGR)
3. Convert memory bytes to hex display bytes.
Submitted by Melvyn Maltz
1. P3MM1.MLC/LOG by Melvyn Maltz - single TROT (CC3 retry added per Michael Poil)
2. P3LKM1.MLC/LOG by Lindy Mayfield - loop to convert byte at a time with no table
3. P3DW1.MLC/LOG by David Wilkinson - unpack and TR using 16 byte table for 4 byte parm
4. Sort array of full word integers using fastest execution method.
Submitted by Don Higgins University of South Florida
5. Convert display hex characters to binary bytes.
Submitted by Mark Dixon University of Western Australia
1. P5DW1.MLC/LOG by David Wilkinson - using single TROT with truncated table to save memory
2. P5MM1.MLC/LOG by Melvyn Maltz - using TR and PACK
6. Given a byte, create 8 EBCDIC zero and one characters displaying the individual bits in the byte.
Submitted by John R. Erhman University of Illinois
7. Calculate the result of 311/99 using single precision hexadecimal floating point and display the result in decimal scientific notation with the correct number of significant digits without using
CTD conversion macro. Submitted by Don Higgins University of South Florida
1. P7EH1.MLC/LOG by John Erhman - using AW un-normalized add to align bits
2. ?
3. ?
8. Calculate and display as many significant digits of PI as possible using extended floating point instructions and display the result in decimal scientific notation using the CTD and SNAP macro
Submitted by Don Higgins University of South Florida
1. P8MM1.MLC/LOG by Melvyn Maltz - using Gregory/Leibniz/Machin arctan series
2. P8LM1.MLC/LOG by Lindy Mayfield - using Rexx solution series with all positive terms
3. ?
9. Convert DC PL8'-1234567.90" to DC C" ($1,234,567.90)'.
Submitted by Mark Dixon University of Western Australia
1. P9MM2.MLC/LOG by Melvyn Maltz - EDMK with $() improved per Benyamin Dissen
10. Code instructions required to convert any unsigned 128 bit integer value in even/odd 64 bit general purpose register pair generated by MLG or MLGR to EBCDIC decimal display character format using
as few basic instructions as possible and no library services such as z390 CTD. Note 2**127 has 39 significant digits, extended floating point only supports 34 significant digits, and packed
decimal only supports 31 significant digits. Please submit solutions using the ZMFACC macro for portability across platforms.
Submitted by Don Higgins University of South Florida
1. P10MB1.MLC/LOG by Mats Broberg at SEB using fewer instr. and single ED (Note execution of this solution on z390 requires latest z390 v1.3.08h PTF to fix overflow bug)
2. P10DSH1.MLC/LOG by Don Higgins - using about 27 instructions and no loops
3. ?
4. ?.
11. Code two routines: one to add 8 byte opcode mnemonic key and table entry address to a hash table and another routine to retrieve the address of opcode table entry given the 8 byte mnemonic as
key. To test the efficiency of the two routines a table of the 856 z390 mnemonic machine instructions and their hex opcodes is provided in a copybook here, and a model program using the ZMFACC
macro to build the table and then fetch all the opcodes 100 times is provided here. You can run the model program without change to verify it works in your environment before adding your code. It
executes 689,307 instructions in the z390 environment doing nothing in the add routine and simply returning the input key address via LR in the find routine. See the resulting log file here. The
fastest 3 solutions supporting random access will be posted.
12. Calculate the mean and standard deviation for a set of 500000 response times using a precision of .001. Assume each value may not exceed 1000 seconds- by Tony Matharu.
1. P12DSH1.MLC/LOG by Don Higgins using BFP to calc standard deviation for (1, 2, 3, 6) = 1.87
2. P12DSH2.MLC/LOG by Don Higgins using HFP to calc standard deviation for (1, 2, 3, 6) = 1.87
3. P12DSH3.MLC/LOG by Don Higgins using DFP to calc standard deviation for (1, 2, 3, 6) = 1.87 Note this solution uses new z390 proto-type millicode for missing SQXTR instruction.
13. Given a decimal number with 2 decimal places representing the total cost of one or more items and another decimal number representing the quantity, calculate the unit price with 2 decimal places
rounded half up? Problem was derived from question posted on IBM Mainframe Assembler-List by Ludmila Koganer.
1. P13SC1.MLC/LOG by Steve Comstock - using 5 packed decimal instructions
2. P13DSH1.MLC/LOG by Don Higgins - using 7 DFP decimal floating point instructions
3. ?
14. Code a macro assembler program to calculate the value of the Ackerman function a(4,1) = 65533. The Ackerman function a(m,n) is a recursively defined function. If m = 0, then a(m, n) = n+1. If m >
0 and n = 0, then a(m, n) = a(m-1,1). If m> 0 and n > 0, then a(m,n) = a(m-1,a(m,n-1)). Submitted by Don Higgins:
1. P14MW1.MLC/LOG by Martin Ward - using PD instructions to calculate solutions up to 31digits
2. P14DSH1.MLC/LOG by Don Higgins - using recursive macro code only (limited to 32 bit integers)
3. ?
15. Given a number of the form 12345678.1234567, divide it by another number of the same format, rounding to 7 digits after decimal point using packed decimal. Submitted by Ludmila Koganer,
1. P15WR1.MLC/LOG by Werner Rams using DP and SRP
2. P15DSH1.MLC/LOG by Don Higgins using DP, AP, and CP
3. ?
16. Given an input number 1 to x. Given a bit array of x bits where x is multiple of 8. (1) Code a routine to convert the input number into a bit setting in the bit array. (2) Code a routine to
display the "one" bits as clrresponding decimal numbers. Submitted by Jim Connelley.
1. P16WR1.MLC/LOG by Werner Rams using simpler faster bit table for primes from 3 to 97
2. P16DSH1.MLC/LOG by Don Higgins using SETBIT and TESTBIT macro for primes 3 to 97
3. ?
4. ?
17. Given the source character string DC CL80'LABEL OPCODE PARMS' code a transparent space compression routine to create compressed string and a decompression routine to expand the compressed string
back to original. The wining solution will optimize speed and size.
1. P17DW1.MLC/LOG by David Wilkinson compress and decompress using TRT and CRB for total of 628 instructions.
2. P17WR1.MLC/LOG by Werner Rams compresses and decompresses 3 records using CLCL to find end of duplicate spaces for total of 827 instructions.
3. ?
18. Write a benchmark program to calculate the percent performance improvement to 2 decimal places when replacing the following loop code:
LOOP DS 0H
BCTR R1,0
*** APPLICATION CODE COMMENTED OUT FOR TEST ***
LTR R1,R1
JNE LOOP
with the following optimized loop code using the new z10 compare and branch opcode CIJNE:
LOOP DS 0H
BCTR R1,0
*** APPLICATION CODE COMMENTED OUT FOR TEST ***
CIJNE R1,0,LOOP
The performance improvement in this case comes from replacing 2 instruction cycles fetching a total of 6 bytes with a single instruction cycle fetching 6 bytes. You can use whatever interval
timing method is available on your system such as TIME BIN (requires running standalone). The initial values in R1 must be set to perform enough iterations to reduce the timing error due to
interval timer precision etc. To code and unit test solution on z390 you will need the latest version v1.4.01+ with the new z10 opcode support. To run the real test, you will need an IBM z10
mainframe and updated HLASM.
1. P18DSH1.MLC/LOG - solution using new DAT.MLC interval timer display showing time of day JDBC time-stamp format down to nano-seconds, total instruction counts, and MIPS. Running z390 v1.4.01a
on Intel 2.1 Duo Core chip, the MIP rates were 8.7 and 7.3 for 15% reduction in MIP rate but there was also an 8% reduction in elapsed time in nano-seconds using the z10 compare and branch
loop with BCTR, CIJNE versus the BCTR, LTR, JNZ loop. The 2 instruction loop has lower MIP rate but faster execution time than the 3 instruction loop.
19. Write code to find the last non-blank character in an 80 byte line of text with the fewest instructions.
1. P19WR1.MLC/LOG - by Werner Rams using TRTR executing 28 instructions. Honorable mention also goes to Steve (S.R.K www.mysrk.com/) for email suggesting TRTR before Werner submitted complete
program the same day.
20. Write integer random number generator and test program to determine the longest sequence without duplication that it produces for a given seed number. The longest sequence of non-repeating
pseudo-random numbers wins.
1. P20WR1 - by Werner Rams using published random number reference (runs for hours)
21. Code a binary search and test it, by searching in turn, for all of the elements in a 20 entry sorted integer array containing the values (1, 3, 7, 9, 13, 18, 19, 20, 25, 27, 30, 31, 32, 40, 41,
45, 47, 50, 65, 80) plus the following values not in the array: 0, 28, and 99. Submitted by David Wilkinson.
1. P21DW1.MLC/LOG - by David Wilkinson using 1649 instructions
2. ?
3. ?
22. Code fastest instruction sequence to count bits in an arbitrary string of bytes using currently available z/Architecture instructions prior to new instruction coming with z196 which is estimated
to be 5 times faster.
Many thanks to David Bond for running these 4 solutions on a real z10-EC machine with the following results:
1) Glen Herrmannsfeldt 0.127 microseconds
2) Fritz Schneider 0.254 microseconds
3) Don Higgins 0.452 microseconds
4) Melvyn Maltz 8.102 microseconds
This just proves that pipelining and register versus main memory instruction and data accesses really do matter for maximum performance on machines with caches etc.
On 08/06/10 a test version of z390.jar with new POPCNT instruction was added based on SHARE Presentation on 08/04/10 by Don Greiner here:
Included with this z390 test version are 3 test programs:
1. TESTINS1.MLC - test assembly of all opcodes including POPCNT
2. TESTINS4.MLC - regression test POPCNT instruction (first of z196 opcodes)
3. P22DSH2.MLC/LOG - solution to problem #22 using POPCNT which executes 282 instructions including support for odd bytes at start and end. The test files including updated java sources are here:
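For readers who want to sanity-check a few of the expected values quoted in the problems above before writing assembler, the following Python fragment (not a contest entry - entries must be coded in assembler using the ZMFACC macro) reproduces the expected output of problems 6, 12, 13, and 22; the sample inputs are illustrative only.

from decimal import Decimal, ROUND_HALF_UP
from statistics import pstdev

# Problem 6: display one byte as eight '0'/'1' characters
print(format(0xA5, "08b"))                   # 10100101

# Problem 12: population standard deviation of (1, 2, 3, 6)
print(round(pstdev([1, 2, 3, 6]), 2))        # 1.87

# Problem 13: unit price = total cost / quantity, 2 decimals, rounded half up
total, qty = Decimal("10.00"), Decimal("3")
print((total / qty).quantize(Decimal("0.01"), rounding=ROUND_HALF_UP))  # 3.33

# Problem 22: count the one-bits in an arbitrary byte string
data = b"z390 coding contest"
print(sum(bin(b).count("1") for b in data))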
• Benchmark Timing - 18
• Boolean logic
• Branch logic
• Comparisons
• Compression and de-compression - 17
• Converting Data
□ Conversion to display characters - 3, 6, 9, 10
□ Conversion of display characters to binary - 5
• Encryption and decryption
• File access methods
• Floating point calculations
□ Binary Floating Point (BFP)
□ Calculate constants - 12 (pi, e, golden ratio, etc.)
□ Decimal Floating Point (DFP) -13
□ Extreme precision arithmetic
□ Hexadecimal Floating Point (HFP)- 7
□ Rounding - 13
□ Statistics - 8 (Variance, Standard Deviation, Present Value, Interest rate, etc.)
□ Trig functions
• Heuristics
• Integers (32, 64, 128 bit)
□ Calculate series: Prime numbers, Harmonic numbers, Bernoulli Numbers, Fibonacci, Perfect Numbers
□ Combinations
□ Date math
□ Factoring
• Manipulating data structures (add, delete, change, and find entries in lists, tables, stacks, queues, etc.)
□ Merging
□ Searching - 11
□ Sorting - 4
□ String functions - 19
□ Swapping - 1, 2
• Packed Decimal - 13, 15
• Random numbers
• Recursive functions - 14
• Totally useless just for fun
How to submit a new problem
To submit a new problem for the contest, send a brief description of the problem along with your name and alma mater to ZMFACC Submit Problem. Remember the problem must be solvable using less than
100 problem state mainframe assembler instructions.
How to submit a solution to a problem
To submit a solution for a problem attach the program code in an ASCII text file format and send it along with your name and optional alma mater to ZMFACC Submit Solution. All problems must be
solvable using 100 or less mainframe problem state instructions. Solutions must be submitted in the form of a single ASCII text type source program file which can be assembled, linked, and executed
using the latest version of z390 on Windows or Linux for evaluation by contest judges. Any z/Architecture problem state instruction omissions or bugs should be reported via the z390 RPI Request Form.
The 4 best solutions in the opinion of the judges (currently me) will be posted on this contest web page. All solutions submitted will be posted on the contest group email for discussion by the
members. Originality and timing count! The first 4 different solutions submitted will be the winners unless different solutions submitted later are deemed by the judges to warrant ranking in the top
4. Unless otherwise stated, problem goals in order of importance are execution speed, minimum memory requirements, and best coding practices.
Download ZMFACC macro for your OS environment
Each solution must use ZMFACC macro calls to define the start and end of the code, input, and output sections. Click on the link for any of the ranked solutions already submitted for examples such as
P6RW1.MLC. The ZMFACC macro is used to isolate the solution code from the specific operating environment used to assemble, link, and execute it. The default target operating system environment for
the ZMFACC macro is z390. The macro now supports the following additional operating system environments by setting RUNSYS on the first call to ZMFACC as follows:
• Download ZMFACC macro source generalized version supporting the following environments:
□ RUNSYS=390 - default z390 generates ASCII output on log file via WTO and SNAP.
□ RUNSYS=MVS - generates EBCDIC output on SYSPRINT via WTO and SNAP.
□ RUNSYS=ZOS - generates EBCDIC output on SYSPRINT via WTO and SNAP.
□ RUNSYS=CMS - generates EBCDIC output via WRTERM and LINEDIT.
□ RUNSYS=VSE - generate EBCDIC output via WTO and PDUMP.
If the RUNSYS= keyword is not specified on the first ZMFACC macro call, then the target operating system can be specified by externally setting global &SYSPARM value via execution options.
For example, using z390 assembler, you can add options "SYSPARM(RUNSYS=MVS)" and SYSMAC(mvs\maclib) to override the default 390 option and generate code for execution on Hercules MVS 3.8
using the MVS 3.8 macro library.
• For more information on the different operating system environments for assembler see the following links:
□ z390 - open source Java emulator running under J2SE for Windows and Linux
□ Hercules - open source C emulator for Windows and Linux
□ z/OS - IBM licensed OS for System z (Use RUNSYS=ZOS uses same code as MVS)
□ zVM - IBM licensed OS to run other guest OS's and CMS (use RUNSYS=CMS)
□ VSE - IBM licensed Virtual Storage Extended OS (use RUNSYS=VSE - not tested yet)
□ z/Linux - IBM licensed Linux OS for System z (Since J2SE and z390 run on Ubuntu Linux for Intel PC's, I assume z390 could be run on z/Linux mainframe but no testing started yet? Note z390
Java code is aware of Windows versus Linux environment and makes some changes such as file separator, system utilities, and system commands, etc.)
Participants should use the ZMFACC macro to assemble, link, and execute solutions in their own environment before submitting them. Submitted solutions should then be portable to all the other
environments with the exception of solutions using newer problem state instructions or addressing modes not currently supported in some hardware and software environments. If there is not a
customized ZMFACC macro yet for your environment, please download the current ZMFACC macro, customize it to detect and run in your environment, and submit it to ZMFACC Submit Macro along with your
name and the target environment it has been tested on for use by other participants. Thanks!
Join the contest email group for discussion of problems and solutions
This email group is for the use of participants who wish to discuss problems and solutions. All email posted to this group is reviewed by moderators to verify it is related to the contest and is no
Volunteer to help part time as a moderator or judge
Send an email to ZMFACC Volunteer with your name and what you would like to volunteer for. Several backup moderators for the email group would be helpful to check for pending posts and keep the mail
flowing. A few contest solution judges willing to evaluate the relative merits of submitted solutions would also be helpful.
What's New Update Log
• 08/06/10
□ P22DSH2.MLC/LOG by Don Higgins using test version of z390 POPCNT instruction in loop with LG, POPCNT, MSGR, SRLG, AR, BXLE for total of 282 instructions for 208 character string. This test
version of z390 is based on based on SHARE Presentation on 08/04/10 by Don Greiner here:
The test version with 3 test programs is here:
For more on problem #22 go here.
• 07/30/10
□ New problem #22 Code fastest instruction sequence to count bits in an arbitrary string of bytes using currently available z/Architecture instructions prior to new instruction coming with z196
which is estimated to be 5 times faster.
• 09/21/08
□ P11DW1.MLC/LOG by David Wilkinson using TR to convert first byte to 0-26 and hash table size of 35,393. 952,200 total instructions.
• 08/27/08
□ P5DW1.MLC/LOG by David Wilkinson - using single TROT with truncated table to save memory
• 08/14/08
• 08/11/08
□ P17DW1.MLC/LOG by David Wilkinson compress and decompress using TRT and CRB for total of 628 instructions (1st place)
□ Problem #21 binary search submitted by David Wilkinson.
• 08/08/08
□ P3DW1.MLC/LOG by David Wilkinson - unpack and TR using 16 byte table for 4 byte parm
□ P4DW1.MLC/LOG by David Wilkinson - improved Quicksort using 1057 instructions (2nd place)
• 06/09/08
□ P7EH1.MLC/LOG by John Erhman - has been updated to remove work-around for AW since the latest z390 PTF v1.4.01f now has support for AW and all the HFP unnormalized floating point
□ An email quiz question was posted about the most efficient way to test if the left most bit in any mask is on. The solution is to shift the mask 1 bit right and compare it to the selected
bits AND'd with mask. If the selected bits are high then the high bit must be on.
• 04/11/08
□ P20WR1 - by Werner Rams using published random number reference (runs for hours)
• 03/28/08
□ P19WR1.MLC/LOG - by Werner Rams using TRTR executing 28 instructions. Note honorable mention goes to Steve (S.R.K www.mysrk.com/) for email suggesting TRTR before Werner submitted complete
program the same day.
□ New problem #20 - Write integer random number generator and test program to determine the longest sequence without duplication that it produces for a given seed number. The longest sequence
of non-repeating pseudo-random numbers wins.
• 03/21/08
□ P18DSH1.MLC/LOG - solution using new DAT.MLC interval timer display showing time of day JDBC time-stamp format down to nano-seconds, total instruction counts, and MIPS. Running z390 v1.4.01a
on Intel 2.1 Duo Core chip, the MIP rates were 8.7 and 7.3 for 15% reduction in MIP rate but there was also an 8% reduction in elapsed time in nano-seconds using the z10 compare and branch
loop with BCTR, CIJNE versus the BCTR, LTR, JNZ loop. The 2 instruction loop has lower MIP rate but faster execution time than the 3 instruction loop.
□ Add new problem #19 to find last non-blank character in a line of text with fewest instructions.
• 03/09/08
□ Correct my error on the number of instructions executed for solution to problem 17 by Werner Rams. The correct number is 827.
• 03/07/08
□ P17WR1.MLC/LOG by Werner Rams compresses and decompresses 3 records using CLCL to find end of duplicate characters for total of 827 instructions.
□ Add new problem #18 to calculate performance gain using new z10 compare and branch instructions.
• 02/23/08
□ P16WR1.MLC/LOG by Werner Rams using simpler faster bit table for primes from 3 to 97
• 02/22/08
□ P12DSH2.MLC/LOG by Don Higgins using HFP to calc standard. deviation for (1, 2, 3, 6) = 1,87
□ P12DSH3.MLC/LOG by Don Higgins using DFP to calc standard. deviation for (1, 2, 3, 6) = 1,87
□ P15WR1.MLC/LOG by Werner Rams using DP and SRP to round 15 digit PD
□ P15DSH1.MLC/LOG by Don Higgins using DP, AP, and CP to round 15 digit PD
□ P16DSH1.MLC/LOG by Don Higgins using SETBIT and TESTBIT macro for primes 3 to 97
□ Add new problem #17 to calculate compressing and decompression routines
• 02/05/08
□ P11WR1.MLC/LOG by Werner Rams using linked list to handle duplicates requiring approximately only 12 * number of table entries for hash tables. 1876139 total instr.
• 01/31/08
□ Correct comments on problem #14 solution by Martin Ward. The solution supports up to 31 digits.
□ Add Current Problem Category Index thanks to suggestions from several participants.
• 01/29/08
□ Add new problem #15 to calculate 15 digit packed decimal rounded to 7 decimal places.
□ Post question about potential usefulness of problem techniques and suggested categories for additional problems.
□ Add new problem #16 to store and fetch numbers from bit array by Jim Connelley.
□ P14MW1.MLC/LOG by Martin Ward - using PD instructions to calculate solutions up to 15 digits
□ P14DSH1.MLC/LOG by Don Higgins - using recursive macro code only (limited to 32 bit integers)
• 01/27/08
□ Add new problem #14 to calculate the Ackerman recursive function a(4,1) = 65533.
• 01/26/08
□ Correct problem #13 statement to clarify total cost is for one or more items and quantify is the total number of items to be divided into total cost to calculate unit price.
□ P13DSH1.MLC/LOG by Don Higgins - using 7 DFP decimal floating point instructions
• 01/25/08
• 01/24/08
□ New problem 13 - Given a decimal number with 2 decimal places representing the total cost of an item and another decimal number representing the quantify, calculate the unit price with 2
decimal places rounded half up? Problem was derived from question posted on IBM Mainframe Assembler-List by Ludmila Koganer. I believe that using Decimal Floating Point may be the most
straight forward using the fewest instructions, but may not be the most efficient.
□ P11DSH2.MLC/LOG by Don Higgins using P11FIND2.MLC/LOG to find hash table with max of 2 duplicate key searches per entry (table size found 3473 which has density of 25% for 856 given keys).
This solution saves 180k storage for 10% increase in instruction count.)
• 01/18/08
• 01/14/08
□ P10MB1.MLC/LOG by Mats Broberg at SEB using fewer instructions and single ED (Note execution of this solution on z390 requires latest z390 v1.3.08h PTF to fix overflow bug)
□ P12DSH1.MLC/LOG by Don Higgins -using BFP to calc standard. deviation for (1, 2, 3, 6) = 1,87
□ ?
• 01/06/08
□ P4APN2.MLC/LOG by Alfred Nykolya - counts sort using 853 instructions
• 01/01/08
□ P10DSH1.MLC/LOG -by Don Higgins - using about 27 instructions and no loops
□ New problem #11 - Code hash table add and find routines for fast access by Don Higgins.
□ New problem #12 - Calculate the mean and standard deviation for a set of 500000 response times using a precision of .001. Assume each value may not exceed 1000 seconds- by Tony Matharu.
• 12/30/07
□ P3LKM1.MLC/LOG by Lindy Mayfield - uses loop to convert 4 bytes at a time to hex
• 12/27/07
□ P8LM1.MLC/LOG by Lindy Mayfield - using Rexx model solution series with all positive terms which converges to 33 significant digits in 49 iterations. This solution modified for z390 using CTD
library services to display trial values of Pi and the error from known value. This solution also uses inline macros LX and STX to simplify loading and storing extended floating point values.
□ New problem #10 - Code instructions required to convert any unsigned 128 bit integer value in even/odd 64 bit general purpose register pair generated by MLG or MLGR instruction to EBCDIC
decimal display character format using as few basic instructions as possible and no library services such as z390 CTD. Note 2**127 has 39 significant digits, extended floating point only
supports 34 significant digits, and packed decimal only supports 31 significant digits. Please submit solutions using the ZMFACC macro for portability across platforms. Submitted by Don
Higgins University of South Florida
□ P4RJ1.MLC/LOG by Mats Broberg, Roland Johansson, and SEB Sweden - improved version of Quicksort using 685 instructions.
• 12/23/07
□ P8MM1.MLC/LOG by Melvyn Maltz - using Gregory/Leibniz/Machin arctan series with alternating signs which converges to 33 significant digits in 7 iterations. This solution modified for z390
using CTD library services to display trial values of Pi and the error from known value.
□ Update ZMFACC macro to include all 16 floating point register EQU's indicating pairs
□ P4RAFA2.MLC/LOG by Rafa Pereira - improved qucksort using 1380 instructions
• 12/21/07
□ Update ZMFACC macro at 8:00 EST to also display RUNSYS=??? at execution
□ P7EH1.MLC/LOG by John Erhman - using AW un-normalized add to align bits (See addition of alternate path work around for z390 AW instruction bug (RPI 767) by DSH plus display of intermediate
EH and DH calculated values via CTD for verification.)
• 12/19/07
□ Update ZMFACC macro at 14:00 EST for Chris Langford and Rafa Pereira changes to correct MVS/ZOS SNAP headings and areas to be dumped, and truncate text lines to 72.
• 12/18/07
□ New generalized version of ZMFACC for z390, MVS, z/OS, CMS, and VSE
Thanks to Chris Langford for CMS version and Rafa Pereira for MVS version
Updated again at 5 PM EST as follows:
☆ Fixes for restrictions using IFOX00 per Rafa Pereira
☆ Fixes to re-enable SYSPARM override to set environment externally, fixes for VSE to set base and save area and exit via EOJ macro call, and change target system keyword name to RUNSYS=
per Chris Langford
• 12/17/07
□ P4RAFA1.MLC/LOG by Rafa Pereira - quicksort of 20 elements using 1659 instr.
• 12/16/07
□ Rafa Pereira submitted version of ZMFACC macro tested on Hercules
□ New generalized version of ZMFACC for z390 and MVS compatible environments
☆ Updated by Rafa Pereira for use with MVS 3.8 IFOX00 assembler restrictions
☆ Lower case characters cannot be used in labels.
☆ Underscore character "_" cannot be used in labels.
☆ Labels cannot be more than 8 chars in length, including the prefixing dot if they have one.
☆ Symbols must be declared.
☆ Symbols must appear between apostrophes in AIF statements: AIF ('&RUNSYS' EQ ...)
□ P9MM2.MLC/LOG by Melvyn Maltz - EDMK with $() improved per Benyamin Dissen
• 12/15/07
□ P9MM1.MLC/LOG by Melvyn Maltz - EDMK followed by MVI's for $ and ()
□ P4AN1.MLC/LOG by Alfred Nykolyn -shell sort sorts 20 elements using 1532 instr.
□ P1RAFA1.MLC/LOG by Rafa Pereira - swap 2 fields with 2 MVC's
• 12/14/07
□ P6BR1.MLC/LOG by Bob Rutledge - 6 register instruction loop and single store
□ Mark Dixon added #9 - convert packed decimal to display characters with $ and credit
• 12/12/07
• 12/11/07
• 12/10/07
□ P1C1.MLC/LOG by Chris - 3 XC instructions
□ P3MM1.MLC/LOG by Melvyn Maltz - single TROT (CC3 retry added per Michael Poil)
|
{"url":"http://z390.sourceforge.net/z390_Mainframe_Assemble_Coding_Contest.htm","timestamp":"2014-04-19T04:38:45Z","content_type":null,"content_length":"96478","record_id":"<urn:uuid:1312adc0-96e2-475b-95af-5c8e34792b1d>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00486-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Characterising ergodicity of continuous maps
Hello all.
Suppose $X$ is a Polish space, $\mu$ is a Borel probability measure on $X$, and $T:X \to X$ is a continuous $\mu$-preserving map which is not ergodic.
Does there necessarily exist a Borel set $A \subset X$ such that
• $\mu(A) \in (0,1)$;
• $\mu(A \ \triangle \ T^{-1}(A)) = 0$;
• $A$ has non-empty interior?
What about if we replace the third point with the stronger requirement that $A$ is open?
Many thanks, Julian.
ds.dynamical-systems ergodic-theory
Does $T$ preserve the measure? – Joel Moreira Feb 24 '13 at 1:07
Good point! Let's assume it does. (I'll now edit the question accordingly.) – Julian Newman Feb 24 '13 at 1:36
1 Answer
Let $T \colon X \to X$ be a minimal transformation of a compact metric space which is not uniquely ergodic, let $\mu$ be a non-ergodic $T$-invariant measure on $X$, and let $A$ be a set
with nonempty interior such that $\mu(A \triangle T^{-1}A)=0$. I claim that necessarily $\mu(A)=1$, contradicting the above conjecture. (Some constructions of transformations with the
above combination of properties may be found for example in the textbook Ergodic Theory on Compact Spaces by Denker, Grillenberger and Sigmund, or in John Oxtoby's classic 1952 article
Ergodic sets.)
Let $U \subseteq A$ be open and nonempty. Since $T$ is minimal we have $\bigcup_{n=0}^\infty T^{-n}U=X$, and indeed even $\bigcup_{n=0}^N T^{-n}U=X$ for some integer $N$ since $X$ is
compact. In particular $\bigcup_{n=0}^N T^{-n}A=X$. Let us write $$\bigcup_{n=0}^N T^{-n}A = A \cup \bigcup_{n=1}^N \left(\left( T^{-n}A\right)\setminus \bigcup_{k=0}^{n-1} T^{-k}A\right) =A \cup \bigcup_{n=1}^N B_n,$$ say, which is a disjoint union. We would like to show that this union has measure identical to that of $A$. For each $n$ we have $$\mu(B_n)=\mu\left(T^{-n}A
\setminus \bigcup_{k=0}^{n-1} T^{-k}A\right)\leq \mu\left(T^{-n}A \setminus T^{-(n-1)}A\right)=\mu\left(T^{-1}A \setminus A\right)=0$$ by invariance and the hypothesis $\mu(A \triangle T^
{-1}A)=0$. It follows that $$\mu(A)=\mu\left(\bigcup_{n=0}^N T^{-n}A \right)=\mu(X)=1$$ so the desired situation can not occur.
Thank you. This is most helpful. Do you know any conditions on $X$ under which, if $\mu$ is a strictly positive probability measure on $X$, then every minimal $\mu$-preserving
continuous transformation is ergodic? (E.g. is this true for Euclidean space $X=\mathbb{R}^n$?) – Julian Newman Feb 24 '13 at 2:44
@Julian: This is hopeless. On any reasonable space, there will be transformations that are minimal, but not strictly ergodic. – Anthony Quas Feb 24 '13 at 3:26
@Julian: this is equivalent to asking for a condition on $X$ such that every minimal transformation on $X$ is uniquely ergodic, i.e. has only one invariant measure. (If a
transformation has two distinct invariant measures then a strict linear combination of the two is never ergodic.) Such conditions do exist: finite spaces $X$ have this property, as
does the circle (I think) but as Anthony says this is a severely restrictive requirement. The broader stroke of your question seems to be whether ergodicity can be easily characterised
using only topological concepts. The answer to this is "No". – Ian Morris Feb 24 '13 at 12:08
@Ian and Anthony: Just to be clear, I did not say that I require every invariant probability measure of a minimal transformation to be ergodic - I just required that every strictly
positive invariant probability measure of a minimal transformation had to be ergodic. (By strictly positive, I mean that its support is the whole of $X$). Is this still equivalent to
requiring that every minimal transformation is uniquely ergodic? (And in the case $X=\mathbb{R}^n$, if the requirement still is not satisfied, what about if we weaken the requirement
by restricting to, say, diffeomorphisms on $X$?) – Julian Newman Feb 24 '13 at 14:38
@Julian: Every invariant probability measure of a minimal transformation is fully supported, because otherwise its support would be a nonempty closed invariant proper subset,
contradicting minimality. So the two statements are equivalent. – Ian Morris Feb 24 '13 at 16:52
|
{"url":"http://mathoverflow.net/questions/122764/characterising-ergodicity-of-continuous-maps?sort=oldest","timestamp":"2014-04-20T16:38:25Z","content_type":null,"content_length":"61223","record_id":"<urn:uuid:790f3177-a498-4785-9604-1c3aa635bf30>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00420-ip-10-147-4-33.ec2.internal.warc.gz"}
|
On the representation theory of Galois and atomic topoi
, 2005
"... In this paper, we discuss various “general nonsense” aspects of the geometry of semi-graphs of profinite groups [cf. [Mzk3], Appendix], by applying the language of anabelioids introduced in
[Mzk4]. After proving certain basic properties concerning various commensurators associated to a semi-graph of ..."
Cited by 1 (1 self)
In this paper, we discuss various “general nonsense” aspects of the geometry of semi-graphs of profinite groups [cf. [Mzk3], Appendix], by applying the language of anabelioids introduced in [Mzk4].
After proving certain basic properties concerning various commensurators associated to a semi-graph of anabelioids, we show that the geometry of a semi-graph of anabelioids may be recovered from the
category-theoretic structure of certain naturally associated categories — e.g., “temperoids” [in essence, the analogue of a Galois category for the “tempered fundamental groups” of [André]] and
“categories of localizations”. Finally, we apply these techniques to obtain certain results in the absolute anabelian geometry [cf. [Mzk3], [Mzk8]] of tempered fundamental groups associated to
hyperbolic curves over p-adic local fields.
, 801
"... Abstract. A locally connected topos is a Galois topos if the Galois objects generate the topos. We show that the full subcategory of Galois objects in any connected locally connected topos is an
inversely 2-filtered 2-category, and as an application of the construction of 2-filtered bilimits of topo ..."
Abstract. A locally connected topos is a Galois topos if the Galois objects generate the topos. We show that the full subcategory of Galois objects in any connected locally connected topos is an
inversely 2-filtered 2-category, and as an application of the construction of 2-filtered bilimits of topoi, we show that every Galois topos has a point. introduction. Galois topoi (definition 1.5)
arise in Grothendieck's Galois theory of locally connected topoi. They are a special kind of atomic topoi. It is well known that atomic topoi may be pointless [6], however, in this paper we show
that any Galois topos has points. We show how the full subcategory of Galois objects (definition 1.2) in any connected locally connected topos E has an structure of 2-filtered 2-category (in the
sense of [3]). Then we show that the assignment, to each Galois object A, of the category DA of connected locally constant objects trivialized by
, 2004
"... � � � In this paper, we discuss various “general nonsense ” aspects of the geometry of semi-graphs of profinite groups [cf. [Mzk3], Appendix], by applying the language of anabelioids introduced
in [Mzk16]. After proving certain basic properties concerning various commensurators associated to a semi ..."
In this paper, we discuss various "general nonsense" aspects of the geometry of semi-graphs of profinite groups [cf. [Mzk3], Appendix], by applying the language of anabelioids introduced in
[Mzk16]. After proving certain basic properties concerning various commensurators associated to a semi-graph of anabelioids, we show that the geometry of a semi-graph of anabelioids may be recovered
from the category-theoretic structure of certain naturally associated categories — e.g., “temperoids ” [in essence, the analogue of a Galois category for the “tempered fundamental groups ” of
[André]] and “categories of localizations”. Finally, we apply these techniques to obtain certain results in the absolute anabelian geometry [cf. [Mzk3], [Mzk8]] of tempered fundamental groups
associated to hyperbolic curves over p-adic local fields.
|
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=10423857","timestamp":"2014-04-16T05:41:15Z","content_type":null,"content_length":"17674","record_id":"<urn:uuid:0dcc9602-a12f-4f67-b306-2836e77ce08b>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00315-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Dickinson, TX Calculus Tutor
Find a Dickinson, TX Calculus Tutor
...I really enjoy it and I always receive great feedback from my clients. I consider my client's grade as if it were my own grade, and I will do whatever it takes to make sure you get it, and at
the same time make sure our sessions are easy and enjoyable. I can tutor almost any subject, but my spe...
38 Subjects: including calculus, English, reading, writing
...I play on the Rice University club team as well as the intramural teams and have been playing soccer all of my life. I am young and athletic so it is easier for me to connect with my students.
I am a Senior Chemical Engineering student at Rice University.
22 Subjects: including calculus, chemistry, physics, geometry
...I've helped countless students of all ages with math of all levels. I've also taught SAT and ACT prep. I've tutored students in English, reading, and writing.
34 Subjects: including calculus, chemistry, reading, English
...At my previous employer, Matlab was a mission-critical app, and its robustness was therefore subject to scrutiny, hence the testing. I have experience with Matlab applied to the solution of
systems of differential equations, control systems modeling, and physical dynamics modeling. I have low-level experience with tuning Matlab code for faster execution.
10 Subjects: including calculus, computer science, differential equations, computer programming
...I recently received my Master's degree in Medical Sciences at the University of North Texas Health Science Center and before that graduated with magna cum laude honors from the University of
Houston in Biology. I have been an official tutor at the University of Houston for over 3 years and have ...
29 Subjects: including calculus, English, writing, physics
Related Dickinson, TX Tutors
Dickinson, TX Accounting Tutors
Dickinson, TX ACT Tutors
Dickinson, TX Algebra Tutors
Dickinson, TX Algebra 2 Tutors
Dickinson, TX Calculus Tutors
Dickinson, TX Geometry Tutors
Dickinson, TX Math Tutors
Dickinson, TX Prealgebra Tutors
Dickinson, TX Precalculus Tutors
Dickinson, TX SAT Tutors
Dickinson, TX SAT Math Tutors
Dickinson, TX Science Tutors
Dickinson, TX Statistics Tutors
Dickinson, TX Trigonometry Tutors
Nearby Cities With calculus Tutor
Alvin, TX calculus Tutors
Bacliff calculus Tutors
Beach City, TX calculus Tutors
El Lago, TX calculus Tutors
Hitchcock, TX calculus Tutors
Kemah calculus Tutors
La Marque calculus Tutors
League City calculus Tutors
Manvel, TX calculus Tutors
Nassau Bay, TX calculus Tutors
Santa Fe, TX calculus Tutors
Seabrook, TX calculus Tutors
Taylor Lake Village, TX calculus Tutors
Texas City calculus Tutors
Webster, TX calculus Tutors
|
{"url":"http://www.purplemath.com/dickinson_tx_calculus_tutors.php","timestamp":"2014-04-20T02:01:52Z","content_type":null,"content_length":"24021","record_id":"<urn:uuid:41478cd3-68d7-4a7a-9d93-690f433e88d4>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00384-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Restricted Choice
(July 25, 2005: as a result of an e-mail commenting on legality of methods, I have clarified when you look at your cards when using the "pinochle rule" described below)
Do you remember the first time that you were introduced to the concept of "restricted choice" in a bridge setting? Did the explanation make complete sense, or did it seem that perhaps something was
missing (for instance, the rationale for the odds figure discussed)? Or perhaps the explanation just did not make any sense at all?
Following is an explanation of "restricted choice" outside of a bridge setting, which can be used to demonstrate that the concept of "restricted choice" is completely valid. Newcomers to bridge
should be receptive to the concept after working through the following scenario. Bridge teachers perhaps can use this example in a lesson on card play.
Monty Hall example
Remember the Monty Hall "Let's Make a Deal" TV show? A contestant is told that a substantial prize (a new car?) is behind one of three doors (named A, B, and C), and that small insignificant prizes
(or nothing) are behind each of the other two doors. The contestant would choose a door, and then, after elaborate fanfare, Monty would open one of the other two doors to reveal an insignificant
prize (a bar of soap) or nothing. Monty then would give the contestant a chance to change his/her mind about which door to choose: Stay with the original choice or switch? The studio audience would
be screaming "Stay!" or "Switch" and the contestant would ultimately ...
What are the chances that the big prize is behind the other door? Sure thing? 50%? Some other number?
Did you ever receive the advice to go with your first impressions? For example, when answering multiple choice questions on some kind of test, did you ever go back and change some answers? Or did you
go with the advice that some gave, "stick with your first answer"?
Well, research has indicated that students who go back and change answers IMPROVE their scores (when or why is not addressed here), and it is TWO to ONE that the big prize is behind the other door!
The answer is SWITCH!!!
This conclusion can be reached in at least two ways. The first explanation does not really address the concept of restricted choice, but it is completely valid.
Explanation A: Careful Reasoning
When a contestant initially chooses a door, he/she has a probability of 1/3 of being correct; each door has a probability of 1/3 that the big prize is behind that door. But when Monty opens a door
which has nothing behind it, the probability of the prize being behind that door is known to be ZERO. Now, which makes more sense? Should the 1/3 probability that was behind the door that was just opened
"split" between the other 2 doors, or should it all be assigned to the other door which the contestant did not choose? I hope that you realize that the contestant's probability of having originally
chosen the correct door is STILL 1/3 (Monty's opening of a door does NOT affect this), and that therefore, the probability of the big prize being behind the other door is 2/3. SWITCH!!!
Explanation B: The concept of "Restricted Choice"
To simplify the analysis, we can assume that the contestant always chooses one door (and the prize could be assigned to any of 3 doors) or we can assume that the prize is always behind one door (and
the contestant randomly chooses one of the 3 doors). You could analyze all 3x3=9 cases, but the end result will be the same. For convenience, assume that the contestant always chooses Door B, and
that the prize can be behind either A, B, or C.
Three situations are possible:
1. Situation 1: The prize is behind Door A. The only door that Monty Hall can open that has nothing behind it is Door C. His choice is RESTRICTED to Door C.
2. Situation 2: The prize is behind Door B. Monty Hall can open EITHER Door A or Door C.
3. Situation 3: The prize is behind Door C. Monty Hall is RESTRICTED to opening Door A.
In 2 out of 3 situations, the big prize is behind the other door. But wait, you say. In Situation 2, Monty could have opened either door. Doesn't that change the odds?
Well, the answer is NO. Let's assume that the scenario is repeated 300 times. Each time, the contestant chooses Door B, and the Prize is behind Door A 100 times, behind B 100 times, and behind C 100
times. Further, assume that Monty randomly chooses between A and C when the contestant has correctly picked Door B.
1. 100 cases: Prize behind Door A, and contestant chooses Door B. Monty opens Door C to reveal nothing. Contestant's correct decision is to SWITCH. Monty's decision to open Door C was a RESTRICTED choice.
2. 100 cases: Prize behind Door C, and contestant chooses Door B. Monty opens Door A to reveal nothing. Contestant's correct decision is to SWITCH. Same RESTRICTED choice decision.
3. 100 cases: Prize behind Door B, and contestant should NOT switch. In half of these cases (50), Monty opens Door A, and opens Door C in the other 50 cases. Monty's choice as to which door to open
is NOT restricted.
So, if the contestant chooses Door B, Monty will open Door A 150 times (100 times restricted and 50 times NOT restricted), and it will be correct to SWITCH in 100 of these cases. Monty will open Door
C for the other 150 times, and it will be correct to SWITCH in 100 of these cases. Thus the ODDS are 2:1 in favor of switching.
The principle of RESTRICTED choice here is this: If Monty opens Door A to reveal nothing, the odds are 2 to 1 that he opened Door A because he HAD to; the prize is behind the other door. Same
analysis if Monty opens Door C to reveal nothing.
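If you would rather check the arithmetic by brute force, a short simulation settles it quickly. Below is a minimal sketch in Python (the trial count is arbitrary), built on the same three assumptions spelled out at the end of this article: Monty knows where the prize is, must open a losing door, and must offer the switch.

    import random

    def play_once(switch):
        """One round of the game; returns True if the contestant wins the prize."""
        doors = ["A", "B", "C"]
        prize = random.choice(doors)
        pick = random.choice(doors)
        # Monty opens a door that is neither the contestant's pick nor the prize.
        monty = random.choice([d for d in doors if d != pick and d != prize])
        if switch:
            pick = next(d for d in doors if d != pick and d != monty)
        return pick == prize

    trials = 300_000
    stay_wins = sum(play_once(switch=False) for _ in range(trials))
    switch_wins = sum(play_once(switch=True) for _ in range(trials))
    print("stay  :", stay_wins / trials)    # about 1/3
    print("switch:", switch_wins / trials)  # about 2/3

Switching wins roughly twice as often as staying, which is exactly the 2 to 1 figure derived above.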
Restricted Choice in Bridge
OK, how can this concept apply to contract bridge? The concept typically (but not always) applies to the situation where an opponent plays a particular card, when the opponent MIGHT have played a
different card. For example, assume your trump suit is AK654 opposite 10987 in dummy. The bidding and opening lead are unrevealing. Upon gaining the lead, you cross to dummy, and lead the 10. Your
RHO plays the 2, you play the Ace, and LHO plays the Queen. You cross to dummy to lead another trump, and RHO plays the 3. Should you finesse or play for the drop?
Beginners often assume that this is a guess; RHO either has or does not have the Jack. However, the odds strongly favor taking the finesse. Now an incorrect analysis sometimes goes like this. Ignoring
all other suits and cards played, there are 12 "slots" available for the Jack in the hand of LHO, and there are 11 "slots" available for the Jack in the hand of RHO. These are about the same, so it
seems just slightly more likely that LHO has the Jack (about 50-50 whether to play for the Jack to drop). However, this analysis ignores the fact that LHO could just as well have played the Jack on the
first round, in which case the question would be who has the Queen. It turns out that the likelihood of LHO holding EITHER a stiff Jack or a stiff Queen is approximately twice that of holding QJ tight.
The correct assumption for declarer to make is that LHO played the Queen (or Jack) because he HAD to (RESTRICTED CHOICE).
For the record, the correct odds are (ignoring any inferences from the bidding, lead, or early play):
1. Above scenario, missing 4 cards, QJ and 2 small cards; odds are 11:6 in favor of the finesse (not quite 2:1 odds)
2. Missing 5 cards (e.g., AK765 opposite 1098), QJ and 3 small cards; odds are 10:6 or 5:3 in favor of the finesse
3. Missing 6 cards, QJ and 4 small cards (e.g. AK76 opposite 1098); odds are 9:6 or 3:2 in favor of the finesse (this ignores the issue of misleading falsecards)
4. Missing 7 or more cards. Although the odds could be computed, they likely would not be accurate because of signaling issues and/or inferences from the bidding
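These figures can be reproduced by straight counting under the same simplifying assumptions: every lie of the 26 unseen cards is equally likely, a defender with a stiff honor has no choice, a defender holding Q-x never falsecards the honor, and a defender with QJ tight picks either honor half the time. The short sketch below does the count in Python; the function name and layout are purely illustrative.

    from math import comb

    def restricted_choice_odds(missing):
        """Odds that the finesse wins versus playing for the drop, after LHO
        plays an honor (Q or J) on the first round and RHO follows low.
        `missing` is the number of cards missing in the suit (QJ plus smalls)."""
        other = 26 - missing                 # unseen cards outside this suit
        stiff_honor = comb(other, 12)        # LHO holds exactly the played honor
        qj_tight = comb(other, 11) / 2       # LHO holds QJ tight, chose this honor
        return stiff_honor / qj_tight

    for m in (4, 5, 6):
        print(f"missing {m} cards: finesse is {restricted_choice_odds(m):.3f} : 1")
    # prints 1.833 (= 11:6), 1.667 (= 5:3) and 1.500 (= 3:2)

The printed ratios match the first three items above.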
Defender's play from QJ tight
Which card should you play? Not analyzed here, but you should randomize in some manner. For example, choose the Queen about 50% of the time on the first lead. (Almost anything works; just do not
always choose the Queen or Jack.) But how can you randomize? You are not allowed to use any artificial aids: this would include calculators, looking at the second hand on your watch, looking at the
board number or a predetermined list of random numbers, etc. You are restricted to cards that you hold. So, here is a legal method for a 50% decision.
BEFORE looking at any card in your hand, shuffle and count your cards face down.
(Counting BEFORE looking at your cards is correct procedure according to the laws of bridge (Law 7B1); if you do not count, and do not have 13 cards, you might be subject to penalty. As far as I
know, shuffling is NOT required.)
Then look at the top card in your hand (or the bottom card, or any random card, for that matter). If you later find that you have QJ tight, then, if the first card is BLACK, play the Queen if you
must choose between Q or J from QJ. If the first card is RED, choose the Jack. I call this my "pinochle" rule for randomization. You can extend this to several cards if you want to have an objective
method for making more than one 50% decision on a given hand. If you do not have QJ tight, then you can use the observation to make any other 50% decision that you might have.
In Closing
Correction of background: Actually, on the show hosted by Monty Hall, the contestant did not have the option of choosing the other door. But many people, including me when I first wrote this short
teaching aid, have often assumed so, and thus this type of situation is often discussed as the "Monty Hall" problem. Note: There are 3 implicit assumptions in the scenario as described above:
1. Monty knows what door the valuable prize is behind
2. Monty MUST choose a door, and offer the contestant the chance to change his/her choice
3. Monty will never open a door that reveals the valuable prize
PLEASE READ: A book which I highly recommend is "The Drunkard's Walk: How Randomness Rules Our Lives" by Leonard Mlodinow (c2008 Pantheon Books New York). In it he reports what happened when Marilyn
vos Savant was asked what the solution to the "Monty Hall" problem was (in her Parade September 9, 1990 column), and she answered (correctly, according to the implicit assumptions in the above
paragraph). She received over 10,000 letters, of which 92% claimed that she was wrong. Around 1000 letters were from PhD's, many in mathematics and/or statistics, of which 65% (of the PhD's) claimed
that she was wrong. From the comments received, she concluded that almost all of the respondents agreed with the implicit assumptions in the above paragraph. It is well established that most people,
including highly intelligent people, do not understand probability very well. For more information, just google "Tversky and Kahneman" to find references to experiments where subjects misjudge
probabilities (often VERY badly). In real life, these misjudgments can have serious adverse consequences, and I highly recommend that you read the book by Mlodinow so that you are better able to
judge probabilities in circumstances that affect you (and, of course, there are other sources of information. GOOGLE!).
Final comment: There are "Restricted Choice" situations where the odds are exactly 2:1 as in the Monty Hall example. For more information, look up restricted choice in the Bridge Encyclopedia. There
are also some very subtle restricted choice situations in the bidding and in leads, but that is beyond the scope of this article.
Stan Fuhrmann
|
{"url":"http://www.acbl-district13.org/artic003.htm","timestamp":"2014-04-18T08:03:23Z","content_type":null,"content_length":"13363","record_id":"<urn:uuid:06e2196d-cf43-4cc2-a161-9514ed552bb3>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00617-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Mathematicians find new solutions to an ancient puzzle
Many people find complex math puzzling, including some mathematicians. Recently, mathematician Daniel J. Madden and retired physicist, Lee W. Jacobi, found solutions to a puzzle that has been around
for centuries.
Jacobi and Madden have found a way to generate an infinite number of solutions for a puzzle known as 'Euler’s Equation of degree four.'
The equation is part of a branch of mathematics called number theory. Number theory deals with the properties of numbers and the way they relate to each other. It is filled with problems that can be
likened to numerical puzzles.
“It’s like a puzzle: can you find four fourth powers that add up to another fourth power? Trying to answer that question is difficult because it is highly unlikely that someone would sit down and
accidentally stumble upon something like that,” said Madden, an associate professor of mathematics at The University of Arizona in Tucson.
The team's finding is published in the March issue of The American Mathematical Monthly.
Equations are puzzles that need certain solutions “plugged into them” in order to create a statement that obeys the rules of logic.
For example, think of the equation x + 2 = 4. Plugging “3” into the equation doesn’t work, but if x = 2, then the equation is correct.
In the mathematical puzzle that Jacobi and Madden worked on, the problem was finding variables that satisfy a Diophantine equation of order four. These equations are so named because they were first
studied by the ancient Greek mathematician Diophantus, known as 'the father of algebra.’
In its most simple version, the puzzle they were trying to solve is the equation:
(a)(to the fourth power) + (b)(to the fourth power) + (c)(to the fourth power) + (d)(to the fourth power) = (a + b + c + d)(to the fourth power)
That equation, expressed mathematically, is:
a^4 + b^4 + c^4 + d^4 = (a + b + c + d)^4
Madden and Jacobi found a way to find the numbers to substitute, or plug in, for the a's, b's, c's and d's in the equation. All the solutions they have found so far are very large numbers.
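To get a feel for why blind search is hopeless here, it helps to write the check down explicitly. The sketch below (Python; the search bound is arbitrary and far too small to find anything interesting) tests the identity directly. For strictly positive a, b, c and d the right-hand side always exceeds the left, because expanding (a + b + c + d)^4 adds positive cross terms, so any nontrivial solution has to mix positive and negative values.

    from itertools import product

    def is_solution(a, b, c, d):
        """Does a^4 + b^4 + c^4 + d^4 equal (a + b + c + d)^4?"""
        return a**4 + b**4 + c**4 + d**4 == (a + b + c + d)**4

    # Trivial solutions, with three of the four variables zero, always work.
    assert is_solution(7, 0, 0, 0)

    # A blind search over a small symmetric range, skipping the trivial cases;
    # it is expected to come up empty, since the reported solutions involve
    # far larger numbers.
    hits = [q for q in product(range(-12, 13), repeat=4)
            if sum(1 for v in q if v != 0) > 1 and is_solution(*q)]
    print(hits)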
In 1772, Euler, one of the greatest mathematicians of all time, hypothesized that to satisfy equations with higher powers, there would need to be as many variables as that power. For example, a
fourth order equation would need four different variables, like the equation above.
Euler's hypothesis was disproved in 1987 by a Harvard graduate student named Noam Elkies. He found a case where only three variables were needed. Elkies solved the equation a^4 + b^4 + c^4 = e^4, which shows that only three fourth powers are needed to add up to another fourth power.
Inspired by the accomplishments of the 22-year-old graduate student, Jacobi began working on mathematics as a hobby after he retired from the defense industry in 1989.
Fortunately, this was not the first time he had dealt with Diophantine equations. He was familiar with them because they are commonly used in physics for calculations relating to string theory.
Jacobi started searching for new solutions to the puzzle using methods he found in some number theory texts and academic papers.
He used those resources and Mathematica, a computer program used for mathematical manipulations.
Jacobi initially found a solution for which each of the variables was 200 digits long. This solution was different from the other 88 previously known solutions to this puzzle, so he knew he had found
something important.
Jacobi then showed the results to Madden. But Jacobi initially miscopied a variable from his Mathematica computer program, and so the results he showed Madden were incorrect.
“The solution was wrong, but in an interesting way. It was close enough to make me want to see where the error occurred,” Madden said.
When they discovered that the solution was invalid only because of Jacobi’s transcription error, they began collaborating to find more solutions.
Madden and Jacobi used elliptic curves to generate new solutions. Each solution contains a seed for creating more solutions, which is much more efficient than previous methods used.
In the past, people found new solutions by using computers to analyze huge amounts of data. That required a lot of computing time and power as the magnitude of the numbers soared.
Now people can generate as many solutions as they wish. There are an infinite number of solutions to this problem, and Madden and Jacobi have found a way to find them all.
The title of their paper is, “On a^4 + b^4 +c^4 +d^4 = (a + b + c + d)^4."
“Modern number theory allowed me to see with more clarity the implications of his (Jacobi’s) calculations,” Madden said.
“It was a nice collaboration,” Jacobi said. “I have learned a certain amount of new things about number theory; how to think in terms of number theory, although sometimes I can be stubbornly
Source: University of Arizona
1 / 5 (2) Mar 15, 2008
I love it when a solution to a math problem turns out to be this elegant.
1 / 5 (1) Mar 15, 2008
Would that its explanation here be as elegant and rise above 'equations as puzzles'
not rated yet Mar 15, 2008
Whatever it is, I'll bet it has to do with 4 dimensional geometries with 3 spacial dimensions and 1 time dimension.
Kind of like the "A^2 plus B^2=(C)^2" rule for right triangles in plane geometry, but for the special case where "C = A plus B".
not rated yet Mar 16, 2008
I love it that an error in transcribing a number was turned into a whole new approach to the math.
1 / 5 (1) Apr 16, 2008
i love it when i have dorks like u guys do my math homework for me. :]
|
{"url":"http://phys.org/news124726812.html","timestamp":"2014-04-16T04:27:17Z","content_type":null,"content_length":"72444","record_id":"<urn:uuid:73357bec-1774-4667-8162-38231386a03b>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00545-ip-10-147-4-33.ec2.internal.warc.gz"}
|
An introduction to the Bird-Meertens Formalism
- Algebraic and Coalgebraic Methods in the Mathematics of Program Construction, volume 2297 of LNCS, chapter 5 , 2000
"... A good way of developing a correct program is to calculate it from its specification. Functional programming languages are especially suitable for this, because their referential transparency
greatly helps calculation. We discuss the ideas behind program calculation, and illustrate with an examp ..."
Cited by 26 (8 self)
A good way of developing a correct program is to calculate it from its specification. Functional programming languages are especially suitable for this, because their referential transparency greatly
helps calculation. We discuss the ideas behind program calculation, and illustrate with an example (the maximum segment sum problem). We show that calculations are driven by promotion, and that
promotion properties arise from universal properties of the data types involved. 1 Context The history of computing is a story of two contrasting trends. On the one hand, the cost and cost/
performance ratio of computer hardware plummets; on the other, computer software is over-complex, unreliable and almost inevitably over budget. Clearly, we have learnt how to build computers, but not
yet how to program them. It is now widely accepted that ad-hoc approaches to constructing software break down as projects get more ambitious. A more formal approach, based on sound mathematical
foundations, i...
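To make the maximum segment sum example concrete: the specification asks for the largest sum attainable by any contiguous (possibly empty) segment of a list, and the calculation arrives at a single-pass program. Here is a small illustration in Python rather than the functional notation used in the paper; the derivation itself, driven by promotion properties, is what the chapter works through.

    def mss_spec(xs):
        """Specification: the maximum, over all (possibly empty) contiguous
        segments, of the segment sum.  Quadratic as written."""
        n = len(xs)
        return max(sum(xs[i:j]) for i in range(n + 1) for j in range(i, n + 1))

    def mss_fast(xs):
        """The calculated program: a single left-to-right scan.  `best_here`
        is the best sum of a segment ending at the current position."""
        best, best_here = 0, 0
        for x in xs:
            best_here = max(0, best_here + x)
            best = max(best, best_here)
        return best

    xs = [31, -41, 59, 26, -53, 58, 97, -93, -23, 84]
    assert mss_spec(xs) == mss_fast(xs) == 187
    print(mss_fast(xs))

Both functions agree on every input; the point of the calculational approach is that the linear-time version is derived from the specification rather than merely tested against it.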
- In 2nd. APPSEM II Workshop , 2004
"... Functional programming is particularly well suited for equational reasoning – referential transparency ensures that expressions in functional programs behave as ordinary expressions in
mathematics. However, unstructured programming can still difficult formal treatment. As such, when John Backus prop ..."
Cited by 1 (0 self)
Functional programming is particularly well suited for equational reasoning – referential transparency ensures that expressions in functional programs behave as ordinary expressions in mathematics.
However, unstructured programming can still make formal treatment difficult. As such, when John Backus proposed a new functional style of programming in his 1977 ACM Turing Award lecture, the main
features were the absence of variables and the use of functional forms or combinators to combine existing functions into new functions [1]. The choice of the combinators was based not only on their
programming power, but also on the power of the associated algebraic laws. Quoting Backus: “Associated with the functional style of programming is an algebra of programs [...] This algebra can be
used to transform programs and to solve equations whose “unknowns ” are programs in much the same way one transforms equations in high-school algebra”. This style of programming is usually called
point-free, as opposed to the point-wise style, where the arguments are explicitly stated. The basic set of combinators used in this paper has already been extensively presented in many publications,
such as [6], and includes the typical products, with split ( · △ ·) and projections fst and snd, sums, with either ( · ▽ ·) and injections inl and inr, and exponentials, with curry · and application
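The combinators named above are easy to mimic in any language with first-class functions, which helps make the point-free idea concrete. A rough sketch in Python (the helper names loosely follow the abstract's notation and are only illustrative):

    def compose(f, g):
        """Backward composition: compose(f, g)(x) = f(g(x))."""
        return lambda x: f(g(x))

    def split(f, g):
        """The split combinator: feed the same input to both f and g."""
        return lambda x: (f(x), g(x))

    def fst(p):
        return p[0]

    def snd(p):
        return p[1]

    def increment(x):
        return x + 1

    def double(x):
        return x * 2

    # Point-wise:  h(x) = (x + 1, x * 2)    Point-free:  h = split(increment, double)
    h = split(increment, double)
    assert h(3) == (4, 6)

    # Laws such as "fst composed with split(f, g) equals f" can be checked on samples:
    assert compose(fst, h)(3) == increment(3)
    assert compose(snd, h)(3) == double(3)

The algebraic laws attached to these combinators are what make the point-free style amenable to the kind of equational reasoning the abstract describes.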
"... Functional programming is well suited for equational reasoning on programs. In this paper, we are trying to use this capability for program comprehension purposes. Specifically, in a program
understanding process, higher-order operators can work like abstract schemes in which we can fit formal speci ..."
Add to MetaCart
Functional programming is well suited for equational reasoning on programs. In this paper, we are trying to use this capability for program comprehension purposes. Specifically, in a program
understanding process, higher-order operators can work like abstract schemes in which we can fit formal specifications calculated from the source code. Such specifications are calculated by a
transformational process which we call reverse program calculation that operates on both notations: pointwise and pointfree. Once a specification matches an abstract schema, a new refactoring phase
leading to a clearer source code takes place. At the same time, an unambiguous behavioural understanding is reached because we give a mathematical description of the abstract schemes. To provide a
more complete and realistic perspective of the approach, we use recursive operators that can handle side effects.
|
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=799104","timestamp":"2014-04-16T22:11:27Z","content_type":null,"content_length":"18794","record_id":"<urn:uuid:bc8a102f-7c08-4093-abd1-1f5cf0f9ac98>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00413-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Conshohocken Algebra 2 Tutor
...Writer colleagues of mine have also submitted their works to me for review prior to publication. I have experience educating elementary school children in a home environment. I consider Math to
be one of the most enjoyable subjects since it is very linear.
20 Subjects: including algebra 2, reading, statistics, biology
I am a youthful high school Latin teacher. I have been tutoring both Latin & Math to high school students for the past six years. I hold a teaching certificate for Latin, Mathematics, and English,
and I am in the finishing stages of my master's program at Villanova.
7 Subjects: including algebra 2, geometry, algebra 1, Latin
...Most people don't write the way they speak. I hear bad grammar in classes and in the media daily. Let's understand and focus on the differences.
35 Subjects: including algebra 2, chemistry, English, reading
...I prefer a hands-on approach to teaching and tutoring, an approach developed and polished during office hours as a TA and adjunct mathematics faculty. I find one-to-one tutoring most effective
and personally rewarding. I have been studying and playing guitar for 15+ years. I know the basic elements of several styles of music: classical, blues, rock and jazz included.
26 Subjects: including algebra 2, statistics, geometry, algebra 1
...I have successfully passed the GRE's (to get into graduate school) as well as the Praxis II content knowledge test for mathematics. Therefore, I am qualified to tutor students in SAT Math. I
have a bachelor's in mathematics from Rutgers University.
16 Subjects: including algebra 2, English, physics, calculus
|
{"url":"http://www.purplemath.com/Conshohocken_algebra_2_tutors.php","timestamp":"2014-04-18T21:56:59Z","content_type":null,"content_length":"24013","record_id":"<urn:uuid:4f1e23b8-f49f-4a55-aed6-a7f516e57534>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00506-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Tracyton ACT Tutor
...I have tutored Geometry for close to nine years and enjoy teaching students how to make questions less intimidating. I help my students learn formulas and apply them in situations of increasing
complexity. I can also help students prepare for the Washington State EOC exam.
33 Subjects: including ACT Math, English, reading, algebra 2
...I enjoy tutoring math. I've helped many elementary school students with their math, trying to make it fun and easy to learn. If you don't understand something or can't solve a math problem, I
can simplify it until you get it and solve it all by yourself.
13 Subjects: including ACT Math, geometry, Chinese, algebra 1
...I believe in the importance of differentiated learning or tailoring lessons for each particular student so that it successfully meets the needs of every unique student, allowing them the
opportunity to reach their full potential in terms of understanding and applying the material at hand. I have...
27 Subjects: including ACT Math, chemistry, reading, writing
...I have taught privately for almost 20 years. After leaving the University of Washington with my Bachelors of Science, I decided to go back to take the MCAT test and prepare for entrance into
medical school. I took the test twice and enrolled in the Kaplan prep class for the MCAT as well.
46 Subjects: including ACT Math, English, reading, algebra 1
...For a full range of topics that I have been educated in, feel free to email me for more information. Academics aside, I am a very gregarious and kind-hearted person. For the past 3 years, I
have been a First Years Program Leader on campus, essentially guiding freshmen through the various challenges and concerns they have upon entering college.
42 Subjects: including ACT Math, reading, English, calculus
|
{"url":"http://www.purplemath.com/tracyton_act_tutors.php","timestamp":"2014-04-21T05:00:23Z","content_type":null,"content_length":"23502","record_id":"<urn:uuid:3d331636-25c8-423b-8236-b9d2d95b1377>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00145-ip-10-147-4-33.ec2.internal.warc.gz"}
|
RE: st: Test for trend in surveys
RE: st: Test for trend in surveys
From "Kieran McCaul" <kamccaul@meddent.uwa.edu.au>
To <statalist@hsphsun2.harvard.edu>
Subject RE: st: Test for trend in surveys
Date Fri, 3 Oct 2008 05:02:58 +0800
I remember years ago, before any of the survey commands existed in Stata, I had some cluster-sampled survey data to analyse. I had to use SUDAAN, but I found that if I simply wanted the 95%CI around a proportion I could use Stata's -regress- with the -cluster- and robust options. I just fitted the binary variable as the dependent with no covariates and used pweights. The constant term in this model would be the correct proportion and the 95%CI around this would be the correct 95%CI. I could verify this with SUDAAN.
Kieran McCaul MPH PhD
WA Centre for Health & Ageing (M573)
University of Western Australia
Level 6, Ainslie House
48 Murray St
Perth 6000
Phone: (08) 9224-2140
Fax: (08) 9224 8009
email: kamccaul@meddent.uwa.edu.au
The fact that no one understands you doesn't make you an artist.
-----Original Message-----
From: owner-statalist@hsphsun2.harvard.edu [mailto:owner-statalist@hsphsun2.harvard.edu] On Behalf Of Ángel Rodríguez Laso
Sent: Thursday, 2 October 2008 3:08 PM
To: statalist@hsphsun2.harvard.edu
Subject: Re: st: Test for trend in surveys
Thanks all for your answers.
When I wrote 'of type Pearson chi-squared' I didn't want to mean that
it was specifically chi-squared, but that it was of the type that
could be obtained as an option when performing a plain frequency
analysis, without having to carry out regressions.
Steve's proposal makes me a little bit nervous: I was taught that
using O.L.S. regression for a binary response is inadequate, but I
suppose there are exceptions.
Angel Rodriguez-Laso
2008/10/2 Steven Samuels <sjhsamuels@earthlink.net>:
> There is, to my knowledge, no such thing as test for trend of type Pearson
> chi-squared. I suspect that Ángel is referring to the Cochran-Armitage
> one degree-of-freedom chi square test for trend (A. Agresti, 2002,
> Categorical Data Analysis, 2nd Ed. Wiley Books, Section 5.3.5).
> Let Y be the 0-1 binary outcome variable and X be the variable which
> contains category scores. One survey-enabled approach is Phil's suggestion:
> use -svy: logit-.
> However -svy: reg- will produce a result closer to that of the
> Cochran-Armitage test. Why? The Cochran-Armitage test statistic is formally
> equivalent to an O.L.S. regression of Y on X, with a standard error for
> beta which substitutes the total variance for the residual variance. The
> statistic is (beta/se)^2. The total variance is equal to P(1-P), where P is
> the overall sample proportion. In other words, the standard error is
> computed under the null hypothesis of equal proportions.
> The -svy: reg- command will estimate the same regression coefficient, but
> with a standard error that is robust to heterogeneity in proportions. In
> both survey-enabled commands, t = (b/se) has a t distribution with degrees
> of freedom (d.f.) based on the survey design; t^2 has an F(1, d.f.)
> distribution.
> -Steve
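Steve's description of the Cochran-Armitage statistic as an O.L.S. slope divided by a standard error built from the total variance P(1-P) is easy to verify numerically. A minimal sketch in Python/NumPy, with made-up category scores and outcomes; it ignores survey weighting and design entirely, so it only illustrates the algebra behind the unweighted test.

    import numpy as np

    rng = np.random.default_rng(0)

    # Made-up data: three ordered exposure categories scored 0, 1, 2 and a
    # binary outcome whose proportion rises with the score.
    x = np.repeat([0, 1, 2], [40, 35, 25])
    y = np.concatenate([rng.binomial(1, p, n)
                        for p, n in [(0.20, 40), (0.30, 35), (0.45, 25)]])

    # O.L.S. slope of y on x
    xc = x - x.mean()
    beta = (xc * y).sum() / (xc ** 2).sum()

    # Cochran-Armitage standard error: substitute the *total* variance P(1-P)
    # for the residual variance, i.e. compute it under the null of equal
    # proportions across categories.
    P = y.mean()
    se = np.sqrt(P * (1 - P) / (xc ** 2).sum())

    z = beta / se             # Cochran-Armitage trend statistic
    print(beta, se, z, z**2)  # z**2 is the one degree-of-freedom chi-square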
>>> On Sep 30, 2008, at 6:39 AM, Philip Ryan wrote:
>>> Well, the z statistic testing the coefficient on the exposure variable is
>>> as
>>> valid and as useful a summary (test) statistic as the chi-square
>>> statistic
>>> produced by a test of trend in tables. If you prefer chi-squares, you
>>> could
>>> just square the z statistic to get the chi-square on 1 df. And if you
>>> prefer
>>> likelihood ratio chi-squares to the Wald z (or Wald chi-square) then the
>>> modelling approach can deliver that also.
>>> Phil
>>> Quoting Ángel Rodríguez Laso <angelrlaso@gmail.com>:
>>> Thanks to Philip and Neil for their advice.
>>> Philip's proposal is absolutely compatible with survey data, but I was
>>> interested in a summary statistic of the type of Pearson chi-squared.
>>> To this respect, Neil puts forward a test (nptrend) that would be
>>> perfect if it allowed complex survey specifications. I believe strata
>>> and clusters are not important because the formula for the standard
>>> error of this nonparametric test (see Stata Reference Manual K-Q page
>>> 338) should not be affected by these specifications. But nptrend does
>>> not accept weights as an option, what I think makes it unsuitable for
>>> complex survey analyses.
>>> Angel Rodriguez Laso
>>> 2008/9/29 Philip Ryan <philip.ryan@adelaide.edu.au>:
>>> For a 2 x k table [with a k-category "exposure" variable] just set up a
>>> logistic
>>> dose-response model:
>>> svyset <whatever>
>>> svy: logistic <binary outcome var> <exposure var>
>>> and check the coefficient of <exposure var>, along with its confidence
>>> interval
>>> and P-value.
>>> If you prefer a risk metric rather than odds, then use svy: glm..... with
>>> appropriate link and error specifications.
>>> Phil
>>> Quoting Ángel Rodríguez Laso <angelrlaso@gmail.com>:
>>> Dear Statalisters,
>>> Is there a way to carry out a test for trend in a two-way table in
>>> survey analysis in Stata?
>>> Many thanks.
>>> Angel Rodriguez Laso
* For searches and help try:
* http://www.stata.com/help.cgi?search
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/
|
{"url":"http://www.stata.com/statalist/archive/2008-10/msg00101.html","timestamp":"2014-04-16T16:08:57Z","content_type":null,"content_length":"12346","record_id":"<urn:uuid:469d2b0b-755a-4d80-867d-127dc37443a7>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00399-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Temple, GA Statistics Tutor
Find a Temple, GA Statistics Tutor
...In addition I have tutored hundreds of students preparing for the ACT and SAT entrance exams. I have taught these study skills for more than 20 years as an educator. I have taught in the RESA
psychiatric/special needs program in Georgia for 8 years.
47 Subjects: including statistics, reading, English, biology
...I hold myself to a high standard and ask for feedback from students and parents. I never bill for a tutoring session if the student or parent is not completely satisfied. While I have a 24
hour cancellation policy, I often provide make-up sessions.
8 Subjects: including statistics, algebra 1, trigonometry, algebra 2
...These include remote access to databases for web-based applications such as older e-commerce sites and troubleshooting/modifying legacy forms created through Access that contained in-line
scripts for manipulating data and presentation on the screen. I have years of experience with relational dat...
126 Subjects: including statistics, chemistry, English, calculus
...Certified to teach Business Education and will be taking the Middle Grades Mathematics and Science Examination to become certified in Mathematics and Science. Available to teach GED subjects,
math, statistics, reading, writing skills and business subjects. Very patient and understanding with st...
15 Subjects: including statistics, reading, writing, English
...Algebra 2 was one of the first high school math classes I taught and I continue to teach the same concepts in college. Whether Algebra or "Math 1/2" or CCGPS, the algebra remains the same. I
can explain any algebra topic simply and clearly, though many topics require a review of related basic math skills.
13 Subjects: including statistics, calculus, geometry, algebra 1
Related Temple, GA Tutors
Temple, GA Accounting Tutors
Temple, GA ACT Tutors
Temple, GA Algebra Tutors
Temple, GA Algebra 2 Tutors
Temple, GA Calculus Tutors
Temple, GA Geometry Tutors
Temple, GA Math Tutors
Temple, GA Prealgebra Tutors
Temple, GA Precalculus Tutors
Temple, GA SAT Tutors
Temple, GA SAT Math Tutors
Temple, GA Science Tutors
Temple, GA Statistics Tutors
Temple, GA Trigonometry Tutors
|
{"url":"http://www.purplemath.com/temple_ga_statistics_tutors.php","timestamp":"2014-04-19T09:40:41Z","content_type":null,"content_length":"24017","record_id":"<urn:uuid:43e9556e-adbe-40f2-9b5c-31c9a811eeff>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00480-ip-10-147-4-33.ec2.internal.warc.gz"}
|
MathGroup Archive: April 2008 [00774]
Print[Plot] vs Print[text,Plot]?
• To: mathgroup at smc.vnet.net
• Subject: [mg87907] Print[Plot] vs Print[text,Plot]?
• From: AES <siegman at stanford.edu>
• Date: Sat, 19 Apr 2008 23:50:52 -0400 (EDT)
• Organization: Stanford University
I've just executed a test cell containing
Print["Some text\n", Plot[---]];
(same simple plot in both lines; objective of 2nd line being to get the
text and the plot -- eventually several plots -- into the same output)
Result from first line is expected plot; result from second line is a
miniaturized plot about 1/4 the size of the first one.
* This is sensible or useful?
* This is what a novice user should expect as consistent
and reasonable behavior from the above commands?
(Print[Plot] doesn't do the same thing with a given Plot
as does Print[----, Plot[], ----] ???)
* This is documented --or better, warned about -- where?
(since it looks like I'll have to, once again, step aside from
attempting to accomplish anything useful with 6.0 and
burn up more time digging into its arcane documentation,
trying to understand this. Apologies for the sarcasm --
but that's the way it feels.)
|
{"url":"http://forums.wolfram.com/mathgroup/archive/2008/Apr/msg00774.html","timestamp":"2014-04-17T13:13:00Z","content_type":null,"content_length":"26197","record_id":"<urn:uuid:74292264-405f-40a4-ade1-8f99afa9dcef>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00513-ip-10-147-4-33.ec2.internal.warc.gz"}
|