Rego Park Science Tutor
Find a Rego Park Science Tutor
...At first, physics didn't make any sense to me either. Before this, most of school is memorization and regurgitation, but physics is more of a way of thinking. After patiently working at it,
something clicked and it transformed the way I look at the world.
9 Subjects: including physics, biology, calculus, chemistry
...I have also done numerous hours of tutoring at the college level. I have helped students prepare for MCAT exams and DAT exams as well as the math and chemistry classes during the academic year.
I am patient, attentive, and I believe in teaching through practice problems versus just lecturing.
2 Subjects: including chemistry, organic chemistry
...What makes tutoring an ideal environment in which to learn is that I can help you see which of your ways of thinking can be a great help in grasping new subjects. I have made learning a lifelong pursuit. Having earned three master's degrees and working on a doctoral degree, all in different fie...
50 Subjects: including organic chemistry, physics, chemistry, ACT Science
...Over my last 12 years, I have developed specific techniques for working with students with learning disabilities and low-grade test anxiety. My high success rates include an LSAT student whom I took from a starting score of 150 to a final 173, and while that 23-point increase was outstanding, my LSAT students normally see an increase of 7-12 points. My standard SAT score raise is 200
42 Subjects: including psychology, geology, physical science, biology
...I have created an enriching relationship with all my students, and I encourage them to keep in contact with me for advice. My areas of expertise include GRE, SAT, NYS Regents, and history. While teaching, I focus on maintaining students' attention, adjusting as needed, so that they get all the help they need as efficiently as possible.
56 Subjects: including microbiology, chemistry, English, reading
|
{"url":"http://www.purplemath.com/rego_park_science_tutors.php","timestamp":"2014-04-19T04:52:58Z","content_type":null,"content_length":"23934","record_id":"<urn:uuid:7e311744-d76e-480d-80e1-0b55eb24957c>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00475-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Scoredist: A simple and robust protein sequence distance estimator
Distance-based methods are popular for reconstructing evolutionary trees thanks to their speed and generality. A number of methods exist for estimating distances from sequence alignments, which often
involves some sort of correction for multiple substitutions. The problem is to accurately estimate the number of true substitutions given an observed alignment. So far, the most accurate protein
distance estimators have looked for the optimal matrix in a series of transition probability matrices, e.g. the Dayhoff series. The evolutionary distance between two aligned sequences is here
estimated as the evolutionary distance of the optimal matrix. The optimal matrix can be found either by an iterative search for the Maximum Likelihood matrix, or by integration to find the Expected
Distance. As a consequence, these methods are more complex to implement and computationally heavier than correction-based methods. Another problem is that the result may vary substantially depending
on the evolutionary model used for the matrices. An ideal distance estimator should produce consistent and accurate distances independent of the evolutionary model used.
We propose a correction-based protein sequence distance estimator called Scoredist. It uses a logarithmic correction of the observed divergence, based on the alignment score according to the BLOSUM62 score matrix.
We evaluated Scoredist and a number of optimal matrix methods using three evolutionary models for both training and testing (Dayhoff, Jones-Taylor-Thornton, and Müller-Vingron), as well as Whelan and Goldman solely for testing. Test alignments with known distances between 0.01 and 2 substitutions per position (1–200 PAM) were simulated using ROSE. Scoredist proved as accurate as the optimal
matrix methods, yet substantially more robust. When trained on one model but tested on another one, Scoredist was nearly always more accurate. The Jukes-Cantor and Kimura correction methods were also
tested, but were substantially less accurate.
The Scoredist distance estimator is fast to implement and run, and combines robustness with accuracy. Scoredist has been incorporated into the Belvu alignment viewer, which is available at ftp://ftp.cgb.ki.se/pub/prog/belvu/.
Estimating divergence time of protein sequences is one of the fundamental problems in bioinformatics. Evolutionary distance estimates are used by many of the most commonly used phylogenetic tree
reconstruction algorithms [1-3]. In current research, phylogenetic trees are used for many types of subsequent analysis, e.g. orthology inference [4-6]. Early models for sequence evolution focussed
on nucleotides. They commonly employ Markov chains and assume independent evolution at every site. Each of the four nucleotides is identified by one state and the substitution probability is modelled
as a state transition probability from one state to another. In the most straightforward approach, the same state transition probability is assigned to every substitution [7]. Subsequent models take
account of more nucleotide-specific properties, e.g. transitional and transversional substitutions as well as GC content (see [8] for an introduction). These more advanced approaches are bound to
nucleotide sequences and cannot be directly used with protein sequences.
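The single-rate model cited as [7] is the nucleotide Jukes-Cantor (JC69) model, whose transition probabilities have a simple closed form. The sketch below is a standard textbook formulation shown for illustration, not code from this paper:

```python
import math

def jc69_transition_probs(t, alpha=1.0):
    """Jukes-Cantor (JC69) transition probabilities after time t.

    Every substitution shares the same rate alpha; the probability of
    observing the original base, and of each particular different base,
    follows from the 4-state Markov chain's matrix exponential.
    """
    e = math.exp(-4.0 * alpha * t)
    p_same = 0.25 + 0.75 * e   # probability the site shows the original base
    p_diff = 0.25 - 0.25 * e   # probability of each of the 3 other bases
    return p_same, p_diff

# At t = 0 no substitutions have occurred; for large t all four bases
# become equally likely (the stationary distribution).
```

The closed form makes the "invisible substitutions" problem concrete: observed identity saturates at 25% even as t grows without bound.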
Markov chain models for protein evolution differ from nucleotide approaches in their larger number of states and transitions for which parameters need to be estimated. The protein sequence
Jukes-Cantor model assigns the same probability to each substitution and is hence a rather poor approximation. This method essentially takes the observed differences between two sequences and
corrects this value to the estimated evolutionary distance using a logarithmic function. Other similar methods exist that also correct observed differences, e.g. Kimura's method [9]. Although they
produce rather inaccurate distance estimates, correction-based distance estimators are popular because of their simplicity. More advanced protein evolution models estimate parameters from protein
sequence alignments. Assuming the same substitution process for closely and distantly related sequences led to the construction of the Dayhoff matrix series [10]. Following this approach, it suffices to collect data from alignments of closely related sequences to build an evolutionary model of amino acid substitution.
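The correction-based idea described above can be written down directly. The functions below are the standard protein-sequence Jukes-Cantor and Kimura [9] corrections as given in the literature; they are an illustrative sketch, not the exact expressions used in this study:

```python
import math

def jukes_cantor_protein(p):
    """Protein Jukes-Cantor correction: observed fraction of differences
    p is mapped to estimated substitutions per site (20 states)."""
    return -(19.0 / 20.0) * math.log(1.0 - (20.0 / 19.0) * p)

def kimura_protein(p):
    """Kimura's empirical correction for protein distances."""
    return -math.log(1.0 - p - 0.2 * p * p)

# Both corrections grow faster than p itself, compensating for multiple
# and back substitutions that are invisible in the alignment.
```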
Dayhoff and co-workers introduced the term Percent Accepted (point) Mutation (PAM), a commonly used measure of evolutionary distance between two aligned sequences (insertions and deletions are ignored). One PAM corresponds to an average of one accepted substitution per 100 positions; two sequences at a distance of 150 PAM are thus related by 1.5 substitutions per position on average. As substitution is a stochastic process, some
positions will experience multiple substitutions while others will experience none. It is also possible that secondary substitutions at one site will result in the original residue, making the
evolutionary steps invisible. This is in essence the reason why estimating evolutionary distance is so hard – multiple substitutions cannot be observed directly. An evolutionary distance of 250 PAM
corresponds roughly to 80% observed differences. The term PAM is found in the literature both for the matrix series given by Dayhoff et al. and for the evolutionary distance unit. In this publication we refer to the matrices as Dayhoff matrices and reserve the term PAM for distance units.
There are two major shortcomings connected with the derivation of the Dayhoff matrices. First, potential errors inherent in the experimental data will be magnified by extrapolation. Second, it is questionable whether substitution probabilities observed on closely related sequences can accurately reflect the evolution of more distantly related sequences. The efforts of researchers since the
publication of the Dayhoff matrices have led to several other matrix series, sharing the idea of an underlying Markov chain. They differ in terms of the data they are built upon and account for the
above-mentioned shortcomings in various ways [11-13].
The approach behind the BLOSUM matrices [14] is different from Dayhoff's evolutionary model. Whereas the Markov model assumes that any transition probability matrix may be derived from another matrix
in the same series, the BLOSUM matrices do not imply any evolutionary time. There is no direct mathematical relationship between matrices in the BLOSUM series. Sequences with identities above a given
identity cutoff are clustered and used to derive score matrices. The BLOSUM matrices are known as a good general-purpose choice; BLOSUM62 in particular is frequently chosen for the alignment of protein sequences.
We here introduce Scoredist, a novel correction-based distance estimator for protein sequences. It applies a correction function to an observed reduction in normalised score, rather than to observed
differences as other correction-based methods. This gives a better estimate of the divergence in the well-established PAM measure and allows the popular BLOSUM matrix series to be used. Other
matrices could in principle be used, but the BLOSUM matrices have proved to be the most universal. Scoredist distance estimates are calculated directly by a simple equation and do not require the cumbersome computational approximations needed for e.g. Maximum Likelihood (ML) and Expected Distance (ED) estimates [15]. Additional calibration opens the possibility of tuning Scoredist to other
evolutionary models.
In order to evaluate our novel protein distance estimator Scoredist against other estimators, we generated a large testset of artificial sequence alignments. Simulation is the only way to exactly
know an alignment's evolutionary distance. The substitutions were made by ROSE [16] according to an evolutionary model that can be chosen arbitrarily. It is to be expected that a distance estimator
based on a particular evolutionary model will perform optimally on a testset generated with the same model. We therefore generated testsets using four different matrix series: Dayhoff [10], MV [12],
JTT [11], and WAG [13]. For each model, 2000 alignments were created for evolutionary distances between 1 and 200 PAM units, i.e. 10 alignments for each distance. The Scoredist, Maximum Likelihood,
and Expected Distance estimators can all be tuned towards a particular evolutionary model. We therefore used three evolutionary models which were also used to generate the testsets for these distance
estimators, and use the shorthand "method-model" to refer to these combinations. For instance, Maximum Likelihood using the MV model is denoted ML-MV. The Jukes-Cantor and Kimura estimators cannot be tuned to a
specific model but were tested on all four datasets.
Table 1 shows a compressed summary of the results. For each combination of distance estimator and dataset, the average root mean square deviation from the true distance was calculated for all 2000
alignments. The Expected Distance results were similar to ML, as the methods are akin in nature, but ML was generally more accurate and is much more widely used. As expected, low RMSD values as a
sign of good distance estimates were generally obtained when using the same model for alignment creation and subsequent distance estimation. This is seen in the diagonal of low RMSD values from
Dayhoff/Dayhoff to JTT/JTT. The only exception to this rule was observed when the testset was generated with Dayhoff. Here, ML-JTT was slightly better than ML-Dayhoff. This result was also verified
for distances up to 250 and 300 PAM (data not shown). Comparing Scoredist and ML accuracies when training and testing using the same model resulted in a tie. Scoredist was better for MV, ML was
better for JTT, and they were equally accurate for Dayhoff. When comparing accuracies for different training and testing models, however, Scoredist dominates. Here, Scoredist performed better than ML
in five of the six cases. For the MV testset the difference was substantial. The only case where ML was better than Scoredist was again when running ML-JTT on the Dayhoff testset, which for unclear
reasons produced very accurate distance estimates.
Table 1. Accuracy as average RMSD values for combinations of data models and estimators
The Jukes-Cantor and Kimura correction methods are generally less accurate than Scoredist and ML estimators. In some cases they reached higher accuracy than Scoredist and ML trained on the "wrong"
model. For instance, on the Dayhoff testset Kimura was better than Scoredist-MV and ML-MV, and on the MV testset Jukes-Cantor was better than Scoredist and ML trained on Dayhoff or JTT. However,
Jukes-Cantor and Kimura never came near the Scoredist and ML accuracy when trained on the "right" model. In a real situation, it is of course not known which evolutionary model is most appropriate.
Therefore, taking the average RMSD values for each training model reveals the generality and robustness of the method on different testsets. The average accuracy of Scoredist is consistently better
than for ML, and Jukes-Cantor and Kimura are even further behind.
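As a concrete illustration, the RMSD accuracy measure used in Table 1 can be sketched as follows (the function name is ours, not from the paper):

```python
import math

def rmsd(estimated, true):
    """Root mean square deviation between paired distance estimates
    and the true distances used to generate the alignments."""
    n = len(estimated)
    return math.sqrt(sum((e - t) ** 2 for e, t in zip(estimated, true)) / n)

# Averaging this over all 2000 alignments of a testset gives one cell of
# Table 1; lower values indicate a more accurate estimator.
```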
Figure 1 shows a more detailed picture of the different distance estimators. The average of 10 estimates from 10 independent simulations at each evolutionary distance is plotted for data generated with the Dayhoff matrices. The variance among the 10 estimates is not shown for clarity; it is however reflected by the RMSD values in Table 1, which may give a slightly different picture. For instance, it is possible that the average deviation is close to zero if the individual estimates have large positive and negative deviations that cancel each other out. Therefore, the
RMSD values should be trusted more than the deviation plots when in doubt. Figure 1A shows the dependence on evolutionary model for Scoredist and ML. Testing on the Dayhoff testset, Scoredist-Dayhoff
and ML-Dayhoff stayed reasonably accurate over the entire range (below 5% error). In contrast, Scoredist-MV and ML-MV deviated considerably from the true distance. It is however clear that ML is more
affected by switching model than Scoredist is. In Figure 1B the testset was generated with the MV model. Again, the corresponding deviation was observed for "wrong model" estimators. Here it is even
more pronounced that ML is more dependent on the model, and generalizes poorly. Scoredist was less affected by the change of model – Scoredist-Dayhoff was considerably more accurate on the MV testset
than ML-Dayhoff. As expected, when Scoredist and ML had been trained on MV data, the accuracy is very good for both estimators. In conclusion, we observed that although the Scoredist method is very
simple compared to the ML method, it is approximately equally accurate when testing and training using the same evolutionary model. However, when testing on a different model, Scoredist is
considerably more accurate.
Figure 1. Stratified accuracy analysis of Scoredist and ML. To illustrate how estimated distance depends on the model, the average deviation is plotted as a function of true distance for two
evolutionary models, Dayhoff and Müller-Vingron. For each evolutionary distance between 1 and 200 PAM, 10 alignments were generated. For each alignment, the deviation was calculated as the
difference between the estimated distance and the true distance used for data generation by ROSE [16]. The average of the 10 deviations was plotted using a running average with a window of 10
residues. Note that positive and negative deviations at the same true distance can cancel each other out – the curve only shows the average deviation and not the variability. The values in Table 1
measure the accuracy more correctly by using RMSD of every datapoint. The testset data was created with the matrices given by Dayhoff (A) or Müller-Vingron (B). In both cases, the estimators using
the same evolutionary model as the testset data perform well. However, when switching the model in the estimator, Scoredist diverges less than ML, indicating that Scoredist is more robust. The curves
show that ML-MV is more different from ML-Dayhoff than Scoredist-MV is from Scoredist-Dayhoff, particularly for the MV dataset in (B). The smaller the difference between estimates using different models, the more robust the method.
The Scoredist estimator was implemented in Belvu, which is a general-purpose multiple alignment viewer that allows basic alignment editing. Belvu can calculate and display phylogenetic trees. The
tree reconstruction can be based on Scoredist or other common correction-based distance estimators available within Belvu.
Multiple alignments can be coloured in Belvu according to conservation using average BLOSUM62 score in the column, or by residue-specific colours. User-specified cutoffs can be employed to fine-tune
the display. Belvu has a range of functions for sorting, colouring, marking up, and printing alignments. In Figure 2, the alignment is coloured according to conservation, and sorted according to the
tree. The effect of distance correction with Scoredist is illustrated.
Figure 2. The Belvu multiple sequence alignment viewer. Belvu is a multiple sequence alignment viewer that implements the Scoredist distance estimator. The alignment window (A) shows a subset of the
Pfam family DNA_pol_A (PF00476). Uniprot IDs are shown throughout. A sequence with known structure is included (DPO1_ECOLI) – the SA line showing surface accessibility and the SS line showing
secondary structure. The neighbour-joining tree in (B) used uncorrected distances (observed differences), while the tree in (C) used Scoredist correction. Belvu assigns a colour to each species if
provided with species markup information. The distance correction mainly affects the longer branches, and affects the tree topology in some cases, e.g. the placement of DPOQ_HUMAN. Structural markup
and taxonomic information were embedded in the Stockholm format alignment provided by the Pfam database.
Belvu can also be utilised for batch mode operations on the multiple alignment, or for producing distance matrices or phylogenetic trees without graphical output. It is available for the most common
UNIX operating systems and can be obtained from [20]. A Windows version exists but is less frequently maintained. See [17] for instructions, and [18] for information on the Stockholm format, which is
used by the Pfam project.
Our analysis was based on four different evolutionary models – Dayhoff, MV, JTT and WAG. We chose these because they represent the spectrum of models well. The only tuning done in the Scoredist
method is the estimation of the calibration factor c. This factor can be seen as a scaling factor for the logarithm base in equation (5) that needs to be set empirically.
The difference between Scoredist and ML becomes particularly apparent in the MV dataset. There are several hypotheses for this behaviour. The Dayhoff matrices were constructed with the limited data
available at the time. Given the substantial increase of research output in this field, particularly during the last decade, it is not surprising that the Müller-Vingron model (published in 2000) yields substantially different results than the Dayhoff (1978) and JTT (1992) matrices. Additionally, the calibration factor c can be interpreted as a measure of the similarity of the respective models. Following this argument, JTT and Dayhoff are more akin, given Δc ≈ 0.05. The MV model is more distant to both JTT (Δc ≈ 0.11) and Dayhoff (Δc ≈ 0.16).
The Expected Distance estimator generally overestimates distances. For instance, among Dayhoff-calibrated estimators on the MV testset, Expected Distance is more than 10 PAM RMSD units (over 50%) poorer than the best method, Scoredist. Similar values are observed for JTT-calibrated estimators. Generally, MV-trained estimators are prone to underestimate evolutionary distances (Figure 1A). In combination with the higher ED distance estimates, this rather fortuitously leads to good results for ED-MV. However, the scope of this research was to identify a robust method that performs well on various data sources; an estimator that is highly sensitive to the data source, or to possibly incorrect calibration, is of less value. The best single estimator was JTT-calibrated Scoredist. If the method per se is measured by averaging over all calibrations and testsets, Scoredist receives 15.39, ED 16.89, and ML 17.14 PAM RMSD units. This highlights Scoredist as the most robust estimator, with the gap between Scoredist and ED (Δ(Scoredist, ED) ≈ 1.50) being six times the difference between ED and ML (Δ(ED, ML) ≈ 0.25).
We here only present Scoredist results using BLOSUM62 for calculating the score σ between two sequences. In principle one could use some other score matrix, but we found that this had little effect
on the results. Since the goal was to make a general-purpose method, BLOSUM62 was an obvious choice. The key to Scoredist is the usage of scores rather than identities, and the choice of somewhat
arbitrary parameters is not of primary concern. At present, gaps in the alignments are not included in the Scoredist calculation. Traditionally, gaps have been difficult to embody in evolutionary
models. In the models used here, they are at best crudely modelled by treating every gap equally. An inherent problem is that the probabilities for insertions and deletions (indels) are not
necessarily synchronized with the substitution probabilities. Some protein families are more prone to indels than others, hence it is hard to make a generalizable model that suits all protein types.
We have experimented with affine gap penalties in the Scoredist method (this is an option in the implementation), but this resulted in decreased accuracy. We therefore do not recommend using gaps to
estimate protein distances.
We have developed the score matrix based distance estimator Scoredist for aligned protein sequences. Its main advantages are computational simplicity and high robustness. Most other distance
estimators produce good results for certain evolutionary models but perform poorly on others. The Maximum Likelihood and Expected Distance estimators were found to overfit their estimates to the evolutionary
model so much that the results on testsets generated with other models suffered heavily. The correction-based methods Jukes-Cantor and Kimura also favoured a particular evolutionary model, but were
not competitively accurate on any testset. It seems that Scoredist achieved the best compromise between accuracy and generalization power.
For the estimation of divergence time, let s^1 and s^2 be two aligned protein sequences (gaps are ignored) of identical length l. A similarity score σ is defined as
σ(s^1, s^2) = Σ_{i=1}^{l} S(s^1_i, s^2_i), (1)
where S is a log-odds score matrix. Log-odds score matrices are constructed such that substitutions by the same or a similar amino acid receive a positive score, whereas substitutions to dissimilar amino acids are attributed a negative score. The expected value of this kind of matrix is negative, which ensures that the comparison of unrelated sequences returns a negative score. For two random sequences of length l the expected score is σ^r(l) = σ^0 * l, where σ^0 is the expected value of the score matrix. As we strive to measure scores above the score for the null model of sequence independence, the expected score σ^r is subtracted from σ(s^1, s^2), giving the normalised score σ^N
σ^N = σ(s^1, s^2) - σ^r(l). (2)
The expected score σ^r for unrelated sequences can be regarded as a lower limit. The upper limit of the score between s^1 and any other sequence is given by σ(s^1, s^1). For two different sequences, the upper limit of the score σ^U is, for the sake of symmetry, assumed to be
σ^U(s^1, s^2) = (σ(s^1, s^1) + σ(s^2, s^2)) / 2, (3)
and normalised
σ^UN = σ^U(s^1, s^2) - σ^r(l). (4)
Any sound score σ^N is situated within the interval [0, σ^UN]. The validity of the upper boundary follows from the score's definition. The lower boundary might, however, be violated if two sequences receive a score σ(s^1, s^2) < σ^r(l). As the model already assumes independent evolution at σ^r(l), a score below σ^r does not contain any additional information; such a score is therefore set to σ(s^1, s^2) = σ^r(l). We model the raw distance as a modified Poisson process
d_r = -ln(σ^N / σ^UN) * 100. (5)
As seen in Figure 3, d_r is linearly related to the true distance, deviating only by a constant factor. The Scoredist evolutionary distance estimate of two sequences is given as the product of the raw distance and a calibration factor
Figure 3. Estimation of the calibration factor c in Scoredist. This factor rescales the raw distance d_r to optimally fit true evolutionary distances. The plot shows how c is estimated by least-squares fitting of raw distances d_r to true distances for 2000 artificially produced sequence alignments, using the Dayhoff matrix series. The linear relationship between the raw distance d_r and the true distance of the sequence samples justifies the introduction of the calibration factor c, which was here determined to be c_Dayhoff = 1.3370 (see Table 2).
d_s = c * d_r. (6)
Evolutionary distances of 250–300 PAM units are commonly considered the maximum for reasonable distance estimation; the Scoredist estimate d_s is therefore restricted to the interval [0, 300] PAM.
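Putting equations (1)–(6) together, the estimator fits in a few lines. The sketch below is illustrative only: the tiny score matrix over a four-letter alphabet with uniform background frequencies stands in for the full 20×20 BLOSUM62 matrix and its amino acid frequencies, and all names are ours.

```python
import math

# Toy symmetric log-odds matrix over a 4-letter alphabet (an invented,
# illustrative stand-in for BLOSUM62).
TOY_MATRIX = {
    ('A', 'A'): 4, ('R', 'R'): 5, ('N', 'N'): 6, ('D', 'D'): 6,
    ('A', 'R'): -1, ('A', 'N'): -2, ('A', 'D'): -2,
    ('R', 'N'): 0, ('R', 'D'): -2, ('N', 'D'): 1,
}

def score(a, b):
    return TOY_MATRIX.get((a, b), TOY_MATRIX.get((b, a)))

ALPHABET = 'ARND'
# sigma_0: expected matrix score, here under uniform background frequencies.
SIGMA_0 = sum(score(a, b) for a in ALPHABET for b in ALPHABET) / 16.0

def scoredist(s1, s2, c=1.3370):
    """Scoredist distance (PAM) for two gap-free aligned sequences."""
    l = len(s1)
    sigma = sum(score(a, b) for a, b in zip(s1, s2))      # eq. (1)
    sigma_r = SIGMA_0 * l                                 # null-model score
    sigma_u = (sum(score(a, a) for a in s1)
               + sum(score(b, b) for b in s2)) / 2.0      # eq. (3)
    sigma_n = max(sigma - sigma_r, 0.0)                   # eq. (2), floored
    sigma_un = sigma_u - sigma_r                          # eq. (4)
    if sigma_n == 0.0:
        return 300.0              # at or below the null model: cap at 300 PAM
    d_r = -math.log(sigma_n / sigma_un) * 100.0           # eq. (5)
    return min(c * d_r, 300.0)                            # eq. (6), capped
```

Identical sequences give σ^N = σ^UN and hence distance 0; increasingly divergent sequences push the score ratio toward 0 and the estimate toward the 300 PAM cap.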
Calibration factors can be determined for various evolutionary models. We used the ROSE program [16] to simulate evolution with three different matrix series and generated 2000 sample sequence
alignments for distances up to 200 PAM units. The calibration factor c was calculated by least squares fitting on this data, using the BLOSUM62 score matrix for calculating the score σ in the
estimator (Table 2, Figure 3). The simulated evolution started with a random sequence of 200 residues. For each integer distance within the interval [1, 200] PAM, we produced 10 alignments, yielding
2000 alignments per dataset. The default gap parameters of ROSE V1.3 were applied. Each dataset was generated with the transition probability matrix and the stationary frequencies of the respective
evolutionary model.
Table 2. Calibration factors for three evolutionary models
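Because d_s = c * d_r describes a line through the origin, the least-squares estimate of c has a closed form. A minimal sketch under that assumption (the function name is ours):

```python
def calibrate(raw, true):
    """Least-squares slope through the origin: minimises the sum of
    (t - c*r)^2 over c, given raw distances and true distances."""
    return sum(r * t for r, t in zip(raw, true)) / sum(r * r for r in raw)

# Fitted over 2000 simulated alignments per model, this yields the
# model-specific calibration factors of Table 2 (e.g. ~1.337 for Dayhoff).
```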
Calculation of Maximum Likelihood (ML) and Expected Distances (ED): ML distances were estimated by applying the Newton-Raphson method to the derivative of the likelihood of the evolutionary distance
given an alignment. To calculate ED, the same likelihood function was numerically integrated, to get its "center of gravity" [15]. Both methods are implemented in the program lapd (L. Arvestad,
unpublished), which uses Perl and Octave. The Jukes-Cantor and Kimura distance estimators were run as implemented in Belvu. The popular PROTDIST program from the PHYLIP package [19] calculates only
ML-Dayhoff and Kimura distances. We therefore chose to use lapd in order to assess Scoredist by a broader range of distance estimators.
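The two optimal-matrix estimates can be sketched generically: ML searches for the distance at which the likelihood's derivative vanishes (Newton-Raphson), while ED is the likelihood's centre of gravity. The toy likelihood below is purely illustrative; lapd evaluates real transition-matrix likelihoods.

```python
import math

def expected_distance(likelihood, grid):
    """'Centre of gravity' of the likelihood over a distance grid (ED)."""
    weights = [likelihood(d) for d in grid]
    total = sum(weights)
    return sum(d * w for d, w in zip(grid, weights)) / total

def newton_max(dlog, d2log, d0, iters=50):
    """Newton-Raphson on the derivative of the log-likelihood (ML)."""
    d = d0
    for _ in range(iters):
        d -= dlog(d) / d2log(d)
    return d

# Toy example: a Gaussian-shaped likelihood peaked at 100 PAM.
lik = lambda d: math.exp(-((d - 100.0) ** 2) / 800.0)
grid = [i * 0.5 for i in range(1, 401)]   # 0.5 .. 200 PAM
```

For a symmetric likelihood both estimates agree; skewed likelihoods are where ED drifts above ML, matching the overestimation noted in the Discussion.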
Authors' contributions
ES had the initial idea and implemented the method. VH carried out the evaluation and wrote the first manuscript draft. All authors read and approved the final manuscript.
We thank Lars Arvestad for the lapd program, for helpful discussions, and for reading the manuscript.
|
{"url":"http://www.biomedcentral.com/1471-2105/6/108","timestamp":"2014-04-25T08:04:43Z","content_type":null,"content_length":"102845","record_id":"<urn:uuid:712767e7-8801-4a39-95da-faf61047b853>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00362-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Lagrangian measurements of inertial particle accelerations in shear, and in grid generated turbulence
Seminar Room 1, Newton Institute
We describe two dimensional Lagrangian acceleration statistics of inertial particles in a turbulent boundary layer and contrast the results with those observed in decaying grid generated turbulence
(Ayyalasomayajula et al., PRL 97, 144507 (2006)). The statistics were determined by means of particle-tracking techniques, using a high-speed camera moving along the side of a wind tunnel at the mean flow speed. Water droplets were fed into the flow using two different methods: sprays placed downstream from an active grid, and tubes fed into the boundary layer from humidifiers. The
boundary layer, which had considerable free stream turbulence, was formed above a flat plate placed horizontally in the tunnel. The flows are described in terms of the Stokes, Froude and Reynolds
numbers. For the flow conditions studied, the sprays produced Stokes numbers varying from 0.47 to 1.2, and the humidifiers produced Stokes numbers varying from 0.035 to 0.25, where the low and high
values refer to the outer boundary layer edge and the near-wall region, respectively. The Froude number was approximately 1.0 for the sprays and 0.25 for the humidifiers. The boundary layer momentum
thickness Reynolds number was approximately 800. The free stream turbulence was varied by operating the grid in the active mode as well as a passive mode (the latter behaves as a conventional grid).
The effects of the free stream turbulence on the acceleration statistics were systematically studied. At the outer edge of the boundary layer, where the shear was weak, the acceleration probability
density functions were similar to those previously observed in isotropic turbulence for inertial particles. As the boundary layer plate was approached, the tails of the probability density functions
narrowed, became negatively skewed, and their peak occurred at negative accelerations (decelerations in the stream-wise direction). The mean deceleration and its rms increased to large values close
to the plate. These effects were more pronounced at higher Stokes number. Although there were free stream turbulence effects, and the complex boundary layer structure played an important role, a
simple model suggests that the acceleration behavior is dominated by shear and inertia. The results are contrasted with inertial particles in isotropic turbulence and with fluid particle acceleration
statistics in a boundary layer. The work was supported by the US National Science Foundation.
|
{"url":"http://www.newton.ac.uk/programmes/HRT/seminars/2008091010201.html","timestamp":"2014-04-19T04:39:48Z","content_type":null,"content_length":"8159","record_id":"<urn:uuid:550d9b41-afd8-456c-b023-297861c8c2b4>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00099-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Results 11 - 20 of 33 matches
Water Quality-Total Dissolved Solids part of Starting Point-Teaching Entry Level Geoscience:Teaching with Data:Examples
Students use a microcomputer connected to a conductivity probe to measure the total dissolved solids in local area water samples. -
Subject: Geoscience: Geoscience:Oceanography:Chemical, Environmental Science:Water Quality and Quantity:Surface Water , Geoscience:Geology:Geochemistry, Geoscience:Hydrology:Surface Water:Water
Sun Spot Analysis part of Starting Point-Teaching Entry Level Geoscience:Teaching with Data:Examples
Introductory students use Excel to graph monthly mean Greenwich sunspot numbers from 1749 to 2004 and perform a spectral analysis of the data using the free software program "Spectra". -
Subject: Geoscience: Physics:Astronomy, Geoscience:Atmospheric Science:Meteorology:Solar and terrestrial radiation
Using Mass Balance to Understand Atmospheric CFCs part of Starting Point-Teaching Entry Level Geoscience:Teaching with Data:Examples
Students use an interactive online mass balance model to help understand the observed levels of chlorofluorocarbon CFC-12 over the recent past. -
Subject: Geoscience: Environmental Science:Global Change and Climate:Ozone depletion, Environmental Science:Policy:Global Policy, Geoscience:Atmospheric Science:Meteorology:Atmospheric structure and
When is Dinner Served? Predicting the Spring Phytoplankton Bloom in the Gulf of Maine (College Level) part of Starting Point-Teaching Entry Level Geoscience:Teaching with GIS:Examples
College-level adaptation of the Earth Exploration Toolbook chapter. Students explore the critical role phytoplankton play in the marine food web. -
Subject: Geoscience: Geoscience:Oceanography, Geography:Geospatial, Biology:Ecology:Principles, Environmental Science:Ecosystems:Ecology
Reading Time Series Plots part of Cutting Edge:Online Teaching:Activities for Teaching Online
This activity provides a brief introduction to GPS and provides a student activity to practice creating and reading time series plots with simplified GPS data and with authentic GPS data from
Subject: Geoscience: Geoscience:Geology:Geophysics:Geodesy, Geoscience:Geology:Tectonics
Abrupt climate change, greenhouse gases, and the bipolar see-saw part of Cutting Edge:Early Career:Previous Workshops:Workshop 2010:Teaching Activities
In this activity, students work with paleoclimate proxy data (d18O, CH4, CO2) from the Byrd and GISP2 ice cores. Students prepare a graph of paleoclimate data and use the graph to answer several
questions about the ...
Subject: Geoscience: Geoscience:Atmospheric Science:Climate Change, Environmental Science:Global Change and Climate:Climate Change
Consequences of Modern Energy Use: A Remote Sensing analysis of the gulf oil spill using ArcGIS software. part of Cutting Edge:Energy:Energy Activities
Students download satellite imagery and conduct a remote sensing analysis of the Deepwater Horizon oil spill using ArcGIS software.
Subject: Geoscience: Environmental Science:Water Quality and Quantity:Point Source Pollution, Environmental Science:Waste:Toxic and Hazardous Wastes:Organic Chemicals, Environmental Science:Oceans
and Coastal Resources, Geoscience:Oceanography:Marine Resources, Geoscience:Oceanography, Environmental Science:Energy:Fossil Fuels
Radiometric Dating Isochron exercise part of Cutting Edge:Rates and Time:Teaching Activities
Hands-on introduction to using the isochron method to determine radiometric ages.
Subject: Geoscience: Geoscience:Geology:Historical Geology
Dating Students: Relative vs. Numerical Time part of Cutting Edge:Rates and Time:Teaching Activities
This activity introduces students to the fundamental ideas of relative versus radiometric dating, using the students themselves as a sample population. In the first half, the students attempt to
order the people in ...
Subject: Geoscience: Geoscience:Atmospheric Science:Climate Change, Environmental Science:Global Change and Climate:Climate Change:Paleoclimate records, Geoscience:Atmospheric Science:Climate Change:
Paleoclimate records, Geoscience:Geology:Historical Geology
Stream Characteristics Lab part of Quantitative Skills:Activity Collection
Students determine the relationship between the sinuosity of a river and its gradient by calculating gradients and sinuosity, and generating a graph on Excel. They then test the relationship by
making measurements on a picture generated on Google Earth.
Subject: Geoscience: Geoscience:Geology, Hydrology
Math Forum Discussions
Views expressed in these public forums are not endorsed by Drexel University or The Math Forum.
Topic: Proposal re James Harris
Replies: 16 Last Post: Dec 31, 2000 6:26 PM
Re: Proposal re James Harris
Posted: Dec 30, 2000 10:01 AM
Well, I'd still suggest that saying "wrong, for exactly the same
reason!" is an exaggeration, when one argument is incoherent
because nothing's defined and the other argument is just the
sort of thing mathematicians say all the time, where in fact
all the details could easily be filled in.
OTOH you point out below that it's more circular than I'd
realized, so you have a better point than I thought at first:
On 30 Dec 2000 01:20:29 GMT, landsbur@troi.cc.rochester.edu (Steven E.
Landsburg) wrote:
>Regarding my assertion that James's FLT proof and his sum-of-squares
>proof are wrong in essentially the same way, David Ullrich writes:
>> Actually there's a big difference: The proof that x^2 + y^2 <> 0
>> is perfectly correct. Suppose that x and y are reals not both
>> zero. If x^2 + y^2 = 0 then it _does_ follow that
>> (x+iy)(x-iy) = 0
>But *why* does it follow? It follows because this equation takes place
>in a particular ring (the Gaussian integers or the complex numbers,
>as you prefer) which we can prove has a certain property (namely
>the property of being an integral domain).
>And how do we know these rings are integral domains? We know it,
>ultimately, because we know that -1 is not the square of any integer.
>That's perilously close to the whole strength of what James is
>trying to prove and hence perilously close to being circular.
>Therefore I question the relevance of Prof. Ullrich's statement:
>> Seriously: Suppose hypothetically you knew all about the
>> real and complex numbers as _fields_, but you were somehow
>> unaware that the reals could be ordered, so you didn't
>> know the result about sums of squares of reals, because
>> the word "positive" didn't exist. If someone showed you
>> exactly what James wrote you would accept it as a proof
>> that the sum of the squares of two non-zero reals cannot
>> vanish.
>The point is that it is not *possible* to "know about the complex
>numbers as a field" without first knowing that -1 is not the square
>of an integer.
My first reaction was "why not?". Then I thought about it.
When we construct the complexes as ordered pairs of
reals (because we're too ignorant to construct the
complexes in a manner P L would approve of) we
need to show that if x and y are reals not both zero
then x+iy has an inverse. The obvious way to do
this is to write down what the inverse _is_, but
when we do that we're going to be dividing by
x^2 + y^2.
So it's circular. (But it's not nonsense. And I still
think that "what in the world does this have to
do with the question it purports to be answering?"
is a much more relevant reply than "this is
wrong, for the same reason".)
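Ullrich's observation — that the obvious inverse of x+iy is written down by dividing by x^2 + y^2 — is easy to check symbolically. As a small illustrative sketch (Python/sympy, not part of the original thread):

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)

# The only obvious candidate for the inverse of x + iy. Note the division
# by x^2 + y^2, the very quantity whose nonvanishing is at issue.
inv = (x - sp.I*y) / (x**2 + y**2)

# Check that it is indeed an inverse (valid wherever x^2 + y^2 != 0):
print(sp.simplify((x + sp.I*y) * inv))  # 1
```

Exhibiting this inverse is how one shows that x+iy is invertible, which is exactly why the sum-of-squares argument ends up circular.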
> Without that knowledge, you cannot know that the
>complex numbers form a field.
>So I continue to maintain that:
> a) James's sum-of-squares proof is wrong
> b) It is wrong for exactly the same reason that his FLT proof is
> wrong
>and c) If James could be made to understand the reasons for a), then
> he would understand most of what everybody has been trying to
> explain to him.
I don't believe that James understands as little as he claims to.
In particular when Winter gives a precisely analogous proof of
an obviously false result I don't _really_ think that James
is missing the point, and I don't think that he _really_ thinks
he's explained the difference.
>Steven E. Landsburg
Date Subject Author
12/28/00 Steven E. Landsburg
12/29/00 Re: Proposal re James Harris David C. Ullrich
12/29/00 Re: Proposal re James Harris hale@mailhost.tcs.tulane.edu
12/29/00 Re: Proposal re James Harris Adam Russell
12/29/00 Re: Proposal re James Harris Dik T. Winter
12/29/00 Re: Proposal re James Harris magidin@math.berkeley.edu
12/29/00 Re: Proposal re James Harris hale@mailhost.tcs.tulane.edu
12/30/00 Re: Proposal re James Harris mareg@mimosa.csv.warwick.ac.uk
12/30/00 Re: Proposal re James Harris Steven E. Landsburg
12/31/00 Re: Proposal re James Harris David Libert
12/29/00 Re: Proposal re James Harris Steven E. Landsburg
12/30/00 Re: Proposal re James Harris David C. Ullrich
12/30/00 Re: Proposal re James Harris Steven E. Landsburg
12/30/00 Re: Proposal re James Harris Steven E. Landsburg
12/31/00 Re: Proposal re James Harris jstevh@my-deja.com
12/31/00 Re: Proposal re James Harris hale@mailhost.tcs.tulane.edu
12/31/00 Re: Proposal re James Harris Steve Lord
MathGroup Archive: September 2010 [00505]
Re: How to interpret this integral?
• To: mathgroup at smc.vnet.net
• Subject: [mg112614] Re: How to interpret this integral?
• From: Leonid Shifrin <lshifr at gmail.com>
• Date: Thu, 23 Sep 2010 04:21:13 -0400 (EDT)
Mathematica does not automatically handle most singularities in the sense
you would like it to.
In your case, the non-zero aW changes the answer qualitatively. But for
Mathematica, it is not at all obvious that aW==0 should be "considered" by
it on a separate basis. Who knows, maybe this is just a part of a more
general expression, where the singularity will be cancelled, etc. Using
your formula at aW==0 meant taking the limit aW->0 (and my procedure was
just a trick to make it easier), but Mathematica does not automatically do
this kind of thing, since it may not always be the right thing to do. What
you, as a Mathematica programmer, can do is define your function in a way
which reflects special points:
myFunction[0,aV_,v0_,th0_,w0_,Ts_]:=<specific answer>
myFunction[aW_,aV_,v0_,th0_,w0_,Ts_]:=<generic answer>
In this case, the more specific definitions will automatically be tried
before the more general ones.
If you have an idea about possible singular points, the process of giving
such specialized definitions can be somewhat automated, and that is exactly
the task of an applied Mathematica programmer (a scientist using
Mathematica who is able to use domain-specific knowledge in implementing
such automation), but you still must perform such an analysis yourself. I
actually doubt that there can be any totally general approach to situations
like this. IMO, routine stuff is for the machine, but singularities are for
humans )
Sorry if I wasn't exactly helpful.
On Wed, Sep 22, 2010 at 7:55 PM, Julian Stoev / Юлиан Стоев <
julian.stoev at gmail.com> wrote:
> Thanks Leonid,
> I also had your first idea, but I am a little suspicious if this is a
> good general approach.
> If aW is a special point, shouldn't Mathematica provide a solution in
> terms of If[] functions?
> There must be some reason why it is not doing so.
> And Mathematica is not the only software giving me an answer with this
> form...
> Also I was not able to force it to take into account any assumptions,
> like real variables, real Sqr,...
> There must be some regular non-manual way of getting a solution?
> Best!
> --JS
> On Wed, Sep 22, 2010 at 5:29 PM, Leonid Shifrin <lshifr at gmail.com> wrote:
> > Hi Julian,
> > Here is your expression:
> > In[137]:= expr = Integrate[(aV*t+v0)*Cos[(aW*t^2)/2+th0+t*w0],{t,0,Ts}]
> > Out[137]=
> > (1/(aW^(3/2)))(Sqrt[\[Pi]] (-aW v0+aV w0) Cos[th0-w0^2/(2 aW)]
> > FresnelC[w0/(Sqrt[aW] Sqrt[\[Pi]])]+Sqrt[\[Pi]] (aW v0-aV w0)
> > Cos[th0-w0^2/(2 aW)] FresnelC[(aW Ts+w0)/(Sqrt[aW] Sqrt[\[Pi]])]+aV Sqrt[aW]
> > (-Sin[th0]+Sin[th0+(aW Ts^2)/2+Ts w0])+Sqrt[\[Pi]] (aW v0-aV w0)
> > (FresnelS[w0/(Sqrt[aW] Sqrt[\[Pi]])]-FresnelS[(aW Ts+w0)/(Sqrt[aW]
> > Sqrt[\[Pi]])]) Sin[th0-w0^2/(2 aW)])
> > If you want to treat the point aW = 0 specially, the simplest is to put
> > it to zero right when the integral is computed:
> > In[138]:= exprAt0 = Integrate[(aV*t+v0)*Cos[th0+t*w0],{t,0,Ts}]
> > Out[138]= (-aV Cos[th0]+aV Cos[th0+Ts w0]-v0 w0 Sin[th0]+(aV Ts+v0) w0
> > Sin[th0+Ts w0])/w0^2
> > Alternatively, you can extract it from the full expression, but it is a
> > little more involved. One way is to change variable as t = 1/aW, then
> > expand in t around t=Infinity, then take the limit t->Infinity of the
> > resulting expression. One subtlety here is that you will need to expand
> > to second order in t to get the correct result:
> > In[140]:= exprAt0Alt =
> > Limit[Normal@Series[expr/.aW->1/t,{t,Infinity,2}],t->Infinity]
> > Out[140]= (-aV Cos[th0]+aV Cos[th0+Ts w0]+w0 (-v0 Sin[th0]+aV Ts Sin[th0+Ts
> > w0]+v0 Sin[th0+Ts w0]))/w0^2
> > You can check that the results are the same:
> > In[141]:= FullSimplify[exprAt0-exprAt0Alt]
> > Out[141]= 0
> > Hope this helps.
> > Regards,
> > Leonid
> >
> >
> > On Wed, Sep 22, 2010 at 9:57 AM, Julian <julian.stoev at gmail.com> wrote:
> >>
> >> Hello All,
> >>
> >> After long interruption with symbolic computing, I am struggling how
> >> to make a useful result out of this integral.
> >>
>> Integrate[(aV*t + v0)*Cos[(aW*t^2)/2 + th0 + t*w0], {t, 0, Ts}]
> >>
> >> It comes from the differential equations describing the motion of a
> >> wheeled robot. aV and aW are accelerations and v0, w0, th0 are initial
> >> conditions. The numerical solution clearly exists for different real
> >> accelerations, including positive, negative and zero.
> >>
> >> However the symbolic solution of Mathematica is:
> >> (Sqrt[Pi]*(-(aW*v0) + aV*w0)*Cos[th0 - w0^2/(2*aW)]*
> >> FresnelC[w0/(Sqrt[aW]*Sqrt[Pi])] + Sqrt[Pi]*(aW*v0 - aV*w0)*
> >> Cos[th0 - w0^2/(2*aW)]*FresnelC[(aW*Ts + w0)/(Sqrt[aW]*Sqrt[Pi])]
> >> +
> >> aV*Sqrt[aW]*(-Sin[th0] + Sin[th0 + (aW*Ts^2)/2 + Ts*w0]) +
> >> Sqrt[Pi]*(aW*v0 - aV*w0)*(FresnelS[w0/(Sqrt[aW]*Sqrt[Pi])] -
> >> FresnelS[(aW*Ts + w0)/(Sqrt[aW]*Sqrt[Pi])])*Sin[th0 - w0^2/
> >> (2*aW)])/
> >> aW^(3/2)
> >>
> >> Note that aW can be found inside a Sqrt function and also in the
> >> denominator. While I can see that FresnelS[Infinity] is well defined,
>> so aW==0 should not be a real problem, it is still very problematic
> >> how to use this result at this particular point. The case of negative
> >> aW is also interesting, because it will require a complex FresnelS and
> >> FresnelC. I somehow managed to remove this problem using
> >> ComplexExpand, but I am not sure this is the good solution.
> >>
> >> I tried to give Assumptions -> Element[{aW, aV, w0, v0, Ts, th0},
> >> Reals] to the integral, but there is no change in the solution.
> >>
> >> While I understand that the solution of Mathematica is correct in the
> >> strict mathematical sense, I have to use the result to generate a C-
> >> code, which will be evaluated numerically and the solution I have now
> >> is not working for this.
> >>
> >> Can some experienced user give a good advice how use get a solution of
> >> the problem?
> >>
> >> Thank you in advance!
> >>
> >> --JS
> >>
> >
> >
> --
> Julian Stoev, PhD.
> Control Researcher
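The special-case computation in the thread can be reproduced outside Mathematica. As a rough cross-check in Python/sympy (a sketch; symbol names follow the thread, and the closed form below is the one reported as Out[138]):

```python
import sympy as sp

t, Ts, aV, v0, th0 = sp.symbols('t Ts aV v0 th0', real=True)
w0 = sp.symbols('w0', positive=True)  # keeps sympy from branching on w0 == 0

# The aW == 0 special case of the integral discussed in the thread.
exprAt0 = sp.integrate((aV*t + v0) * sp.cos(th0 + t*w0), (t, 0, Ts))

# The closed form reported as Out[138].
reported = (-aV*sp.cos(th0) + aV*sp.cos(th0 + Ts*w0)
            - v0*w0*sp.sin(th0)
            + (aV*Ts + v0)*w0*sp.sin(th0 + Ts*w0)) / w0**2

# Numerical spot-check at arbitrary parameter values.
vals = {aV: 1.3, v0: 0.7, th0: 0.4, w0: 1.1, Ts: 2.0}
print(abs((exprAt0 - reported).subs(vals).evalf()) < 1e-9)  # True
```

The same kind of numerical spot-check is a cheap way to validate any generated C code against the symbolic result.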
the first resource for mathematics
A metric space $(S,d)$ is called hyperconvex if any collection of closed balls of radius $r_\alpha$ and center $s_\alpha$, $\alpha \in A$, which satisfy $d(s_\alpha, s_\beta) \le r_\alpha + r_\beta$ for all $\alpha, \beta$ in $A$, has nonempty intersection. The author begins with a good review and set of references for known results. He defines the $\epsilon$-fixed point set of $T: S \to S$ to be $F_\epsilon(T) = \{s : d(s, Ts) \le \epsilon\}$, then proves an approximate fixed point theorem: if $T: H \to H$ is nonexpansive and $H$ is hyperconvex, then $F_\epsilon(T)$ is hyperconvex and, if nonempty, is a nonexpansive retract of $H$. If $F_\epsilon(T)$ is also bounded, then the fixed point set $F(T)$ is nonempty as well. The rest of the paper gives interesting results and discussion of the hyperconvex hull, $N(S)$, of $S$ as defined by J. R. Isbell [Comment. Math. Helv. 39, 65-76 (1964; Zbl 0151.302)]. In particular, if $T: N(S) \to N(S)$ is nonexpansive, then $S \subset F_\epsilon(T)$. Also, any realization of $\ell_\infty$ is all of $\ell_\infty$. This is nice since realizations of $N(S)$ in a hyperconvex space $H$ containing $S$ always exist but, though isometric, are not generally unique. The author's first "Remark" is unfortunate in that it gives a badly flawed example to discount Remark 3.1 of the Isbell paper cited above. In fact, it is the reviewer's opinion that a minor modification of the author's lemma proof verifies Isbell's remark.
54H25 Fixed-point and coincidence theorems in topological spaces
47H10 Fixed point theorems for nonlinear operators on topological linear spaces
47H09 Mappings defined by “shrinking” properties
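For a concrete instance of the definition: the real line with the usual metric is hyperconvex, and there the ball-intersection condition reduces to an interval (Helly-type) check. The following Python sketch is illustrative only and is not from the paper under review:

```python
def pairwise_condition(balls):
    """d(s_a, s_b) <= r_a + r_b for every pair, as in the definition."""
    return all(abs(sa - sb) <= ra + rb
               for i, (sa, ra) in enumerate(balls)
               for sb, rb in balls[i + 1:])

def balls_have_common_point(balls):
    """On the line, the ball (s, r) is the interval [s - r, s + r]; a family
    of intervals meets exactly when the largest left endpoint is at most
    the smallest right endpoint."""
    lo = max(s - r for s, r in balls)
    hi = min(s + r for s, r in balls)
    return lo <= hi

balls = [(0.0, 1.0), (1.5, 1.0), (0.8, 0.3)]
print(pairwise_condition(balls), balls_have_common_point(balls))  # True True
```

Here the pairwise condition holds and the three intervals share the common segment [0.5, 1.0], illustrating hyperconvexity of the line.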
Find the center of mass of a lamina
March 27th 2010, 08:49 PM #1
The boundary of a lamina consists of the semicircles $y=\sqrt{1-x^2}$ and $y=\sqrt{4-x^2}$ together with the portions of the x-axis that join them. Find the center of mass of the lamina if the
density at any point is proportional to its distance from the origin.
I drew a graph that looks like this.
I know that polar coordinates are a good tool to use for circle-type questions like this, but I've never encountered something like this before. If anyone could just point me in the right
direction, that would be great. Thanks!
the bounds are $0\le \theta\le \pi$ while $1\le r\le 2$
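With those bounds, and writing the density as δ = k·r for an assumed constant of proportionality k, the centroid can be computed symbolically; here is an illustrative Python/sympy sketch (not from the original thread):

```python
import sympy as sp

r, theta, k = sp.symbols('r theta k', positive=True)

density = k * r          # "proportional to its distance from the origin"
dA = r                   # polar area element: dA = r dr dtheta

# Mass and first moments over the half-annulus 1 <= r <= 2, 0 <= theta <= pi.
M  = sp.integrate(density * dA, (r, 1, 2), (theta, 0, sp.pi))
Mx = sp.integrate(r*sp.sin(theta) * density * dA, (r, 1, 2), (theta, 0, sp.pi))
My = sp.integrate(r*sp.cos(theta) * density * dA, (r, 1, 2), (theta, 0, sp.pi))

xbar = sp.simplify(My / M)   # 0, by symmetry about the y-axis
ybar = sp.simplify(Mx / M)   # 45/(14*pi); note that k cancels
print(xbar, ybar)
```

The constant k drops out of both ratios, which is why "proportional to" is enough to pin down the center of mass.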
March 27th 2010, 09:08 PM #2
Larch Frequently Asked Questions
$Revision: 1.116 $
18 July 2001
by Gary T. Leavens
Table of Contents
This document is a list of frequently asked questions (FAQ) and their answers for the Larch family of specification languages. It is intended to be useful to those who are new to Larch, especially to
students and others trying to understand and apply Larch. However, there is some material that is also intended for experts in other formal methods who would like to learn more about Larch. If you
find something that seems too technical, please skip a bit, perhaps to another question.
We are looking for more contributions of questions and answers, as well as corrections and additions to the answers given. Please send e-mail to us at larch-faq@cs-DOT-iastate-DOT-edu (after changing
each "-DOT-" to "."). We will welcome and acknowledge any help you may give.
Bibliographic references are described at the end of this document (see section Bibliography). They look like "[Wing83]", which give the names of up to 3 authors, and an abbreviated year of
This FAQ is accessible on the WWW in the following URL.
HTML, plain text, postscript, and GNU info format versions are also available by anonymous ftp at the following URLs.
Archive-name: larch-faq
Posting-Frequency: monthly
Last-Modified: $Date: 2001/07/18 16:45:42 $
Version: $Revision: 1.116 $
URL: http://www.cs.iastate.edu/~leavens/larch-faq.html
Copyright: (c) 1995, 1996, 1997 Gary T. Leavens
Maintainer: Gary T. Leavens and Matt Markland <larch-faq@cs-DOT-iastate-DOT-edu>
This FAQ is copyright (C) 1997 Gary T. Leavens.
Permission is granted for you to make copies of this FAQ for educational and scholarly purposes, and for commercial use in specifying software, but the copies may not be sold or otherwise used for
direct commercial advantage; permission is also granted to make this document available for file transfers from machines offering unrestricted anonymous file transfer access on the Internet; these
permissions are granted provided that this copyright and permission notice is preserved on all copies. All other rights reserved.
This FAQ is provided as is without any express or implied warranties. While every effort has been taken to ensure the accuracy of its information, the author and maintainers assume no responsibility
for errors or omissions, or for damages resulting from the use of the information contained herein.
This work was partially funded by (US) National Science Foundation, under grant CCR-9503168.
Thanks to Matt Markland who provided the initial impetus for getting this FAQ started, who helped make up the initial list of questions.
Many thanks to John Guttag, Jim Horning, Jeannette Wing, and Steve Garland for creating the Larch approach and its tools, and for their patient teaching of its ideas through their writing, personal
contact, and many e-mail messages.
Thanks to Perry Alexander, Al Baker, Pieter Bekaert, Eric Eide, John Fitzgerald, Jim Horning, Ursula Martin, Bertrand Meyer, Clyde Ruby, and Jeannette Wing for general comments, updates, and
corrections to this FAQ. Thanks to Steve Garland and Jim Horning for help with answers. Thanks to Pierre Lescanne, Kishore Dhara, Teo Rus, Narayanan Sridhar, and Sathiyanarayanan Vijayaraghavan for
suggesting questions.
In this chapter, we discuss global questions about Larch.
Larch [Guttag-Horning93] may be thought of as an approach to formal specification of program modules. This approach is an extension of Hoare's ideas for program specification [Hoare69] [Hoare72a].
Its distinguishing feature is that it uses two "tiers" (or layers) [Wing83]. The top tier is a behavioral interface specification language (BISL), tailored to a specific programming language. Such
BISLs typically use pre- and postcondition specifications to specify software modules. The bottom tier is the Larch Shared Language (LSL, see [Guttag-Horning93], Chapter 4), which is used to describe
the mathematical vocabulary used in the pre- and postcondition specifications. The idea is LSL specifies a domain model, some mathematical vocabulary, and the BISL specifies both the interface and
the behavior of program modules. See section 1.2 Why does Larch have two tiers?, for more of the reasons for this separation.
The Larch family of specification languages [Guttag-Horning-Wing85b] consists of all the BISLs that use LSL for specification of mathematical vocabulary. Published BISLs with easily accessible
references include:
There are also some BISLs whose documentation is not as easy to come by: Larch/Generic [Chen89], GCIL (which stands for Generic Concurrent Interface specification Language [Lerner91]), and Larch/
CORBA (a BISL for CORBA IDL [Sivaprasad95]).
Typically, each BISL uses the declaration syntax of the programming language for which it is tailored (e.g., C for LCL, Modula-3 for LM3), and adds annotations to specify behavior. These annotations
typically consist of pre- and postconditions (often using the keywords requires and ensures), some way to state a frame axiom (often using the keyword modifies), and some way to indicate what LSL
traits are used to provide the mathematical vocabulary. See section 4. Behavioral Interface Specification Languages (BISLs), for more details.
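The flavor of such requires/ensures annotations can be imitated with runtime assertions in ordinary code. The following Python sketch is a loose analogue for illustration only — it is not Larch syntax, and the function and its contract are invented:

```python
def withdraw(balance, amount):
    """Debit `amount` from `balance` (hypothetical example, not Larch syntax).

    requires: amount > 0 and amount <= balance      -- precondition
    modifies: nothing (a new balance is returned)   -- frame condition
    ensures:  result == balance - amount            -- postcondition
    """
    assert amount > 0 and amount <= balance, "precondition violated"
    result = balance - amount
    assert result == balance - amount, "postcondition violated"
    return result

print(withdraw(100, 30))  # 70
```

Unlike runtime checks, a BISL states such conditions declaratively in terms of the LSL vocabulary, so they can be reasoned about without executing the program.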
The two "tiers" used in the Larch family of specification languages are LSL, which is called the bottom tier, and a behavioral interface specification language (a BISL), which is called the top tier.
This specification of program modules using two "tiers" is a deliberate design decision (see [Wing83] and [Guttag-Horning93], Chapter 3). It is similar to the way that VDM [Jones90] [ISO-VDM96]
[Fitzgerald-Larsen98] is typically used (one specifies a model and some auxiliary functions, in a way similar to LSL, and then one uses that vocabulary to specify the operations that are to be
implemented in some programming language.) However, VDM does not have a family of BISLs that are tailored to specific programming languages. Only the Larch family, and the RESOLVE family
[Edwards-etal94], have specification languages tailored to specific programming languages. (See section 1.4 How does Larch compare to other specification languages?, for more comparisons.)
What are the advantages of using the Larch two-tiered approach?
• Having different BISLs tailored to each programming language allows each BISL to specify all the details of a program module's interface (how to call it) and behavior (what it does). If one has a
generic interface specification language, such as VDM-SL [Jones90] [ISO-VDM96] [Fitzgerald-Larsen98], then one cannot specify all interface details.
• The division into two tiers allows the language used for each tier to be more carefully designed and expressive. For example, although it is possible to deal with finiteness, partiality,
exceptional behavior, aliasing, mutation, and side-effects in mathematics, dealing with such issues requires using more apparatus than just mathematical functions. (One might use, for example,
the techniques of denotational semantics [Schmidt86].) Since such additional apparatus does not fit well with an algebraic style of specification, these issues are thus typically ignored in LSL
specifications. By ignoring these issues, LSL specifications can deal with potentially infinite, pure values, and total functions, in a stateless setting. On the other hand, while pre- and
postcondition specifications are good for the specification of finiteness, partiality, exceptions, aliasing, mutation, and side-effects, they are not well adapted to the specification of
mathematical vocabulary. Thus each tier uses techniques that are most appropriate to its particular task.
• The division into two tiers allows the underlying logic to be classical. That is, because one is not specifying recursive programs in LSL, there is no need for a specialized logic that deals with
partial functions; instead, LSL can use total functions and classical logic [Leavens-Wing97].
• The domain models described in LSL can be shared among many different BISLs, so the effort that goes into constructing such models is not limited to just one BISL [Wing83].
One could imagine using some other language besides LSL for the bottom tier and still getting these advantages. This is done, for example, in RESOLVE [Edwards-etal94]. Chalin [Chalin95] has suggested
that Z [Hayes93] [Spivey92] could also serve for the bottom tier, but for that purpose one would probably want to use a subset of Z that omits state and the specification of side-effects.
The major disadvantage of the tailoring of each BISL to a specific programming language is that each BISL has its own syntax and semantics, and thus requires its own tool support, manuals, etc.
(Parts of the following paragraph are adapted from a personal communication from Horning of November 29, 1996.)
Something that might seem to be a disadvantage of the Larch approach is that one sometimes finds oneself writing out very similar specifications in both LSL and a BISL. Isn't that a waste of
effort? It may seem like that, but one has to realize that one is doing two different things, and thus the two similar specifications cannot really be the same. What may happen is that one writes
similar declarations on the two levels, with parallel sets of operators and procedures. For example, one might specify LSL operators empty, push, pop, top for stacks, and then specify in a BISL
procedures Empty, Push, Pop, and Top. While the BISL procedure specification will use the LSL operators in their pre- and postconditions, there will probably be significant differences in their
signatures. For example, the procedure Push may have a side-effect, rather than returning a new stack value as would the push operator in the trait. If you find yourself repeating the axioms from an
LSL trait in a BISL specification, then you're probably making a mistake.
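To make the stack example concrete, here is a minimal sketch of what the LSL tier might look like. (The trait below is illustrative only; it is not one of the traits in Guttag and Horning's handbook.)

  % This is LSL (an illustrative sketch, not a handbook trait)
  StackTrait(E, S): trait
    introduces
      empty: -> S
      push: S, E -> S
      pop: S -> S
      top: S -> E
    asserts
      \forall s: S, e: E
        pop(push(s, e)) == s;
        top(push(s, e)) == e;

A BISL specification of the procedure Push would then use the operator push in its ensures clause (for example, stating that the stack's new abstract value equals push applied to its old value and the argument), while Push itself operates by side-effect; the axioms above are not repeated in the BISL.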
It is also difficult to extract a simple mathematical vocabulary (like the total functions of LSL) from a BISL specification. This is because the semantics of program modules tends to be complex. Thus a BISL specification is not an appropriate vehicle for the specification of mathematical vocabulary.
See section 1.3 What is the difference between LSL and a Larch BISL?, for more discussion on this point.
The main difference between LSL and a Larch BISL is that in LSL one specifies mathematical theories of the operators that are used in the pre- and postcondition specifications of a Larch BISL. Thus
LSL is used to specify mathematical models and auxiliary functions, and a Larch BISL is used to specify program modules that are to be implemented in some particular programming language.
The following summary, provided by Horning (private communication, October 1995), contrasts LSL and a Larch BISL.
     A BISL specifies (not LSL)          LSL specifies (not a BISL)
     ---------------------------         ---------------------------
     State                               Theory definition
       storage allocation                  Theory operations
       aliasing, side-effects                inclusion, assumption,
     Control                                 implication, parameterization
       errors, exception handling,         Operator definitions
       partiality, pre-conditions          Facilities for libraries, reuse
     Programming language constructs
       type system, parameter modes,
       special syntax and notations
         (no LSL operators are built in)
Horning's summary is that a BISL handles "all the messy, boring stuff", while LSL handles "all the subtle, hard stuff", and thus the features in the two tiers do not overlap much. However, that
summary is from the point-of-view of a designer of LSL. From the point of view of a user of a Larch BISL, the BISL is where you specify what is to be implemented, and LSL is where you do "domain modeling".
See [Lamport89] [Guttag-Horning93], Chapter 3, for more about the role of interface specification, and the separation into two tiers.
First, a more precise comparison is needed, because Larch is not a single language, but a family of languages (see above). Another problem with this comparison is that Larch has two tiers, but VDM-SL
[Jones90] [ISO-VDM96] [Fitzgerald-Larsen98], Z [Hayes93] [Spivey92], and COLD-K [Feijs-Jonkers92] are all integrated languages, which mix aspects of both of the tiers of the Larch family. Thus we
will compare some aspects of VDM-SL and Z to LSL and other aspects to BISLs in the Larch family (such as LCL or LM3 [Guttag-Horning93], Chapters 5 and 6).
(The comparisons below tend to be a bit technical; if you are not familiar with specification and verification, you might want to skip to another question.)
An excellent resource for comparison of LSL to other algebraic specification languages is [Mosses96]. Also a comparison of Larch with RESOLVE [Edwards-etal94] is found in [Sitaraman-Welch-Harms93].
By VDM, one means, of course, the specification language VDM-SL [Jones90] [ISO-VDM96] [Fitzgerald-Larsen98]. In comparison with LSL, in VDM-SL one can specify mathematical values (models) using
constructs similar to those in denotational semantics and typed, functional programming languages. That is, one can declare types of records, tagged unions, sets, maps and so on, and one can specify
mathematical functions on such values, either in a functional style, or by the use of pre- and postcondition specifications. For example, one might write the following in VDM-SL to define a phone book model.
-- This is a VDM-SL model and auxiliary function specification
PhoneBook = map Id to (Info | HIDDEN)
Info :: number : seq of Digit
address : Address
listed : PhoneBook * Id -> bool
listed(pb, id) == id in set dom pb
The above example uses the interchange syntax of VDM-SL [ISO-VDM96]. In this syntax, for example, * is an approximation to the mathematical notation for a cross product; see Section 10 of
[ISO-VDM96] for details. The type PhoneBook is a map to a tagged union type. The type Info is a record type. The auxiliary function listed is given in a mathematical form, similar to LSL.
In LSL one can use standard traits (for example, those in Guttag and Horning's handbook [Guttag-Horning93], Appendix A) and the LSL shorthands tuple and union (see Section 4.8 of [Guttag-Horning93])
to translate VDM-SL specifications of mathematical values into LSL. For example, the following is an approximation to the above VDM-SL specification.
% This is LSL
PhoneBookTrait : trait
  includes Sequence(Digit, Number),
           FiniteMap(PhoneBook, Id, InfoOrHIDDEN)
  Info tuple of number : Number, address : Address
  InfoOrHIDDEN union of left : Info, right : HIDDEN
  introduces
    listed : PhoneBook, Id -> Bool
  asserts
    \forall pb: PhoneBook, id: Id
      listed(pb, id) = defined(pb, id);
However, VDM-SL does not have the equivalent of LSL's equational specification constructs, and thus it is more difficult in VDM-SL to specify a mathematical vocabulary at the same level of
abstraction. That is, there is no provision for algebraic specification in VDM-SL.
Because VDM-SL has a component that is like a programming language, and because it allows recursive definitions, the logic used in VDM-SL is a three-valued logic instead of the classical, two-valued
logic of LSL. This may make reasoning about VDM-SL specifications relatively more difficult than reasoning about LSL specifications.
In comparison to Larch family BISLs, the first thing to note is that VDM-SL is not a BISL itself, as it is not tailored to the specification of interfaces for some particular programming language.
This is evident in the following example, which continues the above VDM-SL phonebook specification.
-- The following is a VDM-SL operation specification
LOOKUP(pb: PhoneBook, id: Id) i: Info
pre listed(pb, id)
post i = pb(id)
In the above VDM-SL specification, it is evident that one can define some generic interface information (parameters, information about their abstract values, etc.). The pre- and postcondition format
of such specifications has been adopted by most Larch family BISLs. Framing for procedure specifications in VDM-SL is done by declaring external variables, and noting which are readable and writable
by a procedure; in LCL and Larch/C++ these two classes of variables correspond, respectively, to external declarations and to the modifies clause. Because VDM-SL is generic, it cannot, by itself,
specify language-specific interface details, such as pointers, exception handling, etc. However, an advantage of VDM-SL is that it has better tool support than most BISLs in the Larch family.
Like VDM-SL, Z [Hayes93] [Spivey92] (pronounced "zed") is a specification language that allows both the specification of mathematical values and program modules.
Like LSL, Z allows the definition of mathematical operators equationally (see [Spivey92], section 3.2.2). A Z schema is roughly comparable to an LSL trait. The main difference between Z and LSL is
that in Z specifications can include state variables. That is, a Z schema can specify variables, or a procedure with side effects on such variables. In this respect a Z schema can act much like a
Larch family BISL, since, unlike LSL, it is not restricted to the specification of mathematical constants and mathematical functions.
By concentrating on the features of Z that do not involve state variables, one can more closely compare Z and LSL. We do this for the next several paragraphs below.
The Z schema calculus (see [Spivey92], section 3.8) allows one to apply to schemas: most logical connectives, quantification, name hiding, and pipelining. (One can also use sequential composition
between schemas that specify side-effects.) The schema calculus is thus more expressive than the LSL mechanisms: includes and assumes. However, in LSL one can effectively conjoin traits by including
them all to make a new trait.
It is possible in Z to write schemas that describe constants and mathematical functions only. However, many purely mathematical Z schemas describe something more like an instance of an LSL tuple.
This is done in Z by giving several variable declarations. For example, consider the following Z schema (in which Z stands for the integers).
   __Z_rational______________
  | num, den: Z
  |__________________________
The above is semantically similar to the LSL trait LSL_rational given below.
LSL_rational: trait
  includes Integer
  introduces
    num: -> Int
    den: -> Int
However, in Z usage, the schema Z_rational is likely to be useful, whereas in LSL one would typically write a trait such as the following, which specifies a type (called a sort in LSL) of rational numbers.
LSL_rational_sort: trait
includes Integer
rational tuple of num: Int, den: Int
The above is semantically similar to a Z schema that defines rational as the Cartesian product of Z and Z, and specifies the constructors and observers of the LSL shorthand (see [Guttag-Horning93],
Figure 4.10). This is a difference in usage, not in expressiveness, however.
Z and LSL both have comparable built-in notations for describing abstract values. Z has basic types (the integers, booleans, etc.) and more type constructors (sets, Cartesian products, schemas, and
free types). A "free type" in Z (see [Spivey92], Section 3.10) is similar to an LSL union type, but Z allows direct recursion. Z also has a mathematical tool-kit (see [Spivey92], Chapter 4) which
specifies: relations, partial and total functions, sequences, and bags, and a large number of operation symbols and mathematical functions defined on these types. This tool-kit is roughly comparable
to Guttag and Horning's handbook (see [Guttag-Horning93], Appendix A), although it tends to make heavier use of symbols (like "+") instead of named functions (like "plus"). (This use of symbols in Z
has been found to be a problem with the understandability of Z specifications [Finney96]. However, the readability of Z is improved by the standard conventions for mixing informal commentary with the
formal notation.)
LSL and Z both use classical logic, and the underspecification approach to the specification of partiality, in contrast to VDM-SL. Z does not have any way of stating the equivalent of an LSL
partitioned by or generated by clause.
See [Baumeister96] for an extended Z example (the birthday book of [Spivey92]) worked in LSL.
Compared to a Larch family BISL, Z, like VDM-SL, is generic, and thus cannot specify the interface details of a particular programming language.
An interesting semantic difference between Z and a typical Larch-family BISL (and VDM-SL) is that Z does not have true preconditions (see [Jackson95], Sections 7.2 and 8.1); instead, Z has guards
(enabling conditions, which are somewhat like the when clauses of GCIL [Lerner91]). Thus Z may be a better match to the task of defining finite state machines; the use of enabling conditions also (as
Jackson points out) allows different views of specifications to be combined. Aside from the Larch BISLs like GCIL that have support for concurrency, most Larch family BISLs have preconditions which
are "disclaimers". Because of this difference, and the Z schema calculus, it is much easier to combine Z specifications than it is to combine specifications in a Larch BISL.
A related point of usage difference is the frequency of the use of invariants in Z (Spivey calls these "constraints"; see [Spivey92], section 3.2.2). Because in Z one often specifies schemas with
state variables, it makes sense to constrain such variables in various ways. In Z, a common idiom is to conjoin two schemas, in which case the invariants of each apply in the conjoined schema. A
typical Larch family BISL has no provision for such conjunction, although object-oriented BISLs (such as Larch/C++) do get some mileage out of conjunction when subtypes are specified.
Because Z does not distinguish operation specifications from auxiliary function specifications, it does not have a separation into tiers. So in Z one can use operations previously specified to
specify other operations.
Like Larch, COLD-K [Feijs-Jonkers92] makes more of a separation into mathematical and interface specifications, although all are part of the same language. The part of COLD-K comparable to LSL is its
algebraic specifications (see [Feijs-Jonkers92], Chapters 2 and 3). In contrast to LSL, COLD-K does not use classical logic, and can specify partial functions. All COLD-K types have an "undefined"
element, except the type of the Booleans. The logic in COLD-K has a definedness predicate that allows one to specify that some term must be undefined, for example; such a specification is not
directly supported by LSL. A feature of COLD-K not found in LSL is the ability to hide names (see [Feijs-Jonkers92], Section 3.3); that is to only export certain names from a COLD-K scheme. Both
COLD-K and LSL have mechanisms for importing and renaming.
Compared to a Larch family BISL, COLD-K is often more expressive, because it uses dynamic logic as its fundamental semantics (see [Feijs-Jonkers92], Section 5.9). In COLD-K one can also write
algorithmic descriptions, and can mix algebraic, algorithmic, and dynamic logic specifications. COLD-K has relatively more sophisticated mechanisms for framing (see [Feijs-Jonkers92], p. 126, and
[Jonkers91]) than most Larch family BISLs (see section 4.9 What does modifies mean?). All of this would seem to make COLD-K both more expressive and more difficult to learn than a typical Larch family BISL.
(The following is adapted from a posting of Horning to the larch-interest mailing list on June 19, 1995, which was itself condensed from the preface of [Guttag-Horning93].)
This project has been a long time in the growing. The seed was planted by Steve Zilles on October 3, 1973. During a programming language workshop organized by Barbara Liskov, he presented three
simple equations relating operations on sets, and argued that anything that could reasonably be called a set would satisfy these axioms, and that anything that satisfied these axioms could reasonably
be called a set. John Guttag refined this idea. In his 1975 Ph.D. thesis (University of Toronto), he showed that all computable functions over an abstract data type could be defined algebraically
using equations of a simple form. He also considered the question of when such a specification constituted an adequate definition.
As early as 1974, Guttag and Horning realized that a purely algebraic approach to specification was unlikely to be practical. At that time, they proposed a combination of algebraic and operational
specifications which they referred to as "dyadic specification." Working with Wing, by 1980 they had evolved the essence of the two-tiered approach to specification; the term "two-tiered" was
introduced by Wing in her dissertation [Wing83]. A description of an early version of the Larch Shared Language was published in 1983. The first reasonably comprehensive description of Larch was
published in 1985 [Guttag-Horning-Wing85a].
In the spring of 1990, software tools supporting Larch were becoming available, and people began using them to check and reason about specifications. To make information on Larch more widely
available, it was decided to write a book [Guttag-Horning93].
The First International Workshop on Larch, organized by Ursula Martin and Jeannette Wing, was held in Dedham, Massachusetts, July 13-15, 1992 [Martin-Wing93].
According to Jim Horning (personal communication, January 20, 1998) and John Guttag (personal communication, March 28 1998): "The name was not selected at PARC (hence from the Sunset Western Garden
Book), but at MIT. The project had been known informally there as `Bicycle'." One day Mike Dertouzos [director of the MIT Laboratory for Computer Science] and John were talking on the phone.
According to Jim, Mike asked John to "think up a new name quick, because he could just imagine what Senator Proxmire would say if he noticed that DARPA was sponsoring `a bicycle project' at MIT."
John was, he says, "looking at a larch tree at" his "parents' house" and came up with "Larch". He also says "I did grow up in Larchmont, and I'm sure that influenced my choice of name." Jim adds, "It
was only later that Butler Lampson suggested that we could use it for the specification of very `larch' programs."
"Larch" is the common name of a genus (Larix) of deciduous conifers. The cover of [Guttag-Horning93] has a picture of larch trees on it. For more pictures of larch trees, see the following URL.
GIF format pictures of the Larch logo can be found at the following URLs.
A good place to start is Guttag and Horning's book [Guttag-Horning93]. (This book is sometimes called "the silver book" by Larchers.) Consider it required reading. If you find it tough
going, you might want to start with Liskov and Guttag's book [Liskov-Guttag86], which explains the background and motivates the ideas behind abstraction and specification. (See section 1.9 What is
the use of formal specification and formal methods?, for more background.)
You might also want to read the introductory article about the family [Guttag-Horning-Wing85b], although it is now somewhat dated. You might (especially if you are an academic) want to also read some
of the proceedings of the First International Workshop on Larch [Martin-Wing93], which contains several papers about Larch.
The usenet newsgroup `comp.specification.larch' is devoted to Larch. You might also want to read the newsgroup `comp.specification.misc' for more general discussions.
There was a mailing list, called "larch-interest," for Larch. However, it has been discontinued, now that the usenet newsgroup is available. The archives of this list are available from the following URL.
Other on-line resources can be found through the global home page for Larch at the following URL.
Other resources include SRC's Larch home page:
This frequently asked questions list (FAQ) is accessible on the world-wide web in:
HTML, plain text, postscript, and GNU info format versions are also available by anonymous ftp at the following URLs.
This question might mean one of several other more precise questions. These questions are discussed below.
One question is: is there a Larch-style BISL for some particular object-oriented (OO) programming language? Yes, there are Larch-style BISLs for Modula-3 (see [Guttag-Horning93], chapter 6, and
[Jones91]), Smalltalk-80 (see [Cheon-Leavens94]), and C++ (see [Leavens97] [Leavens96b]).
Another question is: does LSL have extensions that support object-oriented specification? No, there are none in particular. One could imagine extensions of LSL with some notion of subsorting, as in
OBJ [Futatsugi-etal85] [Goguen-etal87], but as yet this has not been done.
There are endless debates about the use of formal specification and formal methods. For recent discussions see [Saiedian-etal96]. The following discussion may help you enter the debate in a
reasonable fashion.
First, be sure you know what formal methods are, and what they are good for [Liskov-Guttag86] [Wing90]. Roughly speaking, formal methods encompass formalized software design processes, formal
specifications, formal derivation of implementations, and formal verification. You might also want to look at the following URL for more recent work on formal methods.
Formal verification of implementations that were not designed with verification in mind is impossible in general. Formal verification has been critiqued as too difficult to justify and manage
[DeMillo-Lipton79]. Programs can also only be verified relative to some abstract machine [Fetzer88]. However, these problems haven't stopped people from trying to use formal verification to gain an
added level of confidence (or to help find bugs [Garland-Guttag-Horning90] [Tan94]) in software that is very critical (in terms of safety or business factors). Of course, you can still use formal
methods without doing formal verification.
Several advocates of formal methods have tried to explain formal methods by debunking the "myths" that surround them [Hall90] [Bowen-Hinchey95]. Bowen and Hinchey have also given some advice in the
form of "commandments" for good use of formal methods [Bowen-Hinchey95b].
Of the various kinds of formal methods, formal specification (that's where Larch comes in) seems to be the most useful. At a fundamental level, some specification is needed for any abstraction
(otherwise there is no abstraction [Liskov-Guttag86]). A specification is useful as a contract between the implementor and a client of an abstraction (such as a procedure or a data type). This
contract lays out the responsibilities and obligations of both parties, and allows division of labor (and a division of blame when something goes wrong). However, such a specification need not be
formal; the main reason for preferring formal specification over informal specification is that formal specification can help prevent the ambiguity and unintentional imprecision which plague informal specifications.
There are, however, some disadvantages to using formal specifications. One shouldn't take the search for precision to an extreme and try to formalize everything, as this will probably cost much time
and effort. (Instead, try to formalize just the critical aspects of the system.) Another disadvantage is that a formal specification can itself contain errors and inconsistencies, which can lead to
the specification becoming meaningless. The use of formality by itself also does not lead to well-structured and readable specifications; that is a job that humans have to do.
Although you can use the Larch languages and tools without subscribing to any particular view on the utility of formal methods, there is an emerging view of this question in the Larch community. This
view is that formal specification and formal methods are most useful as aids to debugging designs [Garland-Guttag-Horning90] [Tan94]. The basic idea is that one writes (what one thinks is) redundant
information into specifications, and this information is compared with the rest of the specification. Often the supposedly redundant information does not match, and trying to check that it does
brings out design or conceptual errors. Thus formal methods play a role complementary to prototyping, in that they allow one to analyze certain properties of a design to see if there are lurking
inconsistencies or problems.
Much of what has been said above seems logical, but experimentation would be helpful to understand what is really true and to help quantify the costs and benefits of formal methods.
One last answer to the question: some people find writing formal specifications (and even proofs) fun. No one knows why.
In this chapter, we discuss questions about the Larch Shared Language (LSL).
The Larch Shared Language (LSL) (see [Guttag-Horning93], Chapter 4, and [Guttag-Horning-Modet90]) is a language for specifying mathematical theories. LSL is a kind of equational algebraic
specification language [Guttag75] [Guttag-Horning78] [Goguen-Thatcher-Wagner78] [Ehrig-Mahr85] [Futatsugi-etal85] [Mosses96] [Loeckx-Ehrich-Wolf96]. That is, specifications in LSL mainly consist of
first-order equations between terms. The semantics of LSL is based on classical, multisorted first-order equational logic (see section 2.14 What is the meaning of an LSL specification?). However, LSL
does have two second-order constructs, which allow you to specify rules of data type induction and certain kinds of deduction rules (see section 2.10 How and when should I use a generated by clause?,
and see section 2.9 How and when should I use a partitioned by clause?).
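As an illustration of these two second-order constructs, the following sketch of a finite-set trait (written here for illustration; it is not a handbook trait) uses both:

  % This is LSL (an illustrative sketch)
  SetSketch(E, C): trait
    introduces
      {}: -> C
      insert: C, E -> C
      __\in__: E, C -> Bool
    asserts
      C generated by {}, insert
      C partitioned by \in
      \forall s: C, e, e1: E
        ~(e \in {});
        (e \in insert(s, e1)) == (e = e1 \/ e \in s);

The generated by clause licenses induction over {} and insert (every set is built by finitely many insertions), and the partitioned by clause says that two sets with the same members are equal; neither property is expressible as a first-order equation.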
Unlike a programming language, LSL is a purely declarative, mathematical formalism. In LSL there are no side-effects, algorithms, etc. (see section 2.18 What pitfalls are there for LSL specifiers?).
Instead of specifying computation, in LSL one specifies mathematical operators and their theories. These operators are used in Larch behavioral interface specification languages as their mathematical
vocabulary (see section 1.1 What is Larch? What is the Larch family of specification languages?).
An unusual feature of LSL is that it provides ways to add checkable redundancy to specifications (see section 2.11 When should I use an implies section?). See section 1.4 How does Larch compare to
other specification languages?, for more detailed comparisons of LSL and related specification languages.
See Chapter 4 of [Guttag-Horning93], and the rest of this chapter, for more information.
Besides this FAQ, the best place to look is probably your own computer system. You should have a manual page for the LSL checker, if it's installed on your system. Try the Unix command man lsl to see it.
You should also look for a directory (such as `/usr/larch/LSL') where the installation of LSL is found. In that directory, you will find a subdirectory `Doc', where there is some documentation on the
checker. See section 2.3 Where can I get a checker for LSL?, if you don't have the LSL checker installed on your system.
There is no official reference manual for the latest version of LSL (3.1) on-line. The defining document for LSL version 2.3 [Guttag-Horning-Modet90] is available from the URL
You can get the MIT releases of the LSL checker using the world-wide web, starting at the following URL.
You can also get the MIT release by anonymous ftp from the following URL.
At Iowa State, the Larch/C++ group has made available later beta releases of the LSL checker that fix problems with its profligate use of space. You can get the sources for Unix and Windows 95
executables from the following URL.
You might also be interested in the Larch Development Environment (LDE), from Cheng's group at Michigan State University. This environment integrates tools that support LSL, LP, LCL, and Larch/C++.
It runs under SunOS and Solaris. You can get it from the following URL.
The following makefile shows one way to use the Unix command make to help check LSL traits. It relies on properties of the Bourne shell under Unix and standard Unix make, so it would have to be
adjusted to work on other systems. (Also, if you cut and paste this makefile, be sure to change the leading blanks to tab characters on the rule lines.)
SHELL = /bin/sh
LSL = lsl

.SUFFIXES: .lsl .lsl-ckd .lp

.lsl.lsl-ckd:
        $(LSL) $(LSLFLAGS) $< 2>&1 | tee $@

.lsl.lp:
        $(LSL) -lp $(LSLFLAGS) $<

checkalltraits:
        $(LSL) $(LSLFLAGS) *.lsl

clean:
        rm -f *.lsl-ckd *.lpfrz

cleanall: clean
        rm -f *.lpscr *.lp *.lplog
If you use the emacs text editor, you can then check your traits by using M-x compile (and responding with something like checkalltraits and a return). When you do this, you can then use C-x ` (which
is an alias for M-x next-error) to edit the LSL trait with the next error message (if you're using version 3.1beta5 or later of the LSL checker). Emacs will put the cursor on the
appropriate line for you automatically.
See section 3.10 How do I use LP to check my LSL traits?, for more information on how to use LP to check the implications in LSL traits.
The "sorts" in an LSL trait correspond to the types in a programming language. They are names for sets of values.
This usage is historical (it seems to come from [Goguen-Thatcher-Wagner78]), and arises from the desire to avoid the word "type", which is heavily overused. However, it does have some practical
benefit, in that in the context of a BISL, one can say that a type comes from a programming language, but a sort comes from LSL.
Unlike some other specification languages (e.g., OBJ [Futatsugi-etal85] [Goguen-etal87]), in LSL there is no requirement that all sort names be declared. For example, in the following trait, a sort
Answer is used.
AnswerTrait: trait
  introduces
    yes, no, maybe: -> Answer
In summary, LSL sorts are declared by being mentioned in the signatures of operators.
An LSL trait is used to describe a mathematical theory. This could be used by a mathematician to simply formalize a theory, but more commonly the theory specified is intended to be used as the
mathematical vocabulary for some BISL. Another common use is as a way of specifying input to the Larch Prover (LP).
When LSL is used as the mathematical vocabulary for a behavioral interface specification, some other common uses can be identified. Quite often one wants to specify the abstract values of some data type. (See
section 4.4 What is an abstract value?.) Examples include the Integer, Set, Queue, and String traits in Guttag and Horning's handbook [Guttag-Horning93a]. Another common use is to specify some
function on abstract values, in order to ease the specification of some procedure.
One might also write a trait in order to:
• specialize some existing trait by renaming some sorts or by instantiating some parameterized sorts,
• change vocabulary by renaming some operators of some existing trait,
• state implications, for the benefit of readers, to help debug a specification [Garland-Guttag-Horning90], or as input to LP,
• add additional operators to some existing trait, as a convenience in specifying some interface or as a way of introducing a name for some idea that is an important concept in an interface
specification, or
• combine several existing traits, to make it easier to use them in an interface specification.
The sections of an LSL trait are determined by the LSL grammar [Guttag-Horning-Modet90]. (See section 2.3 Where can I get a checker for LSL?, for a more recent grammar, which is found in the file
`Doc/lsl3_1.y', for version 3.1, and `Doc/lsl3_2.y', for version 3.2.)
As a rough guide, one can have the following sections, in the following order.
A good general introduction is found in [Guttag-Horning93], Chapter 4. See section 2.8 What is the difference between assumes and includes?, for more on the purposes of the assumes and includes clauses.
One way to think of this is that includes requests textual inclusion of the named traits, but assumes only says that all traits that include the trait being specified must include some trait or
traits that makes the assumptions true. Hence includes is like #include in C or C++, but assumes is quite different.
A common reason to use assumes instead of includes is when you want to specify a trait with a sort parameter, and when you also need some operators to be defined on your sort parameters. For example,
Guttag and Horning's handbook trait Sequence (see [Guttag-Horning93], page 174), starts as follows
Sequence(E, C): trait
assumes StrictPartialOrder(>, E)
The assumes clause is used to specify that the sort parameter E must have an operator > that makes it a strict partial order. One way to view such assumptions is that they specify what is needed
about the theory of a sort parameter in order to sort check, and make sense of the operators that apply to the sort parameter [Goguen84] [Goguen86].
One can also use assumes to state assumptions about operators that are trait parameters. For example, Guttag and Horning's handbook trait PriorityQueue (see [Guttag-Horning93], page 175), starts as follows.
PriorityQueue (>: E,E -> Bool, E, C): trait
assumes TotalOrder(E for T)
The assumes clause is used to specify that the parameter >:E,E -> Bool must make E into a total order.
Another reason to use an assumes clause instead of an includes clause is to state that a trait is not self-contained, but is intended to be included in some context in which some other trait has been
included. For example, Guttag and Horning's handbook trait IntegerPredicates (see [Guttag-Horning93], page 164), starts as follows.
IntegerPredicates (Int): trait
assumes Integer
The assumes clause is used to specify that this trait has to be used in a trait that has included (the theory of) the trait Integer.
See [Guttag-Horning93], Section 4.6, for more details.
You should use a partitioned by clause to identify (i.e., make equivalent) some terms that would otherwise be distinct in an LSL trait's theory. A classic example is in the specification of sets,
which includes the following partitioned by clause (see [Guttag-Horning93], page 166).
C partitioned by \in
In this example, C is the sort for sets, and \in is the membership operator on sets. This means that two sets, s1 and s2, are distinct (i.e., not equal) if there is some element e such that e \in s1 is not equal to e \in s2. Another way to look at this is that you can prove that two sets are equal by showing that they have exactly the same members. (The sets insert(1, insert(2, {})) and insert(2, insert(1, {})) are thus equal.) This property is what distinguishes a set from a list, because it says that order does not matter in a set.
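To see extensionality concretely, here is a small Python model (purely illustrative; Python is of course not part of Larch). Sets are built only from an empty-set constant and an insert operator, and Python's frozenset compares them solely by membership, just as the partitioned by clause intends.

```python
# Illustrative Python model of "C partitioned by \in":
# sets are built with empty() and insert, and two sets are
# equal exactly when they have the same members.

def empty():
    return frozenset()

def insert(e, s):
    return s | {e}

s1 = insert(1, insert(2, empty()))
s2 = insert(2, insert(1, empty()))

# Different insertion orders, same members, hence the same set.
assert s1 == s2
```

In this model the insertion order is simply not observable, which is the behavior the partitioned by clause requires of the abstract values.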
The reason there is a separate clause in LSL for this is because of the implicit quantifiers involved. The partitioned by clause in the example above is more powerful than the following (see [Wing95]
, Section 5.1).
\forall e: E, s1, s2: Set
((e \in s1) = (e \in s2)) => s1 = s2
The reason is that the above code would allow you to conclude that {1,2} = {2,3}, because both have 2 as a member. (That is, one can let e be 2, and then the left hand side of the implication holds.)
Historically, LSL did not have a way to write quantifiers inside its formulas, and so the partitioned by clause was necessary. With LSL version 3.1, however, one could write the following equivalent to
the above partitioned by clause for Set. (See section 2.24 How do I write logical quantifiers within an LSL term?, for the meaning of \A.)
\forall e: E, s1, s2: Set
(\A e (e \in s1) = (e \in s2)) => s1 = s2
However, the above formula will be harder to work with in LP than the equivalent partitioned by clause. This is because LP has special provision for using partitioned by clauses. Another reason to prefer the partitioned by clause is that it conveys intent more clearly than a general logical formula like the above. Thus there are still good reasons to use a partitioned by clause in LSL.
See section 4.2 in Guttag and Horning's book [Guttag-Horning93] for more details.
You should use a generated by clause to specify that there can be no abstract values beside the ones that you are describing for a given sort. This is quite common for specifications of the abstract
values of data types, but will not usually be the case for the abstract values of sort parameters (such as E in Queue[E]). A generated by clause can also be thought of as excluding "junk" (i.e.,
nonstandard values) from a type [Goguen-Thatcher-Wagner78]. For example, in the trait AnswerTrait (see section 2.5 What is a sort?), nothing in the specification prevents there being other elements
of the sort Answer besides yes, no, and maybe. That is, if x has sort Answer, then the formula
x = yes \/ x = no \/ x = maybe
is not necessarily true; in other words, it's not valid. However, in the following trait the above formula is valid, because of the generated by clause. (This is stated in the trait's implies section.)
AnswerTrait2: trait
  includes AnswerTrait
  asserts
    Answer generated by yes, no, maybe
  implies
    equations
      yes \neq no;
      yes \neq maybe;
      no \neq maybe;
    \forall x: Answer
      x = yes \/ x = no \/ x = maybe;
Yet another reason to use a generated by clause is to assert a rule of data type induction. This view is particularly helpful in LP, because it allows one to divide up a proof into cases, and to use
induction. For example, when proving some property P(x) holds for all x of sort Answer, the generated by clause of the trait AnswerTrait2 allows one to proceed by cases; that is, it suffices to prove
P(yes), P(no), and P(maybe).
A more typical induction example can be had by considering Guttag and Horning's handbook trait Set (see [Guttag-Horning93], page 167). In this trait, C is the sort for sets. It has the following
generated by clause (by virtue of including the trait SetBasics, see [Guttag-Horning93], page 166).
C generated by {}, insert
This clause says that the only abstract values of sort C are those that can be formed from (finite) combinations of the operators {} and insert. For example, insert(x, insert(y, {})) is a set. This
rules out, for example, infinite sets, and sets that are nonstandard. It does not say that each such term denotes a distinct set, and indeed some terms fall in the same equivalence class (see section
2.9 How and when should I use a partitioned by clause?). It also does not mean that other operators that form sets cannot be defined; for example, in the trait Set one can write {x} \U {y}. What it
does say is that every other way to build a set, like this one, has to be equivalent to some use of {} and insert.
Let us consider a sample proof by structural induction in the trait Set. One of the implications in this trait is that
\forall s: C
size(s) >= 0
To prove this, one must prove that it holds for all sets s. Since, by the generated by clause, all sets can be formed from {} and insert, it suffices to prove two cases. For the base case, one must
prove the following trivial result.
size({}) >= 0
For the inductive case, one gets to assume, for an arbitrary set s, that
size(s) >= 0
and then one must prove the following.
\forall e: E
size(insert(e, s)) >= 0
This follows fairly easily; we leave the details to the reader.
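The shape of this induction can be mimicked in a small Python model (illustrative only, not LSL): represent set terms by their generators, define size by structural recursion over those generators, and nonnegativity holds in both the base and inductive cases by construction.

```python
# Set terms built from the generators EMPTY and insert,
# with size defined by structural recursion, as in the
# handbook trait Set.
EMPTY = ('empty',)

def insert(e, s):
    return ('insert', e, s)

def member(e, s):
    if s == EMPTY:
        return False
    _, x, rest = s
    return e == x or member(e, rest)

def size(s):
    if s == EMPTY:
        return 0                  # base case: size({}) = 0 >= 0
    _, e, rest = s
    # inserting an element already present does not grow the set
    return size(rest) if member(e, rest) else 1 + size(rest)

assert size(EMPTY) >= 0                         # base case
assert size(insert(1, insert(2, EMPTY))) >= 0   # an instance of the step
```

Each recursive call peels off one generator, which is exactly what the induction over the generated by clause licenses.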
See section 4.2 in Guttag and Horning's book [Guttag-Horning93] for more details on the generated by clause.
One situation where one usually does not want to use a generated by clause is when specifying the theory of a sort parameter. For example, in the trait Set (see [Guttag-Horning93], page 167), there
is no generated by clause describing the elements E of the set. The reason not to use a generated by clause for such sort parameters is to make the assumptions on such parameters as weak as possible.
Thus, unless one has to do induction over the elements of a sort parameter, one would not need to know that they are generated.
You should usually use an implies section in a trait, even though it adds nothing to the theory of a trait, and is completely redundant. The reasons for doing so are as follows.
• You probably have something interesting to highlight about the theory of a trait, to bring to the attention of the reader. This includes important examples that help illustrate how to use the trait.
• You probably have found more than one way to specify something in your trait. Putting both of them in the asserts section runs the risk of making your theory inconsistent (if they are not really
equivalent). Leaving one of them out loses information, and prevents possible debugging and consistency checking that could otherwise be done (for example, using LP; see [Guttag-Horning93],
Chapter 7).
Redundancy is a good thing for debugging and consistency checking; after all, one needs two pieces of information to do any checking.
A motto that mathematicians live by is the following. (You do realize that writing an LSL trait is doing math, don't you?)
Few axioms, many theorems.
For LSL, this motto means that you should try to make your asserts section as small and as elegant as possible, and to push off any redundancy into the implies section.
An important piece of redundant information you can provide for a reader in the implies section is a converts clause. You should also consider using a converts clause for each operator you have
introduced in the trait. See section 2.12 How and when should I use a converts clause?.
Writing a good implies section for a trait is something that takes some practice. It helps to get a feel for what can be checked with LP first (see [Guttag-Horning93], Chapter 7). It also helps to
look at some good examples; for example, take a look at the traits in Guttag and Horning's handbook (see [Guttag-Horning93], Appendix A); most of these have an implies section, and some are quite interesting.
One general hint about writing implies clauses comes from the fact that there are an infinite number of theorems that follow from every LSL specification. (One infinite set of theorems concerns the
built-in sort Bool.) Since there are an infinite number of theorems, you can't possibly list all of them. Focus on the ones that you think are interesting, especially those that will help the reader
understand what your specification is about, and those that will help you debug your specification.
See Section 4.5 in Guttag and Horning's book [Guttag-Horning93] for more details.
A converts clause goes in the implies section of a trait. It is a way to make redundant claims about the completeness of the specification of various operators in a trait.
You should consider stating a converts clause for each non-constant operator you introduce in a trait. If you list an operator in a converts clause, you would be saying that the operator is uniquely
defined "relative to the other operators in a trait" (see [Guttag-Horning93], p. 142, [Leavens-Wing97], Section B.1). Of course, this would not be true for every operator you might define. This means
that if you state that an operator is converted in the implies section of a trait, then you are claiming that there is no ambiguity in the definition of that operator, once the other operators of the
trait are fixed.
As an example of a converts clause, consider the following trait.
ConvertsExample: trait
  includes Integer
  introduces
    unknown, hmm, known: Int -> Int
  asserts \forall i: Int
    hmm(i) == unknown(i);
    known(i) == i + 1;
  implies converts
    known: Int -> Int,
    hmm: Int -> Int
In this trait, the converts clause claims that both known and hmm are converted. It should be clear that known is converted, because it is uniquely-defined for all integers. What may be a bit
puzzling is that hmm is also converted. The reason is that once the interpretation of the other operators in the trait are fixed, in particular once the interpretation of unknown is fixed, then hmm
is uniquely-defined. It would not be correct to claim that both hmm and unknown are converted; since they are defined in terms of each other, they would not be uniquely defined relative to the other
operators of the trait.(1)
See Section 4.5 and pages 142-144 in Guttag and Horning's book [Guttag-Horning93] for more details.
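The idea that hmm is determined "relative to the other operators" can be pictured in Python (a hypothetical model, not part of Larch): a model of the trait first chooses an interpretation for unknown, and the equations then leave no freedom in hmm or known.

```python
def make_model(unknown):
    """Build interpretations of hmm and known from a fixed
    interpretation of unknown (any Int -> Int function)."""
    def hmm(i):
        return unknown(i)   # hmm(i) == unknown(i)
    def known(i):
        return i + 1        # known(i) == i + 1
    return hmm, known

# Two different choices for unknown give two different models ...
hmm_a, known_a = make_model(lambda i: 0)
hmm_b, known_b = make_model(lambda i: i * i)

# ... but in each model, hmm and known are uniquely determined
# once unknown is fixed: that is what being "converted" means.
assert known_a(3) == known_b(3) == 4
assert hmm_a(3) == 0 and hmm_b(3) == 9
```

By contrast, there is no way to write a make_model that determines both hmm and unknown from the remaining operators, which is why claiming both are converted would be wrong.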
Often one wants to say that some operator is uniquely-defined, but not for some arguments. This can be done using an exempting clause, which is a subclause of a converts clause. See section 2.13 How
and when should I use an exempting clause?, for more discussion.
An exempting clause is a subclause of a converts clause. (See section 2.12 How and when should I use a converts clause?.) You should use an exempting clause when you want to claim that some operator
is uniquely-defined, "relative to the other operators in a trait" ([Guttag-Horning93], p. 142), except for some special cases.
A very simple example is the trait List (see [Guttag-Horning93], p. 173), where the converts clause includes the following.
converts head
exempting head(empty)
This just says that head is well-defined, except for taking the head of an empty list.
A slightly more complicated exempting clause is illustrated by the following classic example. Suppose one specifies a division operator on the natural numbers. In such a trait (see [Guttag-Horning93],
p. 201), one could use the following trait outline.
Natural (N): trait
  introduces
    0: -> N
    div: N, N -> N
    % ...
  asserts
    % ...
  implies
    converts div: N, N -> N
      exempting \forall x: N
        div(x, 0)
The converts clause in this example claims that div is uniquely-defined, relative to the other operators in a trait, except that division by 0 is not defined. Without the exempting subclause, the
converts clause would not be correct.
See the following subsections and page 44 of [Guttag-Horning93] for more information.
No, exempting a term does not make it undefined. An exempting clause merely takes away the claim that it is defined (see [Guttag-Horning93], page 44). For example, in the trait List (see
[Guttag-Horning93], p. 173), writing the following does not make head(empty) undefined.
converts head
exempting head(empty)
If anything, it is the lack of an axiom defining the value of head(empty) that makes it "undefined". However, you would certainly want to use an exempting clause in situations where you have a term
that is intended to be "undefined".
The concept of "undefinedness" is a tricky one for LSL. In a technical sense, nothing is undefined in LSL, because each constant and variable symbol has a value, and each non-constant operator is a
total function (see [Guttag-Horning93], p. 9, and [Leavens-Wing97], Appendix A). It would be more accurate to say that in the trait List, the term head(empty) is "underspecified". This means it has a
value, but the trait doesn't specify what that value is. See section 2.16 Is there support for partial specifications in LSL?, for more discussion on this point.
With LSL (through version 3.1), the exempting subclause of a converts clause is not based on semantics; instead, it is based on syntax. You can only exempt terms having particular forms; these forms
allow the arguments to an operator to be either ground terms (as in head(empty)), or universally quantified variables or both (as in div(x, 0)).
If you want to exempt all terms for which one argument is positive, or has a size larger than 3, then you are out of luck. All you can do is to state your conversion claim informally in a comment (as
you can't use a converts clause naming the operator in question). If you want to rewrite your trait, you could, for example, change the appropriate argument's sort to be the natural numbers instead
of the integers, or define a new sort for which your operator is completely-defined.
See Section B.2 of [Leavens-Wing97] for a proposed extension to LSL that would solve this problem.
An LSL trait denotes a theory, which is a collection of true formulas (of sort Bool). This theory contains "the trait's assertions, the conventional axioms of first-order logic, everything that
follows from them, and nothing else" (see [Guttag-Horning93], p. 37). For a brief introduction to these ideas, see Chapter 2 of [Guttag-Horning93]; for general background on equational logic, see
[Ehrig-Mahr85] or [Loeckx-Ehrich-Wolf96].
Another way to look at the meaning of an LSL trait is the set of models it denotes. An algebraic model of a trait has a set for each sort, and a total function for each operator. In addition, it must
satisfy the axioms of first-order logic and the axioms of the trait. A model satisfies an equational axiom if it assigns the same value to both sides of the equation. (See [Ehrig-Mahr85], Chapter 1,
or [Loeckx-Ehrich-Wolf96], Chapter 5, for more details.)
In contrast to some other specification languages, the semantics of LSL is loose (see [Guttag-Horning93], p. 37). That is, there may be several models of a specification that have observably distinct
behavior. For example, consider the trait ChoiceSet in Guttag and Horning's handbook (see [Guttag-Horning93], p. 176). There are many models of this trait with different behavior; for example, it
might be that in one model choose({2} \U {3}) = 2, and in some other model choose({2} \U {3}) = 3. The specification won't allow you to prove either of these things, of course, because whatever is
provable from the specification is true in all models.
The following answer is adapted from a posting to `comp.specification.larch' by Horning (July 19, 1995) in response to a question by Leavens (July 18, 1995).
The trouble starts with your question: there are no "undefined terms" in LSL. LSL is a language of total functions. There are no partial functions associated with operators, although there may be
underspecified operators (i.e., operators bound to functions with different values in different models). The possibility of underspecification is intentional; it is important that models of
traits do not have to be isomorphic.
Another way to say this is that in an algebraic model of a trait (see section 2.14 What is the meaning of an LSL specification?), each term denotes an equivalence class, which can be thought of as
its definition in that algebra. In other algebras, the same term may be in a nonisomorphic equivalence class, if the term is underspecified.
To make this clearer, consider the following trait.
QTrait : trait
  includes Integer
  Q union of theBool: Bool, theInt: Int
  introduces
    isPositive: Q -> Bool
  asserts
    \forall q: Q
      isPositive(q) == q.theInt > 0 /\ (tag(q) = theInt);
  implies
    \forall q: Q, b: Bool
      isPositive(q) == (tag(q) = theInt) /\ q.theInt > 0;
      isPositive(q) == if (tag(q) = theInt)
                       then q.theInt > 0 else false;
      isPositive(theBool(b)) == false;
    converts isPositive: Q -> Bool
In this trait, the question is whether the assertion that defines isPositive is written correctly. In particular, is isPositive(theBool(true)) "undefined" or an error? This is clarified in the
implies section of the trait, which claims that the order of conjuncts in the definition of isPositive does not matter, that the definition using if is equivalent, that the meaning of isPositive
(theBool(true)) is false, and that isPositive is converted (see section 2.12 How and when should I use a converts clause?).
The trait QTrait says that q.theInt is always of sort Int. However, it places no constraint on what integer it is, unless tag(q) = theInt. So in the definition and the first implication, both
operands of /\ are always defined (i.e., always mean either true or false, since Bool is generated by true and false), even though we may not always know which. So the definition is fine, and the
implications are all provable.
The fact that you don't have to use if, and that all the classical laws of logic (like conjunction being symmetric) apply is a big advantage of the total function interpretation of LSL. See section
2.16.1 Can I specify a partial function in LSL? for more discussion.
There are two ways one might understand this question. These are dealt with below.
Technically, no; all functions specified in LSL are total (see section 2.14 What is the meaning of an LSL specification?). Thus every operator specified in LSL takes on some value for every
combination of arguments. What you can do is to underspecify such an operator, by not specifying what its value is on all arguments. For example, the operator head in the handbook trait List (see
[Guttag-Horning93], p. 173) is underspecified in this sense, because no value is specified for head(empty). As a synonym, one can also say that head is weakly-specified.
One reason LSL takes this approach to dealing with partial functions is that it makes the logic used in LSL classical. One doesn't have to worry about non-classical logics, and "bottom" elements of
types infecting a term [Gries-Schneider95]. Another reason is that when LSL is used with a BISL, the precondition of a procedure specification can protect the operators used in the postcondition from
any undefined arguments [Leavens-Wing97].
(If you want to develop a theory of partial functions, however, you can do that in LSL. For example, you can specify a type with a constant bottom, that, for you, represents "undefined". You will
just have to manipulate your partial functions with LSL operators that are total functions.)
No, you don't have to specify everything completely in LSL. It's a good idea, in fact, to only specify the aspects that are important for what you are doing. For example, if you want to reason about
graph data structures, it's best to not (at first, anyway) try to specify the theory of graphs in great detail. Put in what you want, and go with that. (This won't hurt anything, because whatever you
need to prove you'll be forced to add eventually.)
Anything you can specify in LSL, you can do incompletely. (There are no "theory police" that will check your traits for completeness.) To incompletely specify a sort, for example, nodes in a graph, just use its name in the signatures of operators and don't say anything more about it. If you need to have some operators that manipulate nodes, then write their signatures into the introduces section, but don't
write any axioms for them. If you need to know some properties of these operators, you can write enough axioms in the asserts section to prove them (perhaps making some of the properties theorems in
the implies section).
The time to worry about the completeness of your theory is when you are having trouble proving some of the things you want to prove about it, or if you are preparing it for reuse by others. (On the
other hand, a less complete theory is often more general, so when trying to generalize a trait for others to use, sometimes it helps to relax some of the properties you have been working with.)
Yes, you can define operators recursively. A good, and safe, way to do this is to use structural induction (e.g., on the generators of a sort). For example, look at how count is defined in the
handbook trait BagBasics (see [Guttag-Horning93], p. 168). In this trait, the sort C of bags is generated by the operators {} and insert. There is one axiom that specifies count when its second argument is the empty bag, and one that specifies count when its second argument is constructed using insert. There are many other examples in Guttag and Horning's handbook (see [Guttag-Horning93],
Appendix A), for example the many specifications of operator size (Section A.5), the specification of the operators head and tail in the trait Queue (p. 171), the specification of flatten in the
trait ListStructureOps (p. 183), and the specification of operator + in the trait Positive (p. 202). See section 2.19 Can you give me some tips for specifying things with LSL?, for more details on
structural induction.
This kind of recursive specification is so common that if you are familiar with functional programming (especially in a language like Haskell or ML), then it's quite helpful to think that way when
writing LSL traits.
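To make the functional-programming analogy concrete, here is a Python rendering (illustrative only) of the count specification from BagBasics: one defining clause per generator, exactly as the two LSL axioms are organized.

```python
# Bags as terms built from the generators EMPTY and insert,
# with count defined by structural recursion: one clause per
# generator, mirroring the two axioms in BagBasics.
EMPTY = ('empty',)

def insert(e, b):
    return ('insert', e, b)

def count(e, b):
    if b == EMPTY:
        return 0                       # count(e, {}) == 0
    _, x, rest = b
    # count(e, insert(x, rest)) ==
    #   (if e = x then 1 else 0) + count(e, rest)
    return (1 if e == x else 0) + count(e, rest)

b = insert(1, insert(2, insert(1, EMPTY)))
assert count(1, b) == 2
assert count(3, b) == 0
```

Written this way, each equation corresponds to one branch of the recursion, so a specifier comfortable with Haskell- or ML-style pattern matching can transcribe such definitions almost mechanically.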
There is one thing to be careful of, however. Unless you use structural induction, you might specify an inconsistent trait. The following example (from David Schmidt, personal communication),
illustrates the problem. This trait includes the Integer trait of Guttag and Horning's handbook (see [Guttag-Horning93], p. 163).
BadRecTrait: trait
  includes Integer
  introduces
    f: Int -> Int
  asserts \forall i: Int
    f(i) == f(i) + 1;
It's easy to prove that this trait is inconsistent; subtract f(i) from both sides of the defining equation for f, and you obtain 0 = 1. As a heuristic for avoiding this kind of mistake, avoid
recursive specifications that don't "make progress" when regarded as programs.
It's also possible to fall into a pit when recursively defining a partial function. The following is a silly example, but illustrates the problem.
BadZeroTrait: trait
  includes Integer, MinMax(Int)
  introduces
    zero: Int -> Int
  asserts \forall k: Int
    zero(k) == if k = 0 then 0 else min(k, zero(k-1));
This is silly, the idea being to define a constant function that returns zero for any nonnegative integer. Still, it looks harmless. However, what does the specification say about zero(-1)? It's easy
to see that it's less than -1, and less than -2, and indeed less than any integer. But the Integer trait in Guttag and Horning's handbook (see [Guttag-Horning93], p. 163) does not allow there to be
such an integer; so this trait is inconsistent. To solve this problem, you could change the axiom defining zero to indicate the intended domain, as follows.
\forall k: Int
(k >= 0) => zero(k) = (if k = 0 then 0 else min(k, zero(k-1)));
This revised axiom would allow zero(-1) to have any value. Another cure for this kind of example would be to specify the domain of zero as the natural numbers. (Of course, for this particular
example, a nonrecursive definition is also preferable.)
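The guarded axiom can be pictured in Python (an illustrative sketch, not LSL): the recursion is only entered for k >= 0, which mirrors how the implication (k >= 0) => ... leaves zero unconstrained on negative arguments.

```python
def minimum(a, b):
    # min, as in the trait MinMax, spelled out for clarity
    return a if a < b else b

def zero(k):
    # Mirrors the guarded axiom
    # (k >= 0) => zero(k) = (if k = 0 then 0 else min(k, zero(k-1))).
    # Outside the guard, the LSL trait says nothing; here we
    # simply refuse to compute, to mark the unconstrained cases.
    if k < 0:
        raise ValueError("zero is only specified for k >= 0")
    return 0 if k == 0 else minimum(k, zero(k - 1))

assert zero(0) == 0
assert zero(5) == 0
```

The unguarded version, by contrast, would correspond to a recursion on zero(-1) that never reaches a base case, which is the source of the inconsistency described above.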
See section 2.18 What pitfalls are there for LSL specifiers?, for another thing to watch for when making a recursive definition.
According to Guaspari (posting to the larch-interest mailing list, on March 8, 1995), "the commonest `purely mathematical' mistakes" in trait definitions occur when one uses structural induction
"over constructors that don't freely generate a sort". To understand this, consider his example, which includes the handbook trait Set (see [Guttag-Horning93], page 167).
GuasparisExample: trait
  includes Integer, Set(Int, Set[Int])
  introduces
    select: Set[Int] -> Int
  asserts \forall s: Set[Int], e: Int
    select(insert(e, s)) == e;
  implies equations
    insert(1, {2}) == insert(2, {1});
    select(insert(1, {2})) == select(insert(2, {1}));
    1 == 2;   % ouch!
In the above example, the problem with the definition of select is that it's not well-defined with respect to the equivalence classes of the sort Set[Int]. The problem is highlighted in the implies
section of the trait. The first implication is a reminder that the order of insertion in a set doesn't matter; that is, insert(1, {2}) and insert(2, {1}) are equivalent sets because they have the
same members. In the jargon, insert(1, {2}) and insert(2, {1}) are members of the same equivalence class. The second implication is a reminder that an operator, such as select, must give the same
result on equal arguments; in the jargon, a function must "respect" the equivalence classes. But the definition of select does not respect the equivalence classes, because if one applies the
definition to both sides of the second implication, one concludes that 1 = 2.
One way to fix the above definition is shown by the trait ChoiceSet in Guttag and Horning's handbook (see [Guttag-Horning93], page 176). Another way that would work in this example (because integers
are ordered) would be to define an operator minimum on the sort Set[Int], and to use that to define select as follows.
GuasparisExampleCorrected: trait
  includes Integer, Set(Int, Set[Int]), MinMax(Int)
  introduces
    select: Set[Int] -> Int
    minimum: Set[Int] -> Int
  asserts \forall s: Set[Int], e, e1, e2: Int
    minimum(insert(e, {})) == e;
    minimum(insert(e1, insert(e2, s)))
      == min(e1, minimum(insert(e2, s)));
    select(insert(e, s)) == minimum(insert(e, s));
  implies \forall s: Set[Int], e1, e2: Int
    minimum(insert(e1, insert(e2, s)))
      == minimum(insert(e2, insert(e1, s)));
    select(insert(1, {2})) == 1;
    select(insert(2, {1})) == 1;
To show that minimum is well-defined, one has to prove that it gives the same answer for all terms in the same equivalence class. The first implication in this trait does not prove this in general,
but does help in debugging the trait.
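The well-definedness of the minimum-based select can be checked concretely in a Python model (illustrative only): when sets are represented so that equal sets really are equal, select depends only on the members, not on the order of insertion.

```python
# Sets as Python frozensets (equal sets built in different
# orders compare equal), with select defined via minimum.
def empty():
    return frozenset()

def insert(e, s):
    return s | {e}

def select(s):
    # Well-defined on non-empty sets: min depends only on the
    # members of s, not on how s was built.
    return min(s)

# Both ways of building {1, 2} give the same selected element.
assert select(insert(1, insert(2, empty()))) == 1
assert select(insert(2, insert(1, empty()))) == 1
```

Had we instead defined select to return the most recently inserted element, the two assertions above would disagree, which is precisely the ill-definedness exhibited by GuasparisExample.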
To explain the difference that having a "freely-generated" sort makes, compare the trait GuasparisExample above to the handbook trait Stack (see [Guttag-Horning93], page 170). In that trait, top is
defined using the following equation.
\forall e: E, stk: C
top(push(e, stk)) == e;
Why doesn't this definition have the same problem as the definition of select? Because it respects the equivalence classes of the sort C (stacks). Why? Because there is no partitioned by clause for Stack, so every equivalence class of stacks has a canonical representative. The canonical representative is a term built using just the operators noted in the generated by clause for Stack: empty and
push; it is canonical because there is only one way to use just these generators to produce a value in each equivalence class. The sort of stacks is called freely generated because of this property.
(The sort for sets is not freely generated, because there is more than one term built using the generators that represents the value {1,2}.) Since a term of the form push(e, stk) is a canonical
representative of its equivalence class, the specification of top automatically respects the equivalence classes.
A more involved example is the definition of the set membership operator (\in) for sets (see [Guttag-Horning93], p. 166 and p. 224). Since sets are not freely generated, it takes work to prove that
the specification of \in is well-defined. You might want to convince yourself of that as an exercise.
In summary, to avoid this first pitfall, remember the following mantra when defining operators over a sort with a partitioned by clause or an interesting notion of equality.
Is it well-defined?
As a help in avoiding this problem, you can use a generated freely by clause in the most recent versions of LSL. The use of a generated freely by clause makes it clear that the reader does not have
to be concerned about ill-defined structural inductions.
Another pitfall in LSL is to assert that all values of some freely-generated type satisfy some nontrivial property. An example is the trait given below.
BadRational: trait
  includes Integer
  Rational tuple of num: Int, den: Int
  asserts \forall r: Rational
    r.den > 0;
  implies equations
    ([3, -4]).den > 0;
    (-4) > 0;   % oops!
In this trait, the implications highlight the consequences of this specification pitfall: an inconsistent theory. The reason the above trait is inconsistent is that it says that all integers are
positive. This is because for all integers i and j, one can form a tuple of sort Rational by writing [i, j]. This kind of inconsistency arises whenever one has a freely generated sort, such as LSL
tuples, and tries to add constraints as properties. There are several ways out of this particular pit. The most straightforward is to use a different sort for the denominators of rationals (e.g., the
positive numbers, as in [Guttag-Horning93], page 207). Another way is to specify the sort Rational without using a tuple, but using your own constructor, such as mkRat; this allows the sort to be not
freely generated. The only problem with that approach is that one either has to make arbitrary or difficult decisions (such as the value of mkRat(3, 0)), or one has to be satisfied with an
underspecified constructor. Another way to avoid this is to use a parameterized trait; for example, we could have parameterized BadRational by the sort Int and, instead of including the trait Integer, used an assumes clause to specify the property desired of the denominator's sort. In some cases, one can postpone asserting such properties until one reaches the interface specification; this is
because in a BISL one can impose such properties on variables, not on entire sorts.
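As an illustration of the first way out, the trait might be repaired along the following lines (a sketch; it assumes a trait Positive defining a sort Pos of positive integers, names of our choosing rather than the handbook's):

```
BetterRational: trait
  includes Integer, Positive
  Rational tuple of num: Int, den: Pos
```

Because den now ranges over a sort that contains only positive values, no constraint needs to be asserted about Int, and the inconsistency disappears.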
To avoid this pit, think of the following mantra.
Is this property defining a sort, or imposing some constraint on another sort?
(If the answer is that it's imposing a constraint on another sort, watch out for the pit!)
A pitfall that some novices fall into is forgetting that LSL is a purely mathematical notation that has no concept of state or side effects. For example, the following are surely incorrect
signatures in LSL.
BadSig: trait
  includes Integer
  introduces
    currentTime: -> Seconds
    random: -> Int
If you think random is a good signature for a random number generator, you've fallen into the pit. The operator random is an (underspecified) constant; that is, random stands for some particular
integer, always the same one no matter how many times it is mentioned in an assertion. The operator currentTime illustrates how mathematical notation is "timeless": it is also a constant.
How could you specify a notion like a clock in LSL? What you have to do is, instead of specifying the concept of "the current time", specify the concept of a clock. For example, consider the following trait.
ClockTrait: trait
  assumes TotalOrder(Seconds for T)
  introduces
    newClock: Seconds -> Clock
    tick: Clock -> Clock
    currentTime: Clock -> Seconds
  asserts
    Clock generated by newClock, tick
    \forall sec: Seconds, c: Clock
      currentTime(newClock(sec)) == sec;
      currentTime(tick(c)) > currentTime(c);
  implies
    \forall sec: Seconds, c: Clock
      currentTime(tick(tick(c))) > currentTime(c);
Note that a clock doesn't change state in LSL; because each Clock value is itself timeless, the tick operator produces a new Clock value. Tricks like this are fundamental to semantics (see, for
example, [Schmidt86]). As an exercise, you might want to specify a random number generator along the same lines.
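As a hint for that exercise, one possible shape for such a trait makes the generator's hidden state an explicit seed (a sketch with names of our choosing):

```
RandomSketch: trait
  includes Integer
  introduces
    valueOf: Seed -> Int
    nextSeed: Seed -> Seed
```

Just as tick produces a new Clock value, nextSeed produces a new Seed value; each draw from the generator is the timeless value of some explicit seed, so no hidden parameters are needed.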
To avoid this pit remember the following mantra when you are writing the signatures of operators in LSL.
Does this operator have any hidden parameters?
There are no hidden parameters, or global variables, in LSL.
A logical pitfall that one can fall prey to in LSL is failing to understand that the quantifiers on an equation in LSL are outside the equation as a whole. Usually this problem can be detected when
there is a free variable that appears only on the right-hand side of an equation. (This assumes that you use the typical convention of having the right-hand side of an equation be the "result" of the
left-hand side.) For example, consider the following (from [Wing95], p. 13).
BadSubsetTrait: trait
  includes SetBasics(Set[E] for C)
  introduces
    __ \subseteq __: Set[E], Set[E] -> Bool
  asserts
    \forall e: E, s1, s2: Set[E]
      s1 \subseteq s2 == (e \in s1 => e \in s2);
  implies
    \forall e: E, s1, s2: Set[E]
      (e \in s1 /\ e \in s2) => s1 \subseteq s2;
The implication highlights the problem: if two sets have any single element in common, then they are considered subsets of each other by the above trait. This trait could be corrected in LSL by
writing the quantifier inside the defining equation. (See section 2.24 How do I write logical quantifiers within an LSL term?, for the meaning of \A.)
\forall e: E, s1, s2: Set[E]
s1 \subseteq s2 == \A e (e \in s1 => e \in s2);
This is different, because the right-hand side must hold for all e, not just some arbitrary e.
To avoid this pit remember the following mantra whenever you use a variable on only one side of an equation.
Is it okay if this variable has a single, arbitrary value?
See Wing's paper "Hints to Specifiers" [Wing95] for more tips and general advice.
Yes indeed, we can give you some tips.
The first tip helps you write down an algebraic specification of a sort that is intended to be used as an abstract data type (see [Guttag-Horning78], Section 3.4, and [Guttag-Horning93], Section
4.9). The idea is to divide the set of operators of your sort into generators and observers. A generator returns your sort, while an observer takes your sort in at least one argument and returns
some more primitive sort (such as Bool). The tip (quoted from [Guttag-Horning93], p. 54) is to:
Write an equation defining the result of applying each observer to each generator.
For example, consider the following trait.
ResponseSigTrait: trait
  introduces
    yes, no, maybe: -> Response
    optimistic, pessimistic: Response -> Bool
  asserts
    Response generated by yes, no, maybe
    Response partitioned by optimistic, pessimistic
In this trait, the generators are yes, no, and maybe; the observers are optimistic and pessimistic. What equations would you write?
According to the tip, you would write equations with left-hand sides as in the following trait. (The tip doesn't tell you what to put in the right-hand sides.)
ResponseTrait: trait
  includes ResponseSigTrait
  asserts equations
    optimistic(yes) == true;
    optimistic(no) == false;
    optimistic(maybe) == true;
    pessimistic(yes) == false;
    pessimistic(no) == true;
    pessimistic(maybe) == true;
  implies
    \forall r: Response
      yes ~= no;
      yes ~= maybe;
      no ~= maybe;
      optimistic(r) == r = yes \/ r = maybe;
      pessimistic(r) == r = no \/ r = maybe;
(Defining the operators optimistic and pessimistic as in the last two equations in the implies section of the above trait is not wrong, but doesn't follow the tip. That kind of definition can be
considered a shorthand notation for writing down the equations in the asserts section.)
As in the above example, you can specify that some set of generators is complete by listing them in a generated by clause. If you have other generators, what Guttag and Horning call extensions, then
they can be defined in much the same way as the observers are defined. As an exercise, try defining the operator introduced in the following trait.
ResponseTraitExercise: trait
  includes ResponseTrait
  introduces
    negate: Response -> Response
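Following the tip, one possible answer to the exercise defines the extension negate with one equation per generator (this is our sketch; other answers are possible):

```
ResponseTraitAnswer: trait
  includes ResponseTraitExercise
  asserts equations
    negate(yes) == no;
    negate(no) == yes;
    negate(maybe) == maybe;
```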
Here are some other general tips for LSL.
See Wing's paper "Hints to Specifiers" [Wing95] for several more tips and general advice.
You can also find other tips by looking for the "mottos" and "mantras" in this chapter. See section 2.18 What pitfalls are there for LSL specifiers?, for tips on what to avoid.
This is a good question, although there's no hard-and-fast answer to it. To help you in deciding whether a specification is sufficiently abstract, we'll give you a checklist, and an example of what
not to do.
• Are you using an overly-specific representation?
• Are you specifying an algorithm for an operator instead of its properties?
• Are you working with an overly-specific sort of data instead of a sort parameter and some assumed properties?
• Are you specifying any unnecessary sizes or other constants?
To illustrate this checklist, we'll give a really awful example of a specification that isn't abstract enough. It illustrates every fault in the above list. (Of course, you'd never write anything
this bad.)
NonAbstractSet: trait
  includes Integer
  SmallSet tuple of firstDef, secondDef, thirdDef: Bool,
                    first, second, third: Int
  introduces
    {}: -> SmallSet
    insert: Int, SmallSet -> SmallSet
    __ \in __: Int, SmallSet -> Bool
    size: SmallSet -> Int
  asserts
    \forall i,i1,i2,i3: Int, s: SmallSet, b1,b2,b3: Bool
{} == [false, false, false, 0, 0, 0];
insert(i, s)
== if ~s.firstDef
then set_first(set_firstDef(s, true), i)
else if ~s.secondDef
then set_second(set_secondDef(s, true), i)
else if ~s.thirdDef
then set_third(set_thirdDef(s, true), i)
else s;
i \in s == if s.firstDef /\ i = s.first
then true
else if s.secondDef /\ i = s.second
then true
else if s.thirdDef /\ i = s.third
then true
else false;
size(s) == if ~s.firstDef then 0
else if ~s.secondDef then 1
else if ~s.thirdDef then 2
else 3;
Comparing this specification with the one in Guttag and Horning's handbook ([Guttag-Horning93], pp. 166-167) reveals a lot about how to write more abstract specifications. Notice that the trait
NonAbstractSet represents the sort SmallSet as a tuple. This is akin to a "design decision" and makes the specification less abstract. Another problem is the specification of algorithms for the
operators insert, \in, and size. These could be made more abstract, even without changing the definition of the sort SmallSet. For example, using the idea of generators and observers (see section
2.19 Can you give me some tips for specifying things with LSL?), one could specify \in as follows.
i \in {} == false;
size(s) < 3 => (i \in insert(i, s) = true);
The above trait also defines sets of integers, even though it doesn't particularly depend on the properties of integers. It would be more abstract to specify sets of elements. Finally, the trait only
works for sets of three elements. Unless there is some good reason for this (for example, modeling some particular implementation), you should avoid such arbitrary limits in LSL. They can easily be
imposed at the interface level.
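Putting the checklist together, a more abstract version of the same idea might look like the following sketch (our own trait, parameterized by the element sort, with no representation tuple and no size limit; compare SetBasics in [Guttag-Horning93]):

```
MoreAbstractSet(E): trait
  introduces
    {}: -> Set[E]
    insert: E, Set[E] -> Set[E]
    __ \in __: E, Set[E] -> Bool
  asserts
    Set[E] generated by {}, insert
    \forall e, e1: E, s: Set[E]
      e \in {} == false;
      e \in insert(e1, s) == e = e1 \/ e \in s;
```

Nothing here commits to a representation, an algorithm, a particular element sort, or a maximum size.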
Look at traits in the LSL handbook for examples of how to do things right. You may find that a trait can be too abstract to make it easily understood; that's fine, you don't have to try to define the
ultimate theory of whatever you're working with.
See [Wing95], section 4.1 for a more extended discussion on the above ideas. You might want to look at [Liskov-Guttag86] for more background on the general idea of abstraction in specification. See
also [Jones90] for the related idea of "implementation bias".
The question of how to write a trait that will guarantee consistency is important for two reasons.
• If you use all of LSL, which includes first-order logic, consistency is undecidable.
• If your trait is inconsistent, then for most purposes the theory you are specifying is worthless, because everything will be provable (even equations such as true == false).
You can trade off ease of expression vs. ease of proving consistency as follows (quoted from Garland's posting to `comp.specification.larch', June 26, 1996).
If you are willing to sacrifice ease of proving consistency, you can axiomatize your theories in first-order logic, where consistency is undecidable.
If you are willing to sacrifice ease of expression, you can achieve consistency as do Boyer and Moore [Boyer-Moore79][Boyer-Moore88] with their "principle of definition"; that is, you can
axiomatize your theories in primitive recursive arithmetic rather than in first-order logic.
If you are willing to settle for some sort of middle ground, and you are willing to wait, a future release of the LSL Checker may provide some help. I've been thinking about how to enhance the
checker to provide warnings about potentially inconsistent constructions, basically by using a less restrictive version of the Boyer-Moore principle of definition to classify constructs as safe or unsafe.
Part of my envisioned project for having LSL issue warnings about potential inconsistencies involves generating proof obligations for LP that can be used to establish consistency when it is in doubt.
So you can understand this tradeoff better, and so you can be aware of when to worry about consistency, we will explain Boyer and Moore's shell principle and definition principle and how they apply
to LSL.
Boyer and Moore's shell principle (see [Boyer-Moore88] Section 2.3 and [Boyer-Moore79] Section III.E) is a way of introducing new tuple-like sorts; one would follow it literally in LSL by only using
tuple of to construct sorts. This prevents inconsistency, because it prevents the operators that construct the new sorts from changing the theory of existing sorts. (That is, the new theory is a
conservative extension of the existing theory; see, for example, Section 6.12 of [Ehrig-Mahr85].)
Although following the shell principle literally seems far too restrictive in practice for LSL, one can try to use its ideas to define new sorts in a disciplined way to help prevent inconsistency.
This is done by dividing the set of operators of your sort into generators and observers and specifying the result of applying each observer to each generator, as described above (see section 2.19 Can
you give me some tips for specifying things with LSL?). For an example, see the handbook trait SetBasics (in [Guttag-Horning93], page 166). However, it's not clear whether following this discipline
guarantees that your theory will be consistent.
Boyer and Moore's definition principle (see [Boyer-Moore88] Section 2.6) is a way of defining auxiliary functions (operators in a trait that are not generators and that are not the primitive
observers used in defining a sort). The definition principle does not allow one to underspecify a function; its value must be defined on all of its declared domain. Furthermore, recursion is only
allowed if one can prove that the recursion is terminating. For LSL this means that a definition of operator f satisfies the definition principle if it satisfies the following conditions.
• There is just one equation that defines the operator f, and it has the following form, where x1, ..., xn are distinct logical variables, called the "formals", and body is an LSL term, called the "body".
f(x1, ..., xn) == body
• The only free variables in body are the formals, x1, ..., xn.
• If the definition is recursive, then the recursion must be proved to terminate.
See section 2.17 Can I define operators using recursion?, for how a nonterminating recursive definition can cause inconsistency.
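By contrast, here is a definition that satisfies all three conditions (our own example, not one from the handbook): a single equation, no free variables in the body other than the formal, and a recursion that terminates because the argument strictly decreases until the base case applies.

```
FactorialTrait: trait
  includes Integer
  introduces
    fact: Int -> Int
  asserts
    \forall i: Int
      fact(i) == if i <= 0 then 1 else i * fact(i - 1);
```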
For examples of what may happen when some of the other conditions of the definition principle are violated, consider the following trait.
BadDefinitionsTrait: trait
  includes Integer
  introduces
    moreThanOneAxiom: Int -> Bool
    freeVarsInBody: Int -> Int
  asserts
    \forall i, k: Int
      moreThanOneAxiom(i) == i >= 0;
      moreThanOneAxiom(i) == i <= 0;
      freeVarsInBody(i) == k + i;
  implies equations
    true == false;
    3 == 2;
The two axioms for the operator moreThanOneAxiom make the theory inconsistent; to see this, consider the following calculation.
  true
    = {by the trait Integer}
  4 >= 0
    = {by the first axiom}
  moreThanOneAxiom(4)
    = {by the second axiom}
  4 <= 0
    = {by the trait Integer}
  false
See section 2.18 What pitfalls are there for LSL specifiers?, for the trait BadRational that illustrates a more subtle way to fall into this kind of inconsistency.
In the trait BadDefinitionsTrait above, the axiom for freeVarsInBody also makes the theory inconsistent. This can be seen as follows.
  3
    = {by the trait Integer}
  3 + 0
    = {by the axiom for freeVarsInBody, with k = 3}
  freeVarsInBody(0)
    = {by the axiom for freeVarsInBody, with k = 2}
  2 + 0
    = {by the trait Integer}
  2
See section 2.18 What pitfalls are there for LSL specifiers?, for the trait BadSig that illustrates another way to fall into this kind of inconsistency.
Because Boyer and Moore's definition principle is quite restrictive, it should be emphasized that the definition principle does not have to be used in LSL, and is not even necessary for making
consistent definitions. For example, it seems perfectly reasonable to specify an auxiliary operator by using several equations, one for each generator of an argument sort (see section 2.19 Can you
give me some tips for specifying things with LSL?), instead of using just one equation and if then else. The definition principle requires that all the variables in the defining equation be distinct,
but this seems only to prevent underspecification, not inconsistency. (See section 2.16.2 Do I have to specify everything completely in LSL?, for a discussion of underspecification.) The definition
principle also prohibits mutual recursion.
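For example, the following definition of size uses two equations, one per generator of its argument sort, and so falls outside the definition principle's one-equation form, yet it is consistent (a sketch assuming a trait StackBasics that introduces the generators empty and push from the stack example discussed earlier; the trait names are ours):

```
StackSize: trait
  includes Integer, StackBasics
  introduces
    size: Stack -> Int
  asserts
    \forall e: E, stk: Stack
      size(empty) == 0;
      size(push(e, stk)) == 1 + size(stk);
```

Because Stack is freely generated, the two equations cover every stack value exactly once, so no inconsistency can arise.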
Remember that it's a tradeoff: if you use the definition principle, you are guaranteed to make consistent auxiliary function definitions, but it may be harder to specify what you want, and your
specifications may be less clear than they would be otherwise.
See section 3.17 How do I prove that my theory is consistent with LP?, for how to use LP to help prove that a trait is consistent. This would be useful if the trait violates either Boyer and Moore's
shell principle or the definition principle.
No, you don't have to write all the traits you are going to use from scratch. You should certainly try to use Guttag and Horning's LSL handbook traits ([Guttag-Horning93], Appendix A), which are
freely available. Others have also made other handbooks available. See section 2.23 Where can I find handbooks of LSL traits?, for details on how to get these handbooks.
The whole idea of LSL is to enable reuse of traits, so you should familiarize yourself with the relevant handbooks before starting to write new traits.
The most commonly-used handbook of LSL traits is Guttag and Horning's handbook ([Guttag-Horning93], Appendix A). This can be obtained by anonymous ftp with the LSL checker (see section 2.3 Where can
I get a checker for LSL?). A hypertext version is on-line in
A general resource for all known handbooks that are publically available is found on the world-wide web, at the following URL.
Besides a pointer to Guttag and Horning's handbooks, other handbooks found there include handbooks of: mathematical traits, calendar traits, Larch/C++ traits. There is also a handbook of "SPECS
traits", which allows one to specify in a style similar to VDM-SL. The LSL input for these handbooks can be obtained from the world-wide web page given above, or by anonymous ftp from the following URL.
In LSL 3.1, you can write a universal quantifier within an LSL term by using \A, and an existential quantifier using \E. As an example, consider the following trait (from Leavens's Math handbook).
DenseOrder(T): trait
  includes PartialOrder
  asserts
    \forall x,y,z: T
      (x < y) => (\E z (x < z /\ z < y));
  implies
    \forall x,y,z,t,u: T
      (x < y) => \E z \E t (x < z /\ z < t /\ t < y);
      (x < y) => \E z \E t \E u (x < z /\ z < t /\ t < u /\ u < y);
Note that the variables used in quantifiers are declared before the term. The variable z used in the asserts section is existentially quantified, and not universally quantified as might appear from
its declaration.
You can get a LaTeX style file, `larch.sty', and a macro file defining a bunch of mathematical symbols, `larchmath.tex', by anonymous ftp from the following URL.
These files were originally designed by Lamport, and revised by Horning.
The documentation for `larch.sty' says that it is to be used with LaTeX 2.09. However, it can be used with LaTeX2e. To do so, put the following lines at the start of your LaTeX input.
Many published Larch papers and documents have formatting produced by these TeX macros. However, it's not clear that the use of special symbols is a good thing. Some of us prefer to let readers see
the way the input is typed (e.g., \in), because we think that non-mathematicians find it easier to understand words instead of arbitrary symbols. There is some experimental evidence for this
[Finney96], so you might want to think about who your audience is before you go to a great effort to print fancy symbols in your specifications.
See section 2.26 How do I write some of those funny symbols in the Handbook in my trait?.
The ISO Latin codes for all of the "funny symbols" used in Guttag and Horning's handbook are found on page 224 of [Guttag-Horning93]. Type the characters on the right hand side when using LSL. If you
are reading the handbook in print form, you might want to tape a copy of that page to your wall.
See section 2.25 Where can I find LaTeX or TeX macros for LSL?, for details on how to get your specification to print out like the handbook.
Yes, there actually is a version of "spiderweb" specialized for use with LSL. If you are really a fan of such fancy systems, you can find it by using the literate programming library's URL.
(If you don't know what literate programming is, you can find out there as well.)
However, we have found that using the "noweb" system is much easier for most people, and nearly as good. You can get noweb from the literate programming library (see above), or directly from the
following URL.
Even easier, it's possible to use the trait modularization facilities of LSL (includes, and assumes) to use virtually the same style as a literate programming system would support.
Yes, Penix's "lsl2html" tool converts an LSL input file to HTML, so it can be browsed over the net. It can be found at the following URL.
Unfortunately, Penix's tool has a few problems that have never been fixed. Instead, you might want to use Leavens's tool "lcpp2html", which is available from the following URL.
To use this like "lsl2html", invoke it as follows.
lcpp2html -P -I *.lsl
In this chapter, we discuss questions about the Larch Prover (LP).
The Larch Prover (LP) [Garland-Guttag95] is a program that helps check and debug proofs. It is not geared toward proving conjectures automatically, but rather toward automating the tedious parts of
proofs. It automates equational rewriting (proofs by normalization), but does not (by default) automatically try other proof techniques.
Although LP is a general proof assistant, its main uses in the context of Larch are to:
• aid the debugging of specifications (i.e., theories), by helping develop redundant conjectures and their proofs, and
• check old proofs in the context of changed theories, to see if the proofs are still valid.
The philosophy behind LP is "based on the assumption that initial attempts to state conjectures correctly, and then to prove them, usually fail" ([Garland-Guttag91], page 1). Because of this, LP does
not (by default) automatically attempt steps in a proof that are difficult (i.e., speculative, or mathematically interesting). Thus LP does not appear to be a "smart" assistant in finding proofs; it
will not suddenly try a complex induction by itself. This prevents LP from spending lots of computer time on conjectures that are not true, and helps make LP predictable. LP can be thought of more as
a "faithful" and "exacting" assistant. This is appropriate in the debugging stage of a proof, where the steps you should follow in interacting with LP are as follows.
• LP applies equational rewriting to your conjecture, and either tells you that it's true, or presents you with a subgoal that, if proved, will lead to the proof of your conjecture.
• If you have something left to prove, then ask yourself: "why do I think this is true?"
• If you don't think the current subgoal is true, then you can change your conjecture (for example, by strengthening the hypothesis) or your theory, and start over.
• If you think the current subgoal is true, but think you've started the proof out the wrong way, then you can backtrack in the proof (by using the cancel command).
• If you think the current subgoal is true, then tell LP to carry out a step in the proof that corresponds to your reason. (For example, if you believe that the subgoal is true because it's true in
each of several cases, use the resume by cases command.)
• LP will then carry out the corresponding proof step for you, and this process starts over.
Think of using LP as a game whose prize is certainty about why your conjecture is true, or at least knowledge about what's wrong with your theory or conjecture. The main tactical knowledge you need
to play the game well is the correspondence between reasons and proof techniques (see section 3.15 Can you give me some tips on proving things with LP?). Also needed for good play is the ability to
always question the consistency of your theory and the truth of your conjectures.
The interaction described above is fine for developing conjectures and their proofs, but LP also has features for replaying proofs, so that you don't have to interact with it when checking old proofs
in slightly modified settings. See section 3.20 How can I develop a proof that I can replay later?
See [Guttag-Horning93], Chapter 7, and [Garland-Guttag95] for more information. See section 3.2 What kind of examples have already been treated by LP?, for more discussion about the uses of LP.
LP was originally built to help debug formal specifications (see [Garland-Guttag-Horning90] and Chapter 7 of [Guttag-Horning93]). It has, however, also been used for several other purposes,
including the following (reported in messages to the larch-interest mailing list, February 7 and July 23, 1994).
• Formal verification of circuits and general reasoning about hardware designs (including proofs of invariants, properties, refinement, and equivalence between implementations and specifications,
see, for example, [Saxe-etal92] [Staunstrup-Garland-Guttag89] [Staunstrup-Garland-Guttag92]),
• Reasoning about various mathematical theories, such as relational arithmetic and algebra [Martin-Lai92].
• Verifying sequential programs (such as compilers) or properties of such programs.
• Verification of concurrent programs (such as a distributed multiuser service) or properties of concurrent algorithms [Luchangco-etal94] [Soegaard-Anderson-etal93] and communication protocols
Several other papers about uses of LP appear in [Martin-Wing93].
The basic difference between LP and other theorem provers is that LP does not automatically attempt complex (i.e., interesting or speculative) proof steps. LP also has no way to write general proof
strategies or tactics (as can be done in PVS, LCF, and other theorem provers). An additional convenience is that there are tools for supporting the translation of LSL specifications into LP's input format.
Compared with PVS [Owre-etal95], LP has a less sophisticated user interface, and less automation. For example, PVS allows one to reorder what lemmas should be proved more easily. PVS also has
decision procedures for ground linear arithmetic, and possibly better integration of arithmetic with rewriting. The language used by PVS also has a more sophisticated type system than does that of
LSL (or LP), as it uses predicate subtyping and dependent types. PVS also provides a strategy language for expressing proof strategies, although it seems that users typically do not write their own
strategies. PVS also has various syntactic and semantic features, such as support for tables and various kinds of model checking that are not available in LP. On the other hand, LP is more portable
(runs on more kinds of computers), and because its interface does not rely on emacs, it requires less set-up. Furthermore, as a teaching aid, a less automatic user interface and a simpler type system
have benefits in making students think more.
The Boyer-Moore theorem prover, NQTHM [Boyer-Moore79] [Boyer-Moore88], is also more aggressive than LP in trying to actively search for proofs. Its successor, ACL2, is described as "industrial strength".
A major difference between LP and HOL [Gordon-Melham93], Isabelle [Paulson94], and PVS is that LP does not use a higher-order logic, which the others do. Isabelle also has extensive support for term rewriting.
[[[Should also compare to OTTER, and RRL. Contributions are welcome here...]]]
Currently, LP only runs on Unix machines. You can get an implementation of LP using the world-wide web, starting at the following URL.
You can also get an implementation by anonymous ftp from the following URL.
See section 2.3 Where can I get a checker for LSL?, for information about getting a checker for LSL (that translates LSL traits into LP's input format), and also for information on the Larch
Development Environment (LDE), that supports LSL, LP, LCL, and Larch/C++.
Yes, there is a summary of LP's commands in the section titled "Command Summary" of [Garland-Guttag95]. For ease of reference, you can find this at the following URL.
(You might want to print this out and paste it on your wall.)
Within an LP process, you can see this by issuing the command help commands. Use help display to see help on the command display. Use display ? to see the arguments you can pass to the command display.
According to Garland (posting on the larch-interest mailing list, July 14, 1995), the erase (or rubout) character used by LP cannot be changed easily. The reason is that "LP is written in CLU, and
LP's line editing features are provided by the CLU runtime environment. This environment hard-wires the delete key [DEL] as the rubout character."
You can erase the current line in LP by typing a C-u (control-U). You can redisplay the current line by typing C-r. See [Garland-Guttag91], page 7 for more details.
See section 3.20 How can I develop a proof that I can replay later?, for a way to avoid tedium and much direct interaction with LP.
If you use emacs, there is a way to get completely around this problem. Simply start emacs, and type M-x shell, and then run LP from within the Unix shell that appears. Besides allowing arbitrary
editing, this trick allows you to scroll backwards to view output.
If you are running LP under Unix, the Unix "quit" character (usually C-g, that's control-g) will interrupt LP's execution.
See section 3.6 Can I change the erase character in LP?, for other characters that aid in editing interactive input to LP. See section 3.20 How can I develop a proof that I can replay later?, for a
way to avoid tedium and much direct interaction with LP.
No, you do not need to use LSL if you use LP. LP has its own input format (although it is very similar to LSL's input format). So, many users of LP simply bypass LSL, and use LP exclusively. On the
other hand, using LSL as an input format to LP has the following advantages.
• The LSL checker automatically generates proof management commands (scripting and logging) for LP, and helps organize theories and conjectures into files. See section 3.10 How do I use LP to check
my LSL traits?.
• Your theories and conjectures are accessible to a wider audience if they are stated in LSL format rather than in LP input format.
No, you don't have to use LP if you use LSL. However, it's very helpful to use LP with LSL, because it aids in debugging LSL traits. The idea is that you state redundant properties of the theory you
are specifying in the implies section of your LSL trait (see section 2.11 When should I use an implies section?), and then use LP to check that those really do follow from the trait. See section 3.10
How do I use LP to check my LSL traits?, and Chapter 7 of [Guttag-Horning93], for how to do this.
There are three main things to check about an LSL trait (see [Guttag-Horning93], Chapter 7).
• Is the trait consistent?
• Do the trait's assumptions (if any) hold?
• Do the trait's implications hold?
The first kind of check is vital, because you can prove anything in an inconsistent theory. See section 3.17 How do I prove that my theory is consistent with LP?, and Section 7.6 of
[Guttag-Horning93], for more information on how to do this.
The use of an assumes clause in a trait specifies assumptions about the context in which that trait is used (see section 2.8 What is the difference between assumes and includes?); if your trait has
included other traits with assumes clauses, then these assumptions must be checked (see pages 124-125 of [Guttag-Horning93]). However, checking such assumptions is similar to the checking that happens
when checking a trait's implications, which is the focus of this section.
Here are the top-level steps in using LP to check the implications written in the implies section of your trait. Let us suppose your trait is called Foo, and is in a file named `Foo.lsl'.
• To produce LP input from your trait, you would use the `-lp' switch with the LSL checker as follows.
lsl -lp Foo.lsl
This produces the following files: `Foo_Axioms.lp', `Foo_Checks.lp', and `Foo_Theorems.lp'.
• LP will prompt you for input with a prompt that looks like `LP1: '. When you see this prompt, type the following.
execute Foo_Checks
• LP will read in your input and attempt the first (rewriting) steps of the proofs of your conjectures. Often it will suspend each proof waiting for your input. See section 3.1 What is the Larch
Prover (LP)?, for the steps to follow in debugging your trait by trying to prove the conjectures in its implies section. See section 3.15 Can you give me some tips on proving things with LP? for
tips on how to go about a proof.
See section 3.20 How can I develop a proof that I can replay later?, for an alternative to using LP interactively.
See section 3.15 Can you give me some tips on proving things with LP? and Sections 7.4-7.5 of [Guttag-Horning93], for more information on checking implications with LP.
The LSL checker, when used with the `-lp' switch on a file named `Foo.lsl', in general produces three files: `Foo_Axioms.lp', `Foo_Checks.lp', and `Foo_Theorems.lp'. Each of these files is in LP
input format, hence the suffix `.lp'.
The file `Foo_Axioms.lp' contains a translation of the trait in `Foo.lsl', minus the implies section. It is "executed" by LP when LP is executing the translation of some other LSL trait that includes it.
The file `Foo_Checks.lp' contains a translation of the implications in `Foo.lsl', along with checks of assumptions. As an LP command file, it starts logging and scripting to the appropriate files
(`Foo.lplog' and `Foo.lpscr'), executes the `Foo_Axioms.lp' file, and then attempts proofs for each of the checks. This is the main file you execute when using LP.
The file `Foo_Theorems.lp' contains a translation of the implications in `Foo.lsl', stated as axioms. It is used automatically when it is safe to do so in the LP input generated by the LSL checker.
All of these files are text files, so viewing them with an editor, or printing them out, is a good way to solidify your understanding. (See also page 128 of [Guttag-Horning93].)
No, if LP stops working, it may just mean that it wants more guidance from you. The way to tell if all outstanding conjectures have been proved is to use the qed command [Garland-Guttag95]. If you
see the following, then you are done.
All conjectures have been proved.
However, if you see something like the following, then you are not done.
Still attempting to prove level 2 lemma FooTheorem.2
See section 3.13 How do I find out where I am in my proof?, for a way to find out what has to be done to finish your proof.
When LP stops reading input from your file (for example, the `_Checks.lp' file for an LSL trait), the first thing you should do is to get your bearings. The commands described here are also helpful
if you are using LP interactively, because it's easy to forget what you are trying to do.
To find your bearings, the following commands are helpful (see the section titled "Sample Proofs: how to guide a proof" in [Garland-Guttag95]).
To find out ... Use the command
What is left to be proved? display conjectures
What is the status of my proof? display proof-status
How did I get here? history
What axioms can I use? display
What are the current hypotheses? display *Hyp
What are the induction rules? display induction-rules
What are the deduction rules? display deduction-rules
The commands display conjectures and display seem to be the most useful. When you use display conjectures, LP shows where it stopped in your proofs. There may be several lemmas listed below the first
"conjecture". The last of these (the one with the greatest "level" number) is the one you can work on first.
See section 3.15 Can you give me some tips on proving things with LP?, for tips on how to proceed with your proof, once you've found your bearings.
By default, LP automatically uses only the following forward inference techniques.
• normalization, and
• application of deduction rules (that are not inactive).
See section 3.15 Can you give me some tips on proving things with LP?, and Sections 7.4-7.5 of [Guttag-Horning93], for other proof techniques.
The most important tip for using LP is to think about your conjectures, and to be very skeptical of their truth.
When you first start using LP, you will be tempted to have it do the thinking for you. You may find yourself trying random proof techniques to prove the current subgoal. Resist that temptation! Curb
your desire for automatic proof! Instead, use one of the following two ideas (see the section titled "Sample Proofs: how to guide a proof" in [Garland-Guttag95]).
Think about whether the conjecture is true, and if so, why you believe it. (Imagine trying to convince a very skeptical, but logical, person.) If you believe your conjecture, then your reason may
suggest a proof technique. Here is a small table relating reasons and some of LP's proof techniques.
If your argument Then
has the shape ... try the following ...
For a simple case ..., resume by induction on
then assuming ...
for bigger cases...
On the one hand, resume by cases ...
on the other hand... % or sometimes
resume by induction on ...
If that weren't true, resume by contradiction
then ..., % sometimes followed by
critical-pairs *Hyp with *
By axiom R, ... apply R to conjecture
% or if R contains a free or
% universally quantified variable, x
instantiate x by t in R
% or if R has an existentially
% quantified variable, x
fix x as t in ...
This should simplify normalize conjecture with ...
by ...
% or if the ... is a formula
rewrite conjecture with ...
Proofs by induction are very common and more useful than you might think. For example, induction is used to prove the last two implications in ResponseTrait (see section 2.19 Can you give me some
tips for specifying things with LSL?); in that proof, there are three basis cases, and no inductive case. This shows that any time you have a generated sort, a proof by induction can be used, even if
there are only a finite number of values of that sort. Furthermore, you should consider a proof by induction first, as some other proof techniques (by cases, =>, if-method, contradiction) make proof
by induction more difficult or impossible, because they replace free variables by constants (see [Garland-Guttag91], page 65).
Another idea is to see if the form of your conjecture suggests a proof technique. Here is a small table of conjecture forms and proof techniques. (A name of the form fresh1 is to be replaced by a
fresh variable name, that is, by a name that is not used elsewhere.)
If your conjecture Then
looks like ... try the following ...
~(E1 = E2) resume by contradiction
% sometimes followed by
critical-pairs *Hyp with *
P1 /\ P2 /\ ... resume by /\
P => Q resume by =>
P <=> Q resume by <=>
if H then P else Q resume by if-method
...(if P then ...)... resume by cases P
\E x (P(x)) resume by generalizing x from fresh1
If your conjecture looks like \A x (P(x)), then just try whatever method is suggested for P(x) by itself.
There are some technical caveats for the above suggestions [Garland-Guttag95]. Using resume by /\ can lead to longer proofs, if the same lemma is needed in the proof of each conjunct. Using resume by
=> and resume by <=> makes certain kinds of proofs by induction impossible (so if you want to do a proof by induction, do that before using these methods). The generalization techniques are not
helpful if there are other free variables in the formula.
For complex proofs, it is quite useful to prove lemmas first. You can either state these as things to be proved before what you are trying to prove, or use the prove command during the proof. You may
need to use the command set immunity on to keep LP from deleting your lemma after you prove it.
When all else fails, use the old mathematician's trick of trying to construct a counterexample to your conjecture. If you can't do it, try to figure out why you can't (that will suggest a proof
technique). Or, if you start making progress towards a counterexample, keep going. Repeat the following mantra to yourself.
It might not be true.
Finally, when you succeed in proving your conjecture, don't celebrate until you have also shown that your theory is consistent (see section 3.17 How do I prove that my theory is consistent
with LP?).
See section 3.20 How can I develop a proof that I can replay later?, for advice on how to work noninteractively with LP.
For a short, but important list of tips, see the section titled "Sample Proofs: how to guide a proof" in [Garland-Guttag95]. See Chapter 7 of [Guttag-Horning93] for more details.
The biggest pitfall is to not think about what you are doing, but to simply try random proof strategies. This quickly becomes frustrating. See section 3.15 Can you give me some tips on proving things
with LP?, for how to work with LP in a more fruitful way.
If your theory is inconsistent, proofs may not be easy, but they will always be possible. Always try to convince yourself of your theory's consistency before celebrating your proof. See section 3.17
How do I prove that my theory is consistent with LP?.
It is difficult to prove a theory is consistent using LP. However, one can profitably use LP to search for inconsistency. (See section 2.18 What pitfalls are there for LSL specifiers?, for some
common LSL problems that lead to inconsistency.)
One way to use LP to search for inconsistency is using the complete command. This directs LP to attempt to add enough rewrite rules to make the theory equationally complete. If this process finds an
inconsistency, such as proving that true == false, then your theory is inconsistent. (This usually doesn't stop in an acceptable amount of time and space, but if it does, and if it hasn't found an
inconsistency, and if your theory is purely equational, then you have proved your theory is consistent.)
Often the completion procedure seems to go off down some path, generating more and more complex formulas. If this happens, interrupt it (see section 3.7 How do I interrupt LP?) and try using the
critical-pairs command to direct the search for inconsistency by naming two sets (not necessarily disjoint) of axioms that you wish to deduce consequences of. You can also use any other of the
forward inference techniques in LP (such as the apply command) to try to deduce an inconsistency in a more directed manner.
See [Guttag-Horning93], section 7.6, for more on this topic. See section 2.21 Is there a way to write a trait that will guarantee consistency?, for a discussion of the tradeoffs involved between
writing an LSL trait in a way that avoids inconsistency and the difficulty in proving consistency.
If you decide you don't like the way your current proof is going in LP, use the cancel command to abort the current proof, backtracking to the previous subgoal. Note that this "suspends work on other
proofs until an explicit resume command is given" [Garland-Guttag95]. See the documentation for variations of this command.
If you want to start over from scratch, without using the quit command and restarting LP, use the clear command.
As a way of making the problem of proving things like 0 <= 2 concrete, consider the following LSL trait. This trait uses the Integer trait in Guttag and Horning's handbook (see [Guttag-Horning93]).
DecimalProblem: trait
  includes Integer
  implies
    ~(1 <= 0);
    0 <= 2;
The problem is how to coax LP into proving these implications. The following answer is adapted from a posting of Vandevoorde's to `comp.specification.larch' (September 7, 1995).
When the LSL checker translates LSL into LP input, it makes the rewrite rules for the trait DecimalLiterals (see [Guttag-Horning93], p. 164) passive, which means that they don't get applied
automatically [Garland-Guttag95]. For example, 2 will not be rewritten to succ(succ(0)). One reason for this is to keep the conjectures and facts more readable.
However, to prove the implications in your trait, you need to make them active with the following command.
make active DecimalLiterals
This is enough to prove ~(1 <= 0) by rewriting. It's also a necessary first step for the proofs below.
To prove 0 <= 2, you also need to use facts named IsTO* (from the trait IsTO, see page 194 of [Guttag-Horning93]). The easiest proof to understand is to instantiate the rule for transitivity as follows.
instantiate x by 0, y by 1, z by 2 in IsTO.2
(It's a good thing you didn't want to prove 0 <= 9!)
The critical-pairs command is useful for finding such instantiations automatically. For example, to prove 0 <= 2, you can give the following commands to LP.
resume by contradiction % This turns the (negated) conjecture into
% a rewrite rule for critical-pairs.
critical-pairs *Hyp with * % This is common in contradiction proofs.
% For this proof, it suffices to use
% critical-pairs *contraHyp with IsTO.4
There are probably many ways to develop a proof that you can replay later. Our technique has the following steps, and starts with an LSL trait. (If you don't use LSL, you can just use a file of LP commands.)
1. Write a trait Foo into the file `Foo.lsl'.
2. Use the LSL checker to generate the `Foo_Checks.lp' file.
% lsl -lp Foo.lsl
3. Optional: use LP to debug your trait and to try out ideas for proofs of the implications interactively.
4. Copy the file `Foo_Checks.lp' into the file `Foo.proof'.
% cp Foo_Checks.lp Foo.proof
5. Optional: if you have used LP already on this trait, then you have a head start. There are lines in the file `Foo.lpscr' that indicate what to do for the proof, and what happened in LP. Copy those
lines from `Foo.lpscr' and put them in `Foo.proof'. For example, after the following line in `Foo.proof'
(_x1_ \in _x2_) = (_x1_ \in' _x2_)
you might want to add the following lines from the script file. The first indicates how you did the proof, and the rest indicate what happened.
resume by induction
<> basis subgoal
[] basis subgoal
<> basis subgoal
[] basis subgoal
<> induction subgoal
[] induction subgoal
[] conjecture
6. Optional: put the command set box-checking on at the top of your proof, so that these box comments will be checked. You only need this once per file.
7. Optional: if you haven't used LP yet, think about each conjecture, and (if you still believe it), after each conjecture, write an LP command (or commands) that will complete the proof of that conjecture.
8. After each conjecture to be proved, following the commands that prove it (if any), add the LP command qed. For example, your file `Foo.proof' might have a section
that looks like the following.
(_x1_ \in _x2_) = (_x1_ \in' _x2_)
resume by induction
<> basis subgoal
[] basis subgoal
<> basis subgoal
[] basis subgoal
<> induction subgoal
[] induction subgoal
[] conjecture
qed
9. Execute the file `Foo.proof' in LP by using the command
execute Foo.proof
If this doesn't work, edit the corrections into the file `Foo.proof'. Often you can obtain input for this file from the file `Foo.lpscr'.
10. When your proof executes to completion, use the file `Foo.lpscr' to put into `Foo.proof' all of the <> and [] comments that help keep your proof synchronized with LP. When you are done, edit
`Foo.proof' so that the differences with the file `Foo_Checks.lp' are minimal, as shown by the following Unix command.
% diff Foo_Checks.lp Foo.proof | more
One advantage of the above procedure is that you can work on one conjecture at a time, which allows you to control the order in which the lemmas are proved. Another advantage is that you tend to
think more about what you are doing than if you are interacting directly with LP. Finally, the above procedure is quite helpful when, inevitably, you revise your trait.
When you revise your trait, by editing the file `Foo.lsl', use the following steps to revise your proof.
1. Use the LSL checker to generate the `Foo_Checks.lp' file for the revised trait.
% lsl -lp Foo.lsl
2. Use the Unix diff command to check to see what you need to change in your proof.
% diff Foo_Checks.lp Foo.proof | more
Ignore all the added lines that you put in your proofs such as the following.
> set box-checking on
> resume by induction
> <> basis subgoal
> [] basis subgoal
> <> basis subgoal
> [] basis subgoal
> <> induction subgoal
> [] induction subgoal
> [] conjecture
> qed
But pay attention to other differences, particularly those for which there is a < at the beginning of the line.
3. Revise your `Foo.proof' file according to the output of the previous step, trying to minimize differences as shown by the diff command.
4. Use LP to rerun your proof. In LP, use the command
execute Foo.proof
If this doesn't work, edit the corrections into the file `Foo.proof'. Often you can obtain input for this file from the file `Foo.lpscr'.
No, you should only have to prove that the assumptions from whatever assumes clauses you have in the included trait hold in your trait. (See pages 124-125 of [Guttag-Horning93].)
According to [Guttag-Horning93] (page 128), LP will reuse the assertions in the implies section of a trait without forcing you to prove them again "when it is sound to do so". [[But when is that?]]
In this chapter, we discuss questions about Larch Behavioral Interface Specification Languages (BISLs).
A behavioral interface specification language (BISL) is a kind of specification language that has the following characteristics.
• It can specify an exact interface for modules written in some particular programming language. For example, Larch/CLU [Wing83] [Wing87] can specify the details of how a CLU procedure is called,
including the types of its arguments, the types of its results, whether it is an iterator, and the names and types of exception results.
• It can specify the behavior of a program module written in that programming language. For example, in Larch/CLU one can specify that a procedure can signal an exception.
An interface describes exactly how to use a module written in some programming language. For example, a function declaration in C tells how to call that function, by giving the function's name, and
by describing the number and types of its arguments.
The concept of behavioral interface specification is an important feature of the Larch family of specification languages [Wing87]. A particularly good explanation of the concept of behavioral
interface specification is [Lamport89]. In that paper, Lamport describes a vending machine specification. To specify a vending machine, it is not sufficient to merely describe a mathematical mapping
between abstract values (money to cans of soda). One also needs to specify that there will be buttons, a coin slot, a place for the can of soda to appear, and the meaning of the user interface
events. The buttons, the coin slot, etc. form the interface of the vending machine; the behavior is described using a mathematical vocabulary that is an abstraction of the physical machinery that
will be used to realize the specification. The same two pieces are needed to precisely specify a function to be implemented in, say, C++: the C++ interface information and a description of its behavior.
Many authors use the term interface specification language for this type of language. However, there is a natural confusion between this term and an "interface definition language", such as the CORBA
IDL, which merely specifies interface information and not behavior. To avoid this confusion, we advocate the term "behavioral interface specification language" for any language that specifies not
only interface information, but also behavior.
In the Larch family there are several BISLs (see section 1.1 What is Larch? What is the Larch family of specification languages?). In the silver book [Guttag-Horning93], a Larch BISL is called a
"Larch interface language". However, we do not use the term "Larch interface language" as a synonym for "BISL", because there are BISLs outside the Larch family (such as those in the RESOLVE family
[Edwards-etal94]). On the other hand, "Larch interface language" is a synonym for "Larch BISL", as is "Larch interface specification language".
The first Larch BISL was Larch/CLU [Wing83] [Wing87]. There are now several others. See section 4.2 Where can I get a BISL for my favorite programming language?, for information about how to get one.
See section 1.1 What is Larch? What is the Larch family of specification languages?, and [Guttag-Horning93] for more information.
See section 1.1 What is Larch? What is the Larch family of specification languages? for what Larch BISLs have been discussed in literature and technical reports. In this section, we concentrate on
Larch BISLs that have some kind of tool support. As far as we know, these are the following. (Except as noted, the tools are all free.)
No, you don't always have to write an LSL trait to specify something in a BISL, it just seems that way. Seriously, you don't have to write a trait in the following circumstances.
• You are specifying a procedure and all of the mathematical vocabulary needed is available in existing traits. These traits may come from a handbook (see section 2.23 Where can I find handbooks of
LSL traits?), or they may be built-in to your BISL. For example, in most Larch BISLs, you would not have to write a trait for your language's built-in integer type.
• Your BISL may create a trait for some compound type for you automatically. For example, in LCL, traits are created automatically to model C struct types.
If you are specifying an ADT in a Larch BISL, you will always have to use some trait to describe its abstract values (see section 4.4 What is an abstract value?). It might be a trait from a handbook,
or some combination of traits that already exist, but there will have to be one that defines the abstract values for your ADT.
Note that it is almost always possible to use one of LSL's shorthands or a combination of handbook traits to specify the abstract values of an ADT. The only drawback might be that such descriptions
of abstract values might not be as close to your way of thinking about your problem domain as you might like or it might not have enough high-level vocabulary for your purposes. For example, when
specifying the abstract values of symbol tables, one might use the handbook trait FiniteMap (see [Guttag-Horning93], page 185). However, if your symbol table has to handle nested scopes, that trait,
by itself might not be adequate. You might, for example, want to use a stack of finite maps as abstract values, in which case you might use both the FiniteMap and Stack traits (see
[Guttag-Horning93], page 170), and also add some additional operators to search the abstract values appropriately. If that kind of thing gets too complex, then it might be better to start writing
your own trait (see section 2.20 How do I know when my trait is abstract enough?). Also, when using LSL shorthands and standard handbook traits, be careful that you do not state contradictory
constraints as axioms (see section 2.18 What pitfalls are there for LSL specifiers?).
Usually, when you are specifying a procedure, you won't have to specify a separate trait, because you will use the vocabulary of the traits that describe the abstract values of whatever types of
arguments and results (and global variables) your procedure uses. However, if this vocabulary isn't expressive enough, then you may find it necessary or convenient (see [Wing95], Section 4.1) to
write your own trait. This trait would specify the additional LSL operators you need to state your procedure's pre- and postconditions; it would normally assume the traits describing the abstract
values of the procedure's argument and result types (see section 2.8 What is the difference between assumes and includes?).
The way you tell your BISL what trait it should use differs between BISLs. For example, in LCL and Larch/C++, the keyword that starts the list of traits to be used is uses. In Larch/ML, the keyword
is using; in Larch/CLU, it is based on; in LM3, it is TRAITS; in Larch/Smalltalk, it is trait; and in Larch/Ada, it is TRAIT.
An abstract value is a mathematical abstraction of some bits that might exist in your computer. (It's an abstraction in the same sense that a road map of the United States is an abstraction of the
geography and roads of the United States.) In a Larch BISL's model of your computer, each variable contains an abstract value. The idea is that you shouldn't have to reason about the bit patterns
inside the computer, which are mathematically messy, far removed from the concepts that are important in your problem domain, and subject to change. Instead, you should reason about your programs
using simple ideas that are close to your problem and independent of the changing details of your program's data structures.
This idea originated in Hoare's article on specification and verification of ADTs [Hoare72a]. Hoare's example used an ADT of small integer sets, which were implemented using arrays, but specified
using mathematical sets. The mathematical sets were thus the abstract values of the arrays used in the implementation.
See [Liskov-Guttag86] and Chapters 3, 5, and 6 of [Guttag-Horning93] for more background and information.
An object is mutable if its abstract value can change over time. (See section 4.4 What is an abstract value?, for more about abstract values.) For example, most collection types, such as arrays or
records in typical programming languages, are types with mutable objects.
By contrast, an object is immutable if its abstract value cannot change. For example, the integers are not mutable, as you cannot change the number 3 to be 4.
Typically, immutable objects are atomic and do not contain subobjects, while mutable objects often contain subobjects.
A type with mutable objects is sometimes called a mutable type, and a type with immutable objects is called an immutable type. The specifications of mutable and immutable types differ in that an
immutable type cannot have operations that mutate objects of that type.
No, you don't prove that the specification of an ADT in a BISL is the same as a trait. Those are completely different concepts. An LSL trait specifies a theory, not what is to be implemented in a
programming language; the BISL is what is used to specify an implementation (program module). You use the vocabulary in a trait to help specify program modules in a BISL, but there is no proof
obligation that you engender by doing so. See section 1.1 What is Larch? What is the Larch family of specification languages?, and Chapter 3 of [Guttag-Horning93], for more on the general idea of
two-tiered specifications.
What you might want to prove instead is the correctness of an implementation of your ADT specification; for example, if you specify a class Stack in the BISL Larch/C++, then you would use an LSL
trait to aid the specification (see section 1.2 Why does Larch have two tiers?), and implement the class in C++. You could then try to prove that your C++ code is correct with respect to your Larch/
C++ specification. In this proof, you could make use of the trait's theory. However, you wouldn't be proving any correspondence between the trait and the BISL specification.
In most Larch BISLs(2), the keyword requires introduces a precondition into a procedure specification. (This usage dates back to at least Larch/CLU [Wing83].)
A precondition is a predicate that must hold before the procedure is invoked. It documents the obligations of the caller in the contract that is being specified. Therefore, it can be assumed to be
true in an implementation of the procedure, because if it does not hold when the procedure is called, the procedure is not obligated to fulfill the contract. This semantics is a "disclaimer" semantics [Jackson95]
[Khosla-Maibaum87]; it differs from an "enabling condition" semantics, which would mean that the procedure would not run if called in a state that did not satisfy the precondition.
As an example, consider the specification of a square root function, which is to be written in C (see [Guttag-Horning93], page 2). In this specification the precondition is specified by the following
requires x >= 0;
This means that if the value of the formal parameter x is not greater than or equal to 0, then the specification says nothing about what the implementation should do. Thus that case does not have to
be considered when writing an implementation.
If the precondition of a procedure is the constant predicate true, the entire requires clause can usually be omitted. Such procedures can always be called, and an implementation can make no
assumptions about the call except that the formal parameters will all have whatever types are specified. Because such a procedure will have to validate any properties it needs from its arguments, it
may be safer in the sense of defensive programming. However, it may also be slower than one that requires a stronger precondition, because it will have to spend time validating its arguments. This is
a design tradeoff that can be made either way, but documenting preconditions in procedure specifications prevents the common performance problem of having both the caller and the procedure checking
the validity of the arguments. (See [Liskov-Guttag86] for more discussion of this issue.)
A precondition refers to the state just before the specified procedure's body is run (but after parameter passing). The term comes from Hoare's original paper on program verification [Hoare69], in
which each piece of program code was described with two predicates: a precondition and a postcondition. Both the pre- and the postcondition were predicates on states (that is, they can be considered
functions that take a state and return a boolean). The precondition described the pre-state; that is, it describes the values of program variables just before the execution of a piece of code. The
postcondition described the post-state, which is the state after the code's execution. In a Larch BISL the precondition is like this, but the postcondition is typically a predicate on both the pre-
and the post-state. See section 4.8 What does ensures mean?, for more information.
In most Larch BISLs, the keyword ensures introduces a postcondition into a procedure specification. (This usage dates back to at least Larch/CLU [Wing83].)
A postcondition in a Larch BISL is typically a predicate that is defined on the pre-state and post-state of a procedure call. (See section 4.7 What does requires mean?, for background and
terminology.) It documents the obligations of the called procedure in the contract that is being specified. Therefore, it can be assumed to be true at the end of a call to the procedure, when
reasoning about the call.
As an example, consider the following specification of an integer swap function which is to be written in C++ (this example is quoted from [Leavens97]).
extern void swap(int& x, int& y) throw();
//@ behavior {
//@ requires assigned(x, pre) /\ assigned(y, pre);
//@ modifies x, y;
//@ ensures x' = y^ /\ y' = x^;
//@ }
In this specification, the parameters x and y are passed by reference, and thus each is an alias for a variable in the caller's environment. Because x and y denote variables, they can have different
values in the pre-state and the post-state. The postcondition of the example above says that the pre- and post-state values of the variables x and y are exchanged. The notation x^ refers to the value
of x in the pre-state, and x' refers to its value in the post-state. (This Larch/C++ notation is taken from LCL. In LM3 the ^ notation is not used. In Larch/Smalltalk, the ^ is likewise dropped, and
a subscript "post" is used instead of '.)
Depending on the particular BISL, a procedure specification may have one of two different interpretations. In most Larch BISLs for sequential programming languages, the interpretation is that if the
precondition holds, then the procedure must terminate normally in a state where the postcondition holds. This is called a total correctness interpretation of a procedure specification. In Hoare's
original paper [Hoare69], and in GCIL and Larch/CORBA, a partial correctness interpretation is used. This means that if the precondition holds and if the procedure terminates normally, then the
postcondition must hold. You should check the documentation for your BISL to be sure of its semantics on this point. (Larch/C++ is unique in that it allows you to specify either interpretation; see
[Leavens97], Chapter 6).
In most Larch BISLs, the keyword modifies introduces a list of objects that can be modified by an execution of the specified procedure. (This usage dates back to Larch/CLU [Wing83].)
An object is modified if its abstract value is changed by the execution of the specified procedure. (That is, if it has a different abstract value in the post-state than in the pre-state.) Note that the
modifies clause does not mean that the execution must change the object's abstract values; it simply gives the procedure permission to change them. For example, if a swap procedure were passed
variables with identical values (see section 4.8 What does ensures mean?, for the specification of swap), then the values would not be changed.
Note also that a variable's bits may change without a corresponding change in abstract value (see section 4.4 What is an abstract value?). For example, suppose one has a rational number object whose
representation uses two integers. Let us write the two integers in the representation as (i,j); thus (2,4) might be one bit pattern that represents the abstract value 1/2. However, changing the bits
from (2,4) to (1,2) would not affect the abstract value.
The modifies clause can be considered syntactic sugar for writing a larger predicate in the postcondition. Since only the objects listed in the modifies clause can have their abstract values changed
by the execution of the procedure, all other objects must retain their abstract values. There is a problem in artificial intelligence called the "frame problem", which is how to know that "nothing
else has changed" when you make some changes to the world. For this reason, the modifies clause in a Larch BISL can be said to specify a frame axiom for the specified procedure. See [Borgida-etal95]
for a general discussion of the frame problem for procedure specifications.
In many specification languages, the frame axiom for a procedure specification is stated in terms of variable names, but in Larch BISLs it is typically stated in terms of objects. The difference is
crucial when the programming language allows aliasing. If the modifies clause in a Larch BISL specified that only certain variable names could change their values, then one would typically need
either to prohibit aliasing completely or to write preconditions that prohibit aliases to the variables that are to be modified.
Mentioning an object in a modifies clause should not give the procedure being specified permission to deallocate the object. If that were allowed, it would lead to problems in the BISL's semantics
[Chalin95] [Chalin-etal96]. See section 4.10 What does trashes mean?, for more information on this topic.
A trashes clause is similar to a modifies clause (see section 4.9 What does modifies mean?) in a procedure specification. It gives the procedure permission to deallocate or trash the objects listed.
These objects do not have to be deallocated or trashed, but no other objects may be deallocated or trashed.
Trashing an object means that its abstract value should not be used again. In Larch/C++, an allocated variable is in one of two states: assigned or unassigned; it is unassigned if it has not yet
been given a sensible value. If one assigns to an assigned variable, a, the value of an unassigned variable, then a itself becomes unassigned, and hence is considered to be trashed.
Chalin [Chalin95] [Chalin-etal96] pointed out that specification of what a procedure can modify should be separated from the specification of what it can deallocate or trash. When these notions are
separated, as they are in Larch/C++ (see [Leavens97], chapter 6), there are two frame axioms for procedure specifications.
An alternative to a trashes clause, used in LCL, is to specify input/output modes for formal parameters [Evans00]. Some parameter modes do not allow trashing, and some do not allow modification.
The claims clause, and other kinds of checkable redundancy for BISLs, were introduced into LCL by Tan [Tan94]. In a procedure specification, the keyword claims introduces a predicate, called a claim,
that, like the postcondition, relates the pre- and post-states of a procedure's execution. Roughly speaking, the meaning is that the conjunction of the precondition and the postcondition should
imply the claim. This implication can be checked, for example, by using LP.
You could use the claims clause if you wish to highlight some consequences of your procedure specification for the reader. It can also be used to help debug your understanding of a procedure
specification; for example, if you think of more than one way to express a postcondition, write all but one way in a claims clause. Thus the purpose of this clause is similar to the purpose of the
implies section of an LSL trait (see section 2.11 When should I use an implies section?).
The exact meaning of a specification in a BISL is, of course, described by the reference manual for that particular BISL. But in general, the meaning of a BISL specification is a set of program
modules that satisfy the specification. These program modules are to be written in the programming language specified. For example, the meaning of an LCL specification is a set of ANSI C program
modules that satisfy the specification. Similarly, the meaning of a Larch/C++ specification is a set of C++ program modules that satisfy the specification; a Larch/C++ specification cannot be
implemented in, say, Ada or Modula-3.
How does one describe the satisfaction relation for a BISL? Suppose you want to describe satisfaction for the BISL Larch/X, whose programs are to be written in language X. Then you describe the
satisfaction relation in at least the following ways.
• Give a semantics for language X, that assigns to each program module, P, a meaning, M(P). Then describe, for each specification, S, written in Larch/X, when M(P) is in the set of meanings specified by S.
• Develop a verification logic for language X, that says how to check that code satisfies a specification (or some other specification). Then a program module P satisfies a specification S if one
can use the verification logic to check that P satisfies S.
The first technique will probably allow more programs, in principle, to satisfy the specification, especially if the semantics of language X covers all aspects of the language. The second technique
is probably more limiting, but is certainly more useful in practice. These techniques are not mutually exclusive; for example, a relatively complete verification logic could be used as a semantics
for that language. Furthermore, by using complementary definitions of the programming language (e.g., a denotational semantics and a verification logic), one could prove soundness and completeness
results, which would help debug each kind of semantics, and thus the semantics of the BISL as well.
See [Wing83] for a semantics of Larch/CLU. See [Jones93] for a semantics of LM3. See [Tan94] [Chalin95] for semantics of LCL.
In a procedure specification, you would normally use a precondition to limit the acceptable arguments that the procedure can work with. For example, the standard LSL handbook trait Integer (see
[Guttag-Horning93], page 163) specifies an unbounded set of integers. Many Larch BISLs use this kind of unbounded set of integers as their model of a programming language's built-in integer type.
Suppose that you want to specify an integer discriminant function, say to be implemented in C++. In Larch/C++ you could specify this as follows.
extern int discriminant (int a, int b, int c) throw();
//@ behavior {
//@ requires inRange((b * b) - (4 * a * c));
//@ ensures result = (b * b) - (4 * a * c);
//@ }
Notice the precondition; it says that the result must be able to be represented as an int in the computer (according to the definition of the operator inRange, which is defined in a built-in trait of
Larch/C++ [Leavens97]). Without this precondition, the specification could not be implemented, because if b were the maximum representable integer, and if a and c were 0, then the specification
without the precondition would say that the execution would have to terminate normally and return an integer larger than the maximum representable integer.
(The specification of discriminant above may not be the best design for the procedure, because it requires a precondition that is difficult to test without performing the computation specified. See
section 4.17 Can you give me some tips for specifying things in a BISL?, for more discussion of this point.)
The idea of using preconditions to get around partiality and finiteness problems in procedures goes back to Hoare's original paper on program verification [Hoare69].
To specify an ADT with a finite set of values, such as a bounded queue, use a trait with unbounded values to specify the abstract values. You can then use an invariant in the BISL specification of
your ADT to state that the size of the abstract values is limited. Most likely the ADT operations that construct and mutate objects would need to have preconditions that prevent the abstract value
from growing too large. This approach is preferred over one that specifies a finite set of abstract values in LSL, because such a trait would be more complicated and less likely to be reusable
(see section 2.20 How do I know when my trait is abstract enough?). Even with such a trait, you would still need to have preconditions on your operations that construct and mutate objects,
so there is little gained.
Ways of specifying errors and error checking vary between Larch BISLs, depending primarily on whether the programming language that the BISL is tailored to has exceptions. In a language without
exceptions (like ANSI C), a procedure would have to signal an error by producing an error message and halting the program, or by returning some error indication to the caller of a procedure. If you
want to specify that the program halts, most likely you will just not include the conditions under which it can do so in your precondition (see section 4.13 How do I specify something that is finite
or partial?). If you want to specify the returning of some error indication (a return code, etc.), this is done as if you were specifying any other normal result of a procedure. In a language with
exceptions (like C++), there is the additional option of using the exception handling mechanism to signal errors to the caller. The syntax of this varies, but typically you will declare the
exceptions as usual for the programming language, and there will be some special way to indicate in the postcondition that an exception is to be raised.
Many Larch BISLs (including LCL, LM3, Larch/Ada, and Larch/C++) have special syntax to support the specification of error checking and signalling exceptions. However, there is little common agreement
among the various Larch BISLs on the syntax for doing so; please check the manual for your particular BISL. (See section 1.1 What is Larch? What is the Larch family of specification languages?, for
how to find your language manual.)
See [Liskov-Guttag86] for a discussion of the tradeoffs between using preconditions and various kinds of defensive programming.
A property that is true for all values of a type is called an invariant. Many Larch BISLs have syntax for declaring invariants. For example, LM3 uses the keyword TYPE_INVARIANT, and Larch/C++ uses
the keyword invariant. The predicate following the appropriate keyword has to hold for all objects having that type, whenever a client-visible operation of that type is called or returns. (This
allows the invariant to be violated temporarily within an operation implementation.)
Stating an invariant in the BISL specification of an ADT is often preferable to designing an LSL trait so that the abstract values all satisfy the property. For example, if you attempt to state such
a property as an axiom in a trait, you can cause the trait to be inconsistent, if the abstract values of your type are freely generated (see section 2.18 What pitfalls are there for LSL specifiers?).
But invariants in a BISL do not lead to such inconsistencies, because they describe properties of objects created by an ADT, not properties of all abstract values.
See section 4.13 How do I specify something that is finite or partial?, for more discussion about ways to specify that all values of an ADT are finite.
The main thing to watch out for in a BISL specification is that you don't specify something that cannot be implemented. It's worth repeating the following mantra to yourself whenever you write a
procedure specification.
Could I correctly implement that?
Of course, you would never intentionally specify a procedure that cannot be implemented. So you are only likely to fall into this pit by accident. The following are some ways it might happen.
A very simple problem is forgetting to write a modifies (or trashes) clause in a procedure specification when your postcondition calls for some object to change state (or be deallocated). Remember
that leaving out a modifies clause means that no objects can be modified. Consider the following bad specification of an increment procedure in Larch/C++. (The formal parameter i is passed by
reference. The throw() says that no exceptions can be thrown. In the precondition, the term assigned(i, pre) is a technical detail; it says that i has been previously assigned a value [Leavens97].)
extern void incBad(int& i) throw();
//@ behavior {
//@ requires assigned(i, pre);
//@ ensures i' = i^ + 1;
//@ }
There can be no correct implementation of the above specification, because the omitted modifies clause does not allow i to be modified, but the postcondition guarantees that it will be modified. This
contradiction prevents correct implementations.
A more subtle way that your specification might not be implementable is if you forget to specify a necessary precondition. Suppose we fix the above specification and produce the following
specification of an increment function. What's wrong now?
extern void incStillBad(int& i) throw();
//@ behavior {
//@ requires assigned(i, pre);
//@ modifies i;
//@ ensures i' = i^ + 1;
//@ }
This specification can't be implemented, because if the pre-state value of i is the largest int, INT_MAX, then the value INT_MAX+1 can't be stored in it. So, as specified, there are no correct
implementations. Something has to be done to accommodate the finite range of integers that can be stored in i. Changing the precondition to be the following would take care of the problem.
assigned(i, pre) /\ i^ < INT_MAX
In summary, one thing to watch for is whether you have taken finiteness into account. See section 4.13 How do I specify something that is finite or partial?, for more about finiteness considerations
in BISL specifications.
Another way that your specification might be unimplementable is if your stated postcondition contradicts some invariant property (see section 4.15 How do I specify that all the values of a type
satisfy some property?) that you specified elsewhere.
The following is another Larch/C++ example. The term inRange(e), used in the precondition, means that e is within the range of representable integer values, hence the specification takes finiteness
into account. What else could be wrong with the specification?
extern void transfer(int& source, int& sink, int amt);
//@ behavior {
//@ requires assigned(source, pre) /\ assigned(sink, pre) /\ amt >= 0
//@ /\ inRange(source^ - amt) /\ inRange(sink^ + amt);
//@ modifies source, sink;
//@ ensures source' = source^ - amt /\ sink' = sink^ + amt;
//@ }
The above specification can't be implemented because it doesn't require that source and sink not be aliased. Since both source and sink are passed by reference, suppose that both refer to the same
variable i. Then if amt is nonzero, the postcondition cannot be satisfied. (Suppose the pre-state value of i is 10, and that amt is 5, then the post-state must satisfy source' = 10 - 5 /\ sink' = 10
+ 5, but since both source and sink are aliases for i, this would require that i' be both 5 and 15, which is impossible.) In Larch/C++, this could be fixed by adding the following conjunct to the
precondition, which requires that source and sink not be aliased.
source ~= sink
Some Larch BISLs (e.g., Larch/Ada) prohibit aliasing in client programs, so if you are using such a BISL, then you don't have to worry about this particular way of falling in the pit. But if your
BISL does not prohibit aliasing, then you should think about aliasing whenever you are dealing with mutable objects or call-by-reference.
A basic tip for writing a BISL specification is to try to permit as many implementations as possible. That is, you want to specify a contract between clients and implementors that does everything the
client needs, but does not impose any unneeded restrictions on the implementation.
One thing that experienced specifiers think about is whether the specification permits implementations to be tested. This is particularly important for preconditions. For example, when specifying an
ADT, check that the ADT has enough operations to allow the pre- and postconditions specified to be tested. If you specify a Stack ADT but leave out a test for the empty stack, then it will be
impossible for a client to test the precondition of the pop operation. (However, there are some rare examples, such as a statistical database, where you might not want to allow clients to have such a test.)
If implementations can be tested, it is often useful to compare the difficulty in testing the specified precondition with the difficulty of doing the specified computation. For example, in the
specification of discriminant (see section 4.13 How do I specify something that is finite or partial?), checking the precondition is likely to be as difficult as the specified computation. In such a
case, it may be wise to change the specification by weakening the precondition and allowing the procedure to signal some kind of error or exception.
Another tip is to try to use names for operators in traits that are distinct from the name of the procedure you are trying to specify (or vice versa). For example, the casual reader might incorrectly
think that the following specification is recursive.
//@ uses FooTrait;
extern int foo(int input);
//@ behavior {
//@ ensures result = foo(input);
//@ }
There is no logical problem with the above example, since the operators used in a postcondition cannot have anything to do with the interface procedure being specified. However, naive readers of
specifications are (in our experience) sometimes confused by such details.
See section 4.16 What pitfalls are there for BISL specifiers?, for pitfalls to avoid. See [Liskov-Guttag86] [Meyer97] [Meyer92] for general advice on designing and specifying software. See [Wing95]
for some other tips on formal specification.
Please send corrections for this bibliography to larch-faq@cs-DOT-iastate-DOT-edu.
Phillip Baraona, Perry Alexander, and Philip A. Wilsey. Representing Abstract Architectures with Axiomatic Specifications and Activation Conditions. Department of Electrical and Computer
Engineering and Computer Science, PO Box 210030, The University of Cincinnati, Cincinnati, OH, April 1996. Available from the URL
Phillip Baraona and Perry Alexander. Abstract Architecture Representation Using VSPEC. Department of Electrical and Computer Engineering and Computer Science, PO Box 210030, The University of
Cincinnati, Cincinnati, OH, April 1996. Available from the URL
Phillip Baraona, John Penix, and Perry Alexander. VSPEC: A Declarative Requirements Specification Language for VHDL. In J. M. Berge, O. Levia, and J. Rouilard (eds.), High-Level System Modeling:
Specification Languages, pages 51-75. Volume 3 of Current Issues in Electronic Modeling (Kluwer Academic Publishers, June 1995).
Hubert Baumeister. Using Algebraic Specification Languages for Model-Oriented Specifications. Technical Report MPI-I-96-2-003, Max-Planck-Institut fuer Informatik, February 1996. Available from
the URL
Alex Borgida, John Mylopoulos, and Raymond Reiter. On the Frame Problem in Procedure Specifications. IEEE Transactions on Software Engineering, 21(10):785-798, October 1995.
J. P. Bowen and M. G. Hinchey. Seven More Myths of Formal Methods. IEEE Software, 12(4):34-41 (July 1995).
J. P. Bowen and M. G. Hinchey. Ten Commandments of Formal Methods. IEEE Computer, 28(4):56-63 (April 1995).
Robert S. Boyer and J Strother Moore. A Computational Logic. Academic Press, 1979.
Robert S. Boyer and J Strother Moore. A Computational Logic Handbook. Academic Press, 1988.
Patrice Chalin. On the Language Design and Semantic Foundation of LCL, a Larch/C Interface Specification Language. PhD thesis, Computer Science Department, Concordia University, October 1995.
Also Technical Report CU/DCS TR 95-12, which can be obtained from the URL `ftp://ftp.cs.concordia.ca/pub/chalin/tr.ps.Z'.
Patrice Chalin, Peter Grogono, and T. Radhakrishnan. Identification of and solutions to shortcomings of LCL, a Larch/C interface specification language. In Marie-Claude Gaudel and James Woodcock
(eds.), FME'96: Industrial Benefit and Advances in Formal Methods, pages 385-404. Volume 1051 of Lecture Notes in Computer Science, Springer-Verlag, 1996. See also CU/DCS TR 95-04, which can be
obtained from the URL
Jolly Chen. The Larch/Generic Interface Language. Bachelor's thesis, Massachusetts Institute of Technology, EECS department, May, 1989. (Available from John Guttag: guttag@lcs.mit.edu.)
Yoonsik Cheon and Gary T. Leavens. The Larch/Smalltalk Interface Specification Language. ACM Transactions on Software Engineering and Methodology, 3(3):221-253 (July 1994).
Yoonsik Cheon and Gary T. Leavens. A Gentle Introduction to Larch/Smalltalk Specification Browsers. Department of Computer Science, Iowa State University, TR #94-01, January 1994. Available from
the URL
Boutheina Chetali. Formal Verification of Concurrent Programs Using the Larch Prover. IEEE Transactions on Software Engineering, 24(1):46-62 (Jan., 1998).
Richard A. De Millo and Richard J. Lipton and Alan J. Perlis. Social Processes and Proofs of Theorems and Programs. Communications of the ACM, 22(5):271-280 (May 1979).
Stephen H. Edwards, Wayne D. Heym, Timothy J. Long, Murali Sitaraman, and Bruce W. Weide. Part II: Specifying Components in RESOLVE. ACM SIGSOFT Software Engineering Notes, 19(4):29-39 (Oct.
Hartmut Ehrig and Bernd Mahr. Fundamentals of Algebraic Specification 1: Equations and Initial Semantics. EATCS Monographs on Theoretical Computer Science, volume 6 (Springer-Verlag, NY, 1985).
David Evans. LCLint User's Guide, Version 2.5, May 2000. Available at the following URL `http://lclint.cs.virginia.edu/guide/'.
L. M. G. Feijs and H. B. M. Jonkers. Formal Specification and Design. Volume 35 of Cambridge Tracts in Theoretical Computer Science (Cambridge University Press, Cambridge, UK, 1992).
James H. Fetzer. Program Verification: The Very Idea. Communications of the ACM, 31(9):1048-1063 (Sept. 1988).
Kate Finney. Mathematical Notation in Formal Specifications: Too Difficult for the Masses? IEEE Transactions on Software Engineering, 22(2):158-159 (February 1996).
John Fitzgerald and Peter Gorm Larsen. Modelling Systems: practical tools and techniques in software development. Cambridge University Press, 1998.
Kokichi Futatsugi, Joseph A. Goguen, Jean-Pierre Jouannaud, and Jose Meseguer. Principles of OBJ2. In Conference Record of the Twelfth Annual ACM Symposium on Principles of Programming Languages,
pages 52-66 (ACM, January 1985).
S. J. Garland and J. V. Guttag. A Guide to LP, The Larch Prover. Technical Report 82, Digital Equipment Corp, Systems Research Center, 130 Lytton Avenue Palo Alto, CA 94301, December, 1991. This
is superseded by [Garland-Guttag95].
Stephen J. Garland, John V. Guttag, and James J. Horning. Debugging Larch Shared Language Specifications. IEEE Transactions on Software Engineering, 16(6):1044-1057 (Sept. 1990). Superseded by
Chapter 7 in [Guttag-Horning93].
Stephen J. Garland and John V. Guttag. LP, the Larch Prover. On-line documentation for release 3.1a (MIT Laboratory for Computer Science, April 1995). Available in the LP distribution and from
the URL
J. Goguen, C. Kirchner, A. Megrelis, J. Meseguer, and T. Winkler. An Introduction to OBJ3. In S. Kaplan and J.-P. Jouannaud (eds.), Conditional Term Rewriting Systems, 1st International Workshop,
Orsay, France, pages 258-263. Lecture Notes in Computer Science, Volume 308 (Springer-Verlag, NY, 1987).
J. A. Goguen, J. W. Thatcher, and E. G. Wagner. An Initial Algebra Approach to the Specification, Correctness and Implementation of Abstract Data Types. In Raymond T. Yeh (ed.), Current Trends in
Programming Methodology, volume 4, pages 80-149 (Prentice-Hall, Englewood Cliffs, N.J., 1978).
J. A. Goguen. Parameterized Programming. IEEE Transactions on Software Engineering, SE-10(5):528-543 (Sept. 1984).
J. A. Goguen. Reusing and Interconnecting Software Components IEEE Computer, 19(2):16-28 (February 1986).
M. J. C. Gordon and T. F. Melham. Introduction to HOL: A Theorem Proving Environment for Higher Order Logic. Cambridge University Press, 1993.
David Gries and Fred B. Schneider. Avoiding the Undefined by Underspecification. In Jan van Leeuwen (ed.), Computer Science Today: Recent Trends and Developments, pages 366-373. Volume 1000 of
Lecture Notes in Computer Science (Springer-Verlag, NY, 1995).
David Guaspari, Carla Marceau, and Wolfgang Polak. Formal Verification of Ada Programs. IEEE Transactions on Software Engineering, 16(9):1058-1075 (Sept. 1990). Reprinted in [Martin-Wing93],
pages 104-141.
John V. Guttag, J. J. Horning, and Andres Modet. Report on the Larch Shared Language: Version 2.3. Technical Report 58, Digital Equipment Corp, Systems Research Center, 130 Lytton Avenue Palo
Alto, CA 94301, April, 1990. Order from src-report@src.dec.com or from the URL
J. V. Guttag, J. J. Horning, and J. M. Wing. Larch in Five Easy Pieces. Technical Report 5, Digital Equipment Corp, Systems Research Center, 130 Lytton Avenue Palo Alto, CA 94301, July 1985.
Superseded by [Guttag-Horning93].
John V. Guttag, James J. Horning and Jeannette M. Wing. The Larch Family of Specification Languages. IEEE Software, 2(5):24-36 (Sept. 1985).
J. V. Guttag and J. J. Horning. The Algebraic Specification of Abstract Data Types. Acta Informatica, 10(1):27-52 (1978).
J. V. Guttag and J. J. Horning. A Tutorial on Larch and LCL, a Larch/C Interface Language. In S. Prehn and W. J. Toetenel (eds.), VDM '91 Formal Software Development Methods 4th International
Symposium of VDM Europe Noordwijkerhout, The Netherlands, Volume 2: Tutorials. Lecture Notes in Computer Science, vol. 552, (Springer-Verlag, NY, 1991), pages 1-78.
John V. Guttag and James J. Horning with S.J. Garland, K.D. Jones, A. Modet and J.M. Wing. Larch: Languages and Tools for Formal Specification. Texts and Monographs in Computer Science series
(Springer-Verlag, NY, 1993). (The ISBN numbers are 0-387-94006-5 and 3-540-94006-5.) This is currently out of print, but a (large) postscript file for the entire book can be obtained from the
following URL
Appendix A is available separately from the following URLs.
John V. Guttag. The Specification and Application to Programming of Abstract Data Types. Ph.D. Thesis, University of Toronto, Toronto, Canada, September, 1975.
Anthony Hall. Seven Myths of Formal Methods. IEEE Software, 7(5):11-19 (Sept. 1990).
I. Hayes (ed.), Specification Case Studies, second edition (Prentice-Hall, Englewood Cliffs, N.J., 1990).
C. A. R. Hoare. An Axiomatic Basis for Computer Programming. Comm. ACM, 12(10):576-583 (Oct. 1969).
C. A. R. Hoare. Proof of correctness of data representations. Acta Informatica, 1(4):271-281 (1972).
International Standards Organization. Information technology -- Programming languages, their environments and system software interfaces -- Vienna Development Method -- Specification Language --
Part 1: Base language. Document ISO/IEC 13817-1, December, 1996.
Daniel Jackson. Structuring Z Specifications with Views. ACM Transactions on Software Engineering and Methodology, 4(4):365-389 (Oct, 1995).
Cliff B. Jones, Systematic software development using VDM, second edition (Prentice-Hall, Englewood Cliffs, N.J., 1990). Out-of-print, but available on-line at the URL
Kevin D. Jones. LM3: A Larch Interface Language for Modula-3: A Definition and Introduction: Version 1.0. Technical Report 72, Digital Equipment Corp, Systems Research Center, 130 Lytton Avenue
Palo Alto, CA 94301, June, 1991. Order from src-report@src.dec.com or from the URL
Kevin D. Jones. A Semantics for a Larch/Modula-3 Interface Language. In [Martin-Wing93], pages 142-158.
H. B. M. Jonkers. Upgrading the pre- and postcondition technique. In S. Prehn and W. J. Toetenel (eds.), VDM '91 Formal Software Development Methods 4th International Symposium of VDM Europe
Noordwijkerhout, The Netherlands, Volume 1: Conference Contributions, volume 551 of Lecture Notes in Computer Science, pages 428-456. Springer-Verlag, NY, 1991.
Matt Kaufmann and J S. Moore. An Industrial Strength Theorem Prover for a Logic Based on Common Lisp. IEEE Transactions on Software Engineering, 23(4):203-213 (Apr. 1997).
S. Khosla and T. S. E. Maibaum. The Prescription and Description of State Based Systems. In B. Banieqbal, H. Barringer, and A. Puneli (eds.), Temporal Logic in Specification. Volume 398 of
Lecture Notes in Computer Science, pages 243-294. Springer-Verlag, NY, 1987.
Leslie Lamport. A Simple Approach to Specifying Concurrent Systems. CACM, 32(1):32-45 (Jan. 1989).
Gary T. Leavens. Larch/C++ Reference Manual. Version 5.14, October 1997. Available in `ftp://ftp.cs.iastate.edu/pub/larchc++/lcpp.ps.gz' or on the world wide web via the URL
Gary T. Leavens. An Overview of Larch/C++: Behavioral Specifications for C++ Modules. Technical Report TR #96-01d, Department of Computer Science, Iowa State University, Ames, Iowa, 50011, April
1996, revised July 1997. In Haim Kilov and William Harvey, editors, Specification of Behavioral Semantics in Object-Oriented Information Modeling (Kluwer Academic Publishers, 1996), Chapter 8,
pages 121-142. TR version available from the URL
Gary T. Leavens and Jeannette M. Wing. Protective Interface Specifications. In Michel Bidoit and Max Dauchet (editors), TAPSOFT '97: Theory and Practice of Software Development, 7th International
Joint Conference CAAP/FASE, Lille, France, pages 520-534. An extended version is Technical Report TR #96-04d, Department of Computer Science, Iowa State University, Ames, Iowa, 50011, April 1996,
revised October, December 1996, February, September 1997. Available from the URL
Richard Allen Lerner. Specifying Objects of Concurrent Systems. School of Computer Science, Carnegie Mellon University, CMU-CS-91-131, May 1991. Available from the URL
Barbara Liskov and John Guttag. Abstraction and Specification in Program Development (MIT Press, Cambridge, Mass., 1986).
Jacques Loeckx, Hans-Dieter Ehrich, and Markus Wolf. Specification of Abstract Data Types (John Wiley & Sons Ltd and B. G. Teubner, NY, 1996).
Victor Luchangco, Ekrem Soylemez, Stephen Garland, and Nancy Lynch. Verifying timing properties of concurrent algorithms. In FORTE'94: Seventh International Conference on Formal Description
Techniques for Distributed Systems and Communications Protocols, Berne, Switzerland (Chapman and Hall, Oct. 1994).
U. Martin and M. Lai. Some experiments with a completion theorem prover. Journal of Symbolic Computation, 13:81-100 (1992).
U. Martin and J. Wing. Proceedings of the First International Workshop on Larch, July 1992. Workshops in Computing series (Springer-Verlag, New York, 1993). (The ISBN is 0-387-19804-0.)
Bertrand Meyer. Applying "Design by Contract". Computer, 25(10):40-51 (Oct. 1992).
Bertrand Meyer. Object-oriented Software Construction, second edition (Prentice-Hall, 1997).
Peter D. Mosses (editor). CFI Catalogue: Language Design. May 1996. Available from the URL
Odyssey Research Associates. Penelope Reference Manual, Version 3-3. Informal Technical Report STARS-AC-C001/001/00, September 1994. Available from the URL
Sam Owre, John Rushby, Natarajan Shankar, and Friedrich von Henke. Formal Verification for Fault-tolerant Architectures: Prolegomena to the Design of PVS. IEEE Transactions on Software
Engineering, 21(2):107-125 (February 1995). See also the URL
Lawrence C. Paulson, with contributions by Tobias Nipkow. Isabelle: A Generic Theorem Prover. Volume 828 of Lecture Notes in Computer Science, Springer-Verlag 1994. See also the URL
Hossein Saiedian, et al. An Invitation to Formal Methods. IEEE Computer, 29(4):16-30 (April 1996).
J. B. Saxe, S. J. Garland, J. V. Guttag, and J. J. Horning. Using transformations and verification in circuit design. International Workshop on Designing Correct Circuits, IFIP Transactions A-5,
(North-Holland 1992). Also Technical Report 78, Digital Equipment Corp, Systems Research Center, 130 Lytton Avenue Palo Alto, CA 94301, September 1991.
David A. Schmidt. Denotational Semantics: A Methodology for Language Development (Allyn and Bacon, Inc., Boston, Mass., 1986).
M. Sitaraman, L. R. Welch, and D. E. Harms. On Specification of Reusable Software Components. International Journal of Software Engineering and Knowledge Engineering, 3(2):207-229 (World
Scientific Publishing Company, 1993).
Gowri Sankar Sivaprasad. Larch/CORBA: Specifying the Behavior of CORBA-IDL Interfaces. Department of Computer Science, Iowa State University, TR #95-27a, December 1995, revised December 1995.
J. F. Soegaard-Anderson, S. J. Garland, J. V. Guttag, N. A. Lynch, and A. Pogosyants. Computer-assisted simulation proofs. In Fourth Conference on Computer-Aided Verification, Elounda, Greece,
Volume 697 of Lecture Notes in Computer Science, pages 305-319 (Springer-Verlag, June 1993).
J. Michael Spivey. The Z Notation: A Reference Manual, second edition, (Prentice-Hall, Englewood Cliffs, N.J., 1992).
J. Staunstrup, S. J. Garland, and J. V. Guttag. Localized verification of circuit descriptions. In Proc. Int. Workshop on Automatic Verification Methods for Finite State Systems, Grenoble,
France. Volume 407 of Lecture Notes in Computer Science, pages 349-364 (Springer-Verlag, 1989).
J. Staunstrup, S. J. Garland, and J. V. Guttag. Mechanized verification of circuit descriptions using the Larch Prover. In Theorem Provers in Circuit Design. IFIP Transactions A-10, pages 277-299
(North-Holland, June, 1992).
Yang Meng Tan. Formal Specification Techniques for Promoting Software Modularity, Enhancing Documentation, and Testing Specifications. MIT Lab. for Comp. Sci., TR 619, June 1994. Also published
as Formal Specification Techniques for Engineering Modular C Programs. International Series in Software Engineering (Kluwer Academic Publishers, Boston, 1995).
Jeannette M. Wing, Eugene Rollins, and Amy Moormann Zaremski. Thoughts on a Larch/ML and a New Application for LP. In [Martin-Wing93], pages 297-312.
Jeannette Marie Wing. A Two-Tiered Approach to Specifying Programs. Technical Report TR-299, Mass. Institute of Technology, Laboratory for Computer Science, 1983.
Jeannette M. Wing. Writing Larch Interface Language Specifications. ACM Transactions on Programming Languages and Systems, 9(1):1-24 (Jan. 1987).
Jeannette M. Wing. A Specifier's Introduction to Formal Methods. Computer, 23(9):8-24 (Sept. 1990).
Jeannette M. Wing. Hints to Specifiers. Technical Report, CMU-CS-95-118R, Carnegie Mellon University, School of Computer Science, May 1995. Revision of the paper, "Teaching Mathematics to
Software Engineers," Proceedings of AMAST '95, July 1995. Revision available from the URL
Although one cannot list both unknown and hmm in the same converts clause, one could add another converts clause to the trait ConvertsTrait, stating that unknown is converted, relative to the other
operators, which in that case include hmm.
The semantics of a Larch BISL is not fixed by being a member of the family, but there is a family resemblance, due mostly to the common heritage from Larch/CLU [Wing83]. The remarks about BISLs
should be taken as generalities about the family, especially about BISLs for sequential programming languages like C and C++, and might not hold for your particular BISL. Consult the documentation
for your particular BISL to be sure.
This document was generated on 18 July 2001 using texi2html 1.56k.
Anyone up for some linear algebra ;) OpenGL question
I'm in a graphics course that required linear algebra as a pre-requisite (that I didn't have!)... I'm making a simple animated grid mesh (where y=f(x,z)) but need to calculate normals for all my
vertices so that the lighting works well. I've decided to use Gouraud's method, which takes the normal of all adjoining polygons to calculate the normal of the vertex. The formula for the normal of a
poly is, in pseudocode:
Begin Function CalculateSurfaceNormal (Input Triangle) Returns Vector
Set Vector U to (Triangle.p2 minus Triangle.p1)
Set Vector V to (Triangle.p3 minus Triangle.p1)
Set Normal.x to (multiply U.y by V.z) minus (multiply U.z by V.y)
Set Normal.y to (multiply U.z by V.x) minus (multiply U.x by V.z)
Set Normal.z to (multiply U.x by V.y) minus (multiply U.y by V.x)
Returning Normal
End Function
And the formula for finding the normal of a vertex is:
n= (n1+n2+n3+n4) / |n1 + n2 + n3 + n4| where there would be 4 adjoining polygons(triangles).
So I've coded it up but am getting strange normals on the interior vertices... here's the relevant code... sorry for the lengthy preamble but I just didn't want to have to reexplain what I mean
std::vector<GLushort> indices;

for(int i = 0; i < m-1; i++)
{
    for(int j = 0; j < n-1; j++)
    {
        Polygon p;
        p.P1 = vertices[(j*m)+i].vertex;
        p.P2 = vertices[(j*m)+i+1].vertex;
        p.P3 = vertices[(j*m)+i+m].vertex;
    }
}

for(int i = 1; i < m; i++)
{
    for(int j = 0; j < n-1; j++)
    {
        Polygon p;
        p.P1 = vertices[(j*m)+i].vertex;
        p.P2 = vertices[(j*m)+i+m-1].vertex;
        p.P3 = vertices[(j*m)+i+m].vertex;
    }
}

struct Polygon
{
    vec3 P1;
    vec3 P2;
    vec3 P3;
    vec3 normal;

    void calculateNormal()
    {
        vec3 U = P2 - P3;
        vec3 V = P3 - P1;
        normal = vec3((U[1]*V[2])-(U[2]*V[1]),
                      (U[2]*V[0])-(U[0]*V[2]),
                      (U[0]*V[1])-(U[1]*V[0]));
    }
};

struct Vertex
{
    Vertex(vec3 v, vec3 n) {
        vertex = v;
        normal = n;
    }

    Vertex(vec3 v) {
        vertex = v;
    }

    void setNormal(std::vector<Polygon> p)
    {
        vec3 vecTot;
        for(int i = 0; i < p.size(); i++)
        {
            vecTot = vecTot + p[i].normal;
            //std::cout << p[i].normal;
        }
        GLfloat normalized = sqrt((vecTot[0]*vecTot[0])+(vecTot[1]*vecTot[1])+(vecTot[2]*vecTot[2]));
        //std::cout << vecTot << " " << normalized << "\n";
        normal = vecTot/normalized;
    }

    vec3 vertex;
    vec3 normal;
    //Material material;
    std::vector<Polygon> polygons;
};
Thanks to anyone who has read this far! And thanks for any help I can get.
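For reference, the pseudocode quoted above amounts to a cross product of two edge vectors followed by a normalized sum. Here is a minimal sketch (Python for brevity; the function names are illustrative, not from the thread):

```python
# Surface normal of a triangle: cross product of edge vectors U and V,
# following the pseudocode above. Note the posted C++ uses U = P2 - P3,
# which differs from the pseudocode's U = P2 - P1 and is a likely source
# of the strange interior normals.
def surface_normal(p1, p2, p3):
    u = [p2[i] - p1[i] for i in range(3)]
    v = [p3[i] - p1[i] for i in range(3)]
    return [u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0]]

# Vertex normal: n = (n1 + ... + nk) / |n1 + ... + nk|, the normalized
# sum of the adjoining face normals.
def vertex_normal(face_normals):
    total = [sum(n[i] for n in face_normals) for i in range(3)]
    length = sum(c*c for c in total) ** 0.5
    return [c / length for c in total]
```

For a counter-clockwise triangle in the z=0 plane, `surface_normal((0,0,0), (1,0,0), (0,1,0))` gives a normal along +z, as expected.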
The unluckiest of numbers if you happen to be superstitious. This belief has a couple of historical roots. According to Biblical tradition, there were 13 people at Christ's Last Supper, and Christ
was crucified on Friday 13th. Further back in time, Alexander the Great decided he wanted to be the thirteenth god alongside the 12 that already stood for each month of the year, so he had a
thirteenth statue built on the place of his capital. His death shortly after gave the number a bad name. Many buildings don't have a floor labeled 13 and many hotels will have a room numbered 12A
instead of 13. There is even a name for a morbid fear of 13: triskaidekaphobia. Fresh disasters involving the number hardly help triskaidekaphobics overcome their affliction. The most notorious of
these involved the Apollo 13 Moon mission, which was launched on April 11, 1970 (the sum of 4, 11 and 70 equals 85, the digital sum of which is 13), from Pad 39 (three times 13) at 13:13 local time,
and suffered an explosion on April 13. (The astronauts did, however, make it home safely, which could be considered good luck.) There is always at least one Friday 13th in each year; in some years,
there are two and rarely three (for example, 1998 and 2009). There were 13 original U.S. colonies (hence the 13 stripes on the American flag) and 13 signers of the Declaration of Independence. In
Japan, the numbers 4 and 9 are considered unlucky, not because 13 can be represented as sum of these two perfect squares but because of their pronunciation. In Japanese, four is shi, which is
pronounced the same as the word for death; nine is ku, which sounds the same as the word for torture. And speaking of torture, it was not unusual in times past for bakers to come in for stiff
punishment if they shortchanged their customers. In ancient Egypt, someone found selling light loaves might end up with his ear nailed to a doorpost, while in medieval Britain the punishment was
likely to be a spell in the pillory. This led to the custom of adding a thirteenth loaf to every batch of 12 to be on the safe side, and hence the expression "a baker's dozen."
Mathematically, the reverse of the square of 13 is the same as the square of the reverse of 13: 13^2 = 169; the reverse of 169 is 961 and the reverse of 13 is 31; 31^2 = 961.
Thirteen is the smallest prime number that can be expressed as the sum of the squares of two prime numbers: 13 = 2^2 + 3^2. Also the sum of all prime numbers up to 13 (2 + 3 + 5 + 7 + 11 + 13) is
equal to the thirteenth prime number (41), and this is the largest such number. On the anagrammatical front is this nice equation: ELEVEN + TWO = TWELVE + ONE.
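The arithmetic claims above are easy to verify mechanically; a quick stdlib-only Python check (the helper names are mine):

```python
# Trial-division primality test, sufficient for these small numbers.
def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n**0.5) + 1))

# Reverse-of-square property: reverse(13^2) equals (reverse(13))^2.
assert int(str(13**2)[::-1]) == int(str(13)[::-1])**2      # 961 == 31^2

# Is n a sum of squares of two primes?
def sum_of_two_prime_squares(n):
    return any(is_prime(p) and is_prime(q)
               for p in range(2, n) for q in range(2, n)
               if p*p + q*q == n)

primes = [n for n in range(2, 50) if is_prime(n)]

# 13 is the smallest prime expressible as p^2 + q^2 with p, q prime.
assert min(p for p in primes if sum_of_two_prime_squares(p)) == 13

# The sum of all primes up to 13 equals the thirteenth prime, 41.
assert sum(p for p in primes if p <= 13) == 41 == primes[12]
```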
Hon. Ministers Clement and Goodyear, Please target this!
Posted on January 18, 2011 by Ghoussoub
Your era at the helm of Canada’s national strategy for research and development, has been one of action, proactive leadership in research policy, tight management of federally funded programs, as
well as increased and sustained support for certain research areas.
Indeed, we have been hearing a great deal lately about the targeting of Canada's research effort towards government priorities, such as the Automobile of the 21st century, Bombardier, Pratt &
Whitney and others.
Other research areas were also identified by the Council of Canadian Academies (CCA) in its report, “The State of Science and Technology in Canada”(STIC), and eventually earmarked as government
priorities in the “Science and Technology Strategy”.
We respect and accept unreservedly your right and responsibility as elected officials to determine and select the areas of research that warrant government funding, and we have no doubt that the
currently targeted areas were sound choices and are worth supporting.
It has been said that, “the CCA report and S&T Strategy selected these targeted areas, after soliciting and receiving a broad input from the Canadian science and technology community.” Unfortunately
the call for input was apparently not broad enough to reach us, and so we missed this golden opportunity to contribute ours. I am therefore appealing directly to you, because I feel that some
extremely important research challenges were missed. Their essence and existence, or maybe their relevance, must have slipped by the Tri-Council presidents, their advisors, the CCA, and the authors
of STIC, when they were conducting their analysis of what should be our national priorities.
This doesn’t surprise me in the least, because these research areas are simply of a different level of complexity than those we are used to in official reports and communiqués. They are epic
challenges that defy the current state of human intellect.
I urge you to please take a look at these 20 research areas and to consider making at least some of them government priorities and therefore targeted for federal funding.
These problems, Hon. Ministers, are nothing short of the holy grail of 21st century science. The solution of any one of them would signal a new threshold of Biblical proportion to the state of our
knowledge, but also to our standard of living. It would be as earth shattering as the discovery of “infinitesimal calculus” by Isaac Newton. Not the calculus that kids love to hate in high school and
early college, but the one behind almost every major scientific and technological discovery of the last four centuries.
I need to warn you however, that all these problems are firmly anchored in the mathematical sciences, which are not considered so hot lately because Mathematics is by now an old science, as old as
Pythagoras, Euclid and Archimedes. On the other hand, if resolved they will have huge ramifications into almost every aspect of human endeavor, from medicine to commerce, and unfortunately warfare.
Another note of caution is that most of these problems may not be solved during your terms in office. Actually, they may take a century or so to settle. But every step on the way towards their
resolution would be a giant leap forward for mankind.
You may have guessed by now that in Canada these problems fall under the rubric of “basic research”, which is also sometimes dubbed, “curiosity-driven”. But please take a look and judge by
yourselves. This is curiosity of a different kind, the one that transcends time and human limitations, but also the one that one day your successors in public office will put a tax on, just like the
great Michael Faraday told British Chancellor Gladstone, when the latter asked him about the use of “this electricity” he was working on.
My plea may seem like a Michael Moore’s slogan: “Target This! Random challenges from an unempowered Canadian”. But it is not. I did not make up these problems. They were conceived by true geniuses,
and took decades of deep reflective thought to formulate. The only inference from Moore’s movie is that some of our current granting policies may be making us a nation that is addicted to fast
solutions for easy problems.
It is also useful to note that these problems were also earmarked by DARPA for targeted funding. Yes, DARPA or Defense Advanced Research Projects Agency.
Here they are. I know you will find them interesting. I also know that you will be pleasantly surprised by how obvious it is that any progress made towards the solution of these basic research
problems could have a huge impact on humanity.
1. The Mathematics of the Brain
Develop a mathematical theory to build a functional model of the brain that is mathematically consistent and predictive rather than merely biologically inspired.
2. The Dynamics of Networks
Develop the high-dimensional mathematics needed to accurately model and predict behavior in large-scale distributed networks that evolve over time, occurring in communication, biology, and the social sciences.
3. Geometric Langlands Program and Quantum Physics
How does the program of Princeton-based Canadian mathematician Robert Langlands, which originated in number theory and representation theory, explain the fundamental symmetries of physics? And vice versa?
4. Settle the 150-year-old Riemann Hypothesis
The Holy Grail of number theory, which is at the basis of cryptography, coding and digital commerce that your government is rightly keen on promoting and developing.
5. 21st Century Fluids
Classical fluid dynamics was extraordinarily successful in obtaining quantitative understanding of shock waves, turbulence, and solitons, but new methods are needed to tackle complex fluids such as
foams, suspensions, gels, and liquid crystals.
6. Computational Complexity
As data collection increases can we “do more with less” by finding lower bounds for sensing complexity in systems? This is related to questions about entropy maximization algorithms.
7. What are the Fundamental Mathematical Laws of Biology?
8. Biological Quantum Field Theory
Quantum and statistical methods have had great success modeling virus evolution. Can such techniques be used to model more complex systems such as bacteria? Can these techniques be used to control
pathogen evolution?
9. What are the Physical Consequences of the latest breakthrough in Geometry?
Can profound theoretical advances in understanding three dimensions be applied to construct and manipulate structures across scales to fabricate novel materials?
10. Algorithmic Origami and Biology
Build a stronger mathematical theory for isometric and rigid embedding that can give insight into protein folding.
11. Optimal Nanostructures
Develop new mathematics for constructing optimal globally symmetric structures by following simple local rules via the process of nanoscale self-assembly.
12. The Mathematics of Quantum Computing, Algorithms, and Entanglement
In the last century we learned how quantum phenomena shape our world. In the coming century we need to develop the mathematics required to control the quantum world.
13. An Information Theory for Virus Evolution
Can Shannon’s theory shed light on this fundamental area of biology?
14. The Geometry of Genome Space
What notion of distance is needed to incorporate biological utility? That is, how do we measure the similarity or difference between two genome instances?
15. What are the Symmetries and Action Principles for Biology?
Extend our understanding of symmetries and action principles in biology along the lines of classical thermodynamics, to include important biological concepts such as robustness, modularity,
evolvability, and variability.
16. Computation at Scale
How can we develop asymptotics for a world with massively many degrees of freedom?
17. The Smooth Poincare Conjecture in Dimension 4
What are the implications for space-time and cosmology? And might the answer unlock the secret of “dark energy”?
18. Capture and Harness Stochasticity in Nature
Develop methods that capture persistence in stochastic environments.
19. Settle the Hodge Conjecture
This conjecture in algebraic geometry is a metaphor for transforming transcendental computations into algebraic ones.
20. Creating a Game Theory that Scales
Since the work of Nobelist (and beautiful mind) John Nash, Game theory has been fundamental to economics, management and decision-making. What new scalable mathematics is needed to replace the
traditional mathematical (i.e., Partial Differential Equations) approach to differential games?
show a function is a solution of a differential equation
February 5th 2009, 11:30 AM #1
Senior Member
Jan 2007
Not sure what they want here, please help.
Show that the function $f(x) = \sum_{n=0}^{\infty} \frac{x^n}{n!}$ satisfies $f(x) = e^x$.
Depends what you can use. If you can just use a Taylor series this is straightforward...
A Taylor's series is
$f(x) = \sum_{n=0}^\infty \frac{f^{(n)}(a)}{n!}(x-a)^n.$
where $f^{(n)}(a)$ means the nth derivative of f(x) evaluated at x=a. So in your case
$f^{(n)}(x) = e^x$
and you substitute a=0 to get the result you need.
Note: A Taylor series with a=0 is also called a Maclaurin series.
Hope this helps.
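As a numerical sanity check (a Python sketch, names mine), the partial sums $\sum_{n=0}^{N} \frac{x^n}{n!}$ do converge to $e^x$:

```python
import math

# Partial sum of the Maclaurin series for e^x, accumulating each term
# incrementally: term holds x^n / n! and is updated via term *= x/(n+1).
def exp_series(x, terms=30):
    total, term = 0.0, 1.0
    for n in range(terms):
        total += term
        term *= x / (n + 1)
    return total

# With 30 terms the partial sum agrees with math.exp to near machine precision.
assert abs(exp_series(1.0) - math.e) < 1e-12
assert abs(exp_series(-2.5) - math.exp(-2.5)) < 1e-10
```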
Copyright © University of Cambridge. All rights reserved.
Why do this problem?
At first glance this looks like any collection of word problems about division. However each different scenario draws attention to a different way of thinking about the idea of division:
• sharing, grouping
• successive subtraction
• or the inverse of multiplication.
An article which explores different ways of thinking about division can be found here.
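Two of the ways of thinking listed above can be contrasted directly in code; a small sketch (function names are illustrative) of "grouping by successive subtraction" versus "sharing" for 12 divided by 3:

```python
# Grouping / successive subtraction: how many groups of `group_size`
# can be removed from `total`?
def divide_by_subtraction(total, group_size):
    groups = 0
    while total >= group_size:
        total -= group_size
        groups += 1
    return groups

# Sharing: deal `total` items one at a time onto `plates` plates;
# each plate ends up with the quotient.
def divide_by_sharing(total, plates):
    shares = [0] * plates
    for item in range(total):
        shares[item % plates] += 1
    return shares

assert divide_by_subtraction(12, 3) == 4
assert divide_by_sharing(12, 3) == [4, 4, 4]
```

Both arrive at 12 ÷ 3 = 4, but by routes that mirror the different stories children tell about the calculation.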
Whilst the children are working on each question, the teacher can observe just how they are considering it, and this may be somewhat different from the taught approach. Given the opportunity,
children often have their own ways of working.
Possible approach
In each case it would be possible to use a range of representations of the situation to help solve the problem and children should be encouraged to explain how they have tackled the problem and
arrived at their solution using different resources to help them. Look and listen carefully to hear how they make sense of the question and develop a strategy for solving it.
Talk to the children about how different people do things in different ways and explain that this activity is all about that - it's important that the children don't presume that there is one way and
one way only to see the calculation.
Working within a pair or small group and tackling one problem at a time can help children to focus more deeply on one task rather than racing through them. You could suggest that they think and talk
about what they are going to do before they actually begin.
They may decide to act the problem out with objects, or make a picture and just record the answer, or use these as a prompt to transfer the problem to a calculation which they would record
horizontally. Either way, your observations will allow you to reflect on the children's confidence, language and understanding, and possibly what misconceptions they hold.
Key questions
Tell me how you are working this out.
What do you think of _____'s way of doing this?
How did you know what to do first?
Is the way you've done this one different from the way you have done the others?
Possible extension
Ask the pupils to create stories that involve calculations for their partners to do.
Give the pupils a written form of a division question eg $18/? = 3$ and challenge them to create a story around it.
Ask the children to create their own division problems based on stories about sharing, grouping, 'undoing' multiplication or successive subtraction. Additional links to other division problems can be
found in this article.
Possible support
Some pupils will benefit from working with some small toys/dolls so that they can enact the situation before being able to think about the calculation, for example the children could use Lego figures
to show the children at the party and counters for the cakes with circles of paper as plates. The physical act of moving objects in different ways while enacting the story can often help less
confident children to work out how to think about the division in an appropriate way that makes sense to them.
Pth Moment Exponential Stability of Impulsive Stochastic Neural Networks with Mixed Delays
Mathematical Problems in Engineering
Volume 2012 (2012), Article ID 175934, 20 pages
Research Article
Pth Moment Exponential Stability of Impulsive Stochastic Neural Networks with Mixed Delays
^1School of Mathematics and Statistics, Central South University, Hunan, Changsha 410083, China
^2School of Mathematics and Computational Science, Changsha University of Science and Technology, Hunan, Changsha 410004, China
Received 29 June 2012; Revised 15 October 2012; Accepted 2 November 2012
Academic Editor: Dane Quinn
Copyright © 2012 Xiaoai Li et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any
medium, provided the original work is properly cited.
This paper investigates the problem of pth moment exponential stability for a class of stochastic neural networks with time-varying delays and distributed delays under nonlinear impulsive
perturbations. By means of Lyapunov functionals, stochastic analysis and differential inequality technique, criteria on pth moment exponential stability of this model are derived. The results of this
paper are completely new and complement and improve some of the previously known results (Stamova and Ilarionov (2010), Zhang et al. (2005), Li (2010), Ahmed and Stamova (2008), Huang et al. (2008),
Huang et al. (2008), and Stamova (2009)). An example is employed to illustrate our feasible results.
1. Introduction
The dynamics of neural networks have drawn considerable attention in recent years due to their extensive applications in many fields such as image processing, associative memories, classification of
patterns, and optimization. Since the integration and communication delays are unavoidably encountered in biological and artificial neural systems, they may result in oscillation and instability. The
stability analysis of delayed neural networks has been extensively investigated by many researchers, for instance, see [1–30].
In real nervous systems, there are many stochastic perturbations that affect the stability of neural networks. The result in Mao [24] suggested that one neural network could be stabilized or
destabilized by certain stochastic inputs. It implies that the stability analysis of stochastic neural networks has primary significance in the design and applications of neural networks, such as [7,
12–16, 18, 20, 22–24, 26, 27, 30].
On the other hand, it is noteworthy that the state of electronic networks is often subjected to some phenomenon or other sudden noises. On that account, the electronic networks will experience some
abrupt changes at certain instants that in turn affect dynamical behaviors of the systems [5, 6, 17–23, 28, 29]. Therefore, it is necessary to take both stochastic effects and impulsive perturbations
into account on dynamical behaviors of delayed neural networks [18, 20, 22, 23].
Very recently, Li et al. [22] have employed the properties of M-cone and inequality technique to investigate the mean square exponential stability of impulsive stochastic neural networks with bounded
delays. Wu et al. [23] studied the exponential stability of the equilibrium point of bounded discrete-time delayed dynamic systems with linear impulsive effects by using Razumikhin theorems. To the
best of authors’ knowledge, however, few authors have considered the pth moment exponential stability of impulsive stochastic neural networks with mixed delays.
Motivated by the discussions above, our object in this paper is to present the sufficient conditions ensuring pth moment exponential stability for a class of stochastic neural networks with
time-varying delays and distributed delays under nonlinear impulsive perturbations by virtue of Lyapunov method, inequality technique and Itô formula. The results obtained in this paper generalize
and improve some of the existing results [5, 8, 18, 19, 26–28]. The effectiveness and feasibility of the developed results have been shown by a numerical example.
2. Model Description and Preliminaries
Let denote the set of real numbers, the -dimensional real space equipped with the Euclidean norm , the set of nonnegative integral numbers. stands for the mathematical expectation operator. denotes
the well-known -operator given by the Itô formula. is -dimensional Brownian motion defined on a complete probability space with a natural filtration generated by , where we associate with the
canonical space generated by and denote by the associated -algebra generated by with the probability measure. Let , and be th row vector of .
In [5, 6], the researchers investigated the following impulsive neural networks with time-varying delays: The authors in [7, 26, 27] studied the stochastic recurrent neural networks with time-varying
In this paper, we will study the generalized stochastically perturbed neural network model with time-varying delays and distributed delays under nonlinear impulses defined by the state equations:
where , the time sequence satisfies , ; and corresponds to the state of the th unit at time ; , ,and denote the constant connection weight; is the time-varying transmission delay and satisfies ,
for . ,, denote the activation functions of the th neuron; the delay kernels are real-valued nonnegative piecewise continuous functions defined on ; corresponds to the number of units in a
neural network; denotes the external bias on the th unit; represents the abrupt change of the state at the impulsive moment .
System (2.3) is supplemented with initial condition given by where . Denote by the family of all bounded -measurable, -value random variables , satisfying , where is continuous everywhere except at
finite number of points ,at which and exist and .
The norms are defined by the following norms, respectively:
Throughout this paper, the following standard hypothesis are needed.
(H1) Functions are continuous and monotone increasing, that is, there exist real numbers such that for all ,,.
(H2)Functions , are Lipschitz-continuous on with Lipschitz constants ,and , respectively. That is, for all ,.
(H3)The delay kernels satisfy where is continuous and integrable, and the constant denotes some positive number.
(H4)There exist nonnegative constants , such that for all ,.
(H5) There exist nonnegative matrices such that for any , where is an integer.
We end this section by introducing three definitions.
Definition 2.1. A constant vector is said to be an equilibrium point of system (2.3) if is governed by the algebraic system where it is assumed that impulse functions satisfy for all and .
Definition 2.2. The equilibrium of system (2.3) is said to be pth moment exponentially stable if there exist and such that where is an any solution of system (2.3) with initial value .
Definition 2.3 (Forti and Tesi, 1995 [25]). A map is a homeomorphism of onto itself if is continuous and one-to-one, and its inverse map is also continuous.
3. Main Result
For convenience, we denote that where are positive constant, , , , , , and are real numbers and satisfy
Lemma 3.1. If denote nonnegative real numbers, then where denotes an integer.
A particular form of (3.3), namely,
Lemma 3.2. If is a continuous function and satisfies the following conditions. (1) is injective on , that is, for all . (2) as .
Then is a homeomorphism of .
Theorem 3.3. System (2.3) has a unique equilibrium under the assumptions (H1)–(H3) if the following condition is also satisfied:
Proof. Define a map , where . The map is a homeomorphism on if it is injective on and satisfies as .
In the following, we will prove that is a homeomorphism.
Firstly, we claim that is an injective map on . Otherwise, there exist , and such that , then It follows from (H1)–(H3) that Therefore, By (H6), this contradicts our assumption.
Therefore, is an injective map on .
To demonstrate the property as , we have where . Then, we have Using the Hölder inequality, we obtain which leads to From (3.12), we see that as . Thus, the map is a homeomorphism on under the
sufficient condition (H6), and hence it has a unique fixed point . This fixed point is the unique equilibrium of system (2.3). The proof is now complete.
To establish some sufficient conditions ensuring the pth moment exponential stability of the equilibrium point of system (2.3), we transform to the origin by using the transformation for . Then system (2.3) can be rewritten in the following form: where In order to obtain our results, the following assumptions are necessary.
(H7) When , for any ; when , for any .
(H8) There exist constants and such that
Theorem 3.4. Assume that (H1)–(H5) and (H7)-(H8) hold; then system (2.3) has a unique equilibrium , and the equilibrium point is pth moment exponentially stable.
Proof. From (H7), if , then if , for any .
Then, in either case, (H6) and are satisfied. Therefore, system (2.3) has a unique equilibrium point .
Now, we define For , , we obtain where Lemma 3.1 is used in the second inequality. Let for , where ; applying the Itô formula to (3.21), we can get From (H8), we have On the other hand, we have for , ; by (3.23), (3.25), and (H8), we have where .
On the other hand, we observe that It follows that for , where which means that Therefore, the equilibrium point of system (2.3) is pth moment exponentially stable.
Corollary 3.5. If , under the assumptions (H1)–(H5), system (2.3) has a unique equilibrium point , and is pth moment exponentially stable if the following two conditions are satisfied:
(H10) There exist constants and such that
Proof. In Theorem 3.4, let , for all . The result is obtained directly.
Remark 3.6. In Corollary 3.5, if , the conditions (H9) and (H10) are weaker than the following conditions: there exist constants and such that while and were required in Theorem 3.2 in [18].
Corollary 3.7. Under the assumptions and , system (2.1) has a unique equilibrium point , and is globally exponentially stable if , and the following condition is also satisfied:
Proof. In Theorem 3.4, when , choosing , for , and , for all , then the result is obtained.
Remark 3.8. If , , it follows from (3.35) that this is less conservative than the following inequality: while (3.37) was required in Theorem 3.1 in [5] and in the only theorem in [8].
When , , , (3.37) is equal to while (3.38) was required in Theorem 3.2 in [19].
Similar to Corollaries 3.5 and 3.7, the following results are directly gained from Theorem 3.4.
Corollary 3.9. Under the assumptions (H2), (H4), and (H8), system (2.2) has a unique equilibrium point , and is pth moment exponentially stable if , and the following condition is satisfied:
Especially, if , then the equilibrium is exponentially stable in mean square if the following condition is also satisfied:
Remark 3.10. If , it follows from (3.39) that this is less conservative than the following inequality: while (3.42) was required in the theorem in [26].
Remark 3.11. If , it follows from (3.40) that this is less conservative than the following inequality: while (3.44) was required in the theorem in [27].
4. Illustrative Example
In this section, we will give an example to show that the conditions given in the previous sections are weaker than those given in some of the earlier literature, such as [18].
Consider the following stochastic neural networks with mixed time delays: where , , , , and .
We now introduce the following nonlinear impulsive controllers: In this case, we have , , , for , and .
For , we can compute that Choosing for , we have and ; then Thus, all conditions of Theorem 3.4 in this paper are satisfied; the equilibrium solution is exponentially stable in mean square. From the above discussion, it is easy to see that
[SciPy-user] Scipy for Cluster Detection
David Warde-Farley dwf@cs.toronto....
Thu Dec 27 00:24:26 CST 2007
On 26-Dec-07, at 7:28 PM, Lorenzo Isella wrote:
> Dear All,
> I hope this will not sound too off-topic.
> Without going into details, I am simulating the behavior of a set of
> particles interacting with a short-ranged, strongly binding
> potential. I
> start with a number N of particles which undergo collisions with each
> other and, in doing so, they stick giving rise to clusters.
> The code which does that is not written in Python, but it returns the
> positions of these particles in space.
> As time goes on, more and more particles coagulate around one of these
> clusters. Eventually, they will all end up in the same cluster.
> Two particles are considered to be bounded if their distance falls
> below
> a certain threshold d.
> A cluster is nothing else than a group of particles all directly or
> indirectly bound together.
This doesn't sound like what would typically be considered a clustering problem.
Do you have code that produces the (N^2 - N)/2 distances between pairs
of particles? How large is N? If it's small enough, turning it into a
NumPy array should be straightforward, and then thresholding at d is
one line of code. Then what you end up with is a boolean matrix where
the (i,j) entry denotes that i and j are in a cluster together.
This can be thought of as a graph or network where the nodes are
particles and connections / links / edges exist between those
particles that are in a cluster together.
Once you have this your problem is what graph theorists would call
identifying the "connected components", which is a well-studied
problem and can be done fairly easily, Google "finding connected
components" or pick up any algorithms textbook at your local library
such as the Cormen et al "Introduction to Algorithms".
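The recipe in this reply (threshold the pairwise distances, then find the connected components of the resulting graph) fits comfortably in a short pure-Python sketch; the function and variable names below are my own, not from SciPy:

```python
import math

def clusters_from_positions(positions, d):
    """Group particles into clusters: two particles are bound if their
    Euclidean distance is below the threshold d; a cluster is a connected
    component of the resulting graph.  Pure-stdlib union-find sketch."""
    n = len(positions)
    parent = list(range(n))

    def find(i):                       # find with path compression
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    def union(i, j):
        parent[find(i)] = find(j)

    # threshold the (N^2 - N)/2 pairwise distances
    for i in range(n):
        for j in range(i + 1, n):
            if math.dist(positions[i], positions[j]) < d:
                union(i, j)

    groups = {}
    for i in range(n):
        groups.setdefault(find(i), []).append(i)
    return list(groups.values())
```

Modern SciPy also exposes this step directly as `scipy.sparse.csgraph.connected_components`, which takes a (sparse) adjacency matrix such as the thresholded boolean matrix described above.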
I'm not in a position to comment on whether there's something like
this already in SciPy, as I really don't know. However, finding
connected components of a graph (i.e. your clusters) is a very well
studied problem in the computer science (namely graph theory)
literature, and the algorithms are simple enough that less than 20
lines of Python should do the trick provided the distance matrix is available.
Possible winners in partially completed tournaments
, 1994
"... In this paper, we study network flow algorithms for bipartite networks. A network G = (V; E) is called bipartite if its vertex set V can be partitioned into two subsets V 1 and V 2 such that all
edges have one endpoint in V 1 and the other in V 2 . Let n = jV j, n 1 = jV 1 j, n 2 = jV 2 j, m = jE ..."
Cited by 44 (7 self)
Add to MetaCart
In this paper, we study network flow algorithms for bipartite networks. A network G = (V, E) is called bipartite if its vertex set V can be partitioned into two subsets V1 and V2 such that all edges have one endpoint in V1 and the other in V2. Let n = |V|, n1 = |V1|, n2 = |V2|, m = |E| and assume without loss of generality that n1 <= n2. We call a bipartite network unbalanced if n1 << n2 and balanced otherwise. (This notion is necessarily imprecise.) We show that several maximum flow algorithms can be substantially sped up when applied to unbalanced networks. The basic idea in these improvements is a two-edge push rule that allows us to "charge" most computation to vertices in V1, and hence develop algorithms whose running times depend on n1 rather than n. For example, we show that the two-edge push version of Goldberg and Tarjan's FIFO preflow push algorithm runs in O(n1 m + n1^3) time and that the analogous version of Ahuja and Orlin's excess scaling algorithm runs in O(n1 m + n1^2 log U) time, where U is the largest edge capacity.
- Algorithmica , 1999
"... Identifying the teams that are already eliminated from contention for first place of a sports league, is a classic problem that has been widely used to illustrate the application of linear
programming and network flow. In the classic setting each game is played between two teams and the first place ..."
Cited by 6 (0 self)
Add to MetaCart
Identifying the teams that are already eliminated from contention for first place of a sports league, is a classic problem that has been widely used to illustrate the application of linear
programming and network flow. In the classic setting each game is played between two teams and the first place goes to the team with the greatest total wins. Recently, two papers [Way] and [AEHO]
detailed a surprising structural fact in the classic setting: At any point in the season, there is a computable threshold W such that a team is eliminated (has no chance to win or tie for first
place) if and only if it cannot win W or more games. They used this threshold to speed up the identification of eliminated teams. In both papers, the proofs of the existence the threshold are tied to
the computational methods used to find it. In this paper we show that thresholds exist for a wide range of elimination problems (greatly generalizing the classical setting), via a simpler proof not
connected to any partic...
- SIAM Journal on Discrete Mathematics , 1999
"... . In the baseball elimination problem, there is a league consisting of n teams. At some point during the season, team i has w i wins and g ij games left to play against team j.Ateamis eliminated
if it cannot possibly finish the season in first place or tied for first place. The goal is to determine ..."
Cited by 5 (0 self)
Add to MetaCart
In the baseball elimination problem, there is a league consisting of n teams. At some point during the season, team i has w_i wins and g_ij games left to play against team j. A team is eliminated if it cannot possibly finish the season in first place or tied for first place. The goal is to determine exactly which teams are eliminated. The problem is not as easy as many sports writers would have you believe, in part because the answer depends not only on the number of games won and left to play, but also on the schedule of remaining games. In the 1960's, Schwartz showed how to determine whether one particular team is eliminated using a maximum flow computation. This paper indicates that the problem is not as difficult as many mathematicians would have you believe. For each team i, let g_i denote the number of games remaining. We prove that there exists a value W* such that team i is eliminated if and only if w_i + g_i < W*. Using this surprising fact, we can determine all eliminated teams...
- OR/MS Today , 2004
"... ..."
"... Abstract. A problem of intense interest to many sports fans as a season progresses is whether their favorite team has mathematically clinched a playoff spot; i.e., whether there is no possible
scenario under which their team will not qualify. In this paper, we consider the problem of determining whe ..."
Cited by 2 (1 self)
Add to MetaCart
Abstract. A problem of intense interest to many sports fans as a season progresses is whether their favorite team has mathematically clinched a playoff spot; i.e., whether there is no possible
scenario under which their team will not qualify. In this paper, we consider the problem of determining when a National Hockey League (NHL) team has clinched a playoff spot. The problem is known to
be NP-Complete and current approaches are either heuristic, and therefore not always announced as early as possible, or are exact but do not scale up. In contrast, we present an approach based on
constraint programming which is fast and exact. The keys to our approach are the introduction of dominance constraints and special-purpose propagation algorithms. We experimentally evaluated our
approach on the past two seasons of the NHL. Our method could show qualification before the results posted in the Globe and Mail, a widely read newspaper which uses a heuristic approach, and each
instance was solved within seconds. Finally, we used our solver to examine the effect of scoring models on elimination dates. We found that the scoring model can affect the date of clinching on
average by as much as two days and can result in different teams qualifying for the playoffs. 1
, 1999
"... The competition for baseball playoff spots -- the fabled "pennant race" -- is one of the most closely-watched American sports traditions. Baseball fans, known for their love of statistics, check
newspapers and web sites daily looking for measures of their team's progress (or lack thereof !). While t ..."
Cited by 2 (0 self)
Add to MetaCart
The competition for baseball playoff spots -- the fabled "pennant race" -- is one of the most closely-watched American sports traditions. Baseball fans, known for their love of statistics, check
newspapers and web sites daily looking for measures of their team's progress (or lack thereof !). While traditionally-reported playoff race statistics such as games back and "magic number" are
informative, they are overly conservative and ignore the remaining schedule of games. By using optimization techniques, one can model schedule effects explicitly and determine when a team has locked
up a playoff spot or is truly "mathematically eliminated" from contention. This paper describes the Baseball Playoff Races web site, a popular site developed at Berkeley that provides automatic daily
updates of new, optimization-based playoff race statistics. During the development of the site, it was found that the first-place elimination status of all teams in a division can be determined using
a single linear prog...
"... Abstract. The modelling of complex problems tends to be most effective when modelling is calibrated using a concrete solver and modifications to the model are made as a result. In some cases,
the final model is significantly different from the simple model that best fits the constraints of the probl ..."
Add to MetaCart
Abstract. The modelling of complex problems tends to be most effective when modelling is calibrated using a concrete solver and modifications to the model are made as a result. In some cases, the
final model is significantly different from the simple model that best fits the constraints of the problem. While there are several projects that are attempting to create black box solvers with
generic modelling languages, for the moment the modelling processes is intimately tied to the solving and search procedure. This paper looks at the changes to the simple model and the search
procedures that were made to create an efficient solution to the NHL Playoff Qualification problem. The approaches for improving the model could be extended to other applications as the techniques
are not specific to this problem. 1
"... Given a symmetric n X n matrix A and n numbers rl, " " r n, necessary and sufficient conditions for the existence of a matrix B, with a given zero pattern, with row sums r 1,..., r n, and such
that A = B + BT are proven. If the pattern restriction * The research of all three authors was supported b ..."
Add to MetaCart
Given a symmetric n × n matrix A and n numbers r_1, ..., r_n, necessary and sufficient conditions for the existence of a matrix B, with a given zero pattern, with row sums r_1, ..., r_n, and such that A = B + B^T are proven. If the pattern restriction... * The research of all three authors was supported by the Fundação Calouste Gulbenkian, Lisboa. † The research of this author was carried out within the activity of the Centro de Álgebra da...
, 1994
"... In this paper, network flow algorithms for bipartite networks are studied. A network G = (V. E) is called bipartite if its vertex set V can be partitioned into two subsets V1 and V2 such that
all edges have one endpoint in VY and the other in V2. Let n = I., nl = ll I1, n = IV21, m = El and assume ..."
Add to MetaCart
In this paper, network flow algorithms for bipartite networks are studied. A network G = (V, E) is called bipartite if its vertex set V can be partitioned into two subsets V1 and V2 such that all edges have one endpoint in V1 and the other in V2. Let n = |V|, n1 = |V1|, n2 = |V2|, m = |E| and assume without loss of generality that n1 <= n2. A bipartite network is called unbalanced if n1 << n2 and balanced otherwise. (This notion is necessarily imprecise.) It is shown that several maximum flow algorithms can be substantially sped up when applied to unbalanced networks. The basic idea in these improvements is a two-edge push rule that allows one to "charge" most computation to vertices in V1, and hence develop algorithms whose running times depend on n1 rather than n. For example, it is shown that the two-edge push version of Goldberg and Tarjan's FIFO preflow-push algorithm runs in O(n1 m + n1^3) time and that the analogous version of Ahuja and Orlin's excess scaling algorithm runs in O(n1 m + n1^2 log U) time, where U is the largest edge capacity. These ideas are also extended to dynamic tree implementations, parametric maximum flows, and minimum-cost flows.
"... Many sports fans invest a great deal of time into watching and analyzing the performance of their favorite team. However, the tools at their disposal are primarily heuristic or based on folk
wisdom. We provide a concrete mechanism for calculating the minimum number of points needed to guarantee a pl ..."
Add to MetaCart
Many sports fans invest a great deal of time into watching and analyzing the performance of their favorite team. However, the tools at their disposal are primarily heuristic or based on folk wisdom.
We provide a concrete mechanism for calculating the minimum number of points needed to guarantee a playoff spot and the minimum number of points needed to possibly qualify for a playoff spot in the
National Hockey League (NHL). Our approach uses a combination of constraint programming, enumeration, network flows and decomposition to solve the problem efficiently. The technique can successfully
be applied to any team at any point of the season to determine how well a team must do to make the playoffs.
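Several of the entries above rest on the same max-flow construction: a source feeds each remaining game among the other teams, each game feeds the two teams that play it, and each team drains to the sink with capacity equal to the wins it may still collect without overtaking team k. A self-contained sketch of first-place elimination testing; the names and data layout here are my own illustrative choices, not any cited paper's code:

```python
from collections import deque

def max_flow(cap, s, t):
    """Edmonds-Karp max flow on a nested-dict capacity graph {u: {v: c}}."""
    flow = 0
    while True:
        prev = {s: None}                     # BFS for a shortest augmenting path
        q = deque([s])
        while q and t not in prev:
            u = q.popleft()
            for v, c in cap.get(u, {}).items():
                if c > 0 and v not in prev:
                    prev[v] = u
                    q.append(v)
        if t not in prev:
            return flow
        path, v = [], t                      # walk back from t to s
        while prev[v] is not None:
            path.append((prev[v], v))
            v = prev[v]
        b = min(cap[u][v] for u, v in path)  # bottleneck capacity
        for u, v in path:                    # push flow, add residual edges
            cap[u][v] -= b
            cap.setdefault(v, {}).setdefault(u, 0)
            cap[v][u] += b
        flow += b

def eliminated(k, wins, remaining):
    """True if team k can no longer finish first (or tied for first).
    wins[i] = current wins; remaining[i][j] = games left between i and j
    (symmetric matrix).  Team k is assumed to win all its remaining games;
    it survives iff every remaining game among the others can be played
    without pushing any team past that total."""
    n = len(wins)
    best_k = wins[k] + sum(remaining[k])
    if any(wins[i] > best_k for i in range(n) if i != k):
        return True                          # trivially eliminated
    cap, total = {}, 0
    for i in range(n):
        if i == k:
            continue
        for j in range(i + 1, n):
            if j != k and remaining[i][j] > 0:
                g = ('game', i, j)           # source -> game -> both teams
                cap.setdefault('s', {})[g] = remaining[i][j]
                cap[g] = {('team', i): float('inf'), ('team', j): float('inf')}
                total += remaining[i][j]
        cap.setdefault(('team', i), {})['t'] = best_k - wins[i]
    return max_flow(cap, 's', 't') < total   # unsaturated => k is eliminated
```

For instance, with wins [83, 80, 78, 77] and a schedule in which the first and third teams still meet six times, the second team is eliminated even though it could reach 83 wins: those six games force one of the two others past 83.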
Whiz Kid Technomagic Zone Plate Designer
Design your own!
An extension to pinhole photography, a zone plate—or a Fresnel zone plate—is used to take pictures with a camera without the use of a lens. It looks like the above picture. Think of a Fresnel zone
plate as a pinhole surrounded by another pinhole which, too, is surrounded by yet another pinhole, and so on.
While it may not be obvious from the picture, each of the white rings in a zone plate covers an area of the exact same size as the pinhole in the middle of the zone plate. Thus, if there are four
such zones (one pinhole and three rings) on a zone plate, it lets through four times as much light as the pinhole alone, allowing you to take a picture twice as fast. Twice, not four times, because
the photograph is two-dimensional, so we divide the time by the square root of four, which is two. If we used sixteen such zones, we could take the picture four times faster, and so on.
This extra light does come at a price. The more zones you use, the smaller your f-stop is. The smaller the f-stop, the less depth of focus. Thus, when designing a zone plate, you need to consider the
tradeoff of speed vs. sharpness and decide how many clear zones to choose.
The Pinhole
The second consideration is the size of the pinhole. Since the pinhole is in the shape of a circle, its size depends on its diameter. Once we have determined the diameter of the pinhole, we can
calculate the diameters of all the rings (both clear and dark) mathematically from the diameter of the pinhole.
You need to consider two factors when deciding the diameter of the pinhole: focal length and wavelength.
Focal Length
The focal length (or focal distance) is essentially the distance of the pinhole from the plane of the film. If you are designing a zone plate for an existing camera, just measure the distance of the
film plane from the plane where you will mount your zone plate. Do your measurements in millimeters. If you think in inches, just multiply the length in inches by 25.4 to convert it to millimeters.
If you are building your own camera, or are using a view camera, you can decide what focal length to use based on the size of the film. For each film format there is a “normal” focal length, though
not everyone necessarily agrees what it is. Generally, people will say that the normal focal length to use with 35 mm film is about 50 mm, while with 4x5 inch film it is 150-180 mm.
I used the word essentially in my first definition of the focal length. In true pinhole photography, we could skip that word entirely. In true pinhole photography the entire image is equally sharp—or
blurred—reducing the distance of the film plane from the pinhole to the only factor. The Fresnel zone plate, however, functions as a lens. We need to consider both the distance of the film plane from
the pinhole (or, actually, from the zone plate) and the distance of the object we are photographing from the pinhole (the zone plate). The farther away the object is, the less it affects the focal
length. And if it is infinitely far, it has no influence on the focal length. Now, in photography infinity is often not too far. Surely, if you are taking a picture of a mountain from a fairly nice
distance, you can treat it as if it were in infinity. But if you are taking a portrait, you cannot treat your subject as being at infinity.
The actual focal length can be calculated by multiplying the distance of the object from the zone plate by the distance of the film plane from the zone plate and then dividing the result by the sum
of the two distances. So, if you are taking a portrait of a person standing 2 meters in front of your zone plate, and your film is 150 mm away from the zone plate, assuming you are using a 4x5 view
camera, the most likely candidate for experimenting with zone plates, first you need to convert the distance from meters to millimeters. Two meters is 2000 mm. Now, the correct focal length is 2000 *
150 / (2000 + 150) = 140 mm. That is quite a difference from the 150 mm which you would get if you only considered the distance of the film plane from the zone plate.
By the way, this works in the opposite direction as well. If you have a zone plate with a 150 mm focus length and your subject is two meters in front of your zone plate, just change the plus sign in
the above calculation to a minus sign to determine how far the film plane should be from your zone plate. That is, to focus at the object, you would adjust the film to zone plate distance to 2000 *
150 / (2000 - 150) = 162 mm.
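Both calculations above are easy to script; a small sketch (the function names are my own):

```python
def effective_focal_length(subject_mm, film_mm):
    """Effective focal length when the subject is at a finite distance:
    the product of the two distances divided by their sum (all in mm)."""
    return subject_mm * film_mm / (subject_mm + film_mm)

def film_distance_for_focus(subject_mm, focal_mm):
    """Inverse problem: how far the film should be from a zone plate of
    the given focal length so a subject at subject_mm is in focus."""
    return subject_mm * focal_mm / (subject_mm - focal_mm)

# the worked examples from the text:
print(round(effective_focal_length(2000, 150)))   # 140 mm
print(round(film_distance_for_focus(2000, 150)))  # 162 mm
```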
Wavelength
The wavelength is the length of the light wave. It varies across the visible light spectrum, ranging from 400 nm at the violet end to 700 nm at the red end. Those wavelengths are very short: one
nanometer is one millionth of a millimeter.
Because the wavelengths vary across the spectrum, you need to design a different zone plate for different colors. An ocean scene or a snow scene will probably be mostly blue or even violet (since
there may be much ultraviolet light in the scene), so you might use a zone plate designed for 400-450nm. A sunrise or a sunset scene may be filled with red or orange light rays, so you would go for
the high end, perhaps 650 nm. For a “general” photography situation you may go for the middle of the range, which is 560 nm. This is not because that is the right wavelength for most colors, but
because it will be equally wrong throughout the picture. Your best zone plate pictures will account for this chromatic dependence of the Fresnel zone plate: they will have one dominant color and the
zone plate will be optimized for that color.
Once you have decided on the focal length and the wavelength, determining the diameter of the pinhole is easy. Multiply the focal length (in millimeters) by the wavelength (in nanometers). The square
root of the result is the radius of the pinhole in micrometers. Just double it, and you have the diameter. So, for example, if you want to shoot sunrise at 35 mm film using a “normal” focal length,
you will multiply 50 (the focal length) by 650 (the wavelength) and get 32500. Its square root is 180.28, which gives a radius of 180 micrometers (0.18 mm), or a diameter of 360 micrometers (0.36 mm).
The Rings
From the diameter of the pinhole you can determine the diameters of all the rings (both black and clear). The outside diameter of the first ring (black) is the diameter of the pinhole multiplied by
the square root of 2. The outside diameter of the second ring (clear) is the diameter of the pinhole multiplied by the square root of 3. The next one, pinhole diameter times square root of 4. Then
square root of 5. And so on, and so on.
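Both rules can be checked with a few lines of code (names are my own): the radius in micrometres is the square root of the focal length in millimetres times the wavelength in nanometres, and each ring's outer diameter scales the pinhole diameter by successive square roots.

```python
import math

def pinhole_diameter_mm(focal_mm, wavelength_nm):
    """Pinhole diameter: radius in micrometres is sqrt(f[mm] * lambda[nm])."""
    radius_um = math.sqrt(focal_mm * wavelength_nm)
    return 2 * radius_um / 1000.0            # convert micrometres to mm

def ring_outer_diameters_mm(d_pinhole_mm, n_rings):
    """Outer diameter of ring k (counting from 1) is d_pinhole * sqrt(k + 1)."""
    return [d_pinhole_mm * math.sqrt(k + 1) for k in range(1, n_rings + 1)]

d = pinhole_diameter_mm(50, 650)             # sunrise on 35 mm, as in the text
print(round(d, 2))                           # 0.36 mm
```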
Buy It or Make It?
We live in the computer age, so these calculations are very simple. But the task of creating a zone plate dealing with micrometers seems overwhelming at first. That is why many people just cough up
the cash and willingly pay $20-$30 per zone plate from an “official” zone plate source.
There are some compelling reasons to make your own zone plates, however. For one, you can do it for a lot less money. Even if money is no objection, you need to fine tune your zone plate to whatever
photographic situation you want to use it for. While the “official” zone plate suppliers may offer a zone plate for specific focal lengths, you may want to use a focal length that is not exactly what
they are offering. And, of course, you may want zone plates with a different number of zones (rings) depending on the intensity of available light and sharpness/softness considerations which differ
from picture to picture. Last but not least, you need to consider the wavelength. Chances are your “official” zone plate uses the wavelength of 550 or 560 nm which is right in the middle but is
hardly suitable for all situations. Actually, it is only suitable for photographing green objects.
That means that if you are serious about zone plate photography, you need a whole slew of different zone plates (and if you are not serious, why not just use a standard camera with a lens).
Outsourcing them all might end up costing you more than a set of high quality lenses!
Last but not least, most people who like to use zone plates also like to tinker with their equipment, building it all from scratch. Zone plate photography is an extension of pinhole photography, so
you can use a metal can or an empty cereal box as your camera. If you build everything else, you certainly can build your own zone plates! It is much easier than at first it may seem.
Making a Zone Plate
The traditional way of making a zone plate is to create a large drawing of the exact image of the zone plate, then photographing it on a high contrast black and white film, thus reducing the image to
the correct size. If the film used is a standard negative film, the original drawing must be color reversed. That is, the pinhole must be drawn in black, the black rings in white (or just not drawn
since you would use white paper) and the clear rings in black. If, on the other hand, you use a black and white slide film, the original must simply be an enlarged drawing of the zone plate.
People have been using this method since long before the computer age. But modern computers make the creation of a zone plate this way a snap. Not only can computers calculate the diameter of the
pinhole and the rings, they can produce the exact drawing of the zone plate. All you need is a high resolution bitmap of the correct zone plate, print it out on a white sheet of paper, and take the
picture on high contrast black and white film. If you shoot on 35 mm film, you can produce some 36 zone plates with one roll of film for a fraction of the cost of even one “official” zone plate.
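As a sketch of the bitmap step: since zone n extends from radius r1·sqrt(n) to r1·sqrt(n+1), testing each pixel's squared radius against multiples of r1² decides clear versus dark, and every zone automatically has the same area as the pinhole. A plain-stdlib version with names of my own choosing; for printing you would scale this up, and invert it if your film requires a color-reversed original:

```python
def zone_plate_mask(size, r1_px):
    """Boolean bitmap of a zone plate: a pixel is clear (True) when it
    falls in an even-numbered Fresnel zone, where zone 0 is the central
    pinhole.  Each zone spans r1_px**2 in squared radius, so every zone
    has equal area.  size is the image width/height in pixels; r1_px is
    the pinhole radius in pixels."""
    c = (size - 1) / 2.0                     # centre of the bitmap
    rows = []
    for y in range(size):
        row = []
        for x in range(size):
            r2 = (x - c) ** 2 + (y - c) ** 2
            zone = int(r2 / r1_px ** 2)      # which Fresnel zone this pixel is in
            row.append(zone % 2 == 0)        # even zones are clear
        rows.append(row)
    return rows
```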
Reducing the Zone Plate
It is a matter of simple geometry to convert the large drawing into a small photoimage of the correct size. With a handheld calculator, simply divide any dimension of the drawing by the corresponding
dimension on the film image. Multiply the distance of the film plane in your camera by the result of that calculation to figure out how far in front of your camera you need to place the drawing.
Position your drawing (attach to a wall, perhaps), place your camera the calculated distance in front of the drawing (a tripod is a good idea here, or a copystand, if you have one), centering the
bull’s eye in your viewfinder or focusing screen, and take the picture.
Play with it! Photography is an art form. Make slight variations in the size of the pinhole, automatically scaling all rings, by moving the camera closer to or farther from the drawing. Take test
pictures using the different zone plates. Decide visually which one works best for whatever photographic message you want to send instead of just relying on some theory.
As an example of the calculation, suppose you have a drawing on which the rings are surrounded with a 5 inch frame. You know that reducing the frame to 1/4 inch will reduce the pinhole and the rings
to the correct size. Dividing 5 by 1/4 yields 20. Now you know you need to reduce all dimensions 20 times. If you are using a 35 mm camera with a standard 50 mm lens, the lens is 50 mm in front of
the film plane. Multiply 50 by 20 and you get 1000. Place the drawing 1000 mm (one meter) in front of the camera. We used inches in one calculation, millimeters in the other. As long as we have
divided inches by inches and multiplied millimeters to get millimeters, we did not mix our metaphors.
What if your drawing does not have a frame that can be conveniently reduced to 1/4 inch? Then measure the diameter of the largest ring on the drawing and divide it by the diameter it needs to have in
the final zone plate. You will probably know the final diameter in millimeters, so measure the drawing in millimeters as well. Suppose you have a large poster (like this one). You measure its outer
diameter at 22 inches, while you have calculated (using the form below) that for your target focal length and wavelength you need to reduce it to 2.1 mm (yes, zone plates are very small). First
convert the 22 inches into millimeters multiplying by 25.4, which will give you 558.8—the diameter in mm. Divide that by the desired diameter of 2.1 mm. That gives you 266.1. So to reduce all
dimensions of the poster 266.1 times, place it 266.1 mm * 50 = 13,305 mm in front of your 35 mm camera. Of course, you can convert the 13,305 mm into more convenient units—13.3 meters or 43.6 feet.
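The worked example above, scripted (the constant and function names are my own):

```python
MM_PER_INCH = 25.4

def camera_distance_mm(drawing_size, target_size, lens_focal_mm):
    """How far to place the drawing from the camera: the reduction factor
    (drawing_size / target_size, both in the same units) times the lens
    focal length in mm."""
    return drawing_size / target_size * lens_focal_mm

# 22-inch poster reduced to a 2.1 mm plate, shot with a 50 mm lens:
print(round(camera_distance_mm(22 * MM_PER_INCH, 2.1, 50)))   # 13305 mm
# 5-inch frame reduced to 1/4 inch with the same lens:
print(round(camera_distance_mm(5, 0.25, 50)))                 # 1000 mm
```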
A High Tech Alternative
Naturally, once you have a computer bitmap, you may use a more high tech way of creating the zone plate, though this would require you to have access to the proper equipment. For one, if you have a
high resolution laser printer, just create the bitmap at the same resolution, then print it on a transparency sheet (of course, you can print a whole number of different—or identical—bitmaps on the
same sheet, producing several zone plates on just one sheet). And if you happen to have access to a digital film recorder (they are getting much more common than they used to be, so even if you do
not have one, you may know someone who does, just ask around), you could output the bitmap directly on a high resolution black and white film at exactly the same resolution as the bitmap, thus
creating a perfect zone plate.
The Megapinhole
If you like to experiment and find new ways of working with photography, you may want to try things other than a standard zone plate. Some of the best pictures were taken with a multipinhole camera,
that is, with a camera that has more than one pinhole. But you can even go beyond that: How about playing with a large number of pinholes spread out along the Fresnel zones? Because the zones get
thinner and thinner as they ripple away from the center pinhole, these holes have to get smaller and smaller. There also has to be a large number of them. For lack of a better name, I have
called it a megapinhole (though I was tempted to call it either a “starfish zone plate” or a “UFO zone plate”). It would be very hard to drill or pierce the holes of a megapinhole, but we can use the
same technique as with the zone plate: Create a drawing and photograph it.
I have played around with a system of designing megapinholes mathematically. Just as with zone plates, I create them based on their focal length, wavelength, and the number of zones. But I added a
new factor: density. This decides how densely packed each zone is with holes. Reasonable density values are between 5 and 41. The above picture has a density of 11.
Measurable Dimensions
When creating your own zone plate or megapinhole by photographing a drawing, make sure the drawing has at least one of the three dimensions you can see in this picture. As long as you can measure (or
figure out) one of them and either know or can calculate what its corresponding dimension should be on the final zone plate or megapinhole, you can easily calculate how far your camera needs to be
from the drawing to produce the correct zone plate or megapinhole (as explained in the Reducing the Zone Plate section).
The three easily measurable dimensions are:
• The frame. If you can produce an image with a discernible frame and know what its reduced size should be, use it. That is the easiest approach because all you need to measure is a side of a box.
In the above image, it is the border of the black square.
• The pinhole. If you do not have a frame, measure the diameter of the pinhole. You can calculate what its size should be in the actual zone plate or megapinhole by the method explained in the section titled Diameter. Because in our image it equals the size of any of the sides of the yellow square, I will refer to it throughout these pages as “the yellow dimension”.
• The outside ring. Finally, if you have no frame and are designing a Slovak style zone plate or megapinhole (described at the bottom of this page) which does not have a center pinhole, measure the
diameter of the outside ring. You can calculate its destination diameter by counting up which zone it is. You need to count both the white and the black zones (including the inner zone, the one
where the center pinhole would be if you used it). The first one (the center) is zone 1, the next one zone 2, etc. Multiply the calculated diameter of the center zone by the square root of the
zone number, and you know the correct diameter of the outside zone. In our image, it is zone 7—the spaces between the “bubbles” are thin but they are actual Fresnel zones.
This is quite easy with standard zone plates but is a bit trickier with the megapinholes. Although the megapinhole contains a number of pinholes that are spread over the outer zone, their edge is
not necessarily at the theoretical edge you would get with the above calculation (see the green box in the image). So, unless you know the exact diameter of the outside ring, you are better off
using either of the other two dimensions listed. But if you do not have those dimensions available (no frame, no center pinhole) and do not know the correct diameter of the outside zone, use the
calculation for a standard zone plate, then take several photographs: One from the calculated distance, one (or more) a bit closer, one (or more) a bit further. Then take pictures using the
plates you have made, and decide visually which one works best for you. Again, photography is an art. Science is here to help us, not to limit us.
On the rest of these pages I will refer to this value as “the green dimension.”
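The square-root counting rule described above can be sketched in a few lines of Python. The central-zone diameter formula d1 = 2·sqrt(f·λ) is the standard Fresnel zone plate relation and is assumed here, as are the example numbers (280 mm focal length, 550 nm green light); neither comes from this page:

```python
import math

def zone_plate_diameters(focal_mm, wavelength_nm, zones):
    """Diameters (mm) of the Fresnel zone boundaries of a zone plate,
    using the standard approximation r_n = sqrt(n * f * lambda).
    The diameter of zone n is the center-zone diameter times sqrt(n),
    which is the counting rule described in the text."""
    lam_mm = wavelength_nm * 1e-6             # nanometers -> millimeters
    d1 = 2 * math.sqrt(focal_mm * lam_mm)     # diameter of zone 1 (center)
    return [d1 * math.sqrt(n) for n in range(1, zones + 1)]

# Assumed illustrative design: f = 280 mm, 550 nm, 7 zones
diams = zone_plate_diameters(280, 550, 7)
print(round(diams[0], 2), round(diams[-1], 2))   # center and outer diameters
```

With these assumed numbers the outer (7th) zone comes out close to 2 mm across, the same order of size as the 2.1 mm plate in the reduction example: zone plates really are very small.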
Putting It All Together
All that remains is getting the bitmap. Nothing could be simpler! Just fill out the form below. This was a very simple form originally, but I kept adding more options. If this is your first time
here, all you really need to specify is your focal length and click “Design the plate.” The rest is up to you. And don’t forget to let your friends know how much you enjoy zone plates. The easiest
way of doing that is by wearing a zone plate T-shirt.
Wavelength nm (400-700, see chart below)
Focal length mm
Size measured in (see notes below)
Resolution DPI (see notes below)
Clear zones , skipping over the first zones.
Antialiasing (see notes below)
Image type (see notes below)
I want to design a .
If you have opted for a megapinhole, also please fill out this:
Megapinhole Density (see notes below)
Megapinhole Size (see notes below)
Aligned at the of the zone.
Give me the Slovak style.
Wavelengths of Light
Here is a quick reference of the wavelengths of the different colors within the visible spectrum of light. All values are in nm (nanometers):
│ Violet │ 400 - 420 │
│ Blue │ 420 - 490 │
│ Green │ 490 - 570 │
│ Yellow │ 570 - 590 │
│ Orange │ 590 - 650 │
│ Red │ 650 - 700 │
Design Options
This is the size of the black box surrounding the zone plate. Depending on your choice of units, the small box is either 10 mm (1 cm) or 1/4 inch wide and tall, the medium box either 20 mm (2 cm) or
1/2 inch, the large box is either 30 mm (3 cm) or 1 inch (25.4 mm), and the extra large box spans either 50 mm (5 cm) or 2 inches on each side. Remember, while the rings of the zone plate are very
small, you need to be able to handle the zone plate with your bare fingers. You will also need enough margin to attach it to the front of your camera.
By the way, the image frame of the 35 mm film is 24 mm x 36 mm. One inch is 25.4 mm, slightly more than the smaller dimension of the 35 mm film frame. And, of course, 30 mm is more than 24 mm. That
means that if you shoot the “large” image with a 35 mm camera, it will fill up the smaller dimension (vertical in landscape orientation). As long as the box measures, depending on which units you chose, exactly 30 mm or exactly 1 inch along the other dimension (on the developed film), you have the right zone plate.
The “extra large” size is good for the 6x6 cm film format. Of course, you can shoot it on the 35 mm film as well. It will fill the entire frame, so you will not be able to measure it. But if you are
doing all your measuring and math before shooting, you will get the correct size zone plate.
Choose a resolution in dots per inch. It should be between 50 and 9600. If you choose anything below 50, a default value will be substituted. But what if you need a bitmap of a resolution higher than 9600? Well, you can have it, but you will need to create it yourself. This web site runs on a busy server, and many times when we tried a resolution above 9600, we got nothing because the server just killed the software we use to create the bitmap. But don’t worry, you can create the bitmap yourself. If you choose any resolution above 9600 DPI, or if the bitmap would exceed 6000 pixels per dimension, we will not send you the bitmap, but an encapsulated PostScript file which you can export to a bitmap of any size you want using your own computer.
Of course, PostScript may be just what you need. If you are using a high resolution imagesetter or digital film recorder, it probably wants PostScript rather than a bitmap. So, just enter any value
higher than 9600 DPI to the resolution field, and you will get an encapsulated PostScript file. This file works correctly whether you use it as an encapsulated PostScript file, or you just dump it
directly to a PostScript printer.
A bitmap is a digital representation of an image. No matter how large a resolution you choose, the circles will appear jagged when viewed by a human on the computer screen. Antialiasing is a trick
to fool your brain into not seeing the jaggies. It also considerably increases the size of the bitmap file.
If you are going to render the bitmap using a high resolution digital film recorder, or if you are going to print it directly on a transparency with a laser printer, choose none. If you are going to
be printing it out on paper and then taking a picture using a high contrast film, choose some or full.
Image Type
You can get a regular positive image. Or you can get a negative with either a thin or a thick border. The border is there to give you a frame of reference. In other words, it allows you to create a
black box around the rings of the zone plate in the final positive. It is this final black box that should be 1/4 inch, 1/2 inch, or 1 inch, as mentioned above. Measuring it (after developing your
film) allows you to verify that you made the zone plate at the correct size for the focal length and wavelength you designed it for.
Megapinhole Size and Density
If you opt for a megapinhole, set its density to anything between 5 and 41 (or 6 - 41 when designing in the Slovak style). The higher the number, the higher the density (the more pinholes within the
megapinhole). For example, 123 clear zones with the maximum density (41) will give you a plate with 123,733 pinholes! But you will definitely need to request the encapsulated PostScript file to get
sufficient resolution with that.
Additionally, you can choose the size of the pinholes. The size does not affect the center pinhole, only those spread over the Fresnel zones. That implies that if you set the density to 0, the size
will be ignored (since there are no pinholes in the zones, only rings). You may want to start experimenting with the largest size with only a few zones at first. Later, you may want to play with it:
Increase the number of zones and decrease the size and density of the pinholes.
Megapinhole Alignment
You can choose whether you want the center of each of the small pinholes located at the edge of a Fresnel zone or in its center. This makes a subtle difference in the optical properties of the
megapinhole plate. Play with it, try both, and compare the results. Then decide which one you want to use for each particular photographic project of yours.
Slovak Style
A zone plate is created by covering every other Fresnel zone. Traditionally, at least in the field of photography, zone plates have been made by covering every odd zone. If you choose the Slovak
style, every even zone will be covered.
Combining the Slovak style with the megapinhole or with a few skipped zones—or both—can produce some very interesting results! Try it. Play with it. Be creative. And, by all means, be original!
Photography is less than two hundred years old. Believe it or not, there still are revolutionary discoveries to be made in the field of photography.
Copyright © 2003 G. Adam Stanislav
All rights reserved
Northampton, PA Math Tutor
Find a Northampton, PA Math Tutor
...I have received my elementary teaching certification both in NJ and PA. As a teacher of students with special needs, much of the content I teach is at the elementary level. I have my special
education teaching certification both from NJ and PA.
24 Subjects: including algebra 1, prealgebra, ACT Math, geometry
...I am willing to tutor any high school or college student that struggles or would like additional help in math or science. I have spent many hours as a volunteer tutor in high school and was a
part-time calculus tutor in college. I am patient and understand that all students learn differently, a...
9 Subjects: including calculus, geometry, precalculus, differential equations
...I've had a lot of experience observing and tutoring students in and out of the classroom and have studied teaching methods for the past four years. My GPA is a 3.81, so all the math classes I've taken have gone very well, and I'd love to offer help to anyone who needs it! I'm experienced in all m...
14 Subjects: including geometry, trigonometry, statistics, discrete math
...After college I began tutoring professionally, working with a company that tutored students in math and reading. With this company I have worked with all ages. What I love about tutoring is
the one-on-one interaction.
44 Subjects: including algebra 2, Spanish, calculus, prealgebra
Hello prospective tutees! My name is Kelli and I will be a senior at Muhlenberg College this coming year. I am living in the Allentown area this summer and would love to help you (or your son or
daughter) prepare for the standardized testing that they will be taking in the coming school year.
13 Subjects: including ACT Math, reading, literature, grammar
Multiple Non-Interactive Zero Knowledge Proofs Under General Assumptions
Results 1 - 10 of 137
, 1995
Cited by 1333 (62 self)
We argue that the random oracle model -- where all parties have access to a public random oracle -- provides a bridge between cryptographic theory and cryptographic practice. In the paradigm we
suggest, a practical protocol P is produced by first devising and proving correct a protocol P R for the random oracle model, and then replacing oracle accesses by the computation of an
"appropriately chosen" function h. This paradigm yields protocols much more efficient than standard ones while retaining many of the advantages of provable security. We illustrate these gains for
problems including encryption, signatures, and zero-knowledge proofs.
- SIAM Journal on Computing , 2000
Cited by 447 (22 self)
The notion of non-malleable cryptography, an extension of semantically secure cryptography, is defined. Informally, in the context of encryption the additional requirement is that given the
ciphertext it is impossible to generate a different ciphertext so that the respective plaintexts are related. The same concept makes sense in the contexts of string commitment and zero-knowledge
proofs of possession of knowledge. Non-malleable schemes for each of these three problems are presented. The schemes do not assume a trusted center; a user need not know anything about the number or
identity of other system users. Our cryptosystem is the first proven to be secure against a strong type of chosen ciphertext attack proposed by Rackoff and Simon, in which the attacker knows the
ciphertext she wishes to break and can query the decryption oracle on any ciphertext other than the target.
- In Proc. of the 22nd STOC , 1995
Cited by 249 (15 self)
We show how to construct a public-key cryptosystem (as originally defined by Diffie and Hellman) secure against chosen ciphertext attacks, given a public-key cryptosystem secure against passive eavesdropping and a non-interactive zero-knowledge proof system in the shared string model. No such secure cryptosystems were known before.
- In 42nd FOCS , 2001
Cited by 214 (13 self)
The simulation paradigm is central to cryptography. A simulator is an algorithm that tries to simulate the interaction of the adversary with an honest party, without knowing the private input of this
honest party. Almost all known simulators use the adversary’s algorithm as a black-box. We present the first constructions of non-black-box simulators. Using these new non-black-box techniques we obtain several results that were previously proven to be impossible to obtain using black-box simulators. Specifically, assuming the existence of collision-resistant hash functions, we construct a new zero-knowledge argument system for NP that satisfies the following properties: 1. This system has a constant number of rounds with negligible soundness error. 2. It remains zero knowledge even when composed concurrently n times, where n is the security parameter. Simultaneously obtaining 1 and 2 has been recently proven to be impossible to achieve using black-box simulators. 3. It is an Arthur-Merlin (public coins) protocol. Simultaneously obtaining 1 and 3 was known to be impossible to achieve with a black-box simulator. 4. It has a simulator that runs in strict polynomial time, rather than in expected polynomial time. All previously known constant-round, negligible-error zero-knowledge arguments utilized expected polynomial-time simulators.
- in Cryptology — Eurocrypt 2004, LNCS , 2004
Cited by 199 (11 self)
We propose simple and efficient CCA-secure public-key encryption schemes (i.e., schemes secure against adaptive chosen-ciphertext attacks) based on any identity-based encryption (IBE) scheme. Our
constructions have ramifications of both theoretical and practical interest. First, our schemes give a new paradigm for achieving CCA-security; this paradigm avoids “proofs of well-formedness ” that
have been shown to underlie previous constructions. Second, instantiating our construction using known IBE constructions we obtain CCA-secure encryption schemes whose performance is competitive with
the most efficient CCA-secure schemes to date. Our techniques extend naturally to give an efficient method for securing also IBE schemes (even hierarchical ones) against adaptive chosen-ciphertext
attacks. Coupled with previous work, this gives the first efficient constructions of CCA-secure IBE schemes. 1
- SIAM J. COMPUTING , 1991
Cited by 188 (19 self)
This paper investigates the possibility of disposing of interaction between prover and verifier in a zero-knowledge proof if they share beforehand a short random string. Without any assumption, it is
proven that noninteractive zero-knowledge proofs exist for some number-theoretic languages for which no efficient algorithm is known. If deciding quadratic residuosity (modulo composite integers
whose factorization is not known) is computationally hard, it is shown that the NP-complete language of satisfiability also possesses noninteractive zero-knowledge proofs.
, 2003
Cited by 129 (6 self)
This paper provides theoretical foundations for the group signature primitive. We introduce strong, formal definitions for the core requirements of anonymity and traceability. We then show that these
imply the large set of sometimes ambiguous existing informal requirements in the literature, thereby unifying and simplifying the requirements for this primitive. Finally we prove the existence of a
construct meeting our definitions based only on the assumption that trapdoor permutations exist.
, 2002
Cited by 125 (32 self)
We show how to securely realize any two-party and multi-party functionality in a universally composable way, regardless of the number of corrupted participants. That is, we consider an asynchronous
multi-party network with open communication and an adversary that can adaptively corrupt as many parties as it wishes. In this setting, our protocols allow any subset of the parties (with pairs of
parties being a special case) to securely realize any desired functionality of their local inputs, and be guaranteed that security is preserved regardless of the activity in the rest of the network.
This implies that security is preserved under concurrent composition of an unbounded number of protocol executions, it implies non-malleability with respect to arbitrary protocols, and more. Our
constructions are in the common reference string model and rely on standard intractability assumptions.
- In EuroCrypt99, Springer LNCS 1592 , 1999
Cited by 110 (3 self)
Abstract. We examine the concurrent composition of zero-knowledge proofs. By concurrent composition, we indicate a single prover that is involved in multiple, simultaneous zero-knowledge proofs with
one or multiple verifiers. Under this type of composition it is believed that standard zero-knowledge protocols are no longer zero-knowledge. We show that, modulo certain complexity assumptions, any
statement in NP has k ɛ-round proofs and arguments in which one can efficiently simulate any k O(1) concurrent executions of the protocol.
, 2000
Cited by 106 (2 self)
We show that if any one-way function exists, then 3-round concurrent zero-knowledge arguments for all NP problems can be built in a model where a short auxiliary string with a prescribed distribution
is available to the players. We also show that a wide range of known efficient proofs of knowledge using specialized assumptions can be modified to work in this model with no essential loss of
efficiency. We argue that the assumptions of the model will be satisfied in many practical scenarios where public key cryptography is used, in particular our construction works given any secure
public key infrastructure. Finally, we point out that in a model with preprocessing (and no auxiliary string) proposed earlier, concurrent zero-knowledge for NP can be based on any one-way function.
So why isn't everything divided by zero undefined?
So I'm doing my math homework, and they gave me -15/0 and I thought it was 0, so I go check my answer and it's undefined. But then they give me 0/12 and the answer is 0. Can someone explain why the first wouldn't be zero?
Anything DIVIDED by zero is undefined.
Zero divided by ANYTHING is zero.
In one, the zero is the denominator. The zero can never be in the denominator, hence it is always undefined when a zero is in the denominator.
In the other, the zero is the numerator. This is always zero.
Think about the way you say it:
"Any number DIVIDED BY ZERO is undefined."
"Zero divided by ANY NUMBER is zero".
"Seven divided by zero is undefined."
"Twenty divided by zero is undefined."
--Whenever any number is divided by zero, the answer is always undefined.
On the other hand:
"Zero divided by seven is zero."
"Zero divided by twenty is zero."
Notice where the "zero" is in each of these statements.
Nov 1 at 16:58
-15 divided by 0 is undefined because NOTHING times 0 can equal -15
0 divided by 12 is 0 because 12 times 0 is 0
Nov 1 at 20:44
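The reasoning in the answer above (12 times 0 is 0, but nothing times 0 gives -15) matches how Python treats the two cases; a small illustrative check:

```python
print(0 / 12)                    # 0.0 -- zero divided by anything is zero

# No number times 0 gives -15, so the division has no answer;
# Python refuses it with an error rather than inventing a value.
try:
    print(-15 / 0)
except ZeroDivisionError as err:
    print("undefined:", err)     # undefined: division by zero
```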
The numerator is on top and the denominator on the bottom.
Just try making the denominator smaller, i.e. see what happens as it tends to 0:
-15/1 = -15
-15/0.5 = -30
-15/0.2 = -75
So as the denominator gets smaller, the expression gets bigger (in magnitude)
Nov 2 at 0:53
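The shrinking-denominator trend described above can be checked numerically; this sketch uses powers of two as denominators so the floating-point results stay exact:

```python
# As the denominator shrinks toward 0, the magnitude of -15/d blows up:
# it doubles each time the denominator halves.
for d in (1, 0.5, 0.25, 0.125, 2 ** -20):
    print(d, -15 / d)
```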
For the first question, you could say that zero goes into -15 infinitely many times. That is, you can keep adding 0 + 0 + 0 + 0 + ... and you'll never get to -15. So -15/0 is infinite, which is not really a number; it's undefined.
The second question is the opposite case. 12 doesn't go into zero at all, or in other words it goes into zero, zero times. So the answer to 0/12 is just zero.
Nov 2 at 5:25
If you divide by 0, the answer is undefined because it is infinite.
If you multiply by 0, the answer is 0, because you are saying zero lots of something.
But if you divide by 0 you are asking how many times zero goes into this number, and the answer is infinite.
Think about dividing by 2: you get half of the original number.
Now think about dividing by 1: you get the number you began with - so your answer is getting bigger.
So if you divide by the smallest number possible, you are going to get the biggest number possible, so big that you cannot define it. Hence, it is undefinable or undefined! Infinity is undefined.
You can't say exactly what it is; it is without bounds.
Nov 2 at 10:21
Regge calculus: Questions of consistency resolved?
Regge calculus is an approximation scheme for General Relativity, which was introduced in the early sixties and has been adopted both in numerical relativity and numerical quantum relativity. In contrast to its widespread use in computational science, there does not seem to exist much theory on whether the Regge calculus is actually consistent - i.e. whether there is some degree of exactness (like mesh width) we can adapt arbitrarily to obtain a solution with arbitrarily small error (say in some Sobolev norm).
In fact there has been debate about this:
The question of consistency is a purely mathematical one, and therefore I do not expect it to be "debatable". I will have to work with this theory, and hence I wonder to what extent there is a theoretical basis for telling apart "It works" and "It does not work".
Thank you very much!
mp.mathematical-physics general-relativity na.numerical-analysis
1 Have you read the papers you linked to? The first paper makes a pretty strong case for inconsistency. But the second paper says RC is nevertheless convergent. It is possible for a numerical method
to be inconsistent but convergent. – timur Jun 1 '11 at 3:01
3 I first read "Reggae calculus". I'm somewhat disappointed. – Gunnar Magnusson Dec 15 '11 at 10:00
@Gunnar: What is Reggae calculus? – timur Aug 24 '12 at 0:12
8 @timur: what Bob Marley gets from not brushing his teeth. – Ryan Thorngren Aug 24 '12 at 0:53
@Ryan: That's great! Thanks. – timur Aug 24 '12 at 1:14
1 Answer
The consistency is proved by Cheeger, Müller and Schrader in 1984, "On the Curvature of Piecewise Flat Spaces". Roughly speaking, given a smooth Riemannian manifold with a smooth metric, there exists a sequence of triangulations on which Regge's definition converges to the smooth curvature as a measure.
At the linearized level, there is also a recent paper on the consistency: Christiansen 2011, "On The Linearization of Regge Calculus". One of the theorems in the paper is that we have consistency between the linearized Regge and linearized Einstein equations as well.
That said, when you talk about the convergence of a numerical algorithm, it depends on a lot of other things as well, such as your formulation of the Einstein equations. You will also
need some form of stability to ensure convergence. Those questions remain to be solved (hopefully in my thesis :-).
|
{"url":"http://mathoverflow.net/questions/59186/regge-calculus-questions-of-consistency-resolved/105363","timestamp":"2014-04-19T12:24:45Z","content_type":null,"content_length":"57484","record_id":"<urn:uuid:8d85fab7-c137-4561-ac6c-13352a218ef5>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00611-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Mathematical English Usage
Mathematical English Usage - a Dictionary
[see also: name, notion, word, language]
So all the terms of (2) are accounted for, and the theorem is proved.
This term drops out when $f$ is differentiated.
Each of the terms that make up $G(t)$ is well defined.
term-by-term differentiation
This theorem accounts for the term `subharmonic'.
The term `upath' is a mnemonic for `unit speed path'.
The answer depends on how broadly or narrowly the term `matrix method' is defined.
The above definition was first given in [D]; care is required because this term has been used in a slightly different sense elsewhere.
The $z$-component can be expressed in terms of the gauge function.
Our aim here is to give some sort of functorial description of $K$ in terms of $G$.
This can be easily reformulated in purely geometric terms.
[see also: call, designate, name]
Garling has introduced a more general class of martingales, termed Hardy martingales.
|
{"url":"http://www.impan.pl/cgi-bin/dict?term","timestamp":"2014-04-21T12:33:33Z","content_type":null,"content_length":"5110","record_id":"<urn:uuid:4667ae0f-21b9-461a-b7dc-370b0901d3a4>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00087-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Give exact and approximate interest calculations
1. 157890
Give exact and approximate interest calculations
There are a number of different interest rate formulas find the "exact rate" and "approximate rate" for the following numbers
a i=4%, Inflation=2%
b. i=15%, inflation=11%
c. i=54%,Inflation rate=46%
The exact rate: The correct formula is 1 + r = (1 + i)/(1 + Pe), where r is the exact real interest rate, i is the observed current nominal interest rate and Pe is the expected rate of inflation.
The approximate rate is:
What is the exact rate of interest for the above three?
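As a quick numerical sketch of both formulas (variable names are illustrative; Pe denotes expected inflation):

```python
cases = [(0.04, 0.02), (0.15, 0.11), (0.54, 0.46)]  # (nominal rate i, inflation Pe)

for i, pe in cases:
    exact = (1 + i) / (1 + pe) - 1   # exact real rate: 1 + r = (1 + i)/(1 + Pe)
    approx = i - pe                  # common approximation: r ≈ i - Pe
    print(f"i={i:.0%}, inflation={pe:.0%}: exact={exact:.4%}, approx={approx:.0%}")
```

The gap between the exact and approximate rates grows as the rates themselves grow, which is the point of comparing parts a through c.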
|
{"url":"https://brainmass.com/business/interest-rates/157890","timestamp":"2014-04-21T00:21:45Z","content_type":null,"content_length":"27813","record_id":"<urn:uuid:4708d1d3-12b0-45cb-8bb6-65e7575a7c07>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00518-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Higher Dimensions Database
Prism (EntityClass, 5)
From Higher Dimensions Database
A prism is a shape which has been constructed by extrusion.
When forming a prism from a prism, the shape formed is symmetrical in the way that the two new dimensions cannot be told apart: the order does not matter. For this reason, the shape formed by forming
a prism from a prism is called a diprism, and the shape formed by forming a prism from a diprism is called a triprism, and so on. Thus, a k-prism is the shape formed by extruding a base shape into a
new dimension k times.
There is a simple relationship between the numbers of elements of a prism, and the numbers of elements of its base:
H[n](+A) = 2H[n](A) + H[n-1](A)
where H[n](X) is the number of n-dimensional elements of the shape X, A is the base shape and +A is the prism.
The first term in the RHS of this equation corresponds to the elements of the base that get duplicated and form the "ends" of the prism, and the second term corresponds to the elements that get
extruded (and hence are also prisms) and form the "sides" of the prism.
Notice that the only difference from the equation for pyramids is the coefficient of the first term.
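The relationship can be checked against familiar shapes (a small sketch; `prism_elements` is our own helper name, and list index n holds the count of n-dimensional elements):

```python
def prism_elements(base):
    # H_n(+A) = 2*H_n(A) + H_{n-1}(A), with counts of 0 outside the valid range
    dim = len(base)
    return [2 * (base[n] if n < dim else 0) + (base[n - 1] if n >= 1 else 0)
            for n in range(dim + 1)]

square = [4, 4, 1]                 # 4 vertices, 4 edges, 1 face
cube = prism_elements(square)      # [8, 12, 6, 1]
tesseract = prism_elements(cube)   # [16, 32, 24, 8, 1]
```

The first term doubles each element of the base (the two "ends"), and the second term counts the extruded elements (the "sides"), exactly as described above.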
|
{"url":"http://hddb.teamikaria.com/wiki/Prism","timestamp":"2014-04-21T14:42:17Z","content_type":null,"content_length":"14868","record_id":"<urn:uuid:5a7cc75c-2ee8-493b-a57f-6224c230e901>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00275-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Physics Problem. I must find time.
1. The problem statement, all variables and given/known data
You are on a train that is traveling 3.0 m/s (to the left direction) along a level straight track. very near and parallel to the track is a wall that slopes upward at a 12 degree angle with the
horizontal. As you face the window (0.90 m high, 2.0 m wide) in your compartment, the train is moving to the left. The top edge of the wall first appears at window corner A and eventually disappears
at window corner B. How much time passes between appearance and disappearance of the upper edge of the wall?
2. Relevant equations
Time= distance/average speed
3. The attempt at a solution
I multiplied .90 and 2.0 and I got .18m and then I divided it by 3.0 and I got .06 s. This is way off. What do I need to do.
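For comparison, here is one way to set up the geometry (a sketch, not a checked answer: it assumes the sloping top edge enters at the upper corner on one side of the window and leaves at the lower corner on the other, so the horizontal run is the width plus height/tan 12°):

```python
import math

v = 3.0            # train speed, m/s
w, h = 2.0, 0.90   # window width and height, m

d = w + h / math.tan(math.radians(12))  # horizontal distance swept by the edge
t = d / v
print(round(t, 2))  # about 2.08 s under these assumptions
```

Note that multiplying width by height gives an area, not a distance, which is why dividing it by a speed cannot yield a sensible time.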
|
{"url":"http://www.physicsforums.com/showthread.php?t=182358","timestamp":"2014-04-16T22:13:52Z","content_type":null,"content_length":"25594","record_id":"<urn:uuid:6c21888c-4e5d-4a7e-aaf7-be34e1a68854>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00191-ip-10-147-4-33.ec2.internal.warc.gz"}
|
math resources by individuals
math resources by individuals
The page level above is math resources.
Most of interesting people for nlab will have their own pages in $n$lab; many of them keep their own articles and books there, but some do more. Thus this will not be a general list of homepages of
mathematicians, but rather of those pages which have particularly important or large collections of articles, books and especially advanced lecture notes of major importance to $n$lab and in the
areas of prime importance to $n$lab (sheaves, stacks, (higher) categories, modern geometry and homotopy theory, algebraic topology, foundations, mathematical physics). Still let us keep the
alphabetic order (by family name)
Revised on September 2, 2013 20:54:46 by
andrew smith?
|
{"url":"http://ncatlab.org/nlab/show/math+resources+by+individuals","timestamp":"2014-04-21T07:28:31Z","content_type":null,"content_length":"23181","record_id":"<urn:uuid:f97dcaec-c720-4910-abb3-545a28197e21>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00552-ip-10-147-4-33.ec2.internal.warc.gz"}
|
On cyclotomic field extensions
July 20th 2011, 01:11 PM
On cyclotomic field extensions
This is part of a proof which characterises the Galois groups of cyclotomic field extensions, however this part shouldn't require any knowledge of such things. I'm stuck on another part of the
proof too (it's a few pages long) but I'll deal with this bit first
Let $\zeta$ be the standard nth primitive root of unity and let p be some prime not dividing n. Let P be a maximal ideal of $\mathbb{Z}[\zeta]$ containing p. We have that $\zeta^{r}-1\in{P}$ for
some r between 1 and n-1. The next line of the proof reads:
It is this line that I don't understand. I feel it may have something to do with the fact that the second bracketed term has r additions but I'm unsure and would really appreciate any help.
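For what it's worth, the identity presumably being used, $\zeta^{r}-1=(\zeta-1)(\zeta^{r-1}+\dots+\zeta+1)$, where the second factor has $r$ terms, is easy to sanity-check numerically (an illustration only, not part of the proof):

```python
# check b**r - 1 == (b - 1) * (b**(r-1) + ... + b + 1) for several bases and r
for b in (2, 3, 10):
    for r in range(1, 9):
        assert b**r - 1 == (b - 1) * sum(b**k for k in range(r))
print("identity holds for all tested cases")
```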
|
{"url":"http://mathhelpforum.com/advanced-algebra/184888-cyclotomic-field-extensions-print.html","timestamp":"2014-04-23T11:30:19Z","content_type":null,"content_length":"4732","record_id":"<urn:uuid:39fffb5b-b699-4028-8196-ba3afd7489b5>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00366-ip-10-147-4-33.ec2.internal.warc.gz"}
|
West Miami, FL ACT Tutor
Find a West Miami, FL ACT Tutor
...I've tutored mathematics for nearly a decade, and Algebra 2 is among my favorite subjects to teach. If you are looking for a top-notch, reliable tutor who is dedicated to helping your student
achieve success, look no further. I took Calculus AB in my junior year of high school, took Calculus BC in senior year, and scored a 5 on both AP exams.
59 Subjects: including ACT Math, chemistry, calculus, English
As the daughter of a college math professor, I have been helping students understand their math homework and ace their exams for as long as I can remember. My first tutoring job was as a private
tutor for a college football team. From there, I went on to work with private clients of all ages and subjects in Los Angeles and Beverly Hills.
16 Subjects: including ACT Math, writing, statistics, geometry
I began working as a tutor in High School as part of the Math Club, and then continued in college in a part time position, where I helped students in College Algebra, Statistics, Calculus and
Programming. After college I moved to Spain where I gave private test prep lessons to high school students ...
11 Subjects: including ACT Math, physics, calculus, geometry
...I like to base my curriculum off of students' as well as parents' needs, and am very flexible in helping your student identify their strengths and weaknesses not only in the subject but in
their study skills as well. I have worked with several different types of students ranging from those with ...
24 Subjects: including ACT Math, reading, chemistry, Spanish
...Sincerely, Rocio. I would like to be certified in ESL/ESOL because I have been teaching ESL for adults for two years at Stony Point High School. I have also taken the training with the Literacy
of Austin group. Teaching English to adults has been a rewarding and interesting experience and has allowed me to expand my teaching methods and helped the students to reach their potential in a second language.
Related West Miami, FL Tutors
West Miami, FL Accounting Tutors
West Miami, FL ACT Tutors
West Miami, FL Algebra Tutors
West Miami, FL Algebra 2 Tutors
West Miami, FL Calculus Tutors
West Miami, FL Geometry Tutors
West Miami, FL Math Tutors
West Miami, FL Prealgebra Tutors
West Miami, FL Precalculus Tutors
West Miami, FL SAT Tutors
West Miami, FL SAT Math Tutors
West Miami, FL Science Tutors
West Miami, FL Statistics Tutors
West Miami, FL Trigonometry Tutors
Nearby Cities With ACT Tutor
Coconut Grove, FL ACT Tutors
Coral Gables, FL ACT Tutors
Doral, FL ACT Tutors
Hialeah ACT Tutors
Hialeah Gardens, FL ACT Tutors
Hialeah Lakes, FL ACT Tutors
Maimi, OK ACT Tutors
Miami ACT Tutors
Miami Beach ACT Tutors
Miami Gardens, FL ACT Tutors
Miami Shores, FL ACT Tutors
North Miami Beach ACT Tutors
North Miami, FL ACT Tutors
South Miami, FL ACT Tutors
Sweetwater, FL ACT Tutors
|
{"url":"http://www.purplemath.com/West_Miami_FL_ACT_tutors.php","timestamp":"2014-04-21T07:54:24Z","content_type":null,"content_length":"24143","record_id":"<urn:uuid:1f86f76e-55a7-4dc9-b9da-c438b53c5a6e>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00199-ip-10-147-4-33.ec2.internal.warc.gz"}
|
5th Grade Math: Least Common Multiple Help
5th Grade Math: Least Common Multiple
The least common multiple of a given set of whole numbers is the smallest whole number that is divisible by each number in the set.
In arithmetic and number theory, the least common multiple or lowest common multiple (lcm) or smallest common multiple of two integers a and b is the smallest positive integer that is a multiple of
both a and b. Since it is a multiple, it can be divided by a and b without a remainder. If there is no such positive integer, e.g., if a = 0 or b = 0, then lcm(a, b) is defined to be zero.
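Computationally, the lcm of two numbers follows from the gcd, and the definition extends to a whole list by folding (a quick sketch):

```python
from math import gcd
from functools import reduce

def lcm(a, b):
    return a * b // gcd(a, b)

# e.g. clocks ringing every 90, 30 and 60 minutes next ring together after:
print(reduce(lcm, [90, 30, 60]))  # 180 minutes
```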
Word Problems
Many 5th grade math students find the least common multiple difficult. They feel overwhelmed by least common multiple homework, tests and projects. And it is not always easy to find a least common
multiple tutor who is both good and affordable. Now finding
least common multiple help
is easy. For your least common multiple homework, least common multiple tests, least common multiple projects, and least common multiple tutoring needs, TuLyn is a one-stop solution. You can master
hundreds of math topics by using TuLyn.
At TuLyn, we have over 2000 math video tutorial clips, including least common multiple videos, least common multiple practice word problems, least common multiple questions and answers, and least common multiple worksheets.
least common multiple videos
replace text-based tutorials in 5th grade math books and give you better, step-by-step explanations of least common multiple. Watch each video repeatedly until you understand how to approach least
common multiple problems and how to solve them.
• Tons of video tutorials on least common multiple make it easy for you to better understand the concept.
• Tons of word problems on least common multiple give you all the practice you need.
• Tons of printable worksheets on least common multiple let you practice what you have learned in your 5th grade math class by watching the video tutorials.
How to do better on least common multiple: TuLyn makes least common multiple easy for 5th grade math students.
5th Grade: Least Common Multiple Videos
least common multiple video clips for 5th grade math students.
5th Grade: Least Common Multiple Word Problems
Three clocks ring once at the same time
Three clocks ring once at the same time. After that, the first clock rings after every 90 minutes, the second after every 30 minutes, and third after every 60 ...
least common multiple homework help word problems for 5th grade math students.
Fifth Grade: Least Common Multiple Practice Questions
least common multiple homework help questions for 5th grade math students.
How Others Use Our Site
I have to help my 5th grade boy with his homework. He uses Singapore math curriculum.
I teach a small group of 5th graders who struggle with math, especially what is taught at the 5th grade level.
It is for my 5th grader and 1st grader and for us (parents) to help with teaching/math homework.
Hopefully it will help me explain it to my 5th grader.
I am a 5th grade math teacher. It will help me alot.
Refreshing my memory of math to help my soon-to-be 5th grader.
It will help me to gain fundamentals lacking to complete math problems and help me to help my struggling 5th grader.
I've been moved from 5th grade to 7th grade math. I need reinforcement and additional support.
I saw a sheet that another teacher assigned to her class from your site and thought there would be other good activities to supplement my 5th grade Math program.
I think it will help with teaching my 5th grade students.
Will help my 5th grade child.
This will help enrich my math instruction for my 5th graders.
Practice worksheets for my 5th grade math students.
It will help me to teach Math to my 5th grade class.
It will reinforce the material I am teaching to my 5th grade students.
Helps me generate worksheets and practice sets for my son in 5th grade.
I think your site can explain 5th grade math better to me than my teacher can.
I teach 5th grade and short video clips on various math concepts may be beneficial to my students`s learning.
I am teaching special ed students algebra and geometry on grade level. I have no materials for bringing them up from 5th grade - 7th grade level to 9th grade -10th grade level.
I teach 5th grade math. This will help to make professional worksheets to practice the skills that I am teaching.
With my 5th grade class.
I think it will help me give my 5th grade students a visual, to help them with more difficult math skills.
I teach 5th grade advanced math.
It will help with my studies and also help me to help my granddaughter with her math studies; she is in the 5th grade.
Resource for teaching EIP math for 5th grade
I am a student teacher, covering a 5th grade class and have been teaching first grade for a number of years. It will help me refresh my memory and get the facts straight for the students.
It's for my son, who is in the 5th grade. He is having problems with improper & proper fractions (different denominators).
|
{"url":"http://www.tulyn.com/5th-grade-math/least-common-multiple","timestamp":"2014-04-19T22:31:50Z","content_type":null,"content_length":"19647","record_id":"<urn:uuid:7b1dee56-33ca-4751-8510-da312b316abf>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00013-ip-10-147-4-33.ec2.internal.warc.gz"}
|
A function has a local maximum at x=-2, 6, and a local minimum at x=1.
January 4th 2011, 08:23 PM #1
Jan 2011
A function has a local maximum at x=-2, 6, and a local minimum at x=1.
I can't solve this question, I need some help! Please sketch the graph if you can!
A function has a local maximum at x=-2, a local maximum at x=6, and a local minimum at x=1. What does this information tell you about the function? Sketch a possible function that has these properties.
Last edited by Chris L T521; January 5th 2011 at 07:18 AM. Reason: Restored Original Post.
What are the derivatives at those locations?
$f'(x)=a(x+2)b(x-6)c(x-1)=abcx^3-5abcx^2-8abcx+12abc \ \ a,b,c\in\mathbb{R}$
$\displaystyle f(x)=\int f'(x)dx$
Notice that the problem says "sketch a possible function". There are an infinite number of such functions, you should try graphing the simplest one. What does a graph look around a minimum or
maximum? Notice that you are not given the value of the function at x= 1, -2, or 6 so you can take them to be whatever you want. (The "local minimum" can be higher than the "local maxima"!)
Note that the derivatives at these points can also be infinite of undefined. If the derivative is infinite there is a cusp. If the derivative is undefined there is a sharp edge. (There are other
possibilities for undefined derivatives, but in the other cases there would be no min or max.)
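Putting the replies together, one concrete choice (our own pick of constants; the leading sign is negative so that the outer critical points are maxima) is f'(x) = -(x+2)(x-1)(x-6):

```python
def fprime(x):
    # sign pattern: + before -2, - between -2 and 1, + between 1 and 6, - after 6
    return -(x + 2) * (x - 1) * (x - 6)

for x, positive in [(-3, True), (0, False), (2, True), (7, False)]:
    assert (fprime(x) > 0) == positive
print("maxima at x = -2 and x = 6, minimum at x = 1")
```

Integrating this derivative (plus any constant) gives one of the infinitely many quartics with the required maxima and minimum.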
January 4th 2011, 08:29 PM #2
MHF Contributor
Mar 2010
January 4th 2011, 08:31 PM #3
Jan 2011
January 4th 2011, 08:32 PM #4
MHF Contributor
Mar 2010
January 4th 2011, 10:01 PM #5
MHF Contributor
Mar 2010
January 5th 2011, 02:27 AM #6
MHF Contributor
Apr 2005
January 5th 2011, 02:34 AM #7
Senior Member
Nov 2010
Staten Island, NY
|
{"url":"http://mathhelpforum.com/calculus/167486-function-has-local-maximum-x-2-6-local-minimum-x-1-a.html","timestamp":"2014-04-19T21:05:48Z","content_type":null,"content_length":"48561","record_id":"<urn:uuid:0bc37392-c840-41d3-9a1c-a360a4b66267>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00547-ip-10-147-4-33.ec2.internal.warc.gz"}
|
How to Calculate Percentage Increase
Edited by Carolyn Barratt, Amanda, Lojjik Braughler, Chris Hadley and 19 others
Knowing how to calculate percentage increase is useful in a variety of situations. For instance, you may need to calculate the percentage increase of certain expenses when tracking your home budget
or preparing your tax return. It is also necessary to calculate percentage increase first before you can go on to calculate more complex results, such as return on investment. If you want to know how to
calculate percentage increase in no time at all, just follow these steps.
1.
Write down the current value and the starting value. For instance, let's say you received a renewal notice that your auto insurance premium for the upcoming year, Year 2, is going to be $450, up
from $400 in Year 1. In this example, the current value is $450 and the starting value is $400. Write them both down next to each other.
2.
Subtract the starting value from the current value. The starting value in Year 1 is $400, so subtracting the starting value ($400) from the current value ($450) yields a result of $50. $450 -
$400 = $50.
3.
Divide the result by the starting value. The result, or the difference between the starting value and the current value, is $50. Take this number and divide it by $400, the starting value. $50/
$400 = .125. Now, all you have to do is convert this value to a percentage.
4.
Multiply the result by 100. This will allow you to convert the value into a percentage. Therefore, .125 x 100 = 12.5, so the percentage change from the starting value, $400, and the current
value, $450, was 12.5%.
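The four steps collapse into a one-line formula (a minimal sketch; the function name is ours):

```python
def percent_increase(start, current):
    # ((current - start) / start) * 100, exactly as in the steps above
    return (current - start) / start * 100

print(percent_increase(400, 450))  # 12.5
```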
• Ask your teacher for help if you're not able to do this on your own.
Article Info
Thanks to all authors for creating a page that has been read 884,271 times.
|
{"url":"http://www.wikihow.com/Calculate-Percentage-Increase","timestamp":"2014-04-16T16:24:26Z","content_type":null,"content_length":"65153","record_id":"<urn:uuid:96751a37-0fe1-457c-ade2-6bf70531eb4a>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00522-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Oblivious Transfer
1-out-of-$n$ Oblivious Transfer
Suppose $A$ has a list $\left({x}_{1},...,{x}_{n}\right)$, $B$ has $i∈\left\{1,...,n\right\}$.
We want a SFE protocol where ${F}_{A}=0,{F}_{B}={x}_{i}$, that is,
1. $B$ learns ${x}_{i}$ and nothing else
2. $A$ learns nothing about $i$
Theorem: [Kilian'87] 1-out-of-2 OT is universal for 2-party SFE
In other words, given a 1-out-of-2 OT protocol, one can do any 2-party SFE. (Yao's construction requires a block cipher in addition to a 1-out-of-2 OT protocol.)
Note that oblivious transfer implies 2-party SFE, which implies key exchange, and hence 1-out-of-2 OT cannot be built from a blackbox one-way function. Instead, we build one using the DDH assumption
Bellare-Micali Construction
Let $G$ be a group of prime order $p$, and let $g∈G$ be a generator, and $H:G→\left\{0,1{\right\}}^{n}$ be a hash function.
Suppose $A$ has ${x}_{0},{x}_{1}∈\left\{0,1{\right\}}^{n}$, and $B$ has $b∈\left\{0,1\right\}$.
1. $A$ publishes a random $c←G$; $B$ picks $k←{\mathrm{ℤ}}_{p}$, sets ${\mathrm{PK}}_{b}={g}^{k},{\mathrm{PK}}_{1-b}=c/{g}^{k}$ and sends ${\mathrm{PK}}_{0},{\mathrm{PK}}_{1}$ to $A$. $A$ checks ${\mathrm{PK}}_{0}{\mathrm{PK}}_{1}=c$.
2. $A$ encrypts ${x}_{0}$ with El Gamal using ${\mathrm{PK}}_{0}$, i.e. sets ${C}_{0}=\left[{g}^{{r}_{0}},H\left({\mathrm{PK}}_{0}^{{r}_{0}}\right)\oplus {x}_{0}\right]$, encrypts ${x}_{1}$ using $
{\mathrm{PK}}_{1}$, and sends ${C}_{0},{C}_{1}$ to $B$.
3. $B$ decrypts ${C}_{b}$ using $k$, i.e. if ${C}_{b}=\left[{V}_{1},{V}_{2}\right]$, $B$ computes ${x}_{b}=H\left({V}_{1}^{k}\right)\oplus {V}_{2}$.
Security: $A$ cannot learn anything about $b$ (information theoretic result).
If $B$ is honest-but-curious, then assuming DDH, $B$ can only decrypt one of ${C}_{0}$ or ${C}_{1}$.
If $B$ is malicious, then assuming DDH is not enough: conceivably $B$ could generate ${\mathrm{PK}}_{0},{\mathrm{PK}}_{1}$ in such a way that $B$ knows partial information about their corresponding
private keys, and perhaps $B$ can then learn partial information about both ${x}_{0}$ and ${x}_{1}$. However, if $H$ is a random oracle, then the protocol is secure under CDH.
Naor-Pinkas Construction
[Naor, Pinkas '00] Let $G$ be a group of prime order $q$ and let $g∈G$ be a generator. Suppose $A$ has ${m}_{0},{m}_{1}$ and $B$ has $V∈\left\{0,1\right\}$.
1. $B$ sends the tuple $\left(g,x={g}^{a},y={g}^{b},{z}_{0}={g}^{{c}_{0}},{z}_{1}={g}^{{c}_{1}}\right)$ to $A$, where $a,b←{\mathrm{ℤ}}_{q}$, ${c}_{v}=\mathrm{ab}$ and ${c}_{1-v}←{\mathrm{ℤ}}_{q}$.
2. $A$ verifies that ${z}_{0}≠{z}_{1}$ and applies a partial DDH random self-reduction: $\left(g,x,y,{z}_{0}\right)$ becomes ${T}_{0}=\left(g,x,{y}_{0},z{\prime }_{0}\right)$, and $\left(g,x,y,{z}_{1}\right)$ becomes ${T}_{1}=\left(g,x,{y}_{1},z{\prime }_{1}\right)$.
3. $A$ encrypts ${m}_{0}$ using ${T}_{0}$ and ${m}_{1}$ using ${T}_{1}$, that is, $A$ sends to $B$ the ciphertexts $\left({\mathrm{CT}}_{0}=\left({y}_{0},{m}_{0}z{\prime }_{0}\right),{\mathrm{CT}}_{1}=\left({y}_{1},{m}_{1}z{\prime }_{1}\right)\right)$.
4. $B$ decrypts ${\mathrm{CT}}_{v}$.
This protocol is information theoretically secure against $B$, and DDH secure against $A$. The random self-reduction destroys any partial information in the message $B$ sends to $A$. Note that this
construction also generalizes to a 1-out-of-$n$ protocol.
DDH Random Self-Reduction
Suppose we have a tuple $g,{g}^{x},{g}^{y},{g}^{z}$. Then to perform a random self-reduction, pick random $a,b←{\mathrm{ℤ}}_{p}^{*}$, and output $g,{g}^{\left(x+b\right)a},{g}^{y},{g}^{\left(z+\mathrm{by}\right)a}$.
Note that this transformation takes DH-tuples to DH-tuples, and non-DH-tuples to non-DH-tuples. Furthermore, the new exponents are independent of the originals. This is easy to see if we start with a
DH tuple. On the other hand, if the tuple is not DH, then given any $x\prime ,z\prime$, there exists a unique $a,b$ such that $\left(x+b\right)a=x\prime ,\left(z+\mathrm{by}\right)a=z\prime$ (we can
solve to get $a=\left(z\prime -x\prime y\right)/\left(z-\mathrm{xy}\right)$ and $b$ can be easily determined from $a$). As expected, these solutions are not well defined if $z=\mathrm{xy}$, i.e. the
original tuple is DH.
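The preservation of DH-ness can be checked in a toy group (a sketch with demo-sized, insecure parameters; any real use requires cryptographic sizes):

```python
import random

# demo parameters: 2 generates the order-11 subgroup of Z_23*
p, q, g = 23, 11, 2

def self_reduce(X, Y, Z):
    """Map (g, X, Y, Z) to (g, X', Y, Z'), preserving DH-ness."""
    a = random.randrange(1, q)
    b = random.randrange(1, q)
    Xp = pow(X * pow(g, b, p) % p, a, p)   # g^{(x+b)a}
    Zp = pow(Z * pow(Y, b, p) % p, a, p)   # g^{(z+by)a}
    return Xp, Y, Zp

x = random.randrange(1, q)
y = random.randrange(1, q)
X, Y, Z = pow(g, x, p), pow(g, y, p), pow(g, x * y % q, p)  # a DH tuple
Xp, Yp, Zp = self_reduce(X, Y, Z)

# brute-force discrete logs in the tiny group to confirm the output is again DH
dlog = {pow(g, e, p): e for e in range(q)}
assert dlog[Zp] == dlog[Xp] * dlog[Yp] % q
```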
1-out-of-n From 1-out-of-2
We show how to construct a 1-out-of-$n$ OT protocol from any 1-out-of-2 OT protocol.
Suppose $A$ has ${m}_{0},...,{m}_{N}∈\left\{0,1{\right\}}^{n}$ and $B$ has $t∈\left\{0,...,N\right\}$. Assume $N={2}^{l}-1$ for some $l$.
1. $A$ prepares $2l$ keys $\left({K}_{1}^{0},{K}_{1}^{1}\right),...,\left({K}_{l}^{0},{K}_{l}^{1}\right)$.
2. Let ${F}_{k}:\left\{0,1{\right\}}^{l}→\left\{0,1{\right\}}^{n}$ be a PRF. $A$ sends to $B$ the tuple $\left({C}_{0},...,{C}_{N}\right)$: view the message index as a bit string $I={I}_
{1}...{I}_{l}∈\left\{0,1{\right\}}^{l}$, and encrypt using
${C}_{I}={m}_{I}\oplus {\oplus }_{i=1}^{l}{F}_{{K}_{i}^{{I}_{i}}}\left(I\right)$
Note $A$ sends $O\left(N\right)$ bits to $B$.
3. Let $t={t}_{1}...{t}_{l}∈\left\{0,1{\right\}}^{l}$. Then $l$ 1-out-of-2 OT's are performed where during the $j$th OT, $A$ has $\left({K}_{j}^{0},{K}_{j}^{1}\right)$ and $B$ has ${t}_{j}$.
4. $B$ now has ${K}_{1}^{{t}_{1}},...,{K}_{l}^{{t}_{l}}$ and can decrypt ${C}_{t}$ to get ${m}_{t}$.
Thus with $\mathrm{log}N$ 1-out-of-2 OT's, a single 1-out-of-$N$ OT can be constructed, that has $O\left(N\right)$ communication complexity.
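A sketch of steps 1, 2 and 4 with HMAC-SHA256 standing in for the PRF $F$ (an assumed instantiation; the $l$ 1-out-of-2 OTs of step 3 are elided, so for simplicity the receiver reads the needed keys directly from A's table):

```python
import hmac, hashlib, secrets

l = 3                 # index length; 2**l messages
msg_len = 16

def F(key, index):
    # PRF instantiated (as an assumption) with truncated HMAC-SHA256
    return hmac.new(key, bytes([index]), hashlib.sha256).digest()[:msg_len]

def xor(a, b):
    return bytes(u ^ v for u, v in zip(a, b))

def bits(I):
    return [(I >> (l - 1 - j)) & 1 for j in range(l)]

msgs = [bytes([i]) * msg_len for i in range(2 ** l)]
keys = [(secrets.token_bytes(16), secrets.token_bytes(16)) for _ in range(l)]

def pad(I, chosen):
    # XOR of F under one key per bit position, as in the C_I formula above
    out = bytes(msg_len)
    for j, bit in enumerate(chosen):
        out = xor(out, F(keys[j][bit], I))
    return out

cts = [xor(msgs[I], pad(I, bits(I))) for I in range(2 ** l)]

# after the l OTs, B holds exactly keys[j][t_j] and can open only C_t
t = 5
assert xor(cts[t], pad(t, bits(t))) == msgs[t]
```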
|
{"url":"http://crypto.stanford.edu/pbc/notes/crypto/ot.xhtml","timestamp":"2014-04-20T08:15:08Z","content_type":null,"content_length":"26245","record_id":"<urn:uuid:0bbeb168-18a9-432e-b7b2-99c8b2bd6936>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00306-ip-10-147-4-33.ec2.internal.warc.gz"}
|
How do you Play Horn and Whirl Bets?
Horn Betting in Craps
A horn bet is single roll bet with the combination of numbers 12, 11, 3 and 2. The horn bet actually has four individual bets on numbers 12,11,3 and 2. So, your bet is divided into four parts. Just
like the three way craps bet, if you win in the Horn bet, you will win on the shown number and lose the remaining three bets.
If the shooter rolls a 12, for example, the payoff ratio will be 30:1 and the bets on 11, 3, and 2 will be lost. The chip boxes for each of these numbers are clearly marked individually on the layout.
for Horn Bets.
Whenever you make a Horn bet in craps, keep in mind that after the division into four parts the chips have to come out even, i.e. the bet must be divisible into four equal parts. Many players are
not aware that if there are one, two or three extra chips, they will be kept by the casino itself.
In case you win, if the payoff includes some fraction of a dollar, then as the casino cannot pay you the fraction, it will keep it. Most players, not wanting to do the mathematics, will
probably end up giving this treat to the casino.
They would prefer tossing for one $5 chip and then say "horn High Twelve". Now, after this the dealer will make four non fraction bets and put the leftover on the called number. For a Horn High 12, a
$5 chip will be changed by the dealer into five 1$ chips.
Then the dealer will put one dollar each on the numbers 12, 11, 3 and 2, and the remaining $1 also goes on number 12. That makes $2 on the number 12 bet, so the number 12 will carry a
higher bet than the other three numbers.
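Assuming the commonly quoted 30:1 and 15:1 "to one" payouts (schedules vary by casino), the average cost of a plain $4 horn bet can be worked out directly:

```python
from fractions import Fraction

ways = {t: min(t - 1, 13 - t) for t in range(2, 13)}   # dice combinations per total
pays = {2: 30, 3: 15, 11: 15, 12: 30}                  # assumed "to one" payouts

ev = Fraction(0)
for total, w in ways.items():
    if total in pays:
        ev += Fraction(w, 36) * (pays[total] - 3)  # one $1 part wins, three lose
    else:
        ev += Fraction(w, 36) * (-4)               # all four $1 parts lose

print(ev)  # -1/2, i.e. the casino keeps 12.5% of each $4 horn bet on average
```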
Whirl Bets in Craps
A whirl bet is a single-roll bet that covers the five numbers 12, 11, 3, 2 and 7, so the bet is divided into five parts. Therefore, your calculation should be such that each number gets exactly 20 percent of the bet, so the chips can be dealt equally among the five numbers.
A player wins a whirl bet if he wins any of the five numbers i.e. 12, 11, 3, 2 or a 7 and will lose if any of the numbers don't show up.
So, if it shows a 3 or an 11, then the payouts will be in the ratio of 15:1 and immediately the other numbers lose. If it's 2 or a 12, then the ratio of payout will be 30:1 and the others will lose.
If the number seven shows up, then the payoff ratio will be 4:1 and you will lose the number 3, 2, 12, and 11. Basically, if you roll a seven on a whirl bet, you get your money back.
It all depends on the layout of the table that the Whirl Bet should have a separate box or not at the center of a casino table. If the layout does not approve of the box, then the dealer will
typically divide the chips into five parts and will place them accordingly in the numbers.
Again, if you win, the result of the payoff has fractions of one dollar then the casino will not pay you the fraction and will keep for them. You have to be smart and play smart too and learn the
proper technique of playing the craps in the right manner. Now that you know the rules of the game, play the craps with a correct strategy.
|
{"url":"http://www.howtoplayonlinecraps.com/horn-whirl-bets.php","timestamp":"2014-04-20T15:50:54Z","content_type":null,"content_length":"7823","record_id":"<urn:uuid:97eb518a-c8e2-44da-9f3d-c5be2e0e0790>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00408-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Browse by Type
Number of items: 5.
Gillow, K. A. (1998) Codimension-two free boundary problems. PhD thesis, University of Oxford.
Wilson, J. (1998) Curves of genus 2 with real multiplication by a square root of 5. PhD thesis, University of Oxford.
Ng, F. S. L. (1998) Mathematical modelling of subglacial drainage and erosion. PhD thesis, University of Oxford.
Belton, A. C. R. (1998) A matrix formulation of quantum stochastic calculus. PhD thesis, University of Oxford.
Oliver, J. M. (1998) The thinning of the liquid layer over a probe in two-phase flow. Masters thesis, University of Oxford.
|
{"url":"http://eprints.maths.ox.ac.uk/view/type/thesis/1998.html","timestamp":"2014-04-18T23:16:44Z","content_type":null,"content_length":"7792","record_id":"<urn:uuid:4f529f7f-4fe3-4869-9b36-8e76a32f1a5e>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00205-ip-10-147-4-33.ec2.internal.warc.gz"}
|
I was all proud of myself for getting the square root to work, except that it doesn't find the square root for 1, so I must've done it wrong. Here's my assignment, just to make sure I interpreted it correctly:
"To approximate the square root of a positive number n using Newton's method, you need to make an initial guess at the root and then refine this guess until it is "close enough." Your first initial
approximation should be root = 1;. A sequence of approximations is generated by computing the average of root and n/root.
Use the constant:
private static final double EPSILON = .00001;
Your loop for findSquareRoot should behave like this:
make the initial guess for root
while ( EPSILON < absolute value of the difference between root squared and n )
calculate a new root
return root
Your class should have a constructor that takes the number you want to find the square root of. Implement the usual accessor method(s) and a findSquareRoot method that uses Newton's method described
above to find and return the square root of the number. Add a method setNumber that takes a new number that replaces the number in the instance field. Supply a toString method that returns a string
containing the number and the square root of the number. "
And here's what I have....
public class NewtonsSquareRoot {
    private static final double EPSILON = .00001;
    private int myNumber;
    private double root;
    private double guess;

    public NewtonsSquareRoot(int number) {
        myNumber = number;
    }

    public int getNumber() {
        return myNumber;
    }

    public double findSquareRoot() {
        guess = 1;
        root = Math.sqrt(myNumber);
        while (EPSILON < Math.abs(Math.pow(root, 2) - myNumber)) {
        }
        return root;
    }

    public void setNumber(int number) {
        myNumber = number;
    }

    public String toString() {
        String s = new String();
        s = "The square root of " + myNumber + " is " + root + ".";
        return s;
    }
}
Thanks in advance for helping
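For reference, the averaging loop the assignment describes can be sketched in a few lines of Python. This is an illustration of the algorithm itself, not the poster's Java class; `find_square_root` is an illustrative name:

```python
EPSILON = 0.00001

def find_square_root(n):
    """Newton's method for sqrt(n): refine root by averaging root and n/root."""
    root = 1.0  # the initial guess the assignment calls for
    while EPSILON < abs(root * root - n):
        root = (root + n / root) / 2.0
    return root
```

Note that for n = 1 the loop body never runs at all, since 1 squared is already within EPSILON of 1, so the initial guess is returned unchanged.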
|
{"url":"http://www.dreamincode.net/forums/topic/73357-finding-the-square-root-with-a-while-loop/","timestamp":"2014-04-18T18:29:24Z","content_type":null,"content_length":"114477","record_id":"<urn:uuid:43baf626-a678-411b-9317-a02830e2aa04>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00157-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Writing Linear Equations
Chapter 5: Writing Linear Equations
Created by: CK-12
You saw in the last chapter that linear graphs and equations are used to describe a variety of real-life situations. In mathematics, the goal is to find an equation that explains a situation as
presented in a problem. In this way, we can determine the rule that describes the relationship. Knowing the equation or rule is very important since it allows us to find the values for the variables.
There are different ways to find the best equation to represent a problem. The methods are based on the information you can gather from the problem.
This chapter focuses on several formulas used to help write equations of linear situations, such as slope-intercept form, standard form, and point-slope form. This chapter also teaches you how to fit
a line to data and how to use a fitted line to predict data.
Chapter Outline
Chapter Summary
|
{"url":"http://www.ck12.org/book/Basic-Algebra/r1/section/5.0/anchor-content","timestamp":"2014-04-20T16:30:09Z","content_type":null,"content_length":"94708","record_id":"<urn:uuid:e72d4538-e3fb-4e4f-8018-45f9b363698b>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00375-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Nonstandard proofs for simple theorems
In an upcoming assignment I have to prove that the sum of the first n squares is n(n+1)(2n+1)/6. The problem is definitely below the difficulty level of the course, so I'd like to have some fun and
present an uncommon proof of it. The two common ones I'm familiar with are by induction and summing (n+1)^3 - n^3. You're encouraged to share any other nonstandard proofs as well.
Re: Nonstandard proofs for simple theorems
How about geometrically?
Consider a 1x1x1 cube stacked on top of the rear-left quarter of a 2x2x1 cuboid, which is stacked on top of the rear-left corner of a 3x3x1 cuboid, etc, up to nxnx1. Stacked such that the heights of
each point, viewed from above, should be:
Code: Select all
321
221
111
The volume of this shape is the sum we're looking for.
Now, this shape resembles a square pyramid - so consider a pyramid, with an n*n base at the bottom of the shape, and apex at the top in the rear-left corner. This pyramid has volume 1/3 n^3. Also,
the pyramid is contained entirely within our shape, so to find the volume of the shape, we just need to see what pieces of our shape protrude from that pyramid. Those pieces are: on the front and
right sides of the shape, the "steps" up the shape all protrude out. There are n(n-1)/2 1x1x1 cubes on the front and n(n-1)/2 1x1x1 cubes on the right, each of which have half their volume protruding
from the pyramid (they're cut in half along the diagonal by the side of the pyramid). And then there are n 1x1x1 cubes up the corner of the stairs, which each have a small pyramid-shaped piece
removed, so they have 2/3 of their volume protruding from the pyramid.
Adding up all these volumes, we get that the total volume of the shape is n^3/3 + n(n-1)/2 + 2n/3 - expanding these and simplifying shows it's equal to n(n+1)(2n+1)/6.
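The expand-and-simplify step at the end is easy to verify mechanically. A quick check of the volume bookkeeping, in exact arithmetic (my own framing, not part of the geometric argument):

```python
from fractions import Fraction

def sum_of_squares(n):
    """1^2 + 2^2 + ... + n^2 by direct summation."""
    return sum(k * k for k in range(1, n + 1))

for n in range(1, 60):
    # pyramid + half-cubes on two faces + corner pieces, as described above
    pieces = Fraction(n**3, 3) + Fraction(n * (n - 1), 2) + Fraction(2 * n, 3)
    assert pieces == Fraction(n * (n + 1) * (2 * n + 1), 6) == sum_of_squares(n)
```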
While no one overhear you quickly tell me not cow cow.
but how about watch phone?
Re: Nonstandard proofs for simple theorems
What would constitute a *simple* proof?
I have discovered a truly marvelous proof of this, which this margin is too narrow to contain.
Re: Nonstandard proofs for simple theorems
(I assume that other theorem are allowed here.)
How about Fürstenberg's proof of the infinitude of primes?
http://en.wikipedia.org/wiki/F%C3%BCrst ... _of_primes
(or http://www.cut-the-knot.org/proofs/Furstenberg.shtml , since Wikipedia is going away in a day or so.)
Re: Nonstandard proofs for simple theorems
I like the proof by decomposition into triangular numbers. It’s really a proof by picture, and works best when drawn visually, but I hope I can convey it in text.
By inspection we note that, for n>1, a grid of squares n×n (like a checkerboard) can be split into two “triangular” pieces, by cutting along a “staircase” next to the main diagonal. That is, you get
one piece of (1+2+3+⋯+(n-1)+n) and one piece of (1+2+3+⋯+(n-1)). Those have, respectively, (n+1 Choose 2) and (n Choose 2) squares in them. For n=1 of course the formula for the larger triangle
holds, and the smaller “triangle” simply has zero squares.
We can think of this n×n grid as being extruded into the third dimension, so it is made of blocks, each a little cube, still laid out in a square. We can then think of putting the n-square on the ground, stacking the (n-1) square on top of it, the (n-2) square on top of that, and on up to the 1-square, a single block at the top of a pyramid.
Each layer of the pyramid decomposes into triangles, of size (k+1 Choose 2) and (k Choose 2), except the top layer which just has (1+1 Choose 2) = 1 block.
We can then stack up the larger triangles from each layer, to form a tetrahedron of base n, and the smaller triangles to form a second tetrahedron, of base n-1. Those tetrahedra have block-counts,
respectively, \sum_{i=2}^{n+1}{i \choose 2} and \sum_{i=2}^n{i\choose 2}.
By basic properties of Pascal’s triangle, however, the sum of the first m entries along a diagonal equals the entry on the next diagonal in, one row down from the last of the summed values. Since the
entries along the jth diagonal of Pascal’s triangle are just the values of r-choose-j, that means \sum_{r=j}^{m}{r\choose j} = {m+1\choose j+1}.
For the case at hand, we get from this \sum_{i=2}^{n+1}{i\choose 2} = {n+2 \choose 3} and \sum_{i=2}^n{i\choose 2} = {n+1 \choose 3}. Adding these together for the total we want, of course, gives
(n+2)(n+1)(n)/6 + (n+1)(n)(n-1)/6 = n(n+1)(2n+1)/6.
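These binomial manipulations can be spot-checked with `math.comb` (a mechanical verification of the hockey-stick sums and the final identity above):

```python
from math import comb

def sum_of_squares(n):
    return sum(k * k for k in range(1, n + 1))

for n in range(1, 100):
    # hockey-stick sums for the two stacks of triangular layers
    assert sum(comb(i, 2) for i in range(2, n + 2)) == comb(n + 2, 3)
    assert sum(comb(i, 2) for i in range(2, n + 1)) == comb(n + 1, 3)
    # the two tetrahedra together rebuild the stack of squares
    assert comb(n + 2, 3) + comb(n + 1, 3) == sum_of_squares(n)
```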
Small Government Liberal
Re: Nonstandard proofs for simple theorems
A fairly similar proof to the above is to note that n(n-1)(n-2)/6 = 0 + 0 + 1 + 3 + 6 + ... + (n-1)(n-2)/2. (This is easy to see by looking at n = 0, 1, 2, 3.) When the sum has 2m terms, we can group
them into pairs: 2m(2m-1)(2m-2)/6 = (0+0) + (1+3) + (6+10) + ... + ( (2m-2)(2m-3)/2 + (2m-1)(2m-2)/2 ) = 0^2 + 2^2 + 4^2 + ... + (2(m-1))^2 = 4(0^2 + 1^2 + ... + (m-1)^2).
So, 0^2 + 1^2 + ... + (m-1)^2 = m(2m-1)(m-1)/6.
Combinatorially, rather than cutting each square into two triangles and getting two sums of m consecutive triangular numbers, we first subdivide each square so that the grid is twice as fine, then
cut those squares into two triangles. This gives us just one sum of 2m consecutive triangular numbers.
If you're curious, I reverse-engineered this proof from the fact that the roots of g(m) = m(2m-1)(m-1)/6 are 0, 1/2, and 1. It suggests that g(n/2), whose roots are integers, has a sensible
interpretation, which in turn suggests splitting each term in 0^2 + 1^2 + ... + (m-1)^2 into two. If you intend to emphasize the point that there are usually lots of proofs of a given fact, saying a
few words about this sort of reverse engineering might be worthwhile.
Jerry Bona wrote:The Axiom of Choice is obviously true; the Well Ordering Principle is obviously false; and who can tell about Zorn's Lemma?
Re: Nonstandard proofs for simple theorems
Also, I know that faulhaber's formula is a formula for the sums of squares, cubes, etc, but is there a general way to factorise the formulas? For simplicity let us call this formula (sum of the first
x kth powers) f[k](x), f[1](x)=x(x+1)/2, f[2](x)=x(x+1)(2x+1)/6, and so on, but is there a rule for the factors? And at f[4](x), a quadratic term appears, so I shall call the largest (factor of
highest degree) g[k](x). g[4](x)=3x^2+3x-1, g[5](x)=2x^2+2x-1, g[6](x)=3x^4+6x^3-3x+1 (correct me if I'm wrong but I don't think this has any nice factors), anyone have a formula for that? What are
the roots of the functions?
Last edited by tomtom2357 on Tue Jan 17, 2012 7:43 am UTC, edited 1 time in total.
I have discovered a truly marvelous proof of this, which this margin is too narrow to contain.
Re: Nonstandard proofs for simple theorems
Here is a proof without words due to Man-Keung Sui, http://hkumath.hku.hk/~mks/, which demonstrates the result.
Re: Nonstandard proofs for simple theorems
That is a cool proof!
I have discovered a truly marvelous proof of this, which this margin is too narrow to contain.
Re: Nonstandard proofs for simple theorems
tomtom2357 wrote:What are the roots of the functions?
I plotted these for the first fifty functions or so once. If I recall correctly, for high k, they seem to trace out a vaguely ellipse-shaped curve in the complex plane. At the time, I didn't know how
to interpret this, and I suppose I still don't. It would be interesting to try to work out a limiting distribution of the roots (you might need to rescale, or it might not exist), and think about
what that distribution says about these sums, Bernoulli numbers, and so on.
Something worth thinking about: if g[k](x) were the characteristic polynomial of some sensible operator, the roots would be its eigenvalues, which gives you a tool with which to analyze them.
Jerry Bona wrote:The Axiom of Choice is obviously true; the Well Ordering Principle is obviously false; and who can tell about Zorn's Lemma?
Re: Nonstandard proofs for simple theorems
At what point do the roots become complex?
I have discovered a truly marvelous proof of this, which this margin is too narrow to contain.
Re: Nonstandard proofs for simple theorems
tomtom2357 wrote:At what point do the roots become complex?
Here is a hint.
Re: Nonstandard proofs for simple theorems
If you calculate a Riemann sum of x^2 over [0,1] by partitioning the unit interval into n equally spaced segments, the formula for the sum of the first n squares shows up.
I was hoping to use this to find a clever proof starting from the integral and working back to the sum formula, but I don't really see how to do it. Still, it's fun to see how the Riemann sums for x^2 reduce to the formula for the sum of the first n squares, and likewise the Riemann sums for x^n reduce to the formula for the first n n-th powers.
Re: Nonstandard proofs for simple theorems
It may be of interest to note that the dot-product of the nth row (counting from 1) of the Eulerian number triangle, with a length-n section of the nth diagonal (counting from 0) of Pascal’s triangle
beginning at row r (counting from 0), equals rⁿ. We take the stance that Pascal’s triangle is surrounded by 0’s going off to the left and right, so any values from “outside” the triangle are zeros.
By applying the same procedure to the (n+1)st diagonal of Pascal’s triangle beginning on row (r+1), we get the sum of the nth powers from 1 through r, that is 1ⁿ+2ⁿ+⋯+rⁿ. Conceptually, this means
splitting the n-dimensional hypercube of side length r into several n-dimensional “pyramids” of side lengths n, n-1, n-2, …, n-(r-1).
Since it is simple to sum generalized triangular numbers (simplectic numbers?) by going one row down and one diagonal inward on Pascal's triangle, this makes the calculation straightforward. The only tricky part is knowing how many simplices of each side-length to use. For the n=2 case, it is one of each, and involves merely splitting a square number into two triangular numbers as I described above.
Pascal’s triangle:
Eulerian number triangle:
By way of example, to find the sum of the cubes from 1 to n, we look at the 3rd row of the Eulerian number triangle, which is “1, 4, 1”, and we look at the 4th diagonal of Pascal’s triangle, which
begins with “1, 5, 15”. Now for any three consecutive entries on that diagonal, say on rows r+1, r+2, and r+3, their values are (r+1 Choose 4), (r+2 Choose 4), and (r+3 Choose 4), where we define (a
Choose b) = 0 when 0≤a<b. Dotting those with the Eulerian entries gives 1\cdot{{r+1} \choose 4} + 4\cdot{{r+2} \choose 4} + 1\cdot{{r+3} \choose 4}. Expanding that out gives the appropriate Faulhaber
polynomial, and we see,
1\cdot{{r+1} \choose 4} + 4\cdot{{r+2} \choose 4} + 1\cdot{{r+3} \choose 4} = \sum_{i=1}^{r}i^3
To prove this works, it is sufficient to show that 1\cdot{r\choose 3} + 4\cdot{{r+1} \choose 3} + 1\cdot{{r+2} \choose 3} = r^3. And that is straightforward to prove, just by expanding and
simplifying. Pascal’s triangle and the formulæ for summing its diagonals take care of the rest.
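Both identities are easy to confirm numerically (a check of the Eulerian-weighted dot products for cubes, with weights 1, 4, 1 taken from the post):

```python
from math import comb

for r in range(0, 60):
    # r^3 as an Eulerian-weighted sum of binomials along one diagonal
    assert comb(r, 3) + 4 * comb(r + 1, 3) + comb(r + 2, 3) == r**3
    # one diagonal further in, one row further down: the sum of cubes
    assert comb(r + 1, 4) + 4 * comb(r + 2, 4) + comb(r + 3, 4) == \
        sum(i**3 for i in range(1, r + 1))
```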
The recursive rule for the Eulerian number triangle is similar to that for Pascal’s triangle. Calling the two entries above a given entry its “parents”, in Pascal’s triangle you just add the parents
together for the value of the child. With the Eulerian number triangle, you first multiply each parent by the index of the diagonal it and the child are both on (counting from 1). That is, the edge
diagonals have index 1, the next diagonals in have index 2, then 3 and so forth. So the child of 57 and 302 is 5•57+3•302=1191.
If we want the sum of the cubes from 1 to 3, we have 1•1 + 4•5 + 1•15 = 36 = 1+8+27. If we want the sum of the cubes from 1 to 10, we have 1•(11×10×9×8)/24 + 4•(12×11×10×9)/24 + 1•(13×12×11×10)/24 =
(55)(6+36+13) = 55². Of course, we could’ve gotten those particular results more quickly by noting \sum_{i=1}^{n}i^3 = \left(\sum_{i=1}^{n}i\right)^2 = n^2(n+1)^2/4. That, incidentally, has a neat
visual proof of its own.
By looking at cubes made of unit-cube blocks, we can slice the n×n×n cube into n square of n×n each. Those squares can be arranged in an “L” pattern. If n is odd then the length of the legs of the L
are equal, and if n is even then the last square is cut in half into two rectangles, one of which goes to each leg of the L so the legs of the L are still equal. Those L grids nest together, and when
you put the L’s from i=1,2,3,…,n together, they fit perfectly to make a square of side length 1+2+3+⋯+n.
Finally, I apologize for all the “counting from 0” and “counting from 1” stuff. It seems the standard practice for numbering Pascal’s triangle’s entries is 0-based, and for the Eulerian number
triangle is 1-based. I followed both those conventions, and I hope it was not too confusing.
Small Government Liberal
Re: Nonstandard proofs for simple theorems
Proginoskes wrote:(I assume that other theorem are allowed here.)
How about Fürstenberg's proof of the infinitude of primes?
http://en.wikipedia.org/wiki/F%C3%BCrst ... _of_primes
(or http://www.cut-the-knot.org/proofs/Furstenberg.shtml , since Wikipedia is going away in a day or so.)
Here's another proof that there are infinitely many primes, using ideas of compression. The basic idea is if there were only n primes, we could use prime factorization to compress every k-bit string
into n log(k+1) bits, violating the pigeonhole principle.
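To make the counting concrete: every positive integer up to 2^k whose prime factors come from a fixed set of n primes is determined by n exponents, each at most k, so there are at most (k+1)^n of them — eventually far fewer than the k-bit strings. A small sketch of that count (my own illustration; `smooth_count` is a hypothetical helper name):

```python
def smooth_count(primes, limit):
    """Count integers in [1, limit] all of whose prime factors lie in `primes`."""
    if not primes:
        return 1 if limit >= 1 else 0  # only the number 1 remains
    p, rest = primes[0], primes[1:]
    total, power = 0, 1
    while power <= limit:
        total += smooth_count(rest, limit // power)
        power *= p
    return total

k = 20
# with only the primes 2, 3, 5 available, almost no 20-bit strings
# would have a factorization left to encode them with
assert smooth_count([2, 3, 5], 2**k) <= (k + 1) ** 3 < 2**k
```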
I'm looking forward to the day when the SNES emulator on my computer works by emulating the elementary particles in an actual, physical box with Nintendo stamped on the side.
"With math, all things are possible." —Rebecca Watson
Re: Nonstandard proofs for simple theorems
antonfire wrote:
tomtom2357 wrote:What are the roots of the functions?
I plotted these for the first fifty functions or so once. If I recall correctly, for high k, they seem to trace out a vaguely ellipse-shaped curve in the complex plane. At the time, I didn't know
how to interpret this, and I suppose I still don't. It would be interesting to try to work out a limiting distribution of the roots (you might need to rescale, or it might not exist), and think
about what that distribution says about these sums, Bernoulli numbers, and so on.
Interesting, an ellipse is formed, I wonder why. Also, I figured out how to circumvent the wikipedia blackout! You go to the page, but as soon as the page comes up, you stop the loading of the page,
and the blackout screen never comes up!
I have discovered a truly marvelous proof of this, which this margin is too narrow to contain.
Re: Nonstandard proofs for simple theorems
Blatm wrote:In an upcoming assignment I have to prove that the sum of consecutive squares is n(n+1)(2n+1)/6.
The book "Concrete Mathematics" by Knuth/Patashnik/Graham has about 10 neat different proofs for this, so maybe you should have a look at it.
I'll add my personal weird proof (very sloppily written and all, but it can also be done rigorous, just that the fun is then lost to me):
Let D be the differentiation operator and look at what the operator e^D (defined as a taylor series, e^D=id+D+D^2 /2!...) does to functions:
For instance, e^D x = x + 1 and e^D x^2 = x^2 + 2x + 1 = (x+1)^2, and so forth. Obviously this holds for all polynomials, and a lot of functions can be expanded into polynomials. Hence e^D takes f(x) to f(x+1).
if we want to sum f(1)+f(2)+...+f(n) then that is obviously equivalent to [1+e^D+e^(2D)+...+e^((n-1)D)]*f(1) (be careful with the n-1)
We interpret this new operator as a geometric series with common factor e^D and hence use the common result for geometric series
(e^(nD) - 1)/(e^D - 1) * f(1) = 1/(e^D - 1) * (e^(nD)*f(1) - f(1)) = 1/(e^D - 1) * (f(n+1) - f(1))
Now recall the Taylor expansion of (e^x-1)^(-1) and just transfer it over to the (e^D-1)^(-1) case, and you get 1/(e^D - 1) = D^(-1) - 1/2 + D/12 - D^3/720 + ..., where D^(-1) is antidifferentiation.
Now you just use f(x)=x^2 and determine all the terms, noticing that there are not infinitely many of them as the third derivative is already zero (hence we have exact summation). Once you have all
the terms, you can factorise them to get the desired result (n)(n+1)(2n+1)/6. Also, you now have a formula to sum any f(x)=x^integer, approximate sums with integrals and correction terms, know where
the bernoulli numbers come from etc.. hence a lot of motivation to look into similar matters.
Sorry for the horrible typesetting. Also, if you want a nicer derivation, look up "Street-Fighting Mathematics" by Sanjoy Mahajan (freely, legally available online as a PDF).
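The claim that e^D shifts by 1 is easy to check on explicit polynomials: represent a polynomial by its coefficient list, so the Taylor series for e^D terminates past the degree. This is my own illustration of the operator, not the poster's derivation:

```python
from math import factorial

def deriv(coeffs):
    """Derivative of a polynomial given as coefficients [a0, a1, a2, ...]."""
    return [k * c for k, c in enumerate(coeffs)][1:] or [0]

def eval_poly(coeffs, x):
    return sum(c * x**k for k, c in enumerate(coeffs))

def exp_D(coeffs, x):
    """Apply e^D = sum_k D^k / k! to the polynomial, evaluated at x."""
    total, d = 0.0, list(coeffs)
    for k in range(len(coeffs)):  # D^k vanishes beyond the degree
        total += eval_poly(d, x) / factorial(k)
        d = deriv(d)
    return total

# e^D f evaluated at x equals f(x + 1)
assert exp_D([0, 0, 1], 3) == eval_poly([0, 0, 1], 4)       # x^2 -> (x+1)^2
assert exp_D([1, 2, 3, 4], 2) == eval_poly([1, 2, 3, 4], 3)
```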
Re: Nonstandard proofs for simple theorems
skeptical scientist wrote:Here's another proof that there are infinitely many primes, using ideas of compression. The basic idea is if there were only n primes, we could use prime factorization
to compress every k-bit string into n log(k+1) bits, violating the pigeonhole principle.
That one is really awesome. I love it!
(defun fibs (n &optional (a 1) (b 1)) (take n (unfold '+ a b)))
Re: Nonstandard proofs for simple theorems
vadsomhelst wrote:[...]
This is a really neat trick. Thanks very much.
Re: Nonstandard proofs for simple theorems
Xanthir wrote:
skeptical scientist wrote:Here's another proof that there are infinitely many primes, using ideas of compression. The basic idea is if there were only n primes, we could use prime
factorization to compress every k-bit string into n log(k+1) bits, violating the pigeonhole principle.
That one is really awesome. I love it!
I seem to recall there being a way to use this idea to get rather close to the prime number theorem, maybe proving that π(n)≥n/log[2] n (rather than n/log[e] n), but I can't seem to remember how it went.
I'm looking forward to the day when the SNES emulator on my computer works by emulating the elementary particles in an actual, physical box with Nintendo stamped on the side.
"With math, all things are possible." —Rebecca Watson
Re: Nonstandard proofs for simple theorems
I have a nonstandard proof that pi/4 = sum((-1)^n/(2n+1), n>=0). It goes as follows: let f(x) = {-1 if -pi < x <= 0, 1 if 0 < x <= pi}, f(x+2*pi) = f(x). The Fourier series of f is f(x) = (4/pi)*sum(sin((2n+1)*x)/(2n+1), n>=0). Now, (4/pi)*sum(sin((2n+1)*(pi/2))/(2n+1), n>=0) = f(pi/2) = 1, so pi/4 = sum(sin((2n+1)*(pi/2))/(2n+1), n>=0) = sum((-1)^n/(2n+1), n>=0), QED.
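Numerically, the partial sums behave exactly as the alternating-series bound predicts; a quick check (standard Leibniz partial sums, my own helper name):

```python
import math

def leibniz(n):
    """Partial sum of (-1)^k / (2k+1) for k = 0 .. n-1."""
    return sum((-1) ** k / (2 * k + 1) for k in range(n))

for n in (10, 100, 1000):
    # alternating series: the error is at most the first omitted term
    assert abs(leibniz(n) - math.pi / 4) < 1 / (2 * n + 1)
```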
I have discovered a truly marvelous proof of this, which this margin is too narrow to contain.
Re: Nonstandard proofs for simple theorems
I'm pretty sure that's one of the standard proofs. Or at least, this is the proof that I'm familiar with, as opposed to the one on wiki using the integral form of arctan.
Re: Nonstandard proofs for simple theorems
I'm not sure how non-standard it is, but I like this little proof of Euler's formula, rather than the Taylor series one that is typically shown to freshman students:
Let f(x) = cos(x) + isin(x)
Note that f(0) = 1
Then df/dx = -sin(x) + icos(x)
df/dx = i(cos(x) + isin(x)) = if(x)
df/f = idx
ln f = ix + C
f(x) = Ce^(ix)
From the boundary condition f(0) = 1 we get C = 1, so
f(x) = e^(ix) and therefore e^(ix) = cos(x) + isin(x)
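The identity is also trivial to sanity-check numerically with `cmath` (not a proof, just a consistency check of the result derived above):

```python
import cmath
import math

# e^(ix) and cos(x) + i*sin(x) agree to machine precision
for x in (0.0, 0.5, 1.0, math.pi / 2, math.pi, 2.5):
    assert abs(cmath.exp(1j * x) - complex(math.cos(x), math.sin(x))) < 1e-12
```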
Re: Nonstandard proofs for simple theorems
I remember one I read in another thread on here a while back: A proof that the nth-root of 2 is irrational for n>2:
Say there was a p/q, p and q both integers, such that (p/q)^n = 2. Then:
p^n/q^n = 2
p^n = 2q^n
q^n + q^n = p^n
However, this violates Fermat's Last Theorem, and thus has no solutions.
The proof of the statement for n=2 is left as an exercise for the reader.
While no one overhear you quickly tell me not cow cow.
but how about watch phone?
Re: Nonstandard proofs for simple theorems
phlip wrote:I remember one I read in another thread on here a while back: A proof that the nth-root of 2 is irrational for n>2:
Say there was a p/q, p and q both integers, such that (p/q)^n = 2. Then:
p^n/q^n = 2
p^n = 2q^n
q^n + q^n = p^n
However, this violates Fermat's Last Theorem, and thus has no solutions.
The proof of the statement for n=2 is left as an exercise for the reader.
This is possibly a circular proof, because fermat's last theorem could rely on this
Last edited by tomtom2357 on Tue Feb 21, 2012 11:17 am UTC, edited 1 time in total.
I have discovered a truly marvelous proof of this, which this margin is too narrow to contain.
Re: Nonstandard proofs for simple theorems
Regardless - even if the proof of Fermat's Last Theorem relies on that fact at some point, which is entirely possible, it's still a valid proof. It's just one that's redundant, as it's proving
something you'd need to already have accepted in order to accept the premises of the proof.
While no one overhear you quickly tell me not cow cow.
but how about watch phone?
Re: Nonstandard proofs for simple theorems
tomtom2357 wrote:This is possibly a circular proof, because fermat's last theorem could rely on this
It almost certainly does not. Though I'm pretty sure it uses the fact that the integers are integrally closed, which immediately implies nth roots of 2 are irrational.
Re: Nonstandard proofs for simple theorems
Hopper wrote:
tomtom2357 wrote:This is possibly a circular proof, because fermat's last theorem could rely on this
It almost certainly does not. Though I'm pretty sure it uses the fact that the integers are integrally closed, which immediately implies nth roots of 2 are irrational.
Maybe Fermat's proof relied on it (if he had one).
Re: Nonstandard proofs for simple theorems
You only need one proof to not rely on it for it to be a non-circular argument.
Re: Nonstandard proofs for simple theorems
Another proof that there are infinitely many primes:
What they (mathematicians) define as interesting depends on their particular field of study; mathematical analysts find pain and extreme confusion interesting, whereas geometers are interested in
Re: Nonstandard proofs for simple theorems
The sum of the interior angles of an N-gon is 180(N-2). This is usually (?) proven by showing that an N-gon can be cut into N-2 triangles, each containing 180 degrees.
A while back, I was tutoring a geometry student, and she was working on problems like these without knowing the theorem (and thus, fumbling blindly). I offered, "What if we cut the polygon into
triangles?" but before I could draw a diagram to illustrate, she drew the triangles in the pentagon she was working with. However, unlike the "normal" method which would cut the pentagon into 3
triangles, she cut it into 5 triangles, with their vertices meeting in the middle like the spokes of a wheel. I was about to correct her when I realized this was a slight variation of the same proof:
an N-gon forms N triangles, totaling 180N degrees, but then you need to subtract away the central angles, which total 360. 180N-360 is of course the same as the usual 180(N-2), just distributed differently.
Probably less interesting than most of this thread, but a 7th-grader invented it, which I thought was pretty impressive.
No, even in theory, you cannot build a rocket more massive than the visible universe.
Re: Nonstandard proofs for simple theorems
Huh, I'd never heard of either of those proofs... though when you said "divided into triangles", I thought of the wheel layout first. However, I'll say that the wheel layout will only work directly
if the polygon is a star-convex set - ie you can find a centre point such that all the vertices can connect to that centre point with a straight line that stays inside the shape. Whereas the other
triangulation will work regardless, as any polygon can be triangulated in that way.
The proof I've heard is to extend all the edges of the polygon in, let's say, the clockwise direction, so what we instead have is a bunch of rays going off to infinity. Now, the exterior angles
between each pair of rays at each corner of our polygon, will add up to 360 degrees (you can slide them all inward so they're all at the same point, and they stay congruent, and clearly add up to
360). And the internal angle of each point is 180 minus the external angle. So the internal angles must add up to 180n - 360.
This one only works for convex polygons, though. But both this and the wheel one can be adapted with special cases for the bits that don't work... basically by allowing negative angles (eg for the
external-angle proof, for instance if one of the angles is 200 degrees, you can say that the external angle is -20 degrees, and everything still adds up).
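For convex polygons, both counts are easy to confirm with coordinates; a straightforward check of the 180(N−2) total (`interior_angle_sum` is my own helper, and the unsigned-angle shortcut only works for the convex case, matching the caveats above):

```python
import math

def interior_angle_sum(points):
    """Sum of interior angles (degrees) of a convex polygon, vertices in order."""
    n, total = len(points), 0.0
    for i in range(n):
        ax, ay = points[i - 1]
        bx, by = points[i]
        cx, cy = points[(i + 1) % n]
        v1x, v1y = ax - bx, ay - by   # edge back to the previous vertex
        v2x, v2y = cx - bx, cy - by   # edge on to the next vertex
        dot = v1x * v2x + v1y * v2y
        cross = v1x * v2y - v1y * v2x
        total += abs(math.atan2(cross, dot))  # unsigned angle: convex only
    return math.degrees(total)

square = [(0, 0), (1, 0), (1, 1), (0, 1)]
pentagon = [(math.cos(2 * math.pi * k / 5), math.sin(2 * math.pi * k / 5))
            for k in range(5)]
assert abs(interior_angle_sum(square) - 360) < 1e-9
assert abs(interior_angle_sum(pentagon) - 540) < 1e-9
```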
While no one overhear you quickly tell me not cow cow.
but how about watch phone?
Re: Nonstandard proofs for simple theorems
I discovered (on http://en.wikipedia.org/wiki/Mersenne_prime) a non-standard proof of the infinitude of the prime numbers:
Using the fact that any factor of 2^p-1 (for prime p) is of the form 2kp+1, every factor of 2^p-1 must be bigger than p, therefore for every prime p, there is a prime greater than p.
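This is easy to check for small Mersenne numbers by trial division: every prime factor of 2^p - 1 turns out to be ≡ 1 (mod 2p), hence larger than p (`odd_trial_factors` is my own helper):

```python
def odd_trial_factors(n):
    """Prime factors of an odd n > 1 by trial division, in increasing order."""
    factors, d = [], 3
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 2
    if n > 1:
        factors.append(n)
    return factors

for p in (3, 5, 7, 11, 13, 23, 29):
    for q in odd_trial_factors(2**p - 1):
        assert q % (2 * p) == 1 and q > p
```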
I have discovered a truly marvelous proof of this, which this margin is too narrow to contain.
|
{"url":"http://forums.xkcd.com/viewtopic.php?p=2861518","timestamp":"2014-04-19T17:22:31Z","content_type":null,"content_length":"97026","record_id":"<urn:uuid:4f6f6b8f-25f8-4138-b897-501f4bc54678>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00366-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Need Help with Two Simple Calculating Programs
Hey, I'm new to this and taking a class in college. I'm really struggling with what seem like they should be some of the most basic things. The problems I need help with are:
1) A program that calculates and prints the product of the even integers between 1 and 1000 (I'm not sure why it has to be to 1000).
It should look like:
Code: Select all
The product of even integers from 1 to 7 is: 48
2)Build a small calculator by taking three inputs. Two of which are integer or floats and the third is an input of +,-,/,*. It should then print out a result.
Obviously I don't want the straight out answers, but I barely have an idea where to start. I've tried using while loops and range functions for the first problem but have been unable to organize them
in a way to get the results I need.
Thanks all in advance for the help!
I hope this will be the first post of many on this forum and that at some point I will be answering questions like this for people in my position.
Re: Need Help with Two Simple Calculating Programs
It would be nice if you could show us some efforts you have made, so that we know what parts you understand and what you don't.
That way it's much easier to point you in the right direction instead of handing you an answer.
You're right to use a loop in the first exercice. Try incrementing a variable on each iteration of the loop and finally returning that variable.
The second exercise would contain some if and elif statements to check which operator (+, -, /, *) is passed to the function.
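Following those hints, one possible shape for both exercises might look like this (a sketch, not the only solution; the function names are my own):

```python
def product_of_evens(n):
    """Product of the even integers from 1 to n."""
    product = 1
    for i in range(2, n + 1, 2):   # 2, 4, 6, ...
        product *= i
    return product

def calculate(a, b, op):
    """Tiny calculator dispatching on the operator symbol."""
    if op == '+':
        return a + b
    elif op == '-':
        return a - b
    elif op == '*':
        return a * b
    elif op == '/':
        return a / b
    raise ValueError("unknown operator: " + op)
```

For example, `product_of_evens(7)` is 2 · 4 · 6 = 48, matching the sample output in the question.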
Re: Need Help with Two Simple Calculating Programs
IllusionaryRooster wrote:I hope this will be the first post of many on this forum and that at some point I will be answering questions like this for people in my position.
This is a noble goal and I hope I don't sound too demotivating when I now ask you to please read the thread for new users.
Learn: How To Ask Questions The Smart Way
Join the #python-forum IRC channel on irc.freenode.net and chat with uns directly!
|
{"url":"http://www.python-forum.org/viewtopic.php?p=9397","timestamp":"2014-04-20T10:50:53Z","content_type":null,"content_length":"18218","record_id":"<urn:uuid:67cefa45-1a58-48df-883e-2675ce1b750a>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00409-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Math Forum Discussions - Re: Matheology § 224
Date: Mar 22, 2013 7:10 PM
Author: Virgil
Subject: Re: Matheology § 224
In article
WM <mueckenh@rz.fh-augsburg.de> wrote:
> On 22 Mrz., 21:54, William Hughes <wpihug...@gmail.com> wrote:
> > On Mar 22, 7:24 pm, WM <mueck...@rz.fh-augsburg.de> wrote:> On 22 Mrz.,
> > 16:24, William Hughes <wpihug...@gmail.com> wrote:
> >
> > <snip>
> >
> >
> >
> > > > Infinite sets are different from finite sets
> > > > but they do not contain anything
> > > > "beyond any finite set".
> >
> > > Of course.
> >
> > We have now established that there
> > are sets that do not contain anything
> > "beyond any finite set" but
> > are different from finite sets.
> That is potential infinity.
> Actually infinite sets, however, contain something beyond any finite
> set.
They may do so but are not compelled to do so.
For example, other than in the weirdness of WMytheology, the set of all finite naturals is an infinite set not containing anything but finite naturals.
> Consider the decimal representation of pi. It is an actually infinite
> sequence beyond any finite sequence.
But none of the digit positions of the decimal representation of pi is infinite; every position is finite.
WM falsely conflates the infiniteness of a set with the infiniteness of
its members,
A finite set can contain members all of which are infinite and an
infinite set can contain members all of which are finite, at least
outside the weirdness of WMytheology.
> Or consider the Binary Tree, that contains only all finite paths.
There are infinitely many finite binary trees which contain only finite
paths. But no one such tree can contain all of them.
And any COMPLETE infinite binary tree must contain infinitely and
uncountably many infinite paths and no finite paths.
> It
> is countable.
"It", being what WM claims above is "the Binary Tree, that contains only
all finite paths, does not exist outside the padded walls of
> >
> > So if you show that P is true
> > for all finite sets of natural numbers,
> > saying that there is a set for which
> > P is not true, is not a claim that
> > this set must contain something other
> > than a natural number.
> No it is a claim of foolishness.
Only inside the padded walls of Wolkenmuekenheim.
> If I prove something for all finite
> sets but say that there is a finite set for which the proof does not
> hold, then I can refrain from proving anything.
Judging by the messes that WM now claims to be his proofs, it appears
that WM is already refraining from proving anything.
> It would be nonsense
> to do so.
An act being nonsensical has not yet been sufficient to prevent WM
from performing it.
Raritan, NJ Trigonometry Tutor
Find a Raritan, NJ Trigonometry Tutor
...I favor a dual approach, focused on both understanding concepts and going through practice problems. Let me know what concepts you're struggling with before our session, so I can streamline
the session as much as possible! In my free time, I like to play with my pet chickens, play Minecraft, code up websites, and write sci-fi creative stories.
26 Subjects: including trigonometry, English, calculus, physics
...I am a recent Rutgers graduate with a Bachelor of Science in Biomedical Engineering. Throughout my time there, I have excelled at all my courses related to Math, Calculus, and Science. I love
children and find their curiosity and playfulness to be inspiring.
14 Subjects: including trigonometry, calculus, physics, algebra 2
...Graduating in the top 2% of my class at IIT Madras in Engineering, I was a straight-A student in all math courses, including differential equations. I went on to take and teach applied math
courses at Berkeley and Stanford. The ME 200B syllabus at Stanford included the three main methods of sol...
19 Subjects: including trigonometry, physics, calculus, geometry
...This has proven to be an effective strategy to help my students perform at a high level on the test. I am a Certified Teacher of Mathematics. I
have a Bachelor of Science degree in Engineering Technology and have completed several Masters level courses in Education.
41 Subjects: including trigonometry, English, chemistry, physics
...I scored a perfect 800 on the SAT math section when I took the test prior to going to Princeton. I find that an approach of repeatedly working through actual SAT problems and identifying the
types of problems that the SAT commonly asks is the best training to do well on the test. I have an MA in philosophy from Virginia Tech and I completed PhD coursework at Brown.
40 Subjects: including trigonometry, chemistry, reading, physics
Beer bubble math helps to unravel some mysteries in materials science
Before downing your next beer, pause to contemplate the bubbles. You’ll find that they grow and shrink in odd, hard-to-predict ways. A mathematician and an engineer have found a simple and surprising
equation to describe this process, using a field of mathematics no one expected to be relevant.
Now, new simulations are building on that result to illuminate more than just foams. Metals and ceramics are made of crystals that grow and shrink the same way that beer bubbles do, affecting the
properties of the materials. The new work may thus lead to more resilient airplane wings, more reliable computer chips and stronger steel beams.
Over time, the bubbles in foam tend to consolidate, becoming fewer and larger. The physics driving this process has long been understood: When two bubbles adjoin one another, gas tends to pass from
the bubble with higher pressure to the one with lower pressure. The higher-pressure bubble bulges into the lower-pressure one, so the shape of a bubble reflects the pressure in all the bubbles around
it. As a result, the shape of the bubble should be enough to determine how fast it’s going to grow or shrink.
In 1952, the mathematician John von Neumann used this principle to find an elegant equation that predicts the growth rate of a bubble in a two-dimensional spread of bubbles. But the three-dimensional
case—which is the one scientists and beer drinkers most care about—proved far more difficult. Some researchers even believed the problem unsolvable.
What was needed, it turned out, was to introduce another field of mathematics: topology. Topology is essentially the study of connections. For a topologist, two objects are the same if one can be
shrunk or stretched into the same shape as the other, but without punching holes or gluing anything together, since that would change the way the parts of the object connect to one another. So, for
example, a doughnut and a coffee cup are the same shape to a topologist: Squish down the cup part of the coffee cup, and you’ll end up with a doughnut-shaped ring. A doughnut is topologically
different, however, from a beach ball.
Topology would seem to offer little help in determining how quickly bubbles in foam grow or shrink, because squishing a bubble into a different shape will change its growth rate. But several years
ago researchers found an equation involving the Euler characteristic, a property of a shape that stays the same no matter how the shape is stretched or smushed. “It’s really beautiful,” says Robert
MacPherson of the Institute for Advanced Study in Princeton, N.J., the mathematician on the project, “because the Euler characteristic shouldn’t have anything to do with it.”
To calculate the Euler characteristic, first imagine slicing a bubble in different directions. Usually, the resulting shape would be roughly circular. But suppose that the bubble had a couple of
little bumps on its surface. If your slice went through these bumps, you could end up with two disconnected circles. Or, if the bubble had a small divot in it and your slice went through the divot,
you could end up with a little hole inside your slice. The Euler characteristic of the slice is the number of disconnected pieces it contains, minus the number of holes within it.
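The pieces-minus-holes rule for a slice can be sketched in code. The following is a minimal illustration, not anything from the research itself: it treats a slice as a binary grid and counts connected pieces minus enclosed holes using flood fill, where a "hole" is taken to be a background region that never touches the border. All names are invented for the example.

```python
from collections import deque

def euler_characteristic(grid):
    """Pieces minus holes for a binary slice (1 = material, 0 = empty)."""
    rows, cols = len(grid), len(grid[0])

    def components(value, exclude_border_touching=False):
        seen, count = set(), 0
        for r in range(rows):
            for c in range(cols):
                if grid[r][c] != value or (r, c) in seen:
                    continue
                # Flood-fill one connected component.
                queue, touches_border = deque([(r, c)]), False
                seen.add((r, c))
                while queue:
                    i, j = queue.popleft()
                    if i in (0, rows - 1) or j in (0, cols - 1):
                        touches_border = True
                    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ni, nj = i + di, j + dj
                        if 0 <= ni < rows and 0 <= nj < cols \
                                and grid[ni][nj] == value and (ni, nj) not in seen:
                            seen.add((ni, nj))
                            queue.append((ni, nj))
                if not (exclude_border_touching and touches_border):
                    count += 1
        return count

    pieces = components(1)
    # Holes: background components fully enclosed by material.
    holes = components(0, exclude_border_touching=True)
    return pieces - holes

# A slice with two disconnected pieces, one of which encloses a hole:
slice_ = [
    [0, 0, 0, 0, 0, 0, 0],
    [0, 1, 1, 1, 0, 1, 0],
    [0, 1, 0, 1, 0, 1, 0],
    [0, 1, 1, 1, 0, 0, 0],
    [0, 0, 0, 0, 0, 0, 0],
]
print(euler_characteristic(slice_))  # 2 pieces - 1 hole = 1
```

The 4-connectivity used for both material and background is a simplification, but it is enough to show the "disconnected pieces minus holes" bookkeeping the article describes.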
The equation that MacPherson and David Srolovitz of the Agency for Science, Technology and Research in Singapore developed shows that bubbles grow quickly when the beer is warm and when the bubbles
have divots in them rather than bumps, or when they are connected to lots of other bubbles.
Now, the team is taking the equation one step further, using it to create computer simulations of foams, metals and ceramics. The work is a way to expose a material’s inner structures that are not
normally visible, but that influence the material’s overall properties. “If you beat an egg white enough, it becomes almost a paste and you can make peaks out of it,” MacPherson says. “If you looked
at an individual egg white bubble, you wouldn’t expect that. We want to understand these collective properties.”
“Topology is going to be a very powerful tool for understanding these structures,” says Jeremy Mason, a materials scientist at the Institute for Advanced Study who has joined the research team. He
points out that the behavior of foam as a whole is unlikely to change dramatically if you stretch or squish the bubbles a bit without changing the way they’re connected to one another, even though
changing the shape of an individual bubble will affect its growth rate. Focusing on the connections—that is, the foam’s topology—may then allow researchers to home in on its most crucial aspects.
“We’re awash in data, and the challenge is to identify what is meaningful and compare it between two different structures,” Mason says. “Measuring topological characteristics may give us a language
for that.”
Block Diagrams
1 The Language of Blocks
Figure 1 shows a block diagram. It is probably the world's simplest block diagram, but it is complete nonetheless. The signal u feeds into a "Thing", which transforms it into the signal y. The
behavior of "Thing" is unspecified, so it could denote any signal processing function that accepts one signal and transforms it into another. What is specified in this block diagram is a single input
signal, a single output signal, and the dependence of the output signal on the input signal and its past values, and possibly on the initial conditions of the block "Thing" at some specified point
in time.
Figure 2 is an example of a block diagram showing a simple closed-loop control system. The diagram details a controller, and shows a plant with no detail. In the controller the command, u, comes in
from the left, and has the feedback subtracted from it to form the error signal e. This error signal is applied to the k[p] gain block and to an integrator through the k[i] gain block, then the
results are added and applied to the plant, which responds with the output to the right of the diagram.
Before launching into this discussion I should mention that there are no agreed-upon formal rules of block diagram structure unless one is using a system simulation tool. There appears to be a
consensus when one is representing systems entirely in the Laplace or z domains, but on the whole block diagramming is an imprecise science. It is the responsibility of the system engineer to craft a
block diagram that communicates the intended aspects of the system, and to accompany it with enough text to clarify its less common aspects.
Block diagrams consist of three main types of elements: signals, unary operator blocks, such as the "Thing" block in Figure 1, and m-ary operator blocks, such as the two summation blocks in Figure 2. In
addition there are other elements such as sample-and-hold blocks, complex hierarchical blocks, signal sources, etc., that don't fit in the three main categories yet still need to be expressed if the
block diagram is to be complete.
Every block has, at a minimum, one signal coming out of it. In general a block will have one or more input signals and one output signal, but blocks can easily be defined with more than one output as
well.
Block diagrams can be hierarchical. Indeed, the entire block diagram detailed in Figure 2 could easily be the block labeled "Thing" in Figure 1. Furthermore, the plant block in Figure 2 could (and in a
practical application probably would) be expanded into a block diagram of its own.
1.1 Types of Elements
1.1.1 Signals
A signal is represented by a line with an arrowhead at the end to denote the signal direction as shown in Figure 3. At your discretion it can also be named (the signal below is 'u'). A signal can, in
general, be either a scalar signal that carries just one quantity or a vector that contains a number of individual signals. The choice of whether to bundle signals together into a vector is up to
you. To avoid confusion the elements of a vector signal should be closely related and dissimilar signals should be kept separate.
1.1.2 Unary Operator Blocks
A unary operator block is represented as a rectangle with signals going into and out of it, and with a symbol or some text to indicate its function. Figure 4 is an example of a unary operator block that
operates on the input signal with the transfer function H(z) to create the output signal. Unary operator blocks are almost invariably shown with the signals traveling horizontally, either right to
left or left to right. As with signals, a block can be named by placing a name above or below it.
A unary operator block indicates that the output signal at any given time is entirely a function of the input signal. Unary blocks can be memoryless, where the output signal at any given time is
a function of the input signal at that moment. Unary blocks can also have memory, where the output signal at any given time is a function of the history of the input signal.
1.1.3 M-ary Operator Blocks
An m-ary operator block is represented as a circle, often containing a symbol or graphics to indicate its function. It will have two or more input signals and one output signal. The output signal is
always a memoryless combination of the input signals. Figure 5 shows a binary summation block, where z = x – y.
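The three element types can be sketched as composable functions. This is only an illustrative model, not a standard library: the names, and the representation of signals as lists of samples, are invented for the example.

```python
# Signals as lists of samples; blocks as callables that map input
# sequences to an output sequence. All names here are illustrative.

def gain(k):
    """Unary, memoryless operator block: y[n] = k * u[n]."""
    return lambda u: [k * x for x in u]

def integrator(u):
    """Unary block with memory: discrete accumulator y[n] = y[n-1] + u[n]."""
    y, acc = [], 0.0
    for x in u:
        acc += x
        y.append(acc)
    return y

def summing_junction(x, y, signs=(+1, -1)):
    """Binary m-ary block: z[n] = x[n] - y[n] with the default signs."""
    return [signs[0] * a + signs[1] * b for a, b in zip(x, y)]

u = [1.0, 2.0, 3.0]
print(summing_junction(gain(2.0)(u), integrator(u)))  # [1.0, 1.0, 0.0]
```

The memoryless blocks depend only on the current sample, while the integrator carries state between samples, which mirrors the distinction the text draws.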
1.1.4 Hierarchical Blocks
A hierarchical block can look much like a unary operator block, and when it does it can function as one in a larger block diagram. Hierarchical blocks can also indicate more general functions: they can
have multiple input signals, multiple output signals, and they can be shown with "auxiliary" I/O where there is some input to or output from the block that is de-emphasized or designated as being
fundamentally different from the "main" signals.
Figure 6 shows some hierarchical blocks. The leftmost one shows how you might reduce a large single-input, single-output subsystem down to one block to show it in a larger block diagram. The center
block shows how you might represent a system with multiple inputs and multiple outputs, while the last block shows a system with an "auxiliary" input, such as a mode input or an on/off input.
1.2 A Dictionary of Blocks
Table 1 lists a number of blocks along with the function that they denote. It is by no means comprehensive, but gives a notion of the kind of things that blocks can represent.
(The block symbols in the original table were graphics; only each block's function and comment are reproduced here.)
(1) Implements the transfer function H(z): Y(z) = H(z)U(z). Generic form for a transfer function.
(2) Implements a discrete integrator. This is (1) with H(z) set to a discrete integrator.
(3) Implements the function f: y(t) = f(u(t)). The function should be memoryless; this is how you show system nonlinearities.
(4) Implements the function y(t) = u(t)^2. A specific case of (3).
(5) A way to show memory in the time domain.
(6) Sampled-time version of (5).
(7) A graphical way of showing (3), in this case a limiter.
(8) Simple summation.
(9) Multiplication.
(10) Division.
(11) Pick the minimum of the two signals.
(12) Sample; see Figure 7.
(13) Zero-order hold; see Figure 8.
Table 1: Some Blocks and their Meanings.
1.2.1 Sample and Hold
A sample block is denoted by a switch with a descending arrow as shown in Figure 7. The period or frequency of the sample should be denoted underneath the arrowhead, or a 'T' inserted to denote a
generic sample-and-hold. The function of a sample block is to sample a signal at an even rate. It is generally used to show continuous-time signals being converted to discrete-time signals for
processing in a digital control system. It can also indicate a conversion from one discrete-time domain to another in systems where more than one sample rate prevails.
A zero-order hold is a device which transforms a discrete-time signal into a continuous-time one. Figure 8 shows a zero-order hold, with the graphic in the block reflecting the “staircase” appearance
of a signal that's been sampled and reconstructed.
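The sample and zero-order-hold behavior described above can be sketched as follows; the function names and the test signal are invented for the illustration.

```python
import math

def sample(signal, t_end, period):
    """Sample a continuous-time signal (a function of t) every `period` seconds."""
    n = round(t_end / period)
    return [signal(k * period) for k in range(n + 1)]

def zero_order_hold(samples, period, dt):
    """Reconstruct a 'staircase' signal: each sample is held constant
    until the next sample arrives."""
    t, out = 0.0, []
    while t < len(samples) * period:
        out.append((t, samples[int(t // period)]))
        t += dt
    return out

sine = lambda t: math.sin(2 * math.pi * 1.0 * t)   # 1 Hz test signal
xs = sample(sine, t_end=1.0, period=0.1)            # sampled at 10 Hz
stairs = zero_order_hold(xs, period=0.1, dt=0.025)
print(stairs[:4])  # the first sample is held for four dt steps
```

The staircase produced by `zero_order_hold` is exactly the appearance the article attributes to a sampled-and-reconstructed signal.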
1.3 Block diagram dialects
As mentioned previously, there are no formal definitions of the block diagram "language" outside of proprietary simulation tools, and the usage of such tools tends to diverge from common "book
usage". This gives one the freedom to invent one's own dialect of the language for one's own purposes, but it places the responsibility for writing a clear diagram on your shoulders.
Indeed, the blocks presented here are the invention of the author; they follow what I have seen to be common usage, but I have made changes from common usage where I feel that my way is more clear or
less ambiguous.
The major change that I've made to the common control system block diagramming language is in the multiple-input block. Common control system usage would show a summation block as in Figure 9; the
center figure is the older form, but you will see both of them.
The reason for the change is that in radio usage one will often see a frequency mixer block denoted as a circle with an 'x' inside, similar to a summation block, yet a frequency mixer is generally a
type of multiplication, which is certainly a different function from addition! Figure 10 shows an example, with a mixer/multiplication block on the right and an unambiguous multiplication block on the
left.
2 Analyzing Systems with Block Diagrams
Block diagrams would be useful enough if all they were used for were to communicate system structure. They can, however, also be used to analyze system behavior, under certain sets of
circumstances.
If you have a system that can be represented entirely in the z domain (or Laplace), and the transfer functions of all the blocks are known, then the overall transfer function of the system can be
determined from the transfer functions of the individual blocks and their interconnections.
Note that the above rule is not always altogether clear, because it is common practice to show a block diagram with a mix of z- or Laplace-domain blocks and nonlinear blocks, or with a mix of
Laplace-domain blocks (indicating a continuous-time system) and z-domain blocks. Such systems can have their differential equations extracted, but they cannot be directly analyzed with the methods
shown here.
2.1 Direct Equation Extraction
The most obvious method of analyzing a block diagram is to extract the relevant equations from it by inspection, then solve them directly. If the block diagram is fully specified this technique will
always deliver a mathematical model of the system. As we'll see this is not always the best way to proceed, but sometimes it is the only way.
Figure 11 is a quite simplified block diagram of a heater control system. The plant is modeled as a heating coil that is driven by a controlled voltage, whose power output will be proportional to the
square of the drive voltage. The temperature of the plant is modeled as a low-pass filtered version of the ambient temperature plus a temperature difference that's being driven by the heater coil.
The controller is a simple PI controller.
Developing the equations for the diagram in Figure 11 is tedious but straightforward. First, find the integrator voltage:
then the heater offset temperature:
then the plant temperature:
Since the system of equations given in (14) and (16) is nonlinear, it is not amenable to easy reduction, so in practice one would solve it by time-stepping, or by linearizing (16)
around some operating point and solving the resulting linear difference equation.
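A time-stepping solution of this kind can be sketched as below. Since equations (14) through (16) are not reproduced here, every numeric value in the sketch (gains, plant pole, setpoint) is a hypothetical placeholder; only the structure follows the text: a PI controller driving a squared heater nonlinearity into a low-pass thermal plant riding on the ambient temperature.

```python
# Time-stepping sketch of the nonlinear heater loop of Figure 11.
# All numeric values are hypothetical placeholders; only the loop
# structure follows the article: PI controller -> squared heater
# drive -> low-pass thermal plant plus ambient temperature.

def simulate(steps=20000, setpoint=50.0, ambient=20.0,
             kp=0.05, ki=0.0002, d=0.98, k_heat=6.0):
    temp, integ = ambient, 0.0
    for _ in range(steps):
        err = setpoint - temp
        integ += ki * err                   # integrator state
        drive = max(0.0, kp * err + integ)  # PI output; no negative drive
        power = drive ** 2                  # heater nonlinearity: P ~ V^2
        # low-pass plant riding on the ambient temperature
        temp = d * temp + (1 - d) * (ambient + k_heat * power)
    return temp

print(round(simulate(), 2))  # the integrator pulls the plant to 50.0
```

With a proportional-only controller (ki = 0) the same loop settles well short of the setpoint, which is one reason a PI controller is used here.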
2.2 Manipulating Block Diagrams
Initially one usually draws a structural block diagram: a diagram that shows how a system is put together. At some point one will wish to reduce this structural block diagram into a behavioral one.
While this can be done by the techniques shown in section 2.1 above, such techniques immediately sever the connection between the block diagram and the behavioral model, and can be very
counter-intuitive to use. It is often better to reduce a block diagram using the manipulation rules presented here.
There are four tools that you have to manipulate block diagrams. Given a block diagram that is described fully in the z domain or the Laplace domain these tools will allow you to fully analyze the
block diagram to extract the overall system behavior. If you observe their limitations you can also use these tools on a variety of other block diagrams. The four tools that you have are: cascading
gain blocks, moving summing junctions, combining summing junctions, and reducing loops.
The block diagram manipulations shown here only work if the blocks in question represent linear operations. All of the examples below show block manipulations on blocks containing transfer functions
in the z domain, however these manipulations can be carried out on any linear operation.
2.2.1 Loop Reduction
When a block diagram indicates a feedback loop you can reduce the loop to a single transfer function block as shown in Figure 12. If you look at the equations that govern the behavior of this block
diagram, you can see that the output signal is a function of the forward gain G and the error signal e:
The error signal, in turn, is a function of the input and output signals and the feedback gain:
By substituting expressions, we get
which reduces to
Example 1: Using Loop Reduction.
A feedback control system has a forward gain
and negative feedback with a gain of
Draw its block diagram, and find the overall transfer function for the system.
The block diagram is a simple feedback loop with the specified gains:
From the formula for loop reduction, the transfer function for the system is
This reduces down to the block diagram:
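The loop-reduction result can be checked numerically for static gains. The sketch below iterates the loop sample by sample and compares the settled output against G/(1 + GH); the gains are arbitrary illustrative values, and the iteration converges only when |GH| < 1.

```python
# Numerical check of the loop-reduction rule for static gains: a
# negative feedback loop with forward gain G and feedback gain H
# settles to y = G/(1 + G*H) * u.

def closed_loop_by_iteration(u, G, H, steps=200):
    y = 0.0
    for _ in range(steps):
        e = u - H * y        # error signal
        y = G * e            # forward path
    return y

G, H, u = 0.8, 0.5, 1.0
print(round(closed_loop_by_iteration(u, G, H), 6))   # iterated loop
print(round(G / (1 + G * H) * u, 6))                 # reduced block
```

Both lines print the same value, which is the point of the reduction: the single block is behaviorally identical to the loop.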
2.2.2 Cascading Gains
When two blocks are cascaded directly the transfer function of the combination is simply the product of the two transfer functions, as in Figure 14. Looking at the equations that govern the behavior
of the block diagram you can see that
From this, it is easy to see that
Example 2: Cascading Gains.
A feedback control system has a plant with a gain of
a controller with a gain of
and unity feedback. Draw its block diagram, and find the overall transfer function for the system.
The block diagram is:
From the gain cascade rule the forward gain is
Using the loop reduction rule with H(z) = 1 (unity feedback), the transfer function for the system is
This reduces down to the block diagram:
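Cascading can also be carried out mechanically on coefficient lists: multiplying two transfer functions multiplies their numerator polynomials and their denominator polynomials. The sketch below uses made-up first-order sections, with coefficients listed in ascending powers of z^-1.

```python
# Cascading two z-domain blocks multiplies their transfer functions.
# Each block is a (numerator, denominator) pair of coefficient lists
# in ascending powers of z^-1; the cascade is two polynomial products.

def poly_mul(a, b):
    out = [0.0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

def cascade(tf1, tf2):
    (n1, d1), (n2, d2) = tf1, tf2
    return poly_mul(n1, n2), poly_mul(d1, d2)

# Two illustrative first-order sections (coefficients are made up):
g1 = ([1.0], [1.0, -0.5])        # 1 / (1 - 0.5 z^-1)
g2 = ([2.0, 1.0], [1.0, -0.2])   # (2 + z^-1) / (1 - 0.2 z^-1)
print(cascade(g1, g2))
```

The resulting denominator is second order, as expected: the cascade keeps the poles of both sections.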
2.2.3 Summing Junctions
If a loop contains more than one summing junction it cannot be reduced by loop reduction, and cascading gains will not eliminate the extra junction. In order to reduce such a loop the summing
junctions must be moved around until the loop contains only one. This is done by propagating a transfer function backward through the junction, or its inverse forward through the junction, as shown
in Figure 15.
In the top case of Figure 15 the input/output relationship on the left is
Using the commutative property of multiplication, this translates to
which corresponds to the input/output relationship on the right.
In the bottom case of Figure 15 the input/output relationship on the left is
By using the inverse of the transfer function, the right side becomes
which is functionally the same as the left. Note that this manipulation only works if the operation in the block is invertible: if the block contained a zero gain, a sample or a hold, then the
operation would not be valid.
When a block diagram contains a pair of signal paths that originate from the same signal and terminate on a summing junction, the set can be reduced to a single block as shown in Figure 16. In this
case it can be seen that
From this it is easy to get
Example 3: Moving Summing Junctions.
Feedforward is often used in control systems to increase the system response speed without unduly pushing the control loop bandwidth. Figure 17 shows one such system. Reduce the block diagram in
Figure 17 down to a single block.
The loop in Figure 17 contains two summing junctions, which prevents the use of the loop reduction rule. We can either move the G[1] leg back to the left summing junction, cascading G[1] with 1/G[2],
or we can move the feedback leg forward to the right summing junction, putting a G[2] block in the feedback path. I'll do the latter, to avoid the inverse:
Using loop reduction for the right half and the parallel summation rule for the left gives us a transfer function of:
2.2.4 Multiple-Input Systems
So far I've presented systems with just one input and one output (often called SISO systems for “Single Input, Single Output”). Real systems aren't restricted to having just one input and one output,
and there is no reason that a block diagram has to be restricted in this way either. Even when you're working with a system that you'd like to treat as a SISO system, you'll find that reality may
place additional requirements on your analysis.
It is often very useful to model a disturbance to a system in a block diagram. Figure 18 shows a block diagram with the plant split up into an actuator and a mechanism, and a disturbance force added
to the force of the actuator on the mechanism. When dealing with such a system you are often interested in its ability to reject disturbances, so you often want to find the transfer function from
the intended input (u in this case) as well as the disturbance input (u[d]).
Both the transfer functions in question can be found almost by inspection from Figure 18. This is done by appealing to the fact that we're using a linear system model, which means that we can use
superposition. To find the transfer function from the intended input to the output we assume that the disturbance is set to zero and solve the system normally:
Similarly, to find the transfer function from the disturbance input to the output we set the intended input to zero and solve the resulting system:
Notice that the denominator in (37) and (38) is the same. This is a general characteristic of feedback control systems: no matter how many inputs or outputs the system has, no matter how the system
behavior may vary when you choose different points to inject signals or observe them, the underlying behavior of the system in terms of the poles of the transfer function will remain the same. The
only times that you will see two different characteristic polynomials in one system will either be because you have two disjoint systems that happen to be on the same page, or because the numerator
and denominator of the system share some roots.
There is one other thing to notice about the system: it is often not necessary to derive all of the transfer functions directly. In this case one could have observed that by moving the summing
junction for the disturbance back to the input summing junction the disturbance transfer function could be found from (37):
If you do the math you'll see that (39) and (38) are equal. In many cases it is much easier to derive the desired function by this method rather than solving the block diagram twice.
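Superposition itself is easy to demonstrate numerically. The sketch below uses simple stand-in blocks (a gain controller and a one-pole plant), not the article's actual system, and checks that the response to both inputs applied together equals the sum of the responses to each input alone.

```python
# Superposition check for a two-input loop in the shape of Figure 18:
# gain controller, disturbance added after the controller, one-pole
# plant, unity feedback. All gains are made-up illustrative values.

def run(u_cmd, u_dist, kc=0.5, a=0.9, b=0.1, steps=50):
    y, out = 0.0, []
    for n in range(steps):
        e = u_cmd[n] - y            # feedback subtraction
        f = kc * e + u_dist[n]      # disturbance enters after the controller
        y = a * y + b * f           # one-pole plant: y[n] = a*y[n-1] + b*f[n]
        out.append(y)
    return out

N = 50
step = [1.0] * N
zero = [0.0] * N
both = run(step, step)
separate = [c + d for c, d in zip(run(step, zero), run(zero, step))]
assert all(abs(x - y) < 1e-12 for x, y in zip(both, separate))
print("superposition holds")
```

This is exactly why setting one input to zero at a time is a legitimate way to extract each transfer function: the responses add back together in a linear system.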
Example 4: A Heater with PI Control.
Figure 19 shows a temperature control system where a plant's temperature is kept constant by controlling the power applied to a heater, but is disturbed by the ambient temperature (u[d]). The plant
transfer function is
The controller is a PI, with transfer function
where the controller gains have been adjusted for a stable, rapidly settling plant.
Find the system's response to a change in ambient temperature and solve for the steady-state component of this response.
Borrowing from (38) the disturbance transfer function is
Substituting in (40) and (41) this becomes
Let k[pp] = (1-d[1])(1-d[2]) and simplify to find the response to a change in ambient temperature:
To find the response to a steady-state change in ambient temperature it is necessary to make a couple of observations: First, the response to a steady-state change in ambient temperature will have a
number of components that go to zero as time goes to infinity, plus a unit step response. Second, we specified that the PI controller would be tuned such that the system is stable. This ensures that
all of the system poles lie in the set of z such that |z| < 1.
With the first of these two observations we can apply the final value theorem to get an expression for the steady state error:
With the second of these two observations we can deduce that the denominator of T[d](z) will be a finite number as z goes to 1; thus (45) can be simplified to
where k[0] is known to be non-zero. At this point it is easy to see that the steady-state error is equal to zero.
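The final value theorem used here can be illustrated numerically: for a stable discrete-time transfer function B(z)/A(z), the unit-step response settles at B(1)/A(1). The coefficients below are an arbitrary stable example, not the article's T_d(z), and A(z) is assumed normalized so its leading coefficient is 1.

```python
# Final-value-theorem sketch for a stable discrete transfer function
# H(z) = B(z)/A(z), coefficients in ascending powers of z^-1, with
# A(z) normalized so den[0] == 1.

def step_response(num, den, steps=200):
    y_hist = [0.0] * (len(den) - 1)   # past outputs y[n-1], y[n-2], ...
    out = []
    for n in range(steps):
        # unit-step input: u[n-k] is 1 once n - k >= 0, else still 0
        u_vals = [1.0 if n - k >= 0 else 0.0 for k in range(len(num))]
        y = sum(b * u for b, u in zip(num, u_vals))
        y -= sum(a * yp for a, yp in zip(den[1:], y_hist))
        y_hist = [y] + y_hist[:-1]
        out.append(y)
    return out

num = [0.2, 0.1]    # B(z): 0.2 + 0.1 z^-1
den = [1.0, -0.6]   # A(z): 1 - 0.6 z^-1 (pole at z = 0.6, stable)
final = step_response(num, den)[-1]
predicted = sum(num) / sum(den)   # B(1)/A(1)
print(round(final, 6), round(predicted, 6))  # both 0.75
```

Evaluating the polynomials at z = 1 is just the coefficient sums, which is why the steady-state value drops out so easily once stability guarantees the transients die away.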
2.2.5 Multiple Output Systems
A dual of the multiple-input, single-output (MISO) case is the single-input, multiple-output (SIMO) case. As with the multiple-input case, one often finds the multiple-output
case when we would rather be doing a simpler analysis.
Take the case of the command-following system in Figure 20 (which is just Figure 18 with the extra summing block removed and some extra signal labels). With such systems we're often concerned not
only with how well the output is going to follow the input, but in what size the control signals will be for a given input.
Assuming that we know the input, we only need to find the transfer function to have the output signal. As with the MISO case finding the transfer function from the input to the output can be done
nearly by inspection:
Example 5: Positioning System Drive Requirements.
A system as shown in Figure 20 is built with a voltage-driven, geared motor with time constant of approximately 20ms. The effective transfer function of the motor is
for per-unit drive and a motor output in radians. The controller has the transfer function
Find the transfer function from the system command input to the motor drive input. Find the maximum step command that can be given to the system without the motor's per-unit drive exceeding an
absolute value of 1. Find the maximum ramp command that can be given to the system without the motor's per-unit drive exceeding an absolute value of one. Think of a way that the motor could be
commanded to move 50% of its range without exceeding the maximum motor drive.
From (47) the transfer function from the command to the motor drive is
Then the response to a unit step is found by multiplying (51) by a step:
This response can be seen in Figure 21. As shown in the figure, the maximum value of the response occurs at the initial timestep, so the height of the response can be found by applying the initial
value theorem to (52), which gives a value of 9.5. Thus, the maximum step command that can be given is equal to 1/9.5 = 0.105.
The motor drive response to a unit ramp input can be found in a similar manner. Multiplying (51) by a ramp gives
Figure 22 shows the time-domain response. Clearly the final value of this response is also its maximum, so the maximum motor drive can be found using the final value theorem. The final value of U[cramp] is:
Thus, the maximum ramp that can be imposed is 1/49.9, or approximately 2%/step, or 200%/second.
There are a number of ways that the motor can be driven to a position without exceeding its maximum drive. One such way is shown in Figure 23, where the rate of change of the position command is
limited to a 2% per sample ramp. The resulting position command, motor drive command, and motor position response are all shown, and as can be seen the motor command stays within appropriate limits.
What are the applications of immanants?
Definitions of determinant:
$\det(A) = \sum_{\sigma \in S_n}\operatorname{sgn}(\sigma)\prod_{i}a_{i, \sigma(i)}$
and permanent:
$perm(A) = \sum_{\sigma \in S_n}\prod_{i}a_{i, \sigma(i)}$
admit a generalization in the form of immanant:
$Imm_{\lambda}(A) = \sum_{\sigma \in S_n}\chi_{\lambda}(\sigma)\prod_{i}a_{i, \sigma(i)}$
where $\lambda$ labels irreducible representations of $S_n$ and $\chi_{\lambda}$ is the character. Determinant and permanent are easily seen to be special cases of $Imm_{\lambda}$.
While determinants are ubiquitous in mathematics and permanents also have many applications, especially in combinatorial problems, other kinds of immanants seem to be rarely used. Are there any problems where the use of $Imm_{\lambda}$ other than $det$ and $perm$ is natural?
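For concreteness, the definition can be evaluated directly by summing over permutations once a class function is supplied. The sketch below is mine, not from the thread; it recovers the determinant and permanent as the immanants of the sign and trivial characters. Characters of other irreducible representations would have to come from a character table (e.g. produced by a CAS):

```python
from itertools import permutations

def sign(sigma):
    # parity of a permutation via its inversion count
    inv = sum(1 for i in range(len(sigma))
                for j in range(i + 1, len(sigma)) if sigma[i] > sigma[j])
    return -1 if inv % 2 else 1

def immanant(A, chi):
    """Sum over sigma in S_n of chi(sigma) * prod_i A[i][sigma(i)]."""
    n = len(A)
    total = 0
    for sigma in permutations(range(n)):
        prod = 1
        for i in range(n):
            prod *= A[i][sigma[i]]
        total += chi(sigma) * prod
    return total

A = [[1, 2], [3, 4]]
det_A = immanant(A, sign)          # sign character  -> determinant, -2
perm_A = immanant(A, lambda s: 1)  # trivial character -> permanent, 10
```

This brute-force sum is n!-time, which is fine for illustration but, as the answers below note, the interesting question is when anything faster is possible.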
Side question: where do the names permanent and immanant come from? Determinant was introduced by Gauss when dealing with quadratic forms, and in his context it does determine something... –
Mariano Suárez-Alvarez♦ May 28 '11 at 16:30
In the conclusion of the paper where the immanants are introduced, Littlewood and Richardson state that "the immanants of special matrices can be used to give results relating to the numbers of
BERNOULLI, STIRLING and EULER". Does anyone know more about this ? – Emmanuel Briand May 28 '11 at 21:05
@Mariano Suárez-Alvarez: The name "Permanent" comes from Cauchy's paper "Mémoire sur les fonctions qui ne peuvent obtenir que deux valeurs égales et de signes contraires par suite des
2 transpositions opérées entre les variables qu’elles renferment.". There he introduced the "fonctions symétriques permanentes" = invariant under column permutations. "Immanants" is introduced in
the paper "Group characters and invariants" by Littlewood and Richardson, with as a footnote: "A name suggested to us by Professor A. R. Forsyth". But I do not know any explanation for this
choice. – Emmanuel Briand May 28 '11 at 22:20
@Emmanuel: thanks! – Mariano Suárez-Alvarez♦ May 29 '11 at 4:27
1 Formally "immanant" is a participle of a made-up compound verb, "in+mano" (like emano from ex+mano), with not clear meaning. Otherwise it is a mispelling of immanent (from in+maneo, like permanent
from per+maneo), maybe to rhyme with determinant. – Pietro Majer May 29 '11 at 6:24
4 Answers
Yes, the immanants of positive definite matrices are very natural and their study goes back to I. Schur (1918). Start with the survey "Positivity Problems and Conjectures in Algebraic Combinatorics" by R. Stanley and follow the references, notably "Immanants of combinatorial matrices" by I.P. Goulden and D.M. Jackson, and "Hecke Algebra Characters and Immanant Conjectures" by M. Haiman, and several papers by J. Stembridge. I should mention that in recent years there have been various advances in this direction (see e.g. this paper by B. Rhoades and M. Skandera, and a number of followup papers).
I think immanants are also important in complexity theory because they provide a spectrum of problems with extreme cases from computing the determinant (polynomial time) to computing the permanent (#P-complete). P. Bürgisser has some articles in this direction, for instance "The computational complexity of immanants". See also "Complexity and completeness of immanants" by J-L. Brylinski and R. Brylinski.
A problem with applying immanants is that after expressing something in terms of immanants, you might not be closer to calculating them, and you can't easily manipulate expressions of immanants. For example, suppose you want to count Hamiltonian cycles in a graph (positivity solves the Hamiltonian cycle problem). You notice that this is expressible as a linear combination of immanants of the adjacency matrix:
$$\sum_{\sigma \in S_n} f(\sigma)\prod_{i}a_{i, \sigma(i)} $$
where $f(\sigma) = 1$ when $\sigma$ is an $n$-cycle and $0$ otherwise. Since $f$ is constant on conjugacy classes, we can decompose it as a linear combination of characters. However, the Hamiltonian cycle problem is NP-complete, so it would be surprising if there were a quick way to compute this linear combination of immanants. Either there are many terms to compute, or these terms are hard to compute, or both, even if you are just trying to determine whether the result is positive.
This is a comment, but too long, so I have put it down here.
I found the following link that lists several papers discussing immanants and conjectures related to them. Most of the applications mentioned therein seem to be graph theoretic.
Link to entry on Immanants
The only place where I have previously seen immanants is in the famous "Permanent on Top" conjecture, which states that for any positive semidefinite matrix $A$, we have
$$\text{per}(A) \ge \overline{\text{Imm}}_\chi(A),$$ where $\overline{\text{Imm}}_\chi(A)$ denotes the normalized immanant.
But I don't think that counts as an "application."
IACR News
Verifiable security is an emerging approach in cryptography that
advocates the use of principled tools for building machine-checked
security proofs of cryptographic constructions. Existing tools
following this approach, such as EasyCrypt or CryptoVerif, fall short
of finding proofs automatically for many interesting constructions. In
fact, devising automated methods for analyzing the security of large
classes of cryptographic constructions is a long-standing problem
which precludes a systematic exploration of the space of possible
designs. This paper addresses this issue for padding-based encryption
schemes, a class of public-key encryption schemes built from hash
functions and trapdoor permutations, which includes widely used
constructions such as RSA-OAEP.
Firstly, we provide algorithms to search for proofs of security
against chosen-plaintext and chosen-ciphertext attacks in the random
oracle model. These algorithms are based on domain-specific logics
with a computational interpretation and yield quantitative security
guarantees; for proofs of chosen-plaintext security, we output
machine-checked proofs in EasyCrypt. Secondly, we provide a crawler
for exhaustively exploring the space of padding-based encryption
schemes under user-specified restrictions (e.g. on the size of their
description), using filters to prune the search space. Lastly, we
provide a calculator that computes the security level and efficiency
of provably secure schemes that use RSA as trapdoor permutation.
Using these three tools, we explore over 1.3 million encryption
schemes, including more than 100 variants of OAEP studied in the
literature, and prove chosen-plaintext and chosen-ciphertext security
for more than 250,000 and 17,000 schemes, respectively.
Physics and
The departmental major provides a thorough grounding in the core areas of physics. It is suitable either as a preparation for careers in science and engineering, or as a spring-board for applying
technical knowledge in such fields as business, medicine, law, public policy and education.
[4 or 5 hours; note that only the second course in any sequence counts toward the physics major; as appropriate, any combination of 116a/118a + 116b/118b or 116a/118a + 121b or 121a + 121b or 121a +
116b/118b satisfies the Introductory Course requirement]:
• first semester introductory, calculus-based physics (pre-requisite for the major):
□ PHYS 116a (General Physics I) with PHYS 118a (General Physics I Lab) [offered in both Spring and Fall] [4] or
□ PHYS 121a: Principles of Physics I with lab [Fall, 5]
• second semester introductory, calculus-based physics [4 or 5]:
□ PHYS 116b (General Physics II) with PHYS 118b (General Physics II Lab) [offered in both Spring and Fall, 4] or
□ PHYS 121b: Principles of Physics II with lab [Fall, 5]
Note: Students may combine one semester of the 116/118 sequence with the complementary semester of the 121 sequence. The first semester of either introductory sequence is a pre-requisite for the
major, while the second semester begins the formal requirements of the major. Physics 105 and 110 are intended for students without strong backgrounds in mathematics or science. Neither course is
recommended as preparation for further study in a natural science; neither is appropriate for engineering, premedical, predental, or pre-physical therapy students. Neither counts toward the
physics major or minor.
Core Curriculum [19 hours]
□ PHYS 225 or 225W: Concepts and Applications of Quantum Physics I [Fall, 4]
□ PHYS 226 or 226W: Concepts and Applications of Quantum Physics II [Spring, 4]
□ PHYS 223: Thermal and Statistical Physics [3] or PHYS 223C: Computational Thermal and Statistical Physics [Fall, 3]
□ PHYS 227a: Classical Mechanics I [Fall, 3]
□ PHYS 229a: Electricity, Magnetism and Electrodynamics I [Spring, 3]
□ PHYS 250: Physics Undergraduate Seminar [1]
□ ASTR 250: Astronomy Undergraduate Seminar [1]
Nine Hours of Electives in Physics and/or Astronomy
With a cap of 6 of these 9 hours that can be earned from any combination of Directed Study (Phys 289 or Astr 289), Independent Study (Phys 291 or Astr 291), and/or Honors research (Phys 296 or Astr 296) [9 hours]
The electives required by the major may be satisfied by any combination of courses offered by the department that are at the 200 level or above, with the exception of the seminar courses Physics
250 and Astronomy 250 (one hour of each is already required for the major). A cap of 6 of these 9 hours can be earned from any combination of directed study (289), independent study (291), and/or
Honors research (296).
Other courses may count as an elective, such as courses offered by the engineering school (or other departments and schools) that are particularly relevant, such as a course in health physics,
optics, or materials science. Such exceptions must be approved by the department’s Undergraduate Program Committee. Other courses, such as 100-level courses in the physics department or
additional hours of the Physics or Astronomy seminar (250) will be considered with sufficient justification. The purpose of the above policy is to allow relevant courses to count without having
to specify them in advance, since it is expected that the relevant courses offered by other departments and schools will change and it is not practical to attempt to maintain a list of approved
electives. Majors should seek approval of an elective from their advisor prior to their taking the course and, if applicable, from the Department's Undergraduate Program Committee.
A description of courses may be found in the Undergraduate Catalog
Recommended Courses in other departments
The study of physics and astronomy requires a solid background in mathematics. We recommend that physics majors take the following mathematics courses:
□ Math 155a,b: Accelerated Single-Variable Calculus I and II [4-4] --- we strongly recommend Math 155a, 155b over the parallel sequence Math 150a, 150b, 170
□ Math 175: Multivariable Calculus [3]
□ Math 194: Methods of Linear Algebra [3] or Math 204: Linear Algebra [3]
□ note: well-prepared students can also take Math 205a [4] + 205b [4] to learn multivariable calculus and linear algebra
□ Math 198: Methods of Differential Equations [3] or Math 208: Introduction to Ordinary Differential Equations [3]
□ Math 229: Advanced Engineering Mathematics [3]
All fields of physics require the power of computers, and as a consequence we strongly recommend that students majoring in physics learn computer programming. Physics majors without previous
experience (i.e., AP credit) in Computer Science should start with (note: we do not recommend CS 103 for Physics majors):
□ CS 101: Programming and Problem Solving [3]
Advanced physics, astronomy, and mathematics courses that have significant computational components (and that also count toward the minor in Scientific Computing) include:
□ PHYS 223C: Computational Thermal and Statistical Physics [3]
□ PHYS 257: Computational Physics [3]
□ ASTR 253: Structure and Evolution of Galaxies [3]
□ ASTR 254: Structure Formation in the Universe [3]
□ MATH 226: Introduction to Numerical Mathematics
Research Activities
All majors are encouraged to participate in research projects under the direction of faculty and research staff. Students can undertake Directed Study, which allows a student to work together
with faculty on their research projects, or Independent Study, which involves research projects conceived and undertaken by individual students. Students majoring in physics may apply, at the
beginning of senior year, for admission to the departmental Honors Program.
Achieving Excellence in Liberal Education (AXLE)
(for Classes of 2009 and later)
Physics majors in the College of Arts and Science must complete the Achieving Excellence in Liberal Education (AXLE) program, which consists of 13 courses that must be taken at Vanderbilt. Of the
13 courses, one will be fulfilled through a first year writing seminar and two more through writing intensive (W) courses. The '200-level W course' can be satisfied with PHYS 225W or 226W.
Courses required for the Physics major will also 'double count' in satisfaction of the requirement for three 'Mathematics and Natural Science courses.'
□ Humanities and the Creative Arts (3 courses)
□ International Cultures (3 courses)
□ History and Culture of the United States (1 course)
□ Mathematics and Natural Sciences (3 courses, one of which must be a laboratory science)
□ Social and Behavioral Sciences (2 courses)
□ Perspectives (1 course)
Copyright 2010, Vanderbilt University
Finding My Summer Vacation's Shortest Path
August 21, 2012
Think of a map as an instance of a graph, the cities are the nodes and the roads between cities are the edges.
Back when gas prices were below a dollar per gallon, my family would take cross-country driving trips. When we were planning how to get from point A to point B, we consulted a paper map, especially
if we hadn't been there before. Besides allowing you to trace possible routes, a map usually had a lower triangular matrix with the precomputed distances between a set of cities found on the map (and
some that might be just off the edge of the paper). Looking at the intersection of rows/columns for, say, Albuquerque and Boston, I would find that the shortest route known to the cartographers was
2232 miles. Since there isn't one road that connects these two cities, how did those mapmakers compute the shortest distance between these two places? (Remember that Google Maps wasn't even the
glimmer in anyone's eye back then.)
Think of a map as an instance of a graph, the cities are the nodes and the roads between cities are the edges. The length of the road is the weight of the corresponding edge. The All-Pairs Shortest
Path problem asks you to compute the minimum length path (shortest distance) between all pairs of nodes within the graph. Thus, besides computing the shortest distance needed to drive between
Albuquerque to Boston, you can also find the minimum driving distance between Charlotte and Detroit, Ellensburg and Frankfort, or Greenwich and Hilton Head.
The All-Pairs Shortest Path problem takes a graph of n nodes represented by an n x n weight matrix, W. The result is an n x n matrix, D (for distance), where the D[i][j] entry holds the minimum
weight of the path from node i to node j. Entries in the W matrix can be zero, positive (if a direct edge lies between the two nodes), or the infinity value (∞) where there is no direct edge between
the nodes. The main diagonal of W is filled with zeroes to denote that it takes no time or distance to travel from a node to itself. It is okay to have negative weights in the W matrix as long as
there is no negative length cycle. This condition will assure that only simple paths are found. (For a graph that models physical distances, you won't have negative values. However, if the edge
weights are for something like tolls on the roads, a negative weight might represent a payment you receive for travelling between the two nodes.)
Floyd's Algorithm is a simple solution to this problem. The key point to the algorithm is to find the shortest path between node i and node j that includes (or excludes) node k on the path. For each
pair of nodes, each possible node k in the graph is tried in turn to find these shortest paths. The algorithm computes a series of successive D[k] matrices, one for each individual k, with the
previous D[k-1] matrix used in the computation. In formal notation, we compute the value of an [i][j] entry from the previous matrix (i.e., look for a shorter path that uses node k) with the
following formula:
D_k[i][j] = min( D_{k-1}[i][j], D_{k-1}[i][k] + D_{k-1}[k][j] )
After k iterations, the D[k] matrix will hold the lengths of the shortest paths that use the first k nodes as intermediate nodes in the path. Once the algorithm has iterated over all n nodes in the
graph, the D[n] graph will contain the shortest path lengths between all pairs of nodes with some combination of other nodes as intermediate nodes. This is the result that we are looking for.
From the above description, I can see that there will be a nested loop pair to compute the minimum path between each i,j pair. This computation is going to be the body of the loop that iterates over
all the choices for k. If the initial D[0] matrix is simply the original weight matrix, W, a serial code implementation of Floyd's Algorithm is shown here.
/* a suitable min() is assumed by the article; a simple macro will do */
#define min(a,b) ((a) < (b) ? (a) : (b))

void Floyds(float **D, int N)
{
   int i, j, k;

   for (k = 0; k < N; k++) {
      for (i = 0; i < N; i++) {
         for (j = 0; j < N; j++)
            D[i][j] = min(D[i][j], D[i][k] + D[k][j]);
      }
   }
}
I've elected to use a float array for both the weight and distance matrices. The constant FLT_MAX from the float.h include file (not shown, but assumed to be part of the initialization of W) is used
to play the role of infinity for nodes not directly connected by an edge.
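As a quick sanity check, here is the same algorithm transcribed into Python (my transcription, added for illustration; the article itself uses C) and run on a small four-node directed graph:

```python
INF = float('inf')  # plays the role of FLT_MAX for missing edges

def floyd(W):
    """All-pairs shortest paths; W is an n x n weight matrix."""
    n = len(W)
    D = [row[:] for row in W]  # D[0] is just the weight matrix W
    for k in range(n):
        for i in range(n):
            for j in range(n):
                # shorter path from i to j that passes through node k?
                D[i][j] = min(D[i][j], D[i][k] + D[k][j])
    return D

W = [[0, 3, INF, 7],
     [8, 0, 2, INF],
     [5, INF, 0, 1],
     [2, INF, INF, 0]]
D = floyd(W)
```

For this graph, the entry D[0][3] comes out as 6 (the path 0 → 1 → 2 → 3) rather than the direct edge of weight 7, which is exactly the kind of improvement each k-iteration is looking for.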
If you're in a hurry to find the distance between your starting point and somewhere exotic that is within your travel time budget, you might want to implement and run a parallel version of Floyd's
Algorithm on multiple cores. One point to consider in such a conversion is that the iterations of the outer loop are similar to a time series computation. What I mean is that the algorithm must
finish all updates to the D matrix entries for each k before going to the computation using k+1. Thus, the outer loop must be run in serial and the two inner loops are the only possible candidates
that can be made to run concurrently.
Scientific Determinism, Part II: Determinism, Defined
Having defined science, I should revisit first a prerequisite to applying this definition of "science" to the construction of problems and their solutions. It is necessary to understand what is
implied by ex-post "certainty", which may be more accurately described as determinism.
Determinism simply describes the concept that a process (X) will lead to some outcome (Y). X may be concrete, such as in the case of an object striking another. X may also be abstract, such as the
calculation of an arithmetic expression. In either of these cases, the result will be Y chosen from a range of all possible outcomes (the possibility set). Note that, insofar as it is a concept, Y
may be "nothing" and still would constitute a valid outcome for X, as would the complete end of existence for all objects and concepts related to X. In short: Define a process X to be an event in
time, space, or mental construct which must result in an outcome Y. We can assume that only one outcome Y can possibly occur, creating a one-to-one mapping from X to Y (more on this below).
The possibility set for Y may be finite, as in the case of a deck of cards, where Y is a single card chosen from the deck. Another example is the result of a fair die thrown at random. However, the
possibility set for Y may be infinite as well. If, instead of throwing a six-sided die, we define the possibility set for Y to be any real number on the range (0.5, 6.5), we would have an infinite
possibility set, the uniform distribution of which would result in the same arithmetic mean. It is not difficult to see that the vast majority of problems involve infinite possibility sets for Y. In
the simple case of a billiard ball striking another (X), for instance, we can think of countless possible (if improbable) outcomes (Y): perhaps one or the other ball will explode, or perhaps they are
made of liquid and will merge together, or perhaps the collision will expel their adherence to gravity and they will begin to float away. We could conceivably continue in this manner infinitely -
there is no finite list of possible outcomes. (Note that, conceivably, one could argue that my examples of finite possibility sets could in fact be infinite possibility sets: perhaps some unforeseen
phenomena will cause an outcome that appears highly implausible but is nonetheless conceivable; I would not argue this point, as it has no bearing on further topics.)
I have mentioned above that only one outcome Y is possible, such that X to Y is a one-to-one mapping. If you doubt this to be the case, consider that sets of outcomes {Y1, Y2} can be contained in the
possibility set. Then Y1, Y2, and the set {Y1, Y2} would all be in the possibility set for Y and would thus be valid outcomes. So long as Y1 and Y2 are not mutually exclusive, their joint occurrence
does not preclude us from stating that "only one outcome has occurred."
Determinism, then, is the idea that the single outcome Y results from X and comes from the possibility set for Y. This is subject to some interpretation because of the notion of causality. How can we
know that Y results from X? For determinism to be useful as a concept to build upon, we should not require proof of causality between X and Y. Causality can be loosely inferred from the knowledge
that X occurred and that the subsequent state of the objects and concepts related to X was Y. We cannot know with exactitude that Y resulted from X (in much the same way that we cannot know with
exactitude X and Y themselves!), but we may take it as a hypothesis that this is so based on knowledge about X and Y and their ordering.
Taken in this way, if we are presented with a process X, we may use our knowledge of X to form expectations of some future state of related objects and concepts Y. Our confidence in forming these
expectations depends on many things: our prior knowledge of the objects and concepts related to X, our experience with processes similar to X (including the observability of the results of those
processes), the complexity of X, the consistency of the information related to X, the breadth of the possibility set for Y, and still other factors.
It will be important to firm up my definition of "knowledge" at this point to complete the groundwork for further examination of the nature of problem solving. I have used it here to mean "believing
with certainty". If the thing to be known cannot be proven, as many things cannot be, then we will need to take care to ensure that true knowledge of the thing is not implied before continuing on.
Items in grey text above are topics of further consideration which will be linked back to this article (and this article updated to reflect the discussion of those topics as they are available here).
Post a reply
Discussion about math, puzzles, games and fun. Useful symbols: ÷ × ½ √ ∞ ≠ ≤ ≥ ≈ ⇒ ± ∈ Δ θ ∴ ∑ ∫ • π ƒ -¹ ² ³ °
You are not logged in.
• Index
• » Help Me !
• » logarthimic operations : order of precedance?
Post a reply
Topic review (newest first)
2005-08-26 11:20:16
right, I missed that ambiguity!
correct, mikau: it doesn't matter which comes first, not even the exponent rule... the only ordering you have to obey is PEMDAS, as MathsIsFun said.
2005-08-26 09:52:40
Parentheses, Exponents, Multiply and Divide, Add and Subtract
And I would say, for example, that "log x²" would really mean "log (x²)", not "(log x)²"
2005-08-26 09:45:26
ok, I guess I messed up when that happened. So it doesn't matter which comes first? Not even the exponent rule?
2005-08-26 09:39:43
if you use them correctly, there is no preferred order; you should be able to work any equation using any approach (i.e. order) you wish. just make sure that you always follow
the order of operation rules for arithmetic!
if you would post a specific question where you ran into trouble, I would be more than happy to explain what I'm talking about using your situation as an example...
oh and btw the second one should be: (log m) - (log n) = log (m/n)
2005-08-26 09:27:56
The rules for logarithms are:
log m + log n = log mn
log m - log n = log n/n
x log m = log of m to the x'th power.
But which comes first? Sometimes you get long logarithmic equations and it doesn't come out because I didn't use these simplifications in the correct order.
Which comes first?
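A quick numeric check (added here for illustration, using Python's math module, and using the quotient rule as corrected in the reply above) confirms all three identities:

```python
import math

m, n, x = 6.0, 7.0, 2.5

# product rule: log m + log n = log(mn)
assert math.isclose(math.log(m) + math.log(n), math.log(m * n))
# quotient rule: log m - log n = log(m/n)
assert math.isclose(math.log(m) - math.log(n), math.log(m / n))
# power rule: x log m = log(m^x)
assert math.isclose(x * math.log(m), math.log(m ** x))
```

Since each rule is an identity, applying them in any order gives the same result, as long as the ordinary arithmetic order of operations is respected at each step.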
[with patch] Don't factor discriminant for quadratic number fields
Reported by: robertwb Owned by: robertwb
Priority: major Milestone: sage-2.8.12
Component: number theory Keywords:
Cc: Merged in:
Authors: Reviewers:
Report Upstream: Work issues:
Branch: Commit:
Dependencies: Stopgaps:
The current implementation of quadratic number fields calculates the discriminant on initialization, which can be expensive and is unnecessary.
Elements are represented as a+b sqrt(D) / denom. I don't believe that we require D to be the discriminant, but this needs to be verified before a change is made. For efficiency reasons, it might be
worth doing trial division to reduce squares of small prime powers from D, as smaller D yields faster arithmetic.
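The trial-division idea can be sketched as follows; this is a hypothetical illustration (the function name and bound are mine, not part of the actual Sage patch), pulling squares of small primes out of D so that elements are represented over a smaller radicand:

```python
def reduce_square_factors(D, bound=100):
    """Write D = s**2 * d, with d free of squared primes up to `bound`.

    Hypothetical helper illustrating the trial-division idea for
    positive D; the real change would live in Sage's quadratic
    number field code.
    """
    s, p = 1, 2
    while p <= bound:
        while D % (p * p) == 0:
            D //= p * p
            s *= p
        p += 1
    return s, D

# sqrt(72) = 6*sqrt(2): arithmetic over sqrt(2) is cheaper than over sqrt(72)
s, d = reduce_square_factors(72)
```

Since elements are stored as (a + b*sqrt(D)) / denom, absorbing the square part s into b keeps the same field while shrinking D, which is exactly why smaller D yields faster arithmetic.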
Attachments (1)
Change History (6)
• Milestone set to sage-2.9
• Milestone changed from sage-2.9.1 to sage-2.8.12
• Owner changed from was to robertwb
• Status changed from new to assigned
• Summary changed from Don't factor discriminant for quadratic number fields to [with patch] Don't factor discriminant for quadratic number fields
• Resolution set to fixed
• Status changed from assigned to closed
This module is similar to Language.Syntactic.Sharing.Reify, but operates on AST (HODomain dom p) rather than a general AST. The reason for having this module is that when using HODomain, it is
important to do simultaneous sharing analysis and HOLambda reification. Obviously we cannot do sharing analysis first (using reifyGraph from Language.Syntactic.Sharing.Reify), since it needs to be
able to look inside HOLambda. On the other hand, if we did HOLambda reification first (using reify), we would destroy the sharing.
This module is based on the paper Type-Safe Observable Sharing in Haskell (Andy Gill, 2009, http://dx.doi.org/10.1145/1596638.1596653).
:: (Syntactic a, Domain a ~ HODomain dom p pVar)
=> (forall a. ASTF (HODomain dom p pVar) a -> Bool) A function that decides whether a given node can be shared
-> a
-> IO (ASG (FODomain dom p pVar) (Internal a), VarId)
Reifying an n-ary syntactic function to a sharing-preserving graph
This function is not referentially transparent (hence the IO). However, it is well-behaved in the sense that the worst thing that could happen is that sharing is lost. It is not possible to get false sharing.
express in simplest form
You can use LaTeX to make your question easier to understand. Here is my representation of the question: $\frac{\frac{2}{x^2} + \frac{2}{y^2}}{\frac{4}{xy}}$. I would go about this by simplifying the numerator on its own: cross-multiply to give just one denominator. Remember when dividing by a fraction that $\frac{\frac{a}{b}}{\frac{c}{d}} = \frac{ad}{bc}$. In other words, flip the fraction and multiply.
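Filling in the spoiler, one worked route (my own algebra, following the hint above) is:

$$\frac{\frac{2}{x^2} + \frac{2}{y^2}}{\frac{4}{xy}} = \frac{\frac{2y^2 + 2x^2}{x^2 y^2}}{\frac{4}{xy}} = \frac{2(x^2 + y^2)}{x^2 y^2}\cdot\frac{xy}{4} = \frac{x^2 + y^2}{2xy}$$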
Russell Gardens, NY Math Tutor
Find a Russell Gardens, NY Math Tutor
...Reading comprehension is one of the trickiest skills to impart to a student. At the same time, if anything will work, it has to be tutoring, a program developed for the particular student to go at their own pace and meet their individualized needs. My method begins with a careful assessment of how that student absorbs information and what their stumbling blocks seem to be.
55 Subjects: including geometry, algebra 1, algebra 2, English
...The writing section assesses whether you can concisely write two 30-minute organized essays. I am confident that I can help you significantly improve your scores in all three areas. You can
show the most improvement on the Writing Section because the SAT tests certain grammatical concepts and re-uses a lot of them.
26 Subjects: including trigonometry, algebra 1, algebra 2, calculus
...We can improve your (or your loved ones') understanding of math, fill the gaps in your knowledge, and make you a great test taker. I am an experienced Turkish private tutor and also a native
speaker of Turkish. Do you need some basic Turkish knowledge for your visit to Turkey?
25 Subjects: including algebra 1, SAT math, statistics, logic
...Please reach out and see if I can help you meet your educational goals. I have many years of extensive education and application in the business world. I would like to bring that help and
insight to you and help you find out what you really need in the "real world." Book knowledge is great, but knowing how to apply it is the most important thing.
4 Subjects: including statistics, SPSS, writing, biostatistics
...Finally, as a dancer and choreographer, I have 8 years experience teaching dance (ballet, modern dance, & yoga). I have a professional, outgoing demeanor and am committed to improving your
students' SAT scores and grades! I have worked for tutoring companies in the past but am advertising here ...
31 Subjects: including geometry, ACT Math, SAT math, reading
Related Russell Gardens, NY Tutors
Russell Gardens, NY Accounting Tutors
Russell Gardens, NY ACT Tutors
Russell Gardens, NY Algebra Tutors
Russell Gardens, NY Algebra 2 Tutors
Russell Gardens, NY Calculus Tutors
Russell Gardens, NY Geometry Tutors
Russell Gardens, NY Math Tutors
Russell Gardens, NY Prealgebra Tutors
Russell Gardens, NY Precalculus Tutors
Russell Gardens, NY SAT Tutors
Russell Gardens, NY SAT Math Tutors
Russell Gardens, NY Science Tutors
Russell Gardens, NY Statistics Tutors
Russell Gardens, NY Trigonometry Tutors
Nearby Cities With Math Tutor
Glen Oaks Math Tutors
Great Nck Plz, NY Math Tutors
Great Neck Math Tutors
Great Neck Estates, NY Math Tutors
Great Neck Plaza, NY Math Tutors
Harbor Hills, NY Math Tutors
Kensington, NY Math Tutors
Lake Success, NY Math Tutors
Little Neck Math Tutors
Manhasset Math Tutors
Plandome, NY Math Tutors
Saddle Rock Estates, NY Math Tutors
Saddle Rock, NY Math Tutors
Thomaston, NY Math Tutors
University Gardens, NY Math Tutors
|
{"url":"http://www.purplemath.com/russell_gardens_ny_math_tutors.php","timestamp":"2014-04-20T23:47:51Z","content_type":null,"content_length":"24493","record_id":"<urn:uuid:715bc390-68f8-4df0-aae3-2c64eac0c310>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00488-ip-10-147-4-33.ec2.internal.warc.gz"}
|
The apparent change in position of a relatively close object compared to a more distant background as the location of the observer changes. Annual parallax is the change in the apparent position of a
star resulting from Earth's annual motion around the Sun; it is defined as the angle subtended at the object by the semimajor axis of Earth's orbit.
The largest parallax known for an object outside the Solar System is that of the nearest star, Proxima Centauri, 0.762"; the first stellar parallax to be measured was that of 61 Cygni. Knowing the
parallax p of a star in arcseconds, it is a simple matter to work out the star's distance d in parsecs:
d = 1/p
In the case of Proxima Centauri, d = 1/0.762 = 1.31 pc. Multiplying this by 3.26 gives the distance in light-years (4.28 light-years).
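That arithmetic is easy to script; here is a short Python sketch (my own illustration — the function name is invented, not from this entry):

```python
# Trigonometric parallax: distance in parsecs is the reciprocal
# of the parallax angle in arcseconds, d = 1/p.
def parallax_distance_pc(p_arcsec):
    return 1.0 / p_arcsec

d_pc = parallax_distance_pc(0.762)      # Proxima Centauri
d_ly = d_pc * 3.26                      # parsecs -> light-years
print(round(d_pc, 2), round(d_ly, 2))   # 1.31 4.28
```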
There are other ways to measure stellar distances which, though not involving trigonometric parallax, are still referred to as varieties of parallax.
Spectroscopic parallax is the most widely used technique for determining the distances of stars that are too distant for their trigonometric parallaxes to be measured. From an analysis of a star's
spectrum, its position is determined on the Hertzsprung-Russell diagram, which gives the absolute magnitude. By comparing the star's absolute magnitude with its apparent magnitude, its distance
follows directly. Dynamical parallax is a method for determining the distance to a visual binary star. The angular diameter of the orbit of the stars around each other and their apparent brightness
are observed. The distance comes from an application of Kepler's laws of planetary motion and the mass-luminosity relation.
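The "compare absolute with apparent magnitude" step of spectroscopic parallax is the standard distance-modulus relation, m − M = 5 log10(d) − 5. A minimal Python sketch (my own addition; the function name and the sample magnitudes are invented):

```python
# Distance modulus: m - M = 5*log10(d) - 5, so d = 10**((m - M + 5) / 5) parsecs.
def distance_pc(m_apparent, M_absolute):
    return 10 ** ((m_apparent - M_absolute + 5) / 5)

# A star whose spectrum places it at M = 5 but which appears at m = 10
# must lie at 100 parsecs.
print(round(distance_pc(10.0, 5.0), 1))   # 100.0
```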
Related entry
lunar parallax
Related category
|
{"url":"http://www.daviddarling.info/encyclopedia/P/parallax.html","timestamp":"2014-04-17T01:23:48Z","content_type":null,"content_length":"8417","record_id":"<urn:uuid:d53762e4-370d-40a8-9709-bbf9475fed6c>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00574-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Amazon.com: The Shape of Inner Space: String Theory and the Geometry of the Universe's Hidden Dimensions (9780465020232): Shing-Tung Yau, Steve Nadis: Books
String theory says we live in a ten-dimensional universe, but that only four are accessible to our everyday senses. According to theorists, the missing six are curled up in bizarre structures known
as Calabi-Yau manifolds. In
The Shape of Inner Space
, Shing-Tung Yau, the man who mathematically proved that these manifolds exist, argues that not only is geometry fundamental to string theory, it is also fundamental to the very nature of our
Time and again, where Yau has gone, physics has followed. Now for the first time, readers will follow Yau’s penetrating thinking on where we’ve been, and where mathematics will take us next. A
fascinating exploration of a world we are only just beginning to grasp, The Shape of Inner Space will change the way we consider the universe on both its grandest and smallest scales.
|
{"url":"http://www.amazon.com/The-Shape-Inner-Space-Dimensions/dp/0465020232","timestamp":"2014-04-24T16:35:17Z","content_type":null,"content_length":"359377","record_id":"<urn:uuid:046c88c1-dda2-4175-94f7-e20aafa83e70>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00613-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Finding My Summer Vacation's Shortest Path
August 21, 2012
Back when gas prices were below a dollar per gallon, my family would take cross-country driving trips. When we were planning how to get from point A to point B, we consulted a paper map, especially
if we hadn't been there before. Besides allowing you to trace possible routes, a map usually had a lower triangular matrix with the precomputed distances between a set of cities found on the map (and
some that might be just off the edge of the paper). Looking at the intersection of rows/columns for, say, Albuquerque and Boston, I would find that the shortest route known to the cartographers was
2232 miles. Since there isn't one road that connects these two cities, how did those mapmakers compute the shortest distance between these two places? (Remember that Google Maps wasn't even the
glimmer in anyone's eye back then.)
Think of a map as an instance of a graph: the cities are the nodes and the roads between cities are the edges. The length of the road is the weight of the corresponding edge. The All-Pairs Shortest
Path problem asks you to compute the minimum length path (shortest distance) between all pairs of nodes within the graph. Thus, besides computing the shortest distance needed to drive between
Albuquerque to Boston, you can also find the minimum driving distance between Charlotte and Detroit, Ellensburg and Frankfort, or Greenwich and Hilton Head.
The All-Pairs Shortest Path problem takes a graph of n nodes represented by an n x n weight matrix, W. The result is an n x n matrix, D (for distance), where the D[i][j] entry holds the minimum
weight of the path from node i to node j. Entries in the W matrix can be zero, positive (if a direct edge lies between the two nodes), or the infinity value (∞) where there is no direct edge between
the nodes. The main diagonal of W is filled with zeroes to denote that it takes no time or distance to travel from a node to itself. It is okay to have negative weights in the W matrix as long as
there is no negative length cycle. This condition will assure that only simple paths are found. (For a graph that models physical distances, you won't have negative values. However, if the edge
weights are for something like tolls on the roads, a negative weight might represent a payment you receive for travelling between the two nodes.)
Floyd's Algorithm is a simple solution to this problem. The key point to the algorithm is to find the shortest path between node i and node j that includes (or excludes) node k on the path. For each
pair of nodes, each possible node k in the graph is tried in turn to find these shortest paths. The algorithm computes a series of successive D[k] matrices, one for each individual k, with the
previous D[k-1] matrix used in the computation. In formal notation, we compute the value of an [i][j] entry from the previous matrix (i.e., look for a shorter path that uses node k) with the
following formula:
Dk [i][j] = min(Dk-1 [i][j], Dk-1 [i][k] + Dk-1 [k][j])
After k iterations, the D[k] matrix will hold the lengths of the shortest paths that use the first k nodes as intermediate nodes in the path. Once the algorithm has iterated over all n nodes in the
graph, the D[n] graph will contain the shortest path lengths between all pairs of nodes with some combination of other nodes as intermediate nodes. This is the result that we are looking for.
From the above description, I can see that there will be a nested loop pair to compute the minimum path between each i,j pair. This computation is going to be the body of the loop that iterates over
all the choices for k. If the initial D[0] matrix is simply the original weight matrix, W, a serial code implementation of Floyd's Algorithm is shown here.
#include <math.h>   /* for fminf() */

void Floyds(float **D, int N)
{
    int i, j, k;
    for (k = 0; k < N; k++) {
        for (i = 0; i < N; i++) {
            for (j = 0; j < N; j++)
                D[i][j] = fminf(D[i][j], D[i][k] + D[k][j]);
        }
    }
}
I've elected to use a float array for both the weight and distance matrices. The constant FLT_MAX from the float.h include file (not shown, but assumed to be part of the initialization of W) is used
to play the role of infinity for nodes not directly connected by an edge.
If you're in a hurry to find the distance between your starting point and somewhere exotic that is within your travel time budget, you might want to implement and run a parallel version of Floyd's
Algorithm on multiple cores. One point to consider in such a conversion is that the iterations of the outer loop are similar to a time series computation. What I mean is that the algorithm must
finish all updates to the D matrix entries for each k before going to the computation using k+1. Thus, the outer loop must be run in serial and the two inner loops are the only possible candidates
that can be made to run concurrently.
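The serial algorithm is compact enough to cross-check on a toy graph. Here is a minimal Python sketch (my own, not from the article) run on a four-node example:

```python
INF = float("inf")   # stands in for "no direct edge"

def floyd_warshall(W):
    """All-pairs shortest path lengths from an n x n weight matrix W."""
    n = len(W)
    D = [row[:] for row in W]      # the D[0] matrix starts as a copy of W
    for k in range(n):             # this outer loop must stay serial
        for i in range(n):
            for j in range(n):
                if D[i][k] + D[k][j] < D[i][j]:
                    D[i][j] = D[i][k] + D[k][j]
    return D

W = [[0,   3,   INF, 7  ],
     [8,   0,   2,   INF],
     [5,   INF, 0,   1  ],
     [2,   INF, INF, 0  ]]
print(floyd_warshall(W)[0])   # [0, 3, 5, 6]
```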
|
{"url":"http://www.drdobbs.com/parallel/finding-my-summer-vacations-shortest-pat/240005912","timestamp":"2014-04-20T10:52:46Z","content_type":null,"content_length":"96598","record_id":"<urn:uuid:1d5bf86a-ee6b-402e-9236-014b121fca41>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00650-ip-10-147-4-33.ec2.internal.warc.gz"}
|
There are three steps to solving a math problem.
1. Figure out what the problem is asking.
2. Solve the problem.
3. Check the answer.
Sample Problem
Some of the world's most expensive television sets cost $140,000 each.
If you deposit $400 in the bank and earn 10% interest every year, how many years will it be before you're able to buy this television set?
1. Figure out what the problem is asking.
The problem is describing a geometric sequence. If you earn 10% interest per year, each year you'll have 1.1 times as much money as you did the previous year.
After one year you'll have 400(1.1) = 440,
after two years you'll have 400(1.1)^2 = 484,
and after n years you'll have
a[n] = 400(1.1)^n.
We want to know what value of n makes a[n] ≥ 140,000.
2. Solve the problem.
Solve the inequality.
140,000 < a[n]
140,000 < 400(1.1)^n
350 < (1.1)^n
log_1.1(350) < n
61.46 < n
It would take 62 years before you had enough money to buy the television.
3. Check the answer.
We'll check the answer by finding a[61] and a[62].
After 61 years, the amount of money you have is
a[61] = 400(1.1)^61 = 133971.92.
That's not quite enough. After 62 years you have
a[62] = 400(1.1)^62 = 147369.11.
That is enough.
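The three steps are easy to replay mechanically. A short Python sketch (my addition) reproduces the figures above:

```python
import math

principal = 400
rate = 1.1
target = 140_000

# Smallest whole number of years n with 400 * 1.1**n >= 140,000,
# i.e. n >= log base 1.1 of 350.
n = math.ceil(math.log(target / principal, rate))
print(n)                                 # 62
print(round(principal * rate ** 61, 2))  # ~133971.92: not quite enough
print(round(principal * rate ** 62, 2))  # ~147369.11: enough
```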
|
{"url":"http://www.shmoop.com/sequences/solving-math-problem.html","timestamp":"2014-04-18T18:22:48Z","content_type":null,"content_length":"26894","record_id":"<urn:uuid:c25268f0-d236-4ec1-9ed6-080b5e161193>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00621-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Mystery bug in simple program? - I am beginner
04-07-2005 #1
Join Date
Apr 2005
I am just beginning to write C++ and have come across a bug in a simple MS-DOS program. The program is a basic calculator. When it is run and a non-numeric entry is entered into a 'float' field, the text goes haywire and keeps repeating itself.
Please, Please, Please will somebody help me as this is my first real program. Thank You, Archie
Here is the program, created using Bloodshed Dev-C++:
#include <iostream>
using namespace std;
int main()
{
    for ( float x = 0; x < 1; x + 0 ) {
        float number;
        cout<<"CALCULATOR\nArchie Lodge\n\nTo multiply, type '1'\nTo divide, type '2'\nTo add, type '3'\nTo subtract, type '4'\nType number and then press Enter...";
        cin>> number;
        if ( number == 1 ) {//MULTIPLY
            float thisisanumber;
            float thisisnumbertwo;
            cout<<"Please enter the first number to be multiplied: ";
            cin>> thisisanumber;
            cout<<"Please enter the number to multiply the above number by: ";
            cin>> thisisnumbertwo;
            cout<<"The numbers you entered, multiplied together are: "<< thisisanumber * thisisnumbertwo<<"\n";
            cout<< thisisanumber <<" times " <<thisisnumbertwo<<" is "<< thisisanumber * thisisnumbertwo<<"\n\nPlease press enter twice to perform another calculation.\n";
        }
        else if ( number == 2 ) {//DIVIDE
            float thisisanumber;
            float thisisnumbertwo;
            cout<<"Please enter the first number to be divided: ";
            cin>> thisisanumber;
            cout<<"Please enter the number for the above number to be divided by: ";
            cin>> thisisnumbertwo;
            cout<<"The numbers you entered, divided are: "<< thisisanumber / thisisnumbertwo<<"\n";
            cout<< thisisanumber <<" divided by " <<thisisnumbertwo<<" is "<< thisisanumber / thisisnumbertwo<<"\n\nPlease press enter twice to perform another calculation.\n";
        }
        else if ( number == 3 ) {//ADD
            float thisisanumber;
            float thisisnumbertwo;
            cout<<"Please enter the first number to be added: ";
            cin>> thisisanumber;
            cout<<"Please enter the number to add to the above number: ";
            cin>> thisisnumbertwo;
            cout<<"The numbers you entered, added together are: "<< thisisanumber + thisisnumbertwo<<"\n";
            cout<< thisisanumber <<" plus " <<thisisnumbertwo<<" is "<< thisisanumber + thisisnumbertwo<<"\n\nPlease press enter twice to perform another calculation.\n";
        }
        else if ( number == 4 ) {//SUBTRACT
            float thisisanumber;
            float thisisnumbertwo;
            cout<<"Please enter the starting number: ";
            cin>> thisisanumber;
            cout<<"Please enter the number to be subtracted from the above number: ";
            cin>> thisisnumbertwo;
            cout<<"The first number you entered, minus the second number are: "<< thisisanumber - thisisnumbertwo<<"\n";
            cout<< thisisanumber <<" minus " <<thisisnumbertwo<<" is "<< thisisanumber - thisisnumbertwo<<"\n\nPlease press enter twice to perform another calculation.\n";
        }
        else {//ERROR
            cout<<"Please enter either 1, 2, 3 or 4. Press enter to continue.\n";
        }
    }
}
Well, that's simple enough. Don't enter anything other than a number when asking for a number
Seriously, though, if you want to make sure that doesn't happen, you can read in the input as a string and then parse that to get your float. That way, if the user decides to enter their favorite
color, you can throw an error and tell the user to enter a number.
Couple of tips:
'thisisanumber' and 'thisisnumber2' are declared more than once. You don't need that. Also, it'd be much neater if you separated your tasks into user-defined functions and indented your code
right. Also, switch/case statements are better to use than a bunch of else ifs.
And don't forget to return 0.
....Also, your program won't allow someone to "press Enter twice to do another calculation" because you're not using any kind of loop to bring it back.
Last edited by Krak; 04-07-2005 at 01:45 PM.
'thisisanumber' and 'thisisnumber2' are declared more than once.
Not exactly. Those variables are declared within each branch of the program, so what he's doing isn't illegal since each thisisanumber is in a different block. However, it would be best, I think,
to just declare those variables at the beginning of main().
When you enter non-numeric input where numeric input is expected, that non-numeric input sets the input stream into a bad state and the data doesn't get extracted from the stream. All attempts to
read from the stream will automatically fail until you reset the stream's error flags and clear out the bad data. Only then will you be able to continue to process input as normal.
To make this easy, it might make sense to create a function to wrap all this code:
#include <limits>
bool GotAFloat(char* msg, float& f)
{
    cout << msg;
    if( !(cin >> f) )
    {
        // Clear error flags
        cin.clear();
        // Pass over any bad data in the stream
        cin.ignore(numeric_limits<streamsize>::max(), '\n');
        return false;
    }
    return true;
}
And then instead of calling the stream extractor directly in your code whenever you want to get a float, you would instead call the function:
//Instead of following
cout<<"Please enter the first number to be multiplied: ";
cin>> thisisanumber;
// Do this instead
while( !GotAFloat("Please enter the first number to be multiplied:",thisisanumber) );
for ( float x = 0; x < 1; x + 0 ) {
An easier way to do that is:
while(1) {
You should probably also have an option 5 to exit the program as well that would call break to exit the loop and the program.
Last edited by hk_mp5kpdw; 04-08-2005 at 10:46 AM.
"Owners of dogs will have noticed that, if you provide them with food and water and shelter and affection, they will think you are god. Whereas owners of cats are compelled to realize that, if
you provide them with food and water and shelter and affection, they draw the conclusion that they are gods."
-Christopher Hitchens
hk_mp5kpdw, where do you put the block of code?
I have received a recommendation from hk_mp5kpdw to put a block of code into my program. I have fixed all of the other issues brought up. Thank you, everybody!!!
This is the confusing piece of code:
To make this easy, it might make sense to create a function to wrap all this code:
#include <limits> //I know that this is at the top.
bool GotAFloat(char* msg, float& f) //But where does this go?
{
    cout << msg;
    if( !(cin >> f) )
    {
        // Clear error flags
        cin.clear();
        // Pass over any bad data in the stream
        cin.ignore(numeric_limits<streamsize>::max(), '\n');
        return false;
    }
    return true;
}
And then instead of calling the stream extractor directly in your code whenever you want to get a float, you would instead call the function:
//Instead of following
cout<<"Please enter the first number to be multiplied: ";
cin>> thisisanumber;
// Do this instead
while( !GotAFloat("Please enter the first number to be multiplied:",thisisanumber) );
for ( float x = 0; x < 1; x + 0 ) {
An easier way to do that is:
while(1) {
You should probably also have an option 5 to exit the program as well that would call break to exit the loop and the program.
In your case I would think that you could put the GotAFloat function in between the using namespace std; line and the int main() line. You could put it after the end of the main function but then
you would need to prototype it.
"Owners of dogs will have noticed that, if you provide them with food and water and shelter and affection, they will think you are god. Whereas owners of cats are compelled to realize that, if
you provide them with food and water and shelter and affection, they draw the conclusion that they are gods."
-Christopher Hitchens
Thank you so much... It works!
|
{"url":"http://cboard.cprogramming.com/cplusplus-programming/64054-mystery-bug-simple-program-i-am-beginner.html","timestamp":"2014-04-17T14:01:40Z","content_type":null,"content_length":"77004","record_id":"<urn:uuid:82ae3885-fd1e-4639-8d12-a2c5e05acaef>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00611-ip-10-147-4-33.ec2.internal.warc.gz"}
|
September 1, 2010
By omegahat
Over the past 10 years, I have been torn between building a new stat. computing environment and trying to overhaul R. There are many issues on both sides. But the key thing is to enable doing new and better things in stat. computing rather than just making the existing things easier and more user-friendly.
If we are to continue with R for the next few years, it is essential that it get faster.
There are many aspects to this. One is compiling interpreted R code into something faster.
LLVM is a toolkit that facilitates the compilation of machine code. So in the past few days
I have looked into this and developed an R package that provides R-bindings to some of
the LLVM functionality.
The package is available from http://www.omegahat.org/Rllvm, as are several examples
of its use.
I used the package to implement a compiled version of one of Luke Tierney’s compilation examples
which uses a loop in R to add 1 to each element of a vector. The compiled version gives a speedup
of a factor of 100, i.e. 100 times faster than interpreted R code. This is slower than x + 1
in R which is implemented in C and does more. But it is a promising start. The compiled version is also faster than bytecode interpreter approaches. So this is reasonably promising.
Of course, it would be nicer to leverage an existing compiler! (Think SBCL and building on top of LISP).
|
{"url":"http://www.r-bloggers.com/rllvm/","timestamp":"2014-04-18T20:46:09Z","content_type":null,"content_length":"35083","record_id":"<urn:uuid:fc78c074-35ef-496d-8a1e-0a4848b7258f>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00638-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Could someone please check my math on this question involving equivalent rates?
August 20th 2012, 08:40 PM
Could someone please check my math on this question involving equivalent rates?
**Funny side note: While typing this up, as a question, half way through I think I figured it out, which is why this is now a "check my math" post** :D
I'm currently getting my grade 11 college math (Functions and Applications, MCF3M) through a learn from home program. Unfortunately I ran into a little trouble with this problem, which had me
stumped :p. I would be very thankful to anyone who could check my math :)
By the way, sorry if this is not a question involving equivalent rates but it's the title of the section this is in, in my book, and I wasn't sure what it is considered. :P
Nader plans to buy a used car. He can afford to pay $280 at the end of each month for three years. The best interest rate he can find is 9.8%/a, compounded monthly. For this interest rate, the
most he could spend on a vehicle is $8702.85.
Determine the amount he could spend on the purchase of a car if the interest rate is 9.8%/compounded annually.
First of all, I understand the first part about him being able to spend $8702.85. I know this can be figured out using the present value formula:

$PV = R \cdot \frac{1 - (1 + i)^{-n}}{i} = 280 \cdot \frac{1 - (1 + 0.098/12)^{-36}}{0.098/12}$

Which, rounded, gives you $8702.85
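As a numeric sanity check of that figure, the standard present-value-of-an-annuity formula (my assumption about what the textbook used) reproduces it in a few lines of Python:

```python
def annuity_pv(payment, i, n):
    """Present value of n end-of-period payments at per-period rate i."""
    return payment * (1 - (1 + i) ** -n) / i

# $280/month for 3 years at 9.8%/a compounded monthly
print(round(annuity_pv(280, 0.098 / 12, 36), 2))   # ~8702.85
```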
And I think this is how you figure out the question (though like I said I'm unsure):
1) Find the (monthly?) interest rate (of the compounded annually plan?)
2)Use this new interest rate in the present value formula used earlier
Which, rounded, gives you $9954.64
Therefore Nader could spend $9954.64 on the purchase of a car.
*NOTE: I am also unsure whether I should be using the future value formula in step 2 instead of the present value formula.*
Like I said, if someone could look this over and tell me if I did it right or what I did wrong I would be EXTREMELY thankful! :D
- Lliam
August 26th 2012, 05:01 PM
Re: Could someone please check my math on this question involving equivalent rates?
This question has been solved here: Could someone please check my math? :)
- Lliam
|
{"url":"http://mathhelpforum.com/business-math/202388-could-someone-please-check-my-math-question-involving-equivalent-rates-print.html","timestamp":"2014-04-19T08:08:45Z","content_type":null,"content_length":"5989","record_id":"<urn:uuid:7f5922b8-4198-4bda-83ef-85358b8927a7>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00157-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Chapter 10: Advanced hypercontractivity
In this chapter we complete the proof of the Hypercontractivity Theorem for uniform $\pm 1$ bits. We then generalize the $(p,2)$ and $(2,q)$ statements to the setting of arbitrary product probability
spaces, proving the following:
The General Hypercontractivity Theorem Let $(\Omega_1, \pi_1), \dots, (\Omega_n, \pi_n)$ be finite probability spaces, in each of which every outcome has probability at least $\lambda$. Let $f \in L^2(\Omega_1 \times \cdots \times \Omega_n, \pi_1 \otimes \cdots \otimes \pi_n)$. Then for any $q > 2$ and $0 \leq \rho \leq \frac{1}{\sqrt{q-1}} \cdot \lambda^{1/2-1/q}$, \[ \|\mathrm{T}_\rho f\|_q \leq \|f\|_2 \quad\text{and}\quad \|\mathrm{T}_\rho f\|_2 \leq \|f\|_{q'}. \] (In fact, the upper bound on $\rho$ can be slightly relaxed to the value stated in Theorem 17 of this chapter.)
We can thereby extend all the consequences of the basic Hypercontractivity Theorem for $f : \{-1,1\}^n \to {\mathbb R}$ to functions $f \in L^2(\Omega^n, \pi^{\otimes n})$, except with quantitatively
worse parameters depending on “$\lambda$”. We also introduce the technique of randomization/symmetrization and show how it can sometimes eliminate this dependence on $\lambda$. For example, it’s used
to prove Bourgain’s Sharp Threshold Theorem, a characterization of boolean-valued $f \in L^2(\Omega^n, \pi^{\otimes n})$ with low total influence which has no dependence at all on $\pi$.
Is $q'$ defined here?
|
{"url":"http://www.contrib.andrew.cmu.edu/~ryanod/index.php?p=1517","timestamp":"2014-04-19T22:35:39Z","content_type":null,"content_length":"63306","record_id":"<urn:uuid:f19e52b9-ff45-4826-911c-43a290dd8b8a>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00450-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Teterboro Calculus Tutor
Find a Teterboro Calculus Tutor
...I've been teaching 3D-animation with Maya for over 4 years. Prior to that, I taught my robotics teams to create 3D-animations for competitions using Maya and 3D-Studio Max. I'm also proficient
with GNU 3D programs such as Blender.
83 Subjects: including calculus, chemistry, physics, statistics
...It is vital that the math fundamentals are present for any student in order to become successful in any capacity. Obviously, most people will not go on to become rocket scientists, but a solid
understanding of fractions, decimals, and percentages is essential to functioning in everyday life. I ...
26 Subjects: including calculus, writing, statistics, geometry
...I'm now a PhD student at Columbia in Astronomy (have completed two Masters by now) and will be done in a year. I have a lot of experience tutoring physics and math at all levels. I have been
tutoring since high school so I have more than 10 years of experience, having tutored students of all ages, starting from elementary school all the way to college-level.
11 Subjects: including calculus, Spanish, physics, geometry
...I tutor mainly in mathematics, and I hold a Bachelor's in Mathematics from Rutgers University. I find that the most detrimental thing to a mathematics student is lacking a core foundation. In
math, everything you learn builds on top of what you learned in previous years, and without that strong foundation, students can fall behind.
21 Subjects: including calculus, geometry, statistics, accounting
I have taught Mathematics in courses that include basic skills (arithmetic and algebra), probability and statistics, and the full calculus sequence. My passion for mathematics and teaching has
allowed me to develop a highly intuitive and flexible approach to instruction, which has typically garnere...
7 Subjects: including calculus, geometry, algebra 1, algebra 2
|
{"url":"http://www.purplemath.com/Teterboro_Calculus_tutors.php","timestamp":"2014-04-18T21:48:42Z","content_type":null,"content_length":"24049","record_id":"<urn:uuid:63b9b7d4-3e22-41eb-b14b-e227378b818f>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00631-ip-10-147-4-33.ec2.internal.warc.gz"}
|
QIC 890/CO 781:
Theory of Quantum Communication, Fall 2012
Debbie Leung
Email: wcleung(at)uwaterloo(dot)ca
Starting Sept 11: Tue/Thur 1:05-1:25pm, RAC1 2009 (until IQC moves to QNC)
Instructor office hours:
In general: you can just drop by my RAC office Mon/Tue/Thur afternoons. Note exceptions due to colloquium, Tue/Thur seminars, and of course, if you want to be sure I'm in, email me first.
Textbook by Nielsen and Chuang
7 assignments (70%)
1 term project resulting in a term paper and a presentation (30%)
Posted Sept 04, 2012, 01:00
Webpage was set up.
Posted Sept 11, 2012, 02:45
This concerns the reference from QIC 710:
Besides F2008 Lec 1-3 p10-22, F2009 Lec 14-15 (stated in the handout) the following can be particularly helpful:
F2008 Lec 1-3 p20-22, p33-37 describe in detail what happens if one unitarily transforms or measures one out of two systems, p31-43 has a much longer version of teleportation, and p44-47
discusses no cloning.
F2008 Lec 10-14 p66-69 covers trace and partial trace, p71-77 covers general quantum operations.
Posted Sept 13, 2012, 01:00
Assignment 1 was posted.
Posted Sept 18, 2012, 01:50
Preskill's Ph229 lecture notes, Chapter 5 p20-28 has a nice description of quantum data compression.
Posted Sept 21, 2012, 02:18
Assignment 2 was posted.
Course outline, tentative schedule of lectures and assignments:
Topic 1 -- Converting noiseless resources [2 lectures]
Primitives & axioms for quantum communication.
Superdense coding (SD) and teleportation (TP), optimality and duality.
Concepts of simulation and resource inequalities.
Cobits, SD and TP as inverses of one another.
Cryptographic properties of SD and TP (optional)
~ Assignment 1 out Sept 13, due Sept 20.~
Topic 2 -- Data Compression [1.5 lectures]
Asymptotic equipartition theorem (AEP)
Data compression (Shannon's noiseless coding theorem)
Shannon entropy
Quantum ensembles and quantum data compression
Von Neumann entropy
Topic 3 -- Quantifying information [2.5 lectures]
Conditional entropy and mutual information in the classical and quantum cases.
~ Assignment 2 out Sept 20, due Sept 27 ~
Strong subadditivity and monotonicity of quantum mutual information, Araki-Lieb inequality
Accessible information
Holevo information, Holevo bound
Changes in entropy, accessible information, and Holevo information caused by communication, locking
Bounds on the Holevo information (and a bit of reflection)
Entanglement and back communication cannot increase communication rates (beyond SD)
~ Assignment 3 out Sept 27, due Oct 4 ~
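The key inequality of this topic, stated for reference (standard form, assuming an ensemble {p_x, ρ_x}): the accessible information is bounded by the Holevo quantity,

```latex
I_{\mathrm{acc}} \;\le\; \chi \;=\; S\!\left(\sum_x p_x \rho_x\right) \;-\; \sum_x p_x\, S(\rho_x).
```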
Topic 4 -- Classical communication via classical channels [2 lectures]
Classical iid channels
Shannon's noisy coding theorem.
Converse, and the direct coding theorem
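Shannon's noisy coding theorem above can be summarized in one line (standard statement for a discrete memoryless channel p(y|x)):

```latex
C \;=\; \max_{p(x)}\, I(X;Y),
\qquad
I(X;Y) \;=\; H(X) + H(Y) - H(X,Y).
```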
Topic 5 -- Classical communication via quantum channel [4 lectures]
The HSW theorem for the classical capacity of a quantum channel
(Tools: pretty good measurement, gentle measurement lemma, conditional typicality, random codes)
Additivity issues, the existence of counterexamples.
~ Assignment 4 out Oct 9, due Oct 18 ~
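For reference, the HSW theorem discussed above gives the classical capacity as a regularized Holevo quantity (standard statement):

```latex
C(\mathcal{N}) \;=\; \lim_{n\to\infty} \frac{1}{n}\,\chi\!\left(\mathcal{N}^{\otimes n}\right),
\qquad
\chi(\mathcal{N}) \;=\; \max_{\{p_x,\rho_x\}}
\left[ S\!\left(\mathcal{N}(\bar\rho)\right) - \sum_x p_x\, S\!\left(\mathcal{N}(\rho_x)\right) \right],
\quad \bar\rho = \sum_x p_x \rho_x.
```

If χ were additive the regularization would collapse to the single-letter χ(N); the counterexamples mentioned above show it does not in general.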
Topic 6 -- Quantum communication via quantum channel [4 lectures, starting week 7]
Definition of the quantum capacity and different measures for transmission with vanishing error.
Quantum error correcting codes
Coherent information of quantum states and quantum channels
The LSD theorem for the quantum capacity of a quantum channel
(Tools: Fannes inequality (converse), decoupling lemma (direct coding), Uhlmann's theorem, random codes)
~ Assignment 5 out Oct 25, due Nov 01 ~
Different approaches and coding methods for the LSD theorem
Isometric extensions and complementary channels
Degradable and antidegradable channels (where LSD works)
Upper and lower bounds of quantum capacities, additive extensions, zero capacity conditions
Degenerate codes and statement of superactivation
~ Assignment 6 out Nov 01, due Nov 08 ~
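For reference, the LSD theorem above expresses the quantum capacity as regularized coherent information (standard statement; N^c denotes the complementary channel to the environment):

```latex
Q(\mathcal{N}) \;=\; \lim_{n\to\infty} \frac{1}{n}\, \max_{\rho}\, I_c\!\left(\rho, \mathcal{N}^{\otimes n}\right),
\qquad
I_c(\rho, \mathcal{N}) \;=\; S\!\left(\mathcal{N}(\rho)\right) - S\!\left(\mathcal{N}^c(\rho)\right).
```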
Topic 7 -- Assisted capacities [5 lectures, starting week 8]
Entanglement assisted quantum capacity
Quantum capacity assisted by free classical communication
Mixed state entanglement purification and error correction (optional)
The Brady bunch
No-go for catalysis of capacities by noiseless resources
Separations of capacities
Rocket channel
~ Assignment 7 out Nov 13, due Nov 20 ~
Topic 8 -- Selected topics [2 lectures], from:
Gaussian channels
The Quantum reverse Shannon theorem
Continuity of channel capacities
Network coding
Two-way channels
Zero-error communication
NB. Nov 15 class cancelled.
NB. Timing and plans for topics 7 and 8 are tentative until further notice.
Student presentations -- Nov 27, 29. Term paper due Nov 29.
Notes for some of the lectures:
Basic principles in communication theory, optimality of superdense coding and teleportation, cobit, duality of superdense coding and teleportation with cobits.
Asymptotic equipartition theorem, Shannon entropy, data compression, and quantum sources.
von Neumann entropy, Schumacher compression, properties of Shannon entropy, conditional entropy, relative entropy, and mutual information.
Quantum conditional entropy, quantum relative entropy, and quantum mutual information, and their properties. Strong subadditivity, monotonicity of quantum mutual information, Araki-Lieb inequality,
Holevo information for an ensemble (as the quantum mutual information of a classical-quantum system).
Accessible information, progress towards finding optimal measurements, Peres-Wootters example, lower and upper bounds, Holevo bound. [Lec 7: Communication bounds with two-way quantum
communication, locking of accessible information.]
Classical channels, definition of achievable rates and channel capacity, Shannon noisy channel coding theorem. [Lec 9: Fano's inequality and data processing inequality]
Q-boxes, statement of classical capacity of Q-boxes, gentle measurement lemma, and pretty good measurement.
The packing lemma, and the direct coding theorem for the capacity of the Q-boxes.
The converse for the capacity theorem for Q-boxes. The HSW theorem for the classical capacity of a quantum channel. Consequences of the HSW theorem, and discussion on the "1-shot capacity".
Additivity and violation of additivity of the Holevo information. Equivalences of 4 additivity conjectures.
Definition of quantum capacities, and different measures of faithful transmission of quantum states.
Further discussion of distance measures for channels.
Quantum error correcting codes, brief discussion of degenerate codes and the stabilizer formalism, coherent information of quantum states.
Properties of coherent information, statement of the LSD (Lloyd-Shor-Devetak) theorem for the quantum capacity of a quantum channel, the converse, and the decoupling lemma.
Direct coding half of the LSD theorem. Variations on the codes and proofs (not covered in class). Complementary channel.
Degradable and antidegradable channels. Zero quantum capacity for antidegradable channels. Additivity of the 1-shot quantum capacity for degradable channels. Quantum capacity of the erasure
channel and the dephasing channel. Characterization of degradable channels. Random Pauli channels, depolarizing channels, non-degradability, and their 1-shot capacity.
Lecture 17, Nov 06, 2012 (as given in class) .
Corresponding lecture notes from Spring 2010 (with more mathematical details) Part I Part II. Note the slightly different conventions (for example, q here is p above) compared to Graeme's guest lecture.
Depolarizing channels, upper bound on rate achievable by non-degenerate codes (Hamming bound), degenerate codes that extend the nonzero capacity region (thus also showing the nonadditivity of the
1-shot quantum capacity). Additive extension, and upper bounds of quantum capacity of the depolarizing channel by convex decomposition into degradable channels.
Lecture 18, Nov 08, 2012 (as given in class) .
Corresponding lecture notes from Spring 2010
Symmetric side channel assisted quantum capacity of a quantum channel, additivity of this capacity, private capacity of a quantum channel, PPT states and privacy, superactivation of quantum
capacity, and the 50-50 erasure channel as a special symmetric side channel.
Assisted capacities. Entanglement assisted classical capacity of a quantum channel, additivity, converse and direct coding.
Entanglement assisted classical capacities for specific channels. Properties of entanglement assisted classical capacities, and relation to entanglement assisted quantum capacities, reverse Shannon theorem (both quantum and classical). Quantum capacities assisted by free classical communication. Free forward classical communication does not increase the quantum capacity of a quantum channel.
Quantum capacities assisted by free back classical communication, or free 2-way classical communication. Hierarchy of capacities. The rocket channel demonstrates much larger assisted capacities than unassisted capacities, and large violations of additivity.
Lecture 22, Nov 27, 2012
Kent Fisher: all about superactivation
Michal Kotowski: entanglement spread and non-maximally entangled states
Entanglement purification and 1-way assisted quantum capacities of quantum channel.
Lecture 23, Nov 29, 2012
Young Han: Universal Gate Construction with Quantum Teleportation
Takafumi Nakano: (Part of) equivalence of additivity conjectures
Family of protocols (the Brady bunch)
Assignment 1, due Sept 20, 2012 in class.
Assignment 2, due Sept 27, 2012 in class.
Assignment 3, due Oct 04, 2012 in class.
Assignment 4, due Oct 18, 2012 in class.
Assignment 5, due Nov 01, 2012 in class.
Assignment 6, due Nov 08, 2012 in class.
Assignment 7, due Nov 15, 2012 in mailbox.
Possible project topics (more and more will be added throughout the term):
Visible communication
Capacity of unitary two-way channels
Optimal measurements for accessible information
Hypothesis testing and relative entropy
Codes for data compression
Efficient high rate codes
Optimality of pretty good measurements
Equivalences of the 4 additivity conjectures
Non-additivity of Holevo information
Randomized arguments in quantum information
Quantum error correcting codes
Private capacity
The above is not an exhaustive list. In particular, many items in the "selected topics" will not be covered in class in the end, and can be used for projects.
Each student should choose a project topic by early November and discuss its suitability with the instructor. The projects are intended to further investigate subjects relevant to the course, and students are encouraged to choose a topic relevant to their research. A project should not be something the student already knows.
Presentations should be given in class during the last week of the term, along with a term paper of 6-15 pages.
Long, Philip M. (P. Krishnan, Philip M. Long and Jeffrey Scott Vitter) -- Learning to make rent-to-buy decisions with systems applications - 1995
Long, Philip M. (Philip M. Long) -- On-line evaluation and prediction using linear functions - 1997
Long, Philip M. (David P. Helmbold, Nicholas Littlestone and Philip M. Long) -- On-Line Learning with Linear Loss Constraints - September 2000
Long, Philip M. (Yi Li, Philip M. Long and Aravind Srinivasan) -- Improved Bounds on the Sample Complexity of Learning - 2001
Long, Philip M. (Peter L. Bartlett and Philip M. Long) -- More theorems about scale-sensitive dimensions and learning - 1995
Long, Philip M. (David P. Helmbold and Philip M. Long) -- Tracking drifting concepts by minimizing disagreements - 1994
Long, Philip M. (Philip M. Long and Lei Tan) -- PAC learning axis-aligned rectangles with respect to product distributions from multiple-instance examples - 1998
Long, Philip M. (David P. Helmbold, Nicholas Littlestone and Philip M. Long) -- Apple Tasting - September 2000
Long, Philip M. (Philip M. Long) -- On the sample complexity of PAC learning half-spaces against the uniform distribution - 1995
Long, Philip M. (Peter L. Bartlett, Philip M. Long and Robert C. Williamson) -- Fat-shattering and the learnability of real-valued functions - 1996
Long, Philip M. (Philip M. Long) -- Improved bounds about on-line learning of smooth-functions of a single variable, - 2000
Long, Philip M. (Philip M. Long) -- The Complexity of Learning According to Two Models of a Drifting Environment - 1999
Long, Philip M. (Philip M. Long) -- On the sample complexity of learning functions with bounded variation - 1998
Long, Philip M. (Peter Auer and Philip M. Long) -- Structural Results About On-line Learning Models With and Without Queries - 1999
Long, Philip M. (Philip M. Long) -- On Agnostic Learning with {0,ast,1}-Valued and Real-Valued Hypotheses - 2001
Long, Philip M. (Rakesh D. Barve and Philip M. Long) -- On the complexity of learning from drifting distributions - 1996
Long, Philip M. (Shai Ben-David, Philip M. Long and Yishay Mansour) -- Agnostic Boosting - 2001
Long, Philip M. (Yi Li and Philip M. Long) -- The Relaxed Online Maximum Margin Algorithm - 2002
Long, Philip M. (Peter Auer, Philip M. Long and Aravind Srinivasan) -- Approximating hyper-rectangles: Learning and pseudorandom sets - 1998
Long, Philip M. (Peter L. Bartlett and Philip M. Long) -- Prediction, learning, uniform convergence, and scale-sensitive dimensions - 1998
Long, Philip M. (Shai Ben-David, Nicolò Cesa-Bianchi, David Haussler and Philip M. Long) -- Characterizations of learnability for classes of {0,dots, n}-valued functions - 1995
Long, Philip M. (Naoki Abe and Philip M. Long) -- Associative reinforcement learning using linear probabilistic concepts - 1999
Long, Philip M. (Shai Ben-David, Nadav Eiron and Philip M. Long) -- On the Difficulty of Approximately Maximizing Agreements - 2000
Long, Philip M. (Philip M. Long) -- Improved bounds about on-line learning of smooth functions of a single variable - 1996
Long, Philip M. (Peter Auer, Philip M. Long, W. Maass and Gerhard J. Woeginger) -- On the complexity of function learning - 1995
Long, Philip M. (Philip M. Long) -- Guest Editor's Introduction - 1997
Long, Philip M. (Wee Sun Lee and Philip M. Long) -- A Theoretical Analysis of Query Selection for Collaborative Filtering - 2001
Weak Arithmetic Progressions
I am studying a special type of a sequence on the naturals which I am calling a weak arithmetic progression.
Formally I call a k-sequence $x_1 < x_2 < \cdots < x_k$ a weak arithmetic progression (WAP) if $\exists d\in \mathbb{N}$ such that $x_{i+1}-x_i\in\{1,d\}$ for all $i$. Now supposing all positive integers have been partitioned into two parts, is there an upper bound on how many times a segment of $t$ consecutive numbers ($0\leq t\leq \frac{k-1}{2}$) in one part can occur without a WAP occurring in any one part?
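For small cases the definition can be checked by brute force (a hypothetical helper sketch, not part of the question; it simply enumerates every pattern of steps drawn from {1, d}):

```python
from itertools import product

def has_wap(part, n, k):
    """Brute force: does `part` (a subset of 1..n) contain a k-term weak AP?

    A weak AP is x1 < x2 < ... < xk with every gap x_{i+1} - x_i in {1, d}
    for some single positive integer d, as in the definition above.
    """
    for d in range(1, n):
        for x1 in sorted(part):
            for steps in product((1, d), repeat=k - 1):
                seq = [x1]
                for s in steps:
                    seq.append(seq[-1] + s)
                if seq[-1] <= n and all(x in part for x in seq):
                    return True
    return False

print(has_wap({1, 2, 4}, 10, 3))  # True: gaps 1 then 2, so d = 2 works
print(has_wap({1, 4, 9}, 10, 3))  # False: no single d fits the gaps
```

This is exponential in k and only meant for experimenting with small partitions.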
co.combinatorics ramsey-theory
Why can't I see the whole question? :S – Shahab Jun 5 '12 at 11:15
When I click edit I can see the whole question but otherwise I am seeing only one & a half sentences. Does anyone know why is this happening? – Shahab Jun 5 '12 at 11:20
You have to leave a space after any < character, otherwise markdown will interpret it as a start of an HTML tag. Also, you need to double backslashes in commands like \{, \}, \\. – Emil Jeřábek Jun 5 '12 at 11:24
Ok, thanks for the information. – Shahab Jun 5 '12 at 11:38
Is WAP also known as "bounded gaps"? If so, search that term for lots of info. – Gerald Edgar Jun 5 '12 at 12:01
Why are there 24 hours in a day?
We divide the day into 24 hours, one hour into 60 minutes and one minute into 60 seconds. What other comparably accurate methods have been used throughout history?
While each country has (in broad terms) historically had distinct measurements for distance, weights etc the method of splitting the day into 24 hours, one hour into 60 mins and one minute into
60 seconds seems to be the only one in use, and indeed to me the only one I know of. This non-metric measurement of time is far from ideal, but what other comparably accurate methods have been
used historically?
"The origin of our time system of 24 hours in a day with each hour subdivided into 60 minutes and then 60 seconds is complex and interesting," says Dr Nick Lomb, consultant curator of astronomy, from
the Sydney Observatory.
Our 24-hour day comes from the ancient Egyptians who divided day-time into 10 hours they measured with devices such as shadow clocks, and added a twilight hour at the beginning and another one at the
end of the day-time, says Lomb.
"Night-time was divided in 12 hours, based on the observations of stars. The Egyptians had a system of 36 star groups called 'decans' — chosen so that on any night one decan rose 40 minutes after the
previous one.
"Tables were produced to help people to determine time at night by observing the decans. Amazingly, such tables have been found inside the lids of coffins, presumably so that the dead could also tell
the time."
In the Egyptian system, the length of the day-time and night-time hours were unequal and varied with the seasons.
"In summer, day-time hours were longer than night-time hours, while in winter the hour lengths were the other way around," says Lomb.
Ancient Babylonians: hours and minutes
The subdivision of hours and minutes into 60 comes from the ancient Babylonians who had a predilection for using numbers to the base 60. For example, III II (using slightly different strokes) meant
three times 60 plus two or 182.
"We have retained from the Babylonians not only hours and minutes divided into 60, but also their division of a circle into 360 parts or degrees," says Lomb.
"What we have not retained is their division of a day into 360 parts called 'ush' that each equalled four minutes in our time system."
Lomb says it's likely that the Babylonians were interested in 360 because that was their estimate for the number of days in a year. Their adoption of a base 60 system probably allowed them to make complex calculations using fractions.
Ancient Chinese
The ancient Chinese used a dual time system where they divided the day into 12 so-called, 'double hours', originally with the middle of the first double hour being at midnight.
They also had a separate system in which a day was divided into 100 equal parts called 'ke', that are sometimes translated as 'mark' into English.
"What complicated this arrangement was that the two systems did not mesh well since there were a non-integral number of ke in each double hour, specifically 8 1/3. Because of this inconvenience, much
later on, in the year 1628 of our era, the number of ke in a day was reduced to 96," says Lomb.
Other cultures
While many cultures had their own calendars, there doesn't appear to be evidence for equivalent methods for keeping time.
"There is a lot of information available on the Mayan calendar, but I have not seen anything to indicate if, and how, they subdivided the day," says Lomb.
"Similarly, though it is well known that the Australian Aboriginal people had seasonal calendars and used the sky to indicate the seasons, I have not seen anything on how they kept time."
Metric time?
In 1998, the Swiss watch company Swatch introduced the concept of a decimal Internet Time in which the day is divided into 1000 'beats' so that each beat is equal to 1 minute 26.4 seconds. The beats
were denoted by the @ symbol, so that, for example, @250 denotes a time period equal to six hours.
"So far this system has not caught on," says Lomb.
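The arithmetic behind beats is simple: a day has 86,400 seconds, so one beat is 86,400 / 1000 = 86.4 seconds. A small conversion sketch (my own illustration, not an official Swatch utility):

```python
SECONDS_PER_DAY = 24 * 60 * 60             # 86400 seconds in a day
SECONDS_PER_BEAT = SECONDS_PER_DAY / 1000  # 86.4 s, i.e. 1 min 26.4 s per beat

def beats_to_hms(beats):
    """Convert Internet Time beats (counted from midnight) to (hours, minutes, seconds)."""
    total_seconds = beats * SECONDS_PER_BEAT
    hours, remainder = divmod(total_seconds, 3600)
    minutes, seconds = divmod(remainder, 60)
    return int(hours), int(minutes), round(seconds, 1)

print(beats_to_hms(250))  # (6, 0, 0.0) -- @250 is exactly six hours, as above
```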
"For each country the immense cost and difficulty in switching over to this or another metric time system would be enormous, possibly as great, if not greater, than it was for Australia to switch to
decimal currency back in 1966," he says.
"The insurmountable difficulty though would be the prior hurdle of getting each country in the world both to agree to change and to agree on a common system of decimal time. I think that I am safe in
stating that there will be no change from the present system of time measurement in the foreseeable future."
Keeping time
While our units for measuring time seem to be here to stay, the way we measure time has changed significantly over the centuries. The Ancient Egyptians used sundials and water clocks, as did several civilisations after them. Hourglasses were also an important time-keeping device before the invention of mechanical and pendulum clocks. The development of modern quartz watches and atomic clocks has enabled us to measure time with increasing accuracy.
Today, the standard definition for time is no longer based on the motion of the Earth around the Sun, but on atomic time. A second is defined as: "9,192,631,770 periods of the radiation
corresponding to the transition between the two hyperfine levels of the ground state of the caesium-133 atom."
Read more about keeping time
Dr Nick Lomb was interviewed by Kylie Andrews.
Published 15 November 2011
The Golden Ratio of 1.618 - diyAudio
There is no such thing as a non-resonant box.
The only thing achieved by using different dimensions for height, length and width is to spread the resonances so they coincide less. Which dimensions actually work depends on the situation and should be calculated for each case. And one should look at axial, tangential and oblique resonant modes alike.
dipoles dipoles dipoles dipoles dipoles dipoles dipoles dipoles and dipoles
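The mode-spreading point can be made quantitative. For a rigid rectangular box with internal dimensions Lx, Ly, Lz, the standing-wave frequencies follow the standard formula f = (c/2)·sqrt((nx/Lx)² + (ny/Ly)² + (nz/Lz)²), where axial modes have one nonzero index, tangential two, and oblique three. A sketch (speed of sound and the 0.4 m dimensions are my own illustrative assumptions):

```python
from itertools import product
from math import sqrt

C = 343.0  # assumed speed of sound in air, m/s

def box_modes(lx, ly, lz, max_index=3):
    """Return sorted (frequency_hz, (nx, ny, nz)) pairs for low-order box modes."""
    modes = []
    for nx, ny, nz in product(range(max_index + 1), repeat=3):
        if (nx, ny, nz) == (0, 0, 0):
            continue  # skip the trivial DC "mode"
        f = (C / 2) * sqrt((nx / lx) ** 2 + (ny / ly) ** 2 + (nz / lz) ** 2)
        modes.append((round(f, 1), (nx, ny, nz)))
    return sorted(modes)

# A cube stacks coincident resonances; unequal (e.g. golden-ratio) proportions
# spread the same number of modes over many more distinct frequencies.
cube = box_modes(0.4, 0.4, 0.4)
golden = box_modes(0.4 * 1.618, 0.4, 0.4 / 1.618)
```

Counting distinct frequencies in `cube` versus `golden` shows the spreading directly, without making any particular ratio magic.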
Math Forum Discussions
Topic: Re: Factoring
Replies: 0
Re: Factoring
Posted: Apr 15, 1998 12:54 PM
Jerry Uhl wrote:
>Mathematica does plot implicit functions.
>At 10:13 PM -0400 4/13/98, Dave Slomer wrote:
Actually, so does Derive, even if the implicitness seems 'insolveable'. I just did:

SIN(x·y) + COS(x·y) = 0.2
SIN(x·y) + COS(y) = 0.2
LN(x + y) + SIN(x + y) + ê^y + COS(x) = 0

...the last several of which produced very interesting plots!

From Derive's 'help' on 'implicit plots':

"Functions implicitly defined by an equation may be plotted using a relatively efficient algorithm of linear interpolation upon triangles. The implicit plotting algorithm starts in the top left corner of the window, plotting points in a top to bottom, left to right fashion. Unlike most explicit plots, implicit plots may not be automatically scaled; nor may an implicit plot line be traced.

Explicit plots are faster and somewhat more accurate than implicit plots. Before plotting, when possible, DERIVE automatically converts a function implicitly defined by an equation into a function explicitly defined by an equation of the form y = u where y is a variable and u is a univariate expression in another variable...

An alternate method for generating explicit solutions for implicitly defined equations is to substitute polar coordinates for the variables and solve for the distance from the origin in terms of the angle...

Implicit 2D plots of a family of plot lines can also be used to make contour plots of functions of two variables. Simplifying and then plotting an expression of the form

VECTOR (z = u, z, m, n, s)

produces a contour plot of the function z = u where u is a function of x and y as z varies from m to n in steps of size s. The Calculus Vector command is an easy way to enter such expressions."
Dave Slomer
AP Calculus and computer science teacher
Winton Woods HS, Cinti, OH 45240
Find each sum or difference; simplify all answers
Best Results From Wikipedia Yahoo Answers
From Wikipedia
Digit sum
In mathematics, the digit sum of a given integer is the sum of all its digits, (e.g.: the digit sum of 84001 is calculated as 8+4+0+0+1 = 13). Digit sums are most often computed using the
decimal representation of the given number, but they may be calculated in any other base; different bases give different digit sums, with the digit sums for binary being on average smaller than those
for any other base.
Let S(r,N) be the digit sum for radix r of all non-negative integers less than N. For any 2 ≤ r₁ < r₂ and for sufficiently large N, S(r₁,N) < S(r₂,N).
The sum of the decimal digits of the integers 0, 1, 2, ... is catalogued as a sequence in the On-Line Encyclopedia of Integer Sequences. The generating function of this integer sequence (and of the analogous sequence for binary digit sums) has been used to derive several rapidly-converging series with rational and transcendental sums.
The concept of a decimal digit sum is closely related to, but not the same as, the digital root, which is the result of repeatedly applying the digit sum operation until the remaining value is only a
single digit. The digital root of any non-zero integer will be a number in the range 1 to 9, whereas the digit sum can take any value. Digit sums and digital roots can be used for quick divisibility
tests: a natural number is divisible by 3 or 9 if and only if its digit sum (or digital root) is divisible by 3 or 9, respectively.
Digit sums are also a common ingredient in checksum algorithms and were used in this way to check the arithmetic operations of early computers. Earlier, in an era of hand calculation, it was suggested that sums of 50 digits taken from mathematical tables of logarithms could serve as a form of random number generation; if one assumes that each digit is random, then by the central limit theorem, these digit sums will have a random distribution closely approximating a Gaussian distribution.
The digit sum of the binary representation of a number is known as its Hamming weight or population count; algorithms for performing this operation have been studied, and it has been included as a
built-in operation in some computer architectures and some programming languages. These operations are used in computing applications including cryptography, coding theory, and computer chess.
Harshad numbers are defined in terms of divisibility by their digit sums, and Smith numbers are defined by the equality of their digit sums with the digit sums of their prime factorizations. The question of how many integers are mapped to the same value by the sum-of-digits function has also been studied.
6% of the numbers below 100000 have a digit sum of 23, which, along with 22, is the most common digit sum within this limit.
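The operations described above translate directly into code, and the 6% figure can be checked by direct count (a minimal Python sketch, not from the article):

```python
def digit_sum(n, base=10):
    """Sum of the digits of the non-negative integer n written in `base`."""
    total = 0
    while n > 0:
        total += n % base
        n //= base
    return total

def digital_root(n):
    """Apply digit_sum repeatedly until a single decimal digit remains."""
    while n >= 10:
        n = digit_sum(n)
    return n

print(digit_sum(84001))  # 13, as in the example above
print(digit_sum(11, 2))  # 3: 11 is 1011 in binary
# Checking the figure above: 6000 of the 100000 integers below 100000
# (i.e. 6%) have digit sum 23.
print(sum(1 for n in range(100000) if digit_sum(n) == 23))  # 6000
```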
From Yahoo Answers
Question:What's (2a+8) + (5a-5) ? I know how to do it but I'm not sure if you add 2+8 then add the 'a' ... Please help! Thank you so much!
Answers:a is a variable. It's different from the constant (#), though it adds and subtracts the same. You just don't group it with the constants. so.... split them up like this (2a+ 5a) + (8+ (-5))=
7a+ 3
Question:2. Find the indicated sums and differences. x+5/3 - x-3/2. 3. Find the indicated sums and differences. 3/a-5 + a+2/a. 4. Find the indicated sums and differences. 3k/k-2 + 6/2-k. 5. Find the
indicated sums and differences. b+1/b+2 - b+3/b+4. Thanks so much.
Answers:Okay: 1. Multiply each fraction by the missing denominator. In other words, for the first one, multiply top and bottom by (x-3). Your common denominator here is (x+5)(x-3). So, you get
3x-9-2x-10 all over (x+5)(x-3). So, combine like terms and you get x-19 / (x+5)(x-3) 2. Similar: Instead, multiply each fraction by the missing constant that makes it a common denominator, this case
6. So, multiply the first one top and bottom by 2 and the second one by 3. You get 2x+10-3x+9 all over 6. You get in the end: -x+19 or 19-x / 6. 3. Similar again: multiply the first fraction top and
bottom by a and the second one by a-5. So you get 3a+a^2-3a-10 all over a(a-5). Combining like terms, you get a^2-10 over a^2-5a. Notice here if you factor out an a from top and bottom, look what
happens: a(a-5) over a(a-5) so these both cancel and you get 1 as your answer. 4. For this one, multiply the second term first by -1. This will make them identical denominators, and then combine:
3k-6 / k-2. Then simplify by factoring out a 3: 3(k-2)/k-2 or 3. 5. Multiply each fraction, as in the earlier problems, by the missing denominator: b^2+5b+4-b^2-5b-6 all over (b+2)(b+4). You can
cancel a lot here and you are left with -2 / (b+2)(b+4)
Question:to its right how many such numbers are there? One of my numbers are chosen at random. What is the probability that it is a 1? I circle the middle digit of all the numbers, and add them up.
What is the sum? HARDD, PLEASE HELP than the one to its right*
Answers:9876543 8765432 7654321 6543210 The answer is four (4) not 5 not 3 as BILL illustrated but did not answer. The probability (at random) is 1 in 10 (0-9) (however it would only work in one
position of two numbers and the odds of that are worse than the lottery like 10000000^10 to 1) 6 + 5 + 4 + 3 + 2 = 20
Question:I worked this one out but wasn't sure which one was correct... (1/p^2 - 2p + 1) - (1/p^2 + p - 2) Is the answer 3/(p-1)^2(p+2) OR p+1/(p-1)(p+2) *I wasn't sure if the LCD of (p-1)(p-1) and
(p-1) was just (p-1) or (p-1)^2 Also, what would be the final answer for [(a+b) / (a-b)] + [(a-b) / (a+b)] - [(b-a) / (a-b)] + [(b-a) / (a+b)] Last one... Is this simplified? 2u^2 + 5uv - 2v^2 /
(2u+v)(2u-v) THANKS!
Answers:for the first one, you have to get the fractions down to the same denominator before you can add. the least common multiple is (p-1)^2 since (p-1) is not even a multiple of (p-1)^2. all that
said, your first answer is right :) for the second one, the important thing to notice is that (a-b) = -(b-a). so the 2nd term cancels with the 4th and we can add the other two since they have the
same denominator. final answer = 2a / (a-b) third one looks good :)
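A quick way to sanity-check simplifications like these is to evaluate the original and simplified expressions at several points using exact rational arithmetic (a sketch with Python's standard library; the helper name `agree` and the sample points are my own):

```python
from fractions import Fraction

def agree(f, g, samples):
    """True iff the two expressions agree at every sample point."""
    return all(f(x) == g(x) for x in samples)

def original(x):
    # (x+5)/3 - (x-3)/2, evaluated exactly
    return (x + 5) / Fraction(3) - (x - 3) / Fraction(2)

def simplified(x):
    # the claimed simplification (19 - x)/6
    return (19 - x) / Fraction(6)

samples = [Fraction(k, 7) for k in range(-10, 11)]
print(agree(original, simplified, samples))  # True
```

Agreement at a handful of points is not a proof, but a single disagreement immediately exposes an algebra mistake.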
In my thirty-six years of teaching, I’ve learned a few valuable lessons. A recent article in The Chronicle of Higher Education, “You Will Be Tested on This,”[1] reminded me of two of them: What
you test is what they will learn, and the best way to help students learn is to test them early and often.
I learned the first lesson very painfully in a differential equations class early in my career. This was before the age of easy access to computers, back in the days when this course was built
almost entirely from examples with exact solutions. I thought I had done a pretty good job of motivating these examples, describing a variety of situations that could be modeled by the various
differential equations we encountered. The emphasis, of course, was how to turn an initial value problem into a solution. That is what the homework had been about, and that is what I tested on
the midterms.
But for the final exam, I decided that I wanted to see how well my students had grasped the larger picture. Could they take a situation similar to those I’d talked about, model it with an
appropriate differential equation, and then solve that equation to gain insight into the original situation? Without much trouble, I found a several good problems that tested their understanding
at this deeper level.
The final exam was, of course, a disaster. I had never required my students to construct differential equations from a description of a situation. When I had modeled this for them in the
classroom, they had taken it as an interesting aside, but not really what they were there to learn. I had never tested them on it before. They not only did very poorly on my exam, they were angry
that I had dared in the final exam to change the contract of what they were supposed to learn in this course.
In succeeding years, I tried setting out my expectations at the beginning of the class: This is what I expect you to be able to do by the time you finish my class. I learned that most students
paid exactly as much attention to these course descriptions as I did. If there was something on my list that I never quite got around to covering, no one complained. If there was something else
that I inserted, and made it clear to them during the course of the semester that it was important, they accepted it as important. My actions spoke much louder than the words I had used on the
first day. In fact, as I came to realize, the whole exercise of laying out course expectations was really for my sake, not for theirs. It served to remind me where I had intended to go when I
first conceived this course, what I had hoped to accomplish with these students. My students paid attention to my homework assignments, quizzes, and exams. Those told them what they needed to
learn, what this course was really about.
It gradually dawned on me that there is a corollary to “What you test is what they learn.” Students arrive in math class with a set of expectations of what will happen. Most expect to be
presented with an assortment of procedures for finding answers. Their task is to learn how to choose the right procedure to apply to a given problem and then to execute it correctly. For most of
them, this is what mathematics has always been. So when I began to probe their understanding, asking them to explain what they did and why they did it, challenging them to investigate and solve
difficult problems that did not fit any of their templates, they were wrong-footed and upset. Even if I did this in their first midterm exam, even if I had warned them that this was what I would
expect, they did not really believe me until they saw it on the exam, and they felt cheated. I was changing the rules on them in mid-semester.
And so I learned to introduce assignments, quizzes, and projects earlier and earlier in the semester. The first ones did not need to carry much weight, but they did need to count so that
students would see that this is what I really wanted them to be able to do. I had finally learned my second lesson: “Test early, and test often.”
I particularly like projects, difficult problems that are started in class, completed outside of class, that require some original thinking and a synthesis of seemingly disparate ideas. I find it
very instructive to require the students to write up careful and complete solutions, both to help me understand their thinking and to help them learn how to express themselves in the clear,
precise language of mathematics. And, so that they pay attention to my comments, I require at least one draft that I critique heavily. Early in the semester, these are always group projects. This
facilitates the formation of study groups, and it helps students who have never done this style of mathematics to get support from their peers. By the end of the semester, the projects may be
started in groups, but usually are completed individually. But what is especially important is that the first of these projects always gets started late in the first week or early in the second
week of class, as soon as my class list has stabilized. Students learn very quickly what I will be expecting.
In addition, I now will never give the first midterm exam later than the fourth week of class, and occasionally I sneak it into the third week. This gives my students some very quick feedback on
how they are doing and a very clear signal of what I expect.[2]
When I decided that I wanted my students to read the chapter before each class, I did not just tell them that this was what I expected. I made part of their grade dependent on it. I now have a
general policy of requiring each student to submit, at least one hour before class,[3] a short paragraph that answers two of the three questions: What was the main point of this
reading? What did you find surprising in this reading? What did you find confusing in this reading? Submitting their answers to at least 95% of the readings earns them all possible points for the
semester, usually 5% of the total grade. Not much, you might say, but enough that my students take it seriously. I am rewarded with students who have read the chapter before class, who have
thought at least a little about what it says, and who have clued me in on where the difficulties lie for them.
The point applies to whatever your goals for a course might be. Once you have clarified to yourself what you want your students to learn in this class, you need to think about how it can be
evaluated, how to hold your students accountable for achieving it, and how to communicate this expectation early and forcefully. Of course, you also need to think about what it will take to get
your students to succeed, the intermediate steps that will help them build the expertise you want. I have found that knowing how I will assess their achievement also helps me plot how to help
them get there.
[1] David Glenn. You Will Be Tested on This. The Chronicle of Higher Education. June 8, 2007. pp. A15–17. http://chronicle.com/weekly/v53/i40/40a01401.htm
[2] Especially for this first midterm exam, it is very important to give students a chance to redeem themselves if they have badly misjudged how to prepare. I usually let the students earn back
half the points they have lost on a given problem if they give me a written explanation of where they went wrong and how to solve the problem correctly. This forces students to confront their
mistaken understandings and learn something from them. Letting students earn back points they have lost also enables me to avoid falling back on the grading curve, one of the most pernicious
practices in education.
[3] Initially, I had the students send these answers via email. I now use the online text assignment in Moodle, an open source web-based course management system, which makes the entire process
easy for them and painless for me. They click on the appropriate icon, a window appears in which they write their answers, and they click to submit. I see a list with time stamps of students who
have submitted answers, I click on the student name to see the response, click on the appropriate grade which is automatically recorded and available on an Excel spreadsheet (I grade all or
nothing), submit, and go to the next name.
|
{"url":"https://www.maa.org/external_archive/columns/launchings/launchings_08_07.html","timestamp":"2014-04-19T10:05:39Z","content_type":null,"content_length":"13495","record_id":"<urn:uuid:9661fae6-f08e-43ea-b0d8-eb6400f1884d>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00435-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Alpharetta Math Tutor
Find an Alpharetta Math Tutor
...Major topics include algebraic properties and the real number system, functions and their graphs, linear equations and inequalities, linear regression and modeling, systems of linear equations
and inequalities, polynomial and factoring, algebraic fractions and real world applications. Students g...
5 Subjects: including linear algebra, algebra 1, algebra 2, geometry
...I have myself prepared and given many competitive exams to reach and complete a PhD program in computational biochemistry in one of the top universities in the country. That is the reason I
understand from a student's point of view where he/she encounters difficulty in solving problems and solvi...
58 Subjects: including calculus, ACT Math, GRE, grammar
...Make the right choice, hire me! Physics is my strongest subject and I enjoy it the most. Particularly kinematics, Newton's second law of motion, and rotational motion.
11 Subjects: including algebra 1, algebra 2, calculus, geometry
...I received my MBA with a concentration in Human resources and I achieved all A's in these classes. I have served as a manager for more than five years and I have seen many of those I have
coached and tutored move forward in their careers and schooling to reach their goals. I am very passionate ...
42 Subjects: including algebra 2, SAT math, trigonometry, English
...I have tutored k-12 math, and my students have significantly improved their grades. I have just graduated Magna Cum Laude with an Honors Distinction from College with a Bachelor's degree in
Pre-Med Biology and a Minor in Spanish. I speak Spanish fluently.
29 Subjects: including algebra 1, SAT math, elementary (k-6th), TOEFL
Nearby Cities With Math Tutor
Atlanta Math Tutors
Berkeley Lake, GA Math Tutors
Decatur, GA Math Tutors
Duluth, GA Math Tutors
Dunwoody, GA Math Tutors
Johns Creek, GA Math Tutors
Lawrenceville, GA Math Tutors
Marietta, GA Math Tutors
Milton, GA Math Tutors
Norcross, GA Math Tutors
Roswell, GA Math Tutors
Sandy Springs, GA Math Tutors
Smyrna, GA Math Tutors
Snellville Math Tutors
Woodstock, GA Math Tutors
|
{"url":"http://www.purplemath.com/alpharetta_math_tutors.php","timestamp":"2014-04-18T13:48:30Z","content_type":null,"content_length":"23666","record_id":"<urn:uuid:fb3d5659-1033-426e-9fdc-cae3f870ffff>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00047-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Integration Theorem
December 15th 2009, 03:09 PM
Integration Theorem
I have this in my notes:
Define continuous functions on [a,b] by $f_+ (t) = max(f(t),0)$ and $f_- (t) = max(-f(t),0).$ Then $f(t) = f_+ (t) - f_- (t)$ and $|f(t)| = f_+ (t) + f_- (t).$ The result follows from linearity
and the triangle inequality by the following calculation:
$|\int_a^b f(t)dt | = | \int_a^b (f_+ (t) - f_- (t))dt | \le \int_a^b f_+(t)dt + \int_a^b f_- (t) dt= \int_a^b |f(t)| dt$.
I understand that $| \int_a^b (f_+ (t) - f_- (t))dt | \le |\int_a^b f_+(t)dt| + |\int_a^b f_- (t) dt|$ but not why $| \int_a^b (f_+ (t) - f_- (t))dt | \le \int_a^b f_+(t)dt + \int_a^b f_- (t) dt$
Any help in this will be much appreciated thank you :)
December 16th 2009, 03:09 AM
It uses the fact that $\left|\int f(x)dx\right|\le \int |f(x)|dx$.
December 16th 2009, 03:15 AM
What I have written is the proof for that fact but its one of the steps that I don't understand :(
December 16th 2009, 05:20 AM
Oh, of course, I completely misread.
Do you see that if f(x) is non-negative on (a,b) then $\left|\int_a^b f(x)dx\right|= \int_a^b f(x)dx= \int_a^b |f(x)| dx$?
That's true because both f(x) and the integral are never negative, so you can just "discard" the absolute values.
Now, by its definition, $f_+(x)$ is non-negative. Here $f_-(x)$ is taken to be non-positive (the opposite sign convention from the first post), so $-f_-(x)$ is non-negative.
Clearly, $f(x)= f_+(x)+ f_-(x)$ while $|f(x)|= f_+(x)- f_-(x)$. If that's not clear, think about what happens for each x if (1) $f(x)\ge 0$, (2) $f(x)< 0$.
Now, $\int_a^b |f(x)|dx= \int_a^b (f_+(x)- f_-(x))dx= \int_a^b f_+(x)dx- \int_a^b f_-(x) dx$ while $\left|\int_a^b f(x)dx\right|= \left|\int_a^b f_+(x)dx+ \int_a^b f_-(x)dx\right|$.
Finally, use the fact that, for numbers x and y, $|x- y|\le |x|+ |y|$.
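As a numerical sanity check (a Python sketch, not part of the thread's argument), the splitting and the inequality can be verified with a crude trapezoidal rule, using the original poster's convention $f_+(t) = \max(f(t),0)$, $f_-(t) = \max(-f(t),0)$:

```python
import math

def trapezoid(g, a, b, n=10_000):
    """Crude trapezoidal rule for the integral of g over [a, b]."""
    h = (b - a) / n
    return h * (0.5 * (g(a) + g(b)) + sum(g(a + i * h) for i in range(1, n)))

# Sample f that changes sign on [a, b].
f = math.sin
a, b = 0.0, 1.5 * math.pi

f_plus = lambda t: max(f(t), 0.0)    # f_+(t) = max(f(t), 0)
f_minus = lambda t: max(-f(t), 0.0)  # f_-(t) = max(-f(t), 0)

lhs = abs(trapezoid(lambda t: f_plus(t) - f_minus(t), a, b))  # |int f|
rhs = trapezoid(f_plus, a, b) + trapezoid(f_minus, a, b)      # int f_+ + int f_- = int |f|

assert lhs <= rhs + 1e-9   # the inequality being proved
```

For $f = \sin$ on $[0, 3\pi/2]$ the left side comes out near $1$ and the right side near $3$, consistent with the inequality.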
December 21st 2009, 03:40 AM
thank you very much
|
{"url":"http://mathhelpforum.com/differential-geometry/120653-integration-theorem-print.html","timestamp":"2014-04-20T19:28:32Z","content_type":null,"content_length":"9560","record_id":"<urn:uuid:f8d36787-e531-4417-b19c-fc16fb533b80>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00150-ip-10-147-4-33.ec2.internal.warc.gz"}
|
The Radon Transform
Results 1 - 10 of 106
- CONSTRUCTIVE APPROXIMATION , 1986
"... Among other things, we prove that multiquadric surface interpolation is always solvable, thereby settling a conjecture of R. Franke. ..."
- ACM Transactions on Graphics , 2002
"... this paper, we propose and analyze a method for computing shape signatures for arbitrary (possibly degenerate) 3D polygonal models. The key idea is to represent the signature of an object as a
shape distribution sampled from a shape function measuring global geometric properties of an object. The pr ..."
Cited by 192 (0 self)
this paper, we propose and analyze a method for computing shape signatures for arbitrary (possibly degenerate) 3D polygonal models. The key idea is to represent the signature of an object as a shape
distribution sampled from a shape function measuring global geometric properties of an object. The primary motivation for this approach is to reduce the shape matching problem to the comparison of
probability distributions, which is simpler than traditional shape matching methods that require pose registration, feature correspondence, or model fitting
"... Measuring the similarity between 3D shapes is a fundamental problem, with applications in computer vision, molecular biology, computer graphics, and a variety of other fields. A challenging
aspect of this problem is to find a suitable shape signature that can be constructed and compared quickly, whi ..."
Cited by 172 (7 self)
Measuring the similarity between 3D shapes is a fundamental problem, with applications in computer vision, molecular biology, computer graphics, and a variety of other fields. A challenging aspect of
this problem is to find a suitable shape signature that can be constructed and compared quickly, while still discriminating between similar and dissimilar shapes. In this paper, we propose and
analyze a method for computing shape signatures for arbitrary (possibly degenerate) 3D polygonal models. The key idea is to represent the signature of an object as a shape distribution sampled from a
shape function measuring global geometric properties of an object. The primary motivation for this approach is to reduce the shape matching problem to the comparison of probability distributions,
which is a simpler problem than the comparison of 3D surfaces by traditional shape matching methods that require pose registration, feature correspondence, or model fitting. We find that the
dissimilarities be...
, 2000
"... We consider a model problem of recovering a function f(x1,x2) from noisy Radon data. The function f to be recovered is assumed smooth apart from a discontinuity along a C2 curve – i.e. an edge.
We use the continuum white noise model, with noise level ɛ. Traditional linear methods for solving such in ..."
Cited by 50 (14 self)
We consider a model problem of recovering a function f(x1,x2) from noisy Radon data. The function f to be recovered is assumed smooth apart from a discontinuity along a C2 curve – i.e. an edge. We
use the continuum white noise model, with noise level ɛ. Traditional linear methods for solving such inverse problems behave poorly in the presence of edges. Qualitatively, the reconstructions are
blurred near the edges; quantitatively, they give in our model Mean Squared Errors (MSEs) that tend to zero with noise level ɛ only as O(ɛ1/2)asɛ → 0. A recent innovation – nonlinear shrinkage in the
wavelet domain – visually improves edge sharpness and improves MSE convergence to O(ɛ2/3). However, as we show here, this rate is not optimal. In fact, essentially optimal performance is obtained by
deploying the recentlyintroduced tight frames of curvelets in this setting. Curvelets are smooth, highly anisotropic elements ideally suited for detecting and synthesizing curved edges. To deploy
them in the Radon setting, we construct a curvelet-based biorthogonal decomposition
, 1999
"... We derive a formula connecting the derivatives of parallel section functions of an origin-symmetric star body in Rn with the Fourier transform of powers of the radial function of the body. A
parallel section function (or (n − 1)-dimensional X-ray) gives the ((n − 1)-dimensional) volumes of all hyp ..."
Cited by 34 (7 self)
We derive a formula connecting the derivatives of parallel section functions of an origin-symmetric star body in Rn with the Fourier transform of powers of the radial function of the body. A parallel
section function (or (n − 1)-dimensional X-ray) gives the ((n − 1)-dimensional) volumes of all hyperplane sections of the body orthogonal to a given direction. This formula provides a new
characterization of intersection bodies in Rn and leads to a unified analytic solution to the Busemann-Petty problem: Suppose that K and L are two origin-symmetric convex bodies in Rn such that the
((n − 1)dimensional) volume of each central hyperplane section of K is smaller than the volume of the corresponding section of L; is the (n-dimensional) volume of K smaller than the volume of L? In
conjunction with earlier established connections between the Busemann-Petty problem, intersection bodies, and positive definite distributions, our formula shows that the answer to the problem depends
on the behavior of the (n − 2)-nd derivative of the parallel section functions. The affirmative answer to the Busemann-Petty problem for n ≤ 4 and negative answer for n ≥ 5 now follow from the fact
that convexity controls the second derivatives, but does not control the derivatives of higher orders.
- European Journal Applied Mathematics
"... The paper presents a survey of mathematical problems, techniques, and challenges arising in the Thermoacoustic (also called Photoacoustic or Optoacoustic) Tomography. 1 ..."
Cited by 29 (6 self)
The paper presents a survey of mathematical problems, techniques, and challenges arising in the Thermoacoustic (also called Photoacoustic or Optoacoustic) Tomography. 1
- Inverse Problems , 2000
"... . In many areas of science and engineering it is of interest to find the shape of an object or region from indirect measurements which can actually be distilled into moments of the underlying
shapes we seek to reconstruct. In this paper, we describe a theoretical framework for the reconstruction of ..."
Cited by 26 (10 self)
. In many areas of science and engineering it is of interest to find the shape of an object or region from indirect measurements which can actually be distilled into moments of the underlying shapes
we seek to reconstruct. In this paper, we describe a theoretical framework for the reconstruction of a class of planar semi-analytic domains from their moments. A part of this class, known as
quadrature domains, can approximate, arbitrarily closely, any bounded domain in the complex plane, and is therefore of great practical importance. We provide an exact reconstruction algorithm of
quadrature domains. Some numerical demonstrations of the proposed algorithms will be presented. In addition, relations of the present theory to computer-assisted tomography and a geophysical inverse
problem will be briefly discussed. 1. Introduction The theoretical subject of this paper is the truncated L problem of moments in two variables and some of its ramifications. The practical aspects of
the paper are ...
, 1999
"... We derive a formula connecting the derivatives of parallel section functions of an origin-symmetric star body in R n with the Fourier transform of powers of the radial function of the body. A
parallel section function (or (n - 1)- dimensional X-ray) gives the ((n - 1)-dimensional) volumes of a ..."
Cited by 21 (0 self)
We derive a formula connecting the derivatives of parallel section functions of an origin-symmetric star body in R n with the Fourier transform of powers of the radial function of the body. A
parallel section function (or (n - 1)- dimensional X-ray) gives the ((n - 1)-dimensional) volumes of all hyperplane sections of the body orthogonal to a given direction. This formula provides a new
characterization of intersection bodies in R n and leads to a unified analytic solution to the Busemann-Petty problem: Suppose that K and L are two origin-symmetric convex bodies in R n such that the
((n - 1)-dimensional) volume of each central hyperplane section of K is smaller than the volume of the corresponding section of L; is the (n-dimensional) volume of K smaller than the volume of L? In
conjunction with earlier established connections between the Busemann-Petty problem, intersection bodies, and positive definite distributions, our formula shows that the answer to the problem
"... The transform considered in the paper averages a function supported in a ball in R n over all spheres centered at the boundary of the ball. This Radon type transform arises in several
contemporary applications, e.g. in thermoacoustic tomography and sonar and radar imaging. Range descriptions for suc ..."
Cited by 17 (10 self)
The transform considered in the paper averages a function supported in a ball in R n over all spheres centered at the boundary of the ball. This Radon type transform arises in several contemporary
applications, e.g. in thermoacoustic tomography and sonar and radar imaging. Range descriptions for such transforms are important in all these areas, for instance when dealing with incomplete data,
error correction, and other issues. Four different types of complete range descriptions are provided, some of which also suggest inversion procedures. Necessity of three of these (appropriately
formulated) conditions holds also in general domains, while the complete discussion of the case of general domains will require another publication.
- Journal of Multivariate Analysis , 1995
"... The Gini index and the Gini mean difference of a univariate distribution are extended to measure the disparity of a general d-variate distribution. We propose and investigate two approaches, one
based on the distance of the distribution from itself, the other on the volume of a convex set in (d + 1) ..."
Cited by 15 (6 self)
The Gini index and the Gini mean difference of a univariate distribution are extended to measure the disparity of a general d-variate distribution. We propose and investigate two approaches, one
based on the distance of the distribution from itself, the other on the volume of a convex set in (d + 1)-space, named the lift zonoid of the distribution. When d = 1, this volume equals the area
between the usual Lorenz curve and the line of zero disparity, up to a scale factor. We get two definitions of the multivariate Gini index, which are different (when d#1) but connected through the
notion of the lift zonoid. Both notions inherit properties of the univariate Gini index, in particular, they are vector scale invariant, continuous, bounded by 0 and 1, and the bounds are sharp. They
vanish if and only if the distribution is concentrated at one point. The indices have aceteris paribus property and are consistent with multivariate extensions of the Lorenz order. Illustrations with
data conclude...
|
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=628397","timestamp":"2014-04-21T06:05:32Z","content_type":null,"content_length":"37520","record_id":"<urn:uuid:c4a1ae5a-a988-4a61-b4ec-6cffb24d0db9>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00644-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Lagrange's Theorem
January 5th 2010, 03:18 PM
Lagrange's Theorem
Prove Lagrange's Theorem: the order of every subgroup H of a finite group G of order n is a divisor of n.
Show that the set G = {1, -1, i, -i} (where i is the square root of -1)
forms a group with respect to multiplication of complex numbers. Obtain a non-trivial subgroup H, justifying your choice, and use it to illustrate Lagrange's Theorem.
January 5th 2010, 03:56 PM
Look in a math book? Show that cosets partition the group $G$ and then show that $f:H\mapsto aH$ given by $h\mapsto ah$ is a bijection. Then conclude that $\left|G\right|=\left|H\right|\left[G:H\
right]$ where $\left[G:H\right]$ is the number of cosets formed by $H$.
Show that the set G = {1, -1, i, -i} (where i is the square root of -1)
forms a group with respect to multiplication of complex numbers. Obtain a non-trivial subgroup H, justifying your choice, and use it to illustrate Lagrange's Theorem.
With equal ease show that $\mathcal{Z}_n=\left\{z\in\mathbb{C}:z^n=1\right\}$ is a group (looking at it like that is even easier). If this group were to have a subgroup, its order would have to be a divisor
of $4$ and thus either $1,2,$ or $4$...but since the subgroup is non-trivial (and assumed proper), its order must be $2$...finish it.
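For the concrete part of the question, a brute-force check (a Python sketch using `1j` for $i$; an illustration, not a proof) confirms the group axioms for $G$ and illustrates Lagrange's theorem with the subgroup $H = \{1, -1\}$:

```python
# Brute-force check of the concrete example, using Python's 1j for i.
# (Associativity is inherited from multiplication in the complex numbers.)
G = {1, -1, 1j, -1j}
assert all(a * b in G for a in G for b in G)        # closure
assert 1 in G                                        # identity
assert all(any(a * b == 1 for b in G) for a in G)    # inverses

H = {1, -1}                                          # non-trivial proper subgroup
assert all(a * b in H for a in H for b in H)         # H is closed, hence a subgroup
assert len(G) % len(H) == 0                          # Lagrange: |H| divides |G|

cosets = {frozenset(g * h for h in H) for g in G}    # the cosets {1,-1} and {i,-i}
assert len(cosets) == len(G) // len(H)               # index [G:H] = |G|/|H| = 2
```

The two cosets partition $G$, which is exactly the mechanism behind the coset-counting proof sketched above.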
|
{"url":"http://mathhelpforum.com/advanced-algebra/122564-lagranges-theorem-print.html","timestamp":"2014-04-20T18:53:23Z","content_type":null,"content_length":"6801","record_id":"<urn:uuid:548767a6-f738-4b96-9c2d-b1e9b9868191>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00024-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Wolfram Demonstrations Project
Floor, Ceiling, and Round
Move the sliders to plot the cubic equation, then pick one of the three functions.
Floor, or the greatest integer function, gives the greatest integer less than or equal to a given value.
Round, or the nearest integer function, gives the nearest integer. For an integer and a half, rounding goes to the nearest even number.
Ceiling, or the least integer function, gives the least integer greater than or equal to a given value.
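These three functions have direct analogues in most languages; here is a Python sketch (Python's built-in `round` likewise sends exact halves to the nearest even integer):

```python
import math

# Python analogues of Floor, Ceiling, and Round as described above.
assert math.floor(2.7) == 2 and math.floor(-2.7) == -3   # greatest integer <= x
assert math.ceil(2.1) == 3 and math.ceil(-2.1) == -2     # least integer >= x
assert round(2.5) == 2 and round(3.5) == 4               # halves go to the even neighbor
assert round(2.3) == 2 and round(2.7) == 3               # ordinary rounding otherwise
```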
|
{"url":"http://demonstrations.wolfram.com/FloorCeilingAndRound/","timestamp":"2014-04-21T12:22:58Z","content_type":null,"content_length":"44754","record_id":"<urn:uuid:638ff66d-9dc2-4d37-809c-03e5b67d1553>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00070-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Q&A: Plotting Trig Functions in Degrees
Mathematica Q&A: Plotting Trig Functions in Degrees
March 31, 2011 — Andrew Moylan, Technical Communication & Strategy
Got a question about Mathematica? The Wolfram Blog has answers! We’ll regularly answer selected questions from users around the web. You can submit your question directly to the Q&A Team using this form.
This week’s question comes from Brian, who is a part-time math teacher:
How do you plot trigonometric functions in degrees instead of radians?
Trigonometric functions in Mathematica such as Sin[x] and Cos[x] take x to be given in radians:
To convert from degrees to radians, multiply by π ⁄ 180. This special constant is called Degree in Mathematica.
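The same conversion applies outside Mathematica as well; as a hedged Python sketch, `math.radians` multiplies by π ⁄ 180 before the trigonometric functions are applied:

```python
import math

# Degree inputs must be converted to radians before calling sin/cos/tan;
# math.radians(x) is exactly x * pi / 180.
assert math.isclose(math.radians(180), math.pi)
assert math.isclose(math.sin(math.radians(30)), 0.5)
assert math.isclose(math.tan(math.radians(45)), 1.0)
```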
The symbol ° is a handy shorthand for Degree and is entered as Esc-d-e-g-Esc. You can also find this symbol in the Basic Math Assistant palette in the Palettes menu of Mathematica.
Using either Degree or °, you can plot trigonometric functions in degrees:
That answers the main question, but here’s a related hint.
When plotting trigonometric functions in degrees, you might also want to manually specify exactly where Mathematica draws tick marks. You can do this using the Ticks option:
(Here, Range[0, 360, 45] specifies the tick marks on the x axis, and Automatic uses the default tick marks on the y axis.)
The Ticks option is very flexible. You can specify where tick marks are drawn, what labels they should have, how long they are, and even colors and styles.
Download the Computable Document Format (CDF) file for this post to see how to get the custom tick marks used in this plot:
If you have a question you’d like to see answered in this blog, you can submit it to the Q&A Team using this form.
|
{"url":"http://blog.wolfram.com/2011/03/31/mathematica-qa-plotting-trig-functions-in-degrees/","timestamp":"2014-04-20T03:22:03Z","content_type":null,"content_length":"85547","record_id":"<urn:uuid:2689b86a-75aa-43c9-963c-19b49bd8b070>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00570-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Applications and problem solving
August 28th 2008, 06:55 PM #1
Applications and problem solving
I do not know how to get the following answer
reloading image.
The answer that is there I think is the right one, but I have no clue...
You have the system $\left\{\begin{array}{lcr}x-y&=&6\\x+y&=&10\end{array}\right.$
Use the method of elimination to solve the system.
Adding the two equations together, we see that the y term canceled out.
So we get $2x=16\implies x=\dots$
Then substitute this value of x into either equation in the system to find y.
Can you take it from here?
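The elimination steps can be traced numerically (a Python sketch of the arithmetic above, not part of the original reply):

```python
# Solve the system x - y = 6, x + y = 10 by elimination.
x = 16 / 2          # adding the two equations eliminates y: 2x = 16
y = 10 - x          # substitute x back into x + y = 10
assert (x, y) == (8, 2)
assert x - y == 6 and x + y == 10
```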
Last edited by Chris L T521; August 28th 2008 at 07:03 PM. Reason: Sorry...typo on my part... >_>
I'm terribly sorry. I'm so ticked I put the wrong one. The electricity is going on and off... Here's the one I don't know...
Determine the equation that represents the area of the picture:
The dimensions are given to you: $(39-2x)~cm$ by $(26-2x)~cm$
So we see that area is $(39-2x)(26-2x)~cm^2$
However, the area has a value of $440~cm^2$
Thus, $(39-2x)(26-2x)=440$
Expand the left side; the equation then becomes $4x^2 - 130x + 574 = 0$.
Solve this quadratic equation.
Can you take it from here?
Thank you so much Chris!!!! the answer is 5.3
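The answer can be checked with the quadratic formula (a Python sketch; expanding $(39-2x)(26-2x)=440$ gives $4x^2-130x+574=0$):

```python
import math

# Quadratic-formula check for (39 - 2x)(26 - 2x) = 440,
# i.e. 4x^2 - 130x + 574 = 0.
a_, b_, c_ = 4, -130, 574
disc = b_ * b_ - 4 * a_ * c_
roots = sorted((-b_ + s * math.sqrt(disc)) / (2 * a_) for s in (1.0, -1.0))
x = roots[0]   # only the smaller root keeps both picture dimensions positive
assert abs((39 - 2 * x) * (26 - 2 * x) - 440) < 1e-9
assert round(x, 1) == 5.3
```

The larger root (about 27.2) would make $39-2x$ negative, so it is discarded on physical grounds.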
|
{"url":"http://mathhelpforum.com/math-topics/47050-applications-problem-solving.html","timestamp":"2014-04-19T06:53:13Z","content_type":null,"content_length":"45648","record_id":"<urn:uuid:ca287c9f-25c5-474c-95d5-190f420afa23>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00189-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Solving polynomial systems for the kinematics analysis and synthesis of mechanisms and robot manipulators
Results 1 - 10 of 27
, 2001
"... In engineering and applied mathematics, polynomial systems arise whose solution sets contain components of different dimensions and multiplicities. In this article we present algorithms, based
on homotopy continuation, that compute much of the geometric information contained in the primary decomposi ..."
Cited by 56 (26 self)
Add to MetaCart
In engineering and applied mathematics, polynomial systems arise whose solution sets contain components of different dimensions and multiplicities. In this article we present algorithms, based on
homotopy continuation, that compute much of the geometric information contained in the primary decomposition of the solution set. In particular, ignoring multiplicities, our algorithms lay out the
decomposition of the set of solutions into irreducible components, by finding, at each dimension, generic points on each component. As by-products, the computation also determines the degree of each
component and an upper bound on itsmultiplicity. The bound issharp (i.e., equal to one) for reduced components. The algorithms make essential use of generic projection and interpolation, and can, if
desired, describe each irreducible component precisely as the common zeroesof a finite number of polynomials.
, 1997
"... The last decade has witnessed the rebirth of resultant methods as a powerful computational tool for variable elimination and polynomial system solving. In particular, the advent of sparse
elimination theory and toric varieties has provided ways to exploit the structure of polynomials encountered in ..."
Cited by 45 (17 self)
Add to MetaCart
The last decade has witnessed the rebirth of resultant methods as a powerful computational tool for variable elimination and polynomial system solving. In particular, the advent of sparse elimination
theory and toric varieties has provided ways to exploit the structure of polynomials encountered in a number of scientific and engineering applications. On the other hand, the Bezoutian reveals
itself as an important tool in many areas connected to elimination theory and has its own merits, leading to new developments in effective algebraic geometry. This survey unifies the existing work on
resultants, with emphasis on constructing matrices that generalize the classic matrices named after Sylvester, Bézout and Macaulay. The properties of the different matrix formulations are presented,
including some complexity issues, with an emphasis on variable elimination theory. We compare toric resultant matrices to Macaulay's matrix and further conjecture the generalization of Macaulay's
exact ratio...
- J. ACM , 1999
Cited by 33 (7 self)
Multivariate resultants generalize the Sylvester resultant of two polynomials and characterize the solvability of a polynomial system. They also reduce the computation of all common roots to a
problem in linear algebra.
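As a concrete illustration of the univariate case this abstract generalizes, the Sylvester resultant of two polynomials is the determinant of their Sylvester matrix and vanishes exactly when the polynomials share a common root. The sketch below is our own illustrative Python (not code from the paper; the helper names `sylvester_matrix`, `det`, and `resultant` are ours), using coefficient lists and a small Laplace-expansion determinant:

```python
def sylvester_matrix(f, g):
    """Sylvester matrix of two polynomials given as coefficient
    lists in decreasing degree, e.g. x^2 + 1 -> [1, 0, 1]."""
    m, n = len(f) - 1, len(g) - 1          # degrees of f and g
    rows = []
    for i in range(n):                     # n shifted copies of f's coefficients
        rows.append([0] * i + f + [0] * (n - 1 - i))
    for i in range(m):                     # m shifted copies of g's coefficients
        rows.append([0] * i + g + [0] * (m - 1 - i))
    return rows

def det(a):
    """Determinant by Laplace expansion along the first row
    (fine for the small matrices used here)."""
    if len(a) == 1:
        return a[0][0]
    total = 0
    for j, pivot in enumerate(a[0]):
        if pivot:
            minor = [row[:j] + row[j + 1:] for row in a[1:]]
            total += (-1) ** j * pivot * det(minor)
    return total

def resultant(f, g):
    return det(sylvester_matrix(f, g))

# x^2 - 1 and x - 1 share the root x = 1, so the resultant is 0;
# x^2 + 1 and x - 1 have no common root, and the resultant is nonzero (2).
print(resultant([1, 0, -1], [1, -1]))   # 0
print(resultant([1, 0, 1], [1, -1]))    # 2
```

This is exactly the "reduce the computation of common roots to linear algebra" idea in miniature: solvability of the pair is read off a determinant.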
, 2004
Cited by 16 (8 self)
For many mechanical systems, including nearly all robotic manipulators, the set of possible configurations that the links may assume can be described by a system of polynomial equations. Thus,
solving such systems is central to many problems in analyzing the motion of a mechanism or in designing a mechanism to achieve a desired motion. This paper describes techniques, based on polynomial
continuation, for numerically solving such systems. Whereas in the past, these techniques were focused on finding isolated roots, we now address the treatment of systems having higher-dimensional
solution sets. Special attention is given to cases of exceptional mechanisms, which have a higher degree of freedom of motion than predicted by their mobility. In fact, such mechanisms often have
several disjoint assembly modes, and the degree of freedom of motion is not necessarily the same in each mode. Our algorithms identify all such assembly modes, determine their dimension and degree,
and give sample points on each.
, 2001
Cited by 10 (4 self)
In this article, we study the residual resultant which is the necessary and sufficient condition for a polynomial system F to have a solution in the residual of a variety, defined here by a complete
intersection G. We show that it corresponds to an irreducible divisor and give an explicit formula for its degree in the coefficients of each polynomial. Using the resolution of the ideal (F : G) and
computing its regularity, we give a method for computing the residual resultant using a matrix which involves a Macaulay and a Bezout part. In particular, we show that this resultant is the gcd of
all the maximal minors of this matrix. We illustrate our approach for the residual of points and end by some explicit examples.
Cited by 6 (0 self)
How hard is it to solve a system of bilinear equations? No solutions are presented in this report, but the problem is posed and some preliminary remarks are made. In particular, solving a system of
bilinear equations is reduced by a suitable transformation of its columns to solving a homogeneous system of bilinear equations. In turn, the latter has a nontrivial solution if and only if there
exist two invertible matrices that, when applied to the tensor of the coefficients of the system, zero its first column. Matlab code is given to manipulate three-dimensional tensors, including a
procedure that finds one solution to a bilinear system often, but not always. Contents: 1 Introduction; 2 Bilinear Systems with m = 1; 2.1 Homogeneous Systems with m = n = 1; 2.2 Homogeneous Systems with m = 1; 2.3 Non-Homogeneous Systems with m = 1 ...
, 1998
Cited by 5 (0 self)
The problem of computing zeros of a system of polynomial equations has been well studied in the computational literature. A number of algorithms have been proposed and many computer algebra and public domain packages provide the capability of computing the roots of polynomial equations. Most of these implementations are based on Gröbner bases, which can be slow for even small problems. In this paper, we present a new system, MARS, to compute the roots of a zero-dimensional polynomial system. It is based on computing the resultant of a system of polynomial equations followed by eigendecomposition of a generalized companion matrix. MARS includes a robust library of Maple functions for constructing resultant matrices, an efficient library of Matlab routines for numerically solving the eigenproblem, and C code generation routines and a C library for incorporating the numerical solver into applications. We illustrate the usage of MARS on various examples and utilize different resultant formulations.
- ASME J. Mech. Des , 2002
Cited by 5 (1 self)
In this paper, the geometric design problem of serial-link robot manipulators with three revolute (R) joints is solved for the first time using an interval analysis method. In this problem, five
spatial positions and orientations are defined and the dimensions of the geometric parameters of the 3-R manipulator are computed so that the manipulator will be able to place its end-effector at
these pre-specified locations. Denavit and Hartenberg parameters and 4x4 homogeneous matrices are used to formulate the problem and obtain the design equations and an interval method is used to
search for design solutions within a predetermined domain. At the time of writing this paper, six design solutions within the search domain and an additional twenty solutions outside the domain have
been found. KEYWORDS Geometric Design, Robot Manipulators, Interval Analysis
, 1997
Cited by 4 (0 self)
In this dissertation, a novel parallel manipulator is investigated. The manipulator has three degrees of freedom and the moving platform is constrained to only translational motion. The main
advantages of this parallel manipulator are that all of the actuators can be attached directly to the base, closed-form solutions are available for the forward kinematics, the moving platform
maintains the same orientation throughout the entire workspace, and it can be constructed with only revolute joints.
Math Forum Discussions
Topic: MULTIPLE COMPARISISONS OF MEAN VECTORS
Replies: 0
Posted: Jun 21, 1996 4:53 AM
Hello over there,
I need a procedure to compare several mean vectors simultaneously. We have
e.g. three multivariate normally distributed random vectors with means m1, m2, m3
and the same covariance matrix Sigma. Each random vector has 4 variables.
I want to implement a test on Mathematica.
Now we want to test simultaneously
H1: m1=m2
H2: m1=m3
H3: m2=m3
of these three random vectors and not of each variable. In the univariate case there
are the Scheffé, Tukey, and similar procedures, but in the multivariate case I don't know.
Anyway, I would need the test statistic and the quantiles,
or how to compute p-values for that test.
So, if this is not a problem for you, please help me and send your
answer directly to me:
Best wishes
hope to hear from you soon
Wolfgang Hitzl
Institute for Mathematics,
University of Innsbruck, Austria
So all I want to know is which test, a
One of the tools I use for drawing graphs is dot from Graphviz. Recently I was drawing a series of diagrams that have mostly identical parts. It was error-prone and a pain to keep the identical parts
in sync between the multiple *.dot files. I found Haskell’s Data.GraphViz package to be the solution.
The documentation for that package is great, but it needs a few more examples. The purpose of this post is to add examples to the pool.
Oakton Calculus Tutor
Find an Oakton Calculus Tutor
...It is formula heavy and requires more memorization than the previous year of Algebra 2. Performing well on exams like the SAT is partially about the mathematics and partially about the
strategies needed to do well. The SAT includes math equations that range all the way to Algebra 2.
24 Subjects: including calculus, reading, geometry, ASVAB
...If you’re interested in working with me, feel free to send me an email and inquire about my availability. Currently, I do all of my tutoring at local libraries within 5 miles of Rockville. I took a semester of Discrete Math in college and earned an A in the class. Discrete math accompanies a degree in computer science.
9 Subjects: including calculus, physics, geometry, algebra 1
...I was on Dean's List in Spring 2010 and hope to be on it again this semester. I would like to help students understand better course materials and what is integral in extracting information
from problems and solving them. I would like to see students try solving problems on their own first and treat me with respect so that it can be reciprocated.
17 Subjects: including calculus, chemistry, physics, geometry
Hello! My name is Kaitlyn, and I am a government contractor (engineer) who graduated from Virginia Tech with a BS in Industrial and Systems Engineering and a BA in Economics with a 3.5+ GPA. While
in college, I tutored 1-3 students a semester, mostly in business calculus.
25 Subjects: including calculus, chemistry, physics, statistics
...I am a biological physics major at Georgetown University and so I have a lot of interdisciplinary science experience, most especially with mathematics (Geometry, Algebra, Precalculus,
Trigonometry, Calculus I and II). Additionally, I have tutored people in French and Chemistry, even though they a...
11 Subjects: including calculus, chemistry, French, geometry
boundary detection
From: snehal d on 31 Mar 2010 13:11 I want to put a boundary depending on the min and max values of coordinates to detect an object. Can anyone help me? The code below is not working:
for i=1:1:row
for j=1:1:col
if ero(i,j)==1
if i<i++ || j<j++
From: ImageAnalyst on 31 Mar 2010 17:12 On Mar 31, 1:11 pm, "snehal d" <shruti8...(a)gmail.com> wrote:
> I want to put boundary depending on min and max value of co-ordinates to detect object can any one help me.... below code is not working can anyone help...
> [row,col]=size(ero);
> for i=1:1:row
> for j=1:1:col
> if ero(i,j)==1
> if i<i++ || j<j++
> min=ero(i,j);
> else
> max=ero(i,j);
> end
> end
> end
> end
I don't understand your question, nor your code. It's not even MATLAB
code - there is no ++ operator in MATLAB. Plus all it does is
repeatedly overwrite and override max and min with the latest pixel
value. Finally it overrides MATLAB's max() and min() functions so
they won't work anymore. This is so messed up that I don't even know
where to start.
From: snehal d on 1 Apr 2010 10:28 I have converted RGB to gray and detected the edges of the object. Now I should put a boundary around this object so that I can recognise the object's shape. Help me find the min and max coordinate values so that, using these values, I can draw the boundary.
From: ImageAnalyst on 1 Apr 2010 12:52 On Apr 1, 10:28 am, "snehal d" <shruti8...(a)gmail.com> wrote:
> I hve converted rgb to gray and detected edges of the object.... now i should put boundary to this object so that i can recognise object shape...help me in finding out min and max coordinates
value so that using this values i can draw boundary...
Not enough information. Show some code that actually works, and post
your image to http://drop.io. Finally explain what you want to do -
"recognise" is WAY too vague a term, as is "put boundary to this
object". Then we can talk.
From: snehal d on 1 Apr 2010 13:51 ImageAnalyst <imageanalyst(a)mailinator.com> wrote in message <ad830ff4-1f2d-4ab5-9b10-9faa7fb5626a(a)z7g2000yqb.googlegroups.com>...
> On Apr 1, 10:28 am, "snehal d" <shruti8...(a)gmail.com> wrote:
> > I hve converted rgb to gray and detected edges of the object.... now i should put boundary to this object so that i can recognise object shape...help me in finding out min and max coordinates
value so that using this values i can draw boundary...
> ------------------------------------------------
> Not enough information. Show some code that actually works, and post
> your image to http://drop.io. Finally explain what you want to do -
> "recognise" is WAY to vauge of a term, as is "put boundary to this
> object". Then we can talk.
Suppose there is a square in an image. I used the Canny edge detector to point out the edges; now I want to put a boundary around that object (the square) so that I can recognise that it is a square. Help me put a boundary around the object.
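What the poster describes amounts to computing a bounding box from the nonzero pixels of a binary edge map. The thread is about MATLAB, but the idea is language-independent; here is an illustrative sketch in plain Python (the function name `bounding_box` and the sample array are ours, not from the thread):

```python
def bounding_box(edge_map):
    """Return (row_min, row_max, col_min, col_max) of the nonzero
    pixels in a binary edge map (a list of lists of 0/1), or None
    if no edge pixels exist. Drawing a rectangle through these four
    values puts a boundary around the detected object."""
    coords = [(i, j)
              for i, row in enumerate(edge_map)
              for j, v in enumerate(row) if v]
    if not coords:
        return None
    rows = [i for i, _ in coords]
    cols = [j for _, j in coords]
    return min(rows), max(rows), min(cols), max(cols)

# edges of a small square occupying rows 1..3 and columns 1..3
square = [[0, 0, 0, 0, 0],
          [0, 1, 1, 1, 0],
          [0, 1, 0, 1, 0],
          [0, 1, 1, 1, 0],
          [0, 0, 0, 0, 0]]
print(bounding_box(square))   # (1, 3, 1, 3)
```

In MATLAB the same min/max-over-coordinates idea replaces the broken nested loops in the original post; no `++` operator or shadowing of `min`/`max` is needed.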
Re: st: How to choose consistent starting values
From Maarten buis <maartenbuis@yahoo.co.uk>
To statalist@hsphsun2.harvard.edu
Subject Re: st: How to choose consistent starting values
Date Mon, 31 Aug 2009 15:11:32 +0000 (GMT)
--- On Mon, 31/8/09, amatoallah ouchen wrote:
> I'm using a tobit model with friction, and the parameters
> have to be estimated by the maximum likelihood procedure,
> the number of iterations are exceeded, because the
> starting points i'm using are not good, is there is any
> way to choose consistent starting points?
I am a bit confused about what you mean by "a tobit model
with friction". Is this a special type of tobit model, or
does it mean that you are having difficulty with
estimating a regular tobit model (using -tobit-)?
In the former case: can you tell us exactly what commands
you used?
In the latter case: -tobit- is a reasonably well behaved
model, where the starting values provided by Stata should
work. If it doesn't converge, then that indicates that
there is something fundamentally wrong with your model,
e.g. (virtually) all your observations are censored, a
(set of) variable(s) perfectly predicts the probability
of being censored, etc. This can easily remain
unnoticed during univariate or bivariate inspection of
your data, as missing values on other variables in the
model will cause these observations to be dropped from
the analysis.
I would do the following. Say that your dependent
variable is y and censoring happens at y=0, and your
independent variables are x1 and x2.
First create a dummy indicating whether an observation
is used in the analysis:
gen byte touse = !missing(y, x1, x2)
Check the number of censored and non-censored obs:
gen byte censored = y == 0 if touse == 1
tab censored if touse == 1
Check if there is perfect prediction of probability
of being censored:
probit censored x1 x2 if touse == 1
Do other checks to see if your model can be estimated
with your data.
Hope this helps,
Maarten L. Buis
Institut fuer Soziologie
Universitaet Tuebingen
Wilhelmstrasse 36
72074 Tuebingen
* For searches and help try:
* http://www.stata.com/help.cgi?search
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/
Calculating Area & Perimeter
This page shows a set of two-dimensional shapes that have their sides labeled, and the student's task is to compute the area and/or perimeter. For complex shapes, you can also choose to have all the
sides labeled so that they do not have to do the subtraction to find out the unlabeled sides. The student must remember to write the units for their answer! However, if you don't want to burden them
with that task, you can just stick with numeric values.
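The computation the worksheet asks for can be sketched in a few lines; this illustrative Python (the function name is ours) computes a rectangle's area and perimeter and attaches the units, as the page reminds students to do:

```python
def rectangle_measures(length, width, unit="cm"):
    """Area and perimeter of a rectangle, with units attached."""
    area = length * width              # length x width
    perimeter = 2 * (length + width)   # sum of all four sides
    return f"{area} sq {unit}", f"{perimeter} {unit}"

print(rectangle_measures(5, 3))   # ('15 sq cm', '16 cm')
```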
Highland Park, NJ Trigonometry Tutor
Find a Highland Park, NJ Trigonometry Tutor
...I have been a mathematics instructor for the past eleven years. I have taught in middle school and presently teach high school mathematics. I have taught in various programs such as
after-school and weekend school.
7 Subjects: including trigonometry, geometry, algebra 2, SAT math
...I also have been the Head Coach of a Girls Softball team in my borough. It is very rewarding for me to work with kids and they always seem to respond positively to my methods and motivational
techniques. My tutoring methods are based on determining and assessing the source of a student's needs and focusing on positive reinforcement.
41 Subjects: including trigonometry, English, chemistry, physics
...I also spend more time helping students visualize what is happening at the molecular level to explain concepts (ex. Why is the pH at a certain level at every level of a weak acid – strong base
titration curve) Organic Chemistry --> I spend a significant amount of time reminding students of a...
34 Subjects: including trigonometry, chemistry, calculus, physics
...Therefore, I have seen certain topics and concepts covered in Algebra 2 in one school, but in Precalculus in another. I have experience tutoring all three subjects and am familiar with the
vast majority of topics and concepts that may be covered in any of the courses. Furthermore, if I am not f...
9 Subjects: including trigonometry, geometry, algebra 2, SAT math
...Algebra II is often difficult for students because they have trouble seeing the link between the equations (which are abstract) and the real world. I try to bridge the gap to show how the math
represents the real world and also to look at alternatives in teaching and visualizing the key concepts...
17 Subjects: including trigonometry, calculus, physics, statistics
Related Highland Park, NJ Tutors
Highland Park, NJ Accounting Tutors
Highland Park, NJ ACT Tutors
Highland Park, NJ Algebra Tutors
Highland Park, NJ Algebra 2 Tutors
Highland Park, NJ Calculus Tutors
Highland Park, NJ Geometry Tutors
Highland Park, NJ Math Tutors
Highland Park, NJ Prealgebra Tutors
Highland Park, NJ Precalculus Tutors
Highland Park, NJ SAT Tutors
Highland Park, NJ SAT Math Tutors
Highland Park, NJ Science Tutors
Highland Park, NJ Statistics Tutors
Highland Park, NJ Trigonometry Tutors
Large Graph Representation in C++
There might be similar questions, but I still have some parts that I couldn't figure out. I'm trying to represent an undirected graph with no weights, just 1 for connected and 0 for not connected. The graph (read from a file) has 80500 nodes and over 5.5 million edges. I was wondering:
1. Would it have a huge impact if I changed my adjacency matrix (the one I'm currently using) to an adjacency list? I have no problem with the implementation, just asking whether it is worth the time to convert it to a list.
2. Since I just store 1 and 0, is there a special data type to store this? I'm using int, and I guess a byte data type would save a lot of space.
3. Is there any structure other than an adjacency matrix or list that could be better for this typical problem?
c++ data-structures graph
What are you using the graph for? – Jakub Zaverka Mar 28 '12 at 17:08
I'm writing a friend recommendation algorithm and using the graph for the data – Ali Mar 28 '12 at 17:11
2 Answers
Adjacency lists are way better space-wise, because then you just need to save 5.5 million * 2 numbers = 11 000 000 integers. Assuming you save short integers (2 bytes), you need 22 000 000 bytes.

If you represent it using an adjacency matrix, then you need to save 80500 * 80500 = 6 480 250 000 elements. Even if you save them as bytes, having 22 million bytes is much better than having over 6 billion of them.

EDIT: If you save edges as two 4-byte integers, then you have 44 000 000 bytes. If you save the matrix very efficiently with bit fiddling, then you can save 8 elements in one byte. But that means you still need 810 031 250 bytes. Not that large a difference now, but it's still 20 times more.
Thanks a lot. So for the data type part is there anything more efficient than the int ? – Ali Mar 28 '12 at 17:12
Consider: (1) One can pack the booleans in the adjacency matrix very tightly - you need some bit fiddling, but you can end up using each bit efficently (2) For the adjacency
lists, you'll need something larger than 16 bit integers, as 80500 > 2^16. – delnan Mar 28 '12 at 17:13
Every edge is represented by two integers. So we take 5.5 million edges = 11 million integers. If we use short int for saving, then each number will take two bytes. 11 million
numbers * 2 bytes = 22 million bytes. You might be confused, I actually corrected a mistake I made previously. – Jakub Zaverka Mar 28 '12 at 17:16
You indeed fixed that before I finished my comment. I noticed later and adapted :) Nevertheless, two issues remain. – delnan Mar 28 '12 at 17:17
@delnan see edit – Jakub Zaverka Mar 28 '12 at 17:19
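The arithmetic in the accepted answer is easy to reproduce; this illustrative sketch (function names ours) compares the storage of the two representations for the graph in the question, including the bit-packed matrix variant:

```python
def adjacency_list_bytes(num_edges, bytes_per_id=4):
    # each undirected edge appears in two lists, one node id per entry
    return 2 * num_edges * bytes_per_id

def adjacency_matrix_bytes(num_nodes, bits_per_cell=8):
    # square boolean matrix; bits_per_cell=1 models tight bit-packing
    return num_nodes * num_nodes * bits_per_cell // 8

print(adjacency_list_bytes(5_500_000))                  # 44000000
print(adjacency_matrix_bytes(80_500))                   # 6480250000
print(adjacency_matrix_bytes(80_500, bits_per_cell=1))  # 810031250
```

The numbers match the answer: even a bit-packed matrix is roughly 20 times larger than the list of 4-byte edge endpoints.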
If your data is not sparse then you may not get as much space savings with an adjacency list. You could use an adjacency matrix with compressed or encoded rows (or columns, but your graph is undirected, so compressing rows is probably more natural for lookups). With compression you will reduce space, at the time cost of decompressing rows on lookup.
Not the answer you're looking for? Browse other questions tagged c++ data-structures graph or ask your own question.
word problem...
August 5th 2007, 12:05 PM #1
Jun 2007
word problem...
A sphere shrinks at a rate of 1 cm^3 per second. At what rate does the diameter decrease when the diameter is 10 cm? And yes, this is all the info I was given.
Well, for a sphere
$V = \frac{4}{3} \pi r^3 = \frac{4}{3} \pi \left ( \frac{d}{2} \right ) ^3$
$V = \frac{1}{6} \pi d^3$
$\frac{dV}{dt} = \frac{1}{2} \pi d^2 \frac{dd}{dt}$
The rest you can do yourself.
I have little time before I have to go to work. I just want to practice on this. I'm a bit rusty right now.
To find: rate of decrease of diameter
So dD/dt, where D means diameter
V = (4/3)(pi)(r^3)
in terms of D,
V = (4/3)(pi)[(D/2)^3]
V = (1/6)(pi)(D^3)
Differentiate both sides with respect to time t,
dV/dt = (pi/6)[3 D^2 dD/dt]
dV/dt = (pi/2)(D^2) dD/dt ----------**
So when D = 10cm, and since dV/dt = -1 cu.cm per second,
-1 = (pi/2)(10^2) dD/dt
-1 = 50pi dD/dt
dD/dt = -1/(50pi)
Therefore, when the diameter is 10cm, the diameter is decreasing at the rate of 1/(50pi) cm/sec. ---------------answer.
No time to review. If there are mistakes, I'll rectify after work---about 11 hrs from now.
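The worked answer above can be spot-checked numerically; this small Python sketch (names ours) evaluates dD/dt from the relation dV/dt = (pi/2) D^2 dD/dt derived in the thread:

```python
import math

def diameter_rate(dV_dt, d):
    """From V = (pi/6) D^3 we get dV/dt = (pi/2) D^2 dD/dt,
    so dD/dt = dV/dt / ((pi/2) D^2)."""
    return dV_dt / ((math.pi / 2) * d ** 2)

rate = diameter_rate(-1.0, 10.0)
print(rate)   # -1/(50*pi), about -0.00637 cm/sec
```

This confirms the answer: at D = 10 cm the diameter decreases at 1/(50 pi) cm per second.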
August 5th 2007, 12:15 PM #2
August 5th 2007, 12:22 PM #3
MHF Contributor
Apr 2005
IOR(I, J)
XL Fortran for AIX 8.1
Language Reference
Performs an inclusive OR.
I: must be of type integer.
J: must be of type integer with the same kind type parameter as I.
Elemental function
Result Type and Attributes
Same as I.
Result Value
The result has the value obtained by combining I and J bit-by-bit according to the following truth table:
I J IOR (I, J)
1 1 1
1 0 1
0 1 1
0 0 0
The bits are numbered 0 to BIT_SIZE(I)-1, from right to left.
IOR (1, 3) has the value 3. See Integer Bit Model.
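For comparison, most languages expose the same bit-by-bit inclusive OR directly; in Python it is the `|` operator. This illustrative sketch (the wrapper name `ior` is ours) reproduces the documented example and the single-bit truth table:

```python
def ior(i, j):
    """Bitwise inclusive OR, mirroring Fortran's IOR intrinsic."""
    return i | j

print(ior(1, 3))   # 3, matching the IOR(1, 3) example above
# the four truth-table rows for a single bit
print([ior(1, 1), ior(1, 0), ior(0, 1), ior(0, 0)])   # [1, 1, 1, 0]
```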
Specific Name Argument Type Result Type Pass As Arg?
IOR (1) any integer same as argument yes
OR (1) any integer same as argument yes
1. IBM Extension.
Common Core Standards : CCSS.Math.Content.HSF-TF.A.4
Common Core Standards: Math
4. Use the unit circle to explain symmetry (odd and even) and periodicity of trigonometric functions.
Students should be able to determine the symmetry of trigonometric functions. They should also know that when we say "symmetry," we aren't talking mirrors and palindromes.
By comparing the values of trigonometric functions in quadrants I and IV, students can determine whether the function is odd or even. To do that, though, they should really know what even and odd
functions are, first.
Students should know that cosine and secant are even functions and are symmetric with respect to the y-axis. We know this is true because of the negative angle identities for cosine and secant.
cos(-θ) = cosθ
sec(-θ) = secθ
As expected, the rest of 'em (sine, cosecant, tangent, and cotangent) are odd functions and are symmetric to the origin. These also have negative angle identities.
sin(-θ) = -sinθ
csc(-θ) = -cscθ
tan(-θ) = -tanθ
cot(-θ) = -cotθ
Students might get these identities confused with All Students Take Calculus (ASTC). Remind them that these identities apply to angles, while ASTC applies to quadrants. Either way, they both still work.
You might also want to take this opportunity to talk about these functions as lines on the unit circle. It's not necessary, but it may be helpful for students to see the trigonometric functions represented in this way so they can better understand the relationships of these functions and make predictions (about what might happen to the tangent function when θ = π/2, for example).
Students should also know that the six trigonometric functions are periodic functions. Specifically, the periods of sine, cosecant, cosine, and secant are 2π, and the periods of tangent and cotangent
are π. To understand why, move θ from 0 all the way to 2π on the unit circle and check how each function changes.
After every period, the function repeats itself, which means that sin(θ + 2π) = sinθ. The same applies to the five other trigonometric functions and their respective periods. That's just the nature
of periodicity.
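The negative-angle and periodicity identities above are easy to spot-check numerically at any angle; this short Python sketch (the choice of θ = 0.7 is ours, avoiding the tangent's poles) verifies them with the standard `math` functions:

```python
import math

theta = 0.7  # any angle away from a tangent pole works here

# even function: cosine (and hence secant)
assert math.isclose(math.cos(-theta), math.cos(theta))
# odd functions: sine and tangent (cosecant and cotangent follow)
assert math.isclose(math.sin(-theta), -math.sin(theta))
assert math.isclose(math.tan(-theta), -math.tan(theta))

# periodicity: 2*pi for sine and cosine, pi for tangent
assert math.isclose(math.sin(theta + 2 * math.pi), math.sin(theta))
assert math.isclose(math.cos(theta + 2 * math.pi), math.cos(theta))
assert math.isclose(math.tan(theta + math.pi), math.tan(theta))
print("all identities verified at theta =", theta)
```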
Boca Raton Math Tutor
Find a Boca Raton Math Tutor
...Though I am a student myself, I have a year of professional teaching experience with middle-school aged students from a Montessori School where I worked during my last year of college, as a
Mathematics Teaching Assistant and Model United Nations Coordinator. Independently, I have tutored middle-...
10 Subjects: including calculus, precalculus, trigonometry, probability
...I believe I am qualified to tutor this subject because I have been tutoring the TEAS for many years now. Also, I am currently teaching it at a nursing school. And, I have, in the past, prepared
many pre-nursing students for the test through WyzAnt.
34 Subjects: including precalculus, psychology, biochemistry, physical science
...I am currently pursuing my bachelor's degrees in English and Criminal Justice, and hope to graduate by December 2013. I was a Big Sister in the Big Brothers Big Sisters Program, and I love
helping kids to really understand their material. I try to use methods and little tricks to help remember ...
15 Subjects: including algebra 1, reading, English, writing
...I have a very flexible schedule and am willing to meet at any location convenient for the student. I am most proud of one girl I have recently tutored for two years. She home-schools herself,
and has recently passed all of her EOC and AP exams with my help.
65 Subjects: including calculus, SAT math, precalculus, prealgebra
...I am Florida State Certified in school psychology, reading, learning disabilities and a number of K-12 specialties. I have decades of experience, with a library of materials, and a network of
other teaching professionals to refer to. And I am relentless, energetic, yet affable and I do not give up easily.
19 Subjects: including algebra 1, prealgebra, reading, English
algebra (mathematics) :: Islamic contributions
Islamic contributions
Islamic contributions to mathematics began around ad 825, when the Baghdad mathematician Muḥammad ibn Mūsā al-Khwārizmī wrote his famous treatise al-Kitāb al-mukhtaṣar fī ḥisāb al-jabr wa’l-muqābala
(translated into Latin in the 12th century as Algebra et Almucabal, from which the modern term algebra is derived). By the end of the 9th century a significant Greek mathematical corpus, including
works of Euclid, Archimedes (c. 285–212/211 bc), Apollonius of Perga (c. 262–190 bc), Ptolemy (fl. ad 127–145), and Diophantus, had been translated into Arabic. Similarly, ancient Babylonian and
Indian mathematics, as well as more recent contributions by Jewish sages, were available to Islamic scholars. This unique background allowed the creation of a whole new kind of mathematics that was
much more than a mere amalgamation of these earlier traditions. A systematic study of methods for solving quadratic equations constituted a central concern of Islamic mathematicians. A no less
central contribution was related to the Islamic reception and transmission of ideas related to the Indian system of numeration, to which they added decimal fractions (fractions such as 0.125, or 1/8).
Al-Khwārizmī’s algebraic work embodied much of what was central to Islamic contributions. He declared that his book was intended to be of “practical” value, yet this definition hardly applies to its
contents. In the first part of his book, al-Khwārizmī presented the procedures for solving six types of equations: squares equal roots, squares equal numbers, roots equal numbers, squares and roots
equal numbers, squares and numbers equal roots, and roots and numbers equal squares. In modern notation, these equations would be stated ax^2 = bx, ax^2 = c, bx = c, ax^2 + bx = c, ax^2 + c = bx, and
bx + c = ax^2, respectively. Only positive numbers were considered legitimate coefficients or solutions to equations. Moreover, neither symbolic representation nor abstract symbol manipulation
appeared in these problems—even the quantities were written in words rather than in symbols. In fact, all procedures were described verbally. This is nicely illustrated by the following typical
problem (recognizable as the modern method of completing the square):
What must be the square which, when increased by 10 of its own roots, amounts to 39? The solution is this: You halve the number of roots, which in the present instance yields 5. This you multiply
by itself; the product is 25. Add this to 39; the sum is 64. Now take the root of this, which is 8, and subtract from it half the number of the roots, which is 5; the remainder is 3. This is the
root of the square which you sought.
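Al-Khwārizmī's verbal recipe maps step for step onto modern arithmetic. A short sketch in Python (the function name is mine, not his) reproducing the worked example x² + 10x = 39:

```python
import math

# al-Khwarizmi's recipe for "squares and roots equal numbers": x^2 + b*x = c
def complete_the_square(b, c):
    half = b / 2             # halve the number of roots: 5
    square = half ** 2       # multiply it by itself: 25
    total = square + c       # add this to 39: 64
    root = math.sqrt(total)  # take the root: 8
    return root - half       # subtract half the roots: 3

print(complete_the_square(10, 39))  # -> 3.0
```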
In the second part of his book, al-Khwārizmī used propositions taken from Book II of Euclid’s Elements in order to provide geometric justifications for his procedures. As remarked above, in their
original context these were purely geometric propositions. Al-Khwārizmī directly connected them for the first time, however, to the solution of quadratic equations. His method was a hallmark of the
Islamic approach to solving equations—systematize all cases and then provide a geometric justification, based on Greek sources. Typical of this approach was the Persian mathematician and poet Omar
Khayyam’s Risālah fiʾl-barāhīn ʿalā masāʾil al-jabr waʾl-muqābalah (c. 1070; “Treatise on Demonstration of Problems of Algebra”), in which Greek knowledge concerning conic sections (ellipses,
parabolas, and hyperbolas) was applied to questions involving cubic equations.
The use of Greek-style geometric arguments in this context also led to a gradual loosening of certain traditional Greek constraints. In particular, Islamic mathematics allowed, and indeed encouraged,
the unrestricted combination of commensurable and incommensurable magnitudes within the same framework, as well as the simultaneous manipulation of magnitudes of different dimensions as part of the
solution of a problem. For example, the Egyptian mathematician Abu Kāmil (c. 850–930) treated the solution of a quadratic equation as a number rather than as a line segment or an area. Combined with
the decimal system, this approach was fundamental in developing a more abstract and general conception of number, which was essential for the eventual creation of a full-fledged abstract idea of an equation.
Electron. J. Diff. Eqns., Vol. 2008(2008), No. 126, pp. 1-22.
Construction of almost periodic sequences with given properties
Michal Vesely Abstract:
We define almost periodic sequences with values in a pseudometric space X and we modify the Bochner definition of almost periodicity so that it remains equivalent with the Bohr definition. Then, we
present one (easily modifiable) method for constructing almost periodic sequences in X. Using this method, we find almost periodic homogeneous linear difference systems that do not have any
non-trivial almost periodic solution. We treat this problem in a general setting where we suppose that entries of matrices in linear systems belong to a ring with a unit.
Submitted October 9, 2007. Published September 10, 2008.
Math Subject Classifications: 39A10, 42A75.
Key Words: Almost periodic sequences; almost periodic solutions to difference systems.
Show me the PDF file (323 KB), TEX file, and other files for this article.
Michal Vesely
Department of Mathematics and Statistics, Masaryk University
Janackovo nam. 2a, CZ-602 00 Brno, Czech Republic
email: michal.vesely@mail.muni.cz
On limited versus polynomial nondeterminism
Uriel Feige* Joe Kilian†
December 5, 1995
We show that efficient algorithms for some problems that require limited nondeterminism imply subexponential simulation of nondeterministic computation by deterministic computation. In particular, if
cliques of size O(log n) can be found in polynomial time, then nondeterministic time f(n) is contained in deterministic time
2^(O(√(f(n) · polylog f(n)))).
1 Introduction
A major open question in computational complexity is whether P = NP . Put in other words, is it true that if a language L can be recognized in time f(n) by a nondeterministic Turing machine (where n
denotes the input length), then L can be recognized in time (f(n))^c by a deterministic Turing machine (for some constant c)? Let DTIME(f(n)) denote the class of languages accepted by a deterministic Turing machine in time O(f(n)), and let NTIME(f(n)) denote the class of languages accepted by a nondeterministic Turing machine in time O(f(n)). The trivial relations between these classes are DTIME(f(n)) ⊆ NTIME(f(n)) (by definition) and NTIME(f(n)) ⊆ DTIME(2^(O(f(n)))) (by exhaustive search). Perhaps the only nontrivial relation known is that DTIME(n) ≠ NTIME(n) [16].
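The exhaustive-search baseline for small cliques is easy to make concrete. A sketch in Python (the toy graph and function name are illustrative, not from the paper): testing all k-subsets takes roughly n^k time, which is quasi-polynomial when k = O(log n); the paper's question is whether this can be improved to polynomial time and what that would imply.

```python
from itertools import combinations

def has_clique(adj, k):
    """Exhaustive search: try every k-subset of vertices -- about n^k
    work, i.e. quasi-polynomial when k = O(log n)."""
    for subset in combinations(adj, k):
        if all(v in adj[u] for u, v in combinations(subset, 2)):
            return True
    return False

# Hypothetical toy graph (not from the paper): a triangle plus a pendant vertex.
adj = {1: {2, 3}, 2: {1, 3}, 3: {1, 2, 4}, 4: {3}}
print(has_clique(adj, 3))  # -> True
print(has_clique(adj, 4))  # -> False
```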
A natural question to ask is whether there is a general methodology that improves over exhaustive search, and applies to a wide range of NP-problems. For some specific NP- problems, improvements over
exhaustive search that involve the constant in the exponent were obtained in [23, 22, 4]. However, we seek a more dramatic improvement, with more general applicability. This leads to the following
*Department of Applied Math and Computer Science, the Weizmann Institute, Rehovot 76100, Israel. feige@wisdom.weizmann.ac.il. Incumbent of the Joseph and Celia Reskin Career Development Chair.
Supported by a Yigal Alon fellowship. Work done in part while the author was visiting the NEC Research Institute.
†NEC Research Institute, 4 Independence Way, Princeton, New Jersey, 08540. joe@research.nj.nec.com
Rahns, PA Algebra 2 Tutor
Find a Rahns, PA Algebra 2 Tutor
...Personally, I missed 5 questions on the SAT, 3 questions on the GMAT, received a perfect score on the LSAT and scored in the 99th percentile on every standardized test I can recall since early childhood. PPS: Details for admissions services are available by appointment. Admissions counse...
23 Subjects: including algebra 2, reading, English, geometry
...The most important aspect of my tutoring is the feedback that I receive from my students. I am constantly looking to improve my teaching style and welcome any suggestions of how I can be a
more effective tutor. If any student is unsure whether or not I will be able to help them in a particular subject, I encourage them to contact me so that we can discuss if I can be of assistance
to them.
19 Subjects: including algebra 2, calculus, statistics, geometry
...Having worked with a diverse population of students, I have strong culturally competent teaching practices that are adaptive to diverse student learning needs. I have a robust knowledge of
various math curricula and resources that will get your child to love math in no time! I look forward to w...
9 Subjects: including algebra 2, geometry, ESL/ESOL, algebra 1
...I have been tutoring elementary school students since I was in 8th grade and high school students beginning in my sophomore year through my school's National Honors Society. My methods for
tutoring vary from subject to subject. For matters such as math, music theory, physics, and speech, I focus on the SDT standby: See, Do, Teach.
42 Subjects: including algebra 2, reading, English, calculus
I completed my master's in education in 2012 and having this degree has greatly impacted the way I teach. Before this degree, I earned my bachelor's in engineering but switched to teaching
because this is what I do with passion. I started teaching in August 2000 and my unique educational backgroun...
12 Subjects: including algebra 2, calculus, physics, ACT Math
Hypergraphs and Cellular Networks
Citation: Klamt S, Haus U-U, Theis F (2009) Hypergraphs and Cellular Networks. PLoS Comput Biol 5(5): e1000385. doi:10.1371/journal.pcbi.1000385
Editor: Jörg Stelling, ETH Zürich, Switzerland
Published: May 29, 2009
Copyright: © 2009 Klamt et al. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction
in any medium, provided the original author and source are credited.
Funding: This work was supported by the German Federal Ministry of Education and Research (HepatoSys and FORSYS-Centre MaCS (Magdeburg Centre for Systems Biology)), the Ministry of Education and
Research of Saxony-Anhalt (Research Center “Dynamic Systems”), and the Helmholtz Alliance on Systems Biology (project “CoReNe”). The funders had no role in study design, data collection and analysis,
decision to publish, or preparation of the manuscript.
Competing interests: The authors have declared that no competing interests exist.
The understanding of biological networks is a fundamental issue in computational biology. When analyzing topological properties of networks, one often tends to substitute the term “network” for
“graph”, or uses both terms interchangeably. From a mathematical perspective, this is often not fully correct, because many functional relationships in biological networks are more complicated than
what can be represented in graphs.
In general, graphs are combinatorial models for representing relationships (edges) between certain objects (nodes). In biology, the nodes typically describe proteins, metabolites, genes, or other
biological entities, whereas the edges represent functional relationships or interactions between the nodes such as “binds to”, “catalyzes”, or “is converted to”. A key property of graphs is that
every edge connects two nodes. Many biological processes, however, are characterized by more than two participating partners and are thus not bilateral. A metabolic reaction such as A+B→C+D
(involving four species), or a protein complex consisting of more than two proteins, are typical examples. Hence, such multilateral relationships are not compatible with graph edges. As illustrated
below, transformation to a graph representation is usually possible but may imply a loss of information that can lead to wrong interpretations afterward.
Hypergraphs offer a framework that helps to overcome such conceptual limitations. As the name indicates, hypergraphs generalize graphs by allowing edges to connect more than two nodes, which may
facilitate a more precise representation of biological knowledge. Surprisingly, although hypergraphs occur ubiquitously when dealing with cellular networks, their notion is known to a much lesser
extent than that of graphs, and sometimes they are used without explicit mention.
This contribution does by no means question the importance and wide applicability of graph theory for modeling biological processes. A multitude of studies proves that meaningful biological
properties can be extracted from graph models (for a review see [1]). Instead, this contribution aims to increase the communities' awareness of hypergraphs as a modeling framework for network
analysis in cell biology. We will give an introduction to the notion of hypergraphs, thereby highlighting their differences from graphs and discussing examples of using hypergraph theory in
biological network analysis. For this Perspective, we propose using hypergraph statistics of biological networks, where graph analysis is predominantly used but where a hypergraph interpretation may
produce novel results, e.g., in the context of a protein complex hypergraph.
Like graphs, hypergraphs may be classified by distinguishing between undirected and directed hypergraphs, and, accordingly, we divide the introduction to hypergraphs given below into two major parts.
Undirected Hypergraphs
An undirected hypergraph H = (V,E) consists of a set V of vertices or nodes and a set E of hyperedges. Each hyperedge e ∈ E may contain arbitrarily many vertices, the order being irrelevant, and is thus defined as a subset of V. For this reason, undirected hypergraphs can also be interpreted as set systems with a ground set V and a family E of subsets of V. If no hyperedge is a subset of another hyperedge, H is also called a Sperner hypergraph, or clutter.
Undirected graphs are special cases of hypergraphs in which every hyperedge contains two nodes (i.e., has a cardinality of two). Protein–protein interaction (PPI) networks provide a nice example
illustrating the differences that may arise in modeling biological facts with graphs and hypergraphs. Various technologies for measuring protein interactions have been developed, but we concentrate
here on data obtained, e.g., by tandem affinity purification (TAP, [2],[3]) delivering protein complexes (with possibly more than two partners) instead of direct binary interactions. A small-scale
example mimicking experimental data derived by TAP is shown in Figure 1A (left). TAP data naturally span a hypergraph: We have a ground set of proteins and a set of complexes, which themselves
represent subsets (hyperedges) of the ground set of proteins. One method for drawing undirected hypergraphs is shown in Figure 1A (middle). Hypergraphs are often projected onto graphs, losing some
information but making their drawing easier and their analysis amenable to the huge corpus of methods and algorithms from graph theory. A typical graph representation of our example is shown in
Figure 1A (right) (another way to convert hypergraphs to graphs will be shown below). This representation still captures the information on pairs of proteins that occurred together in a complex;
however, in contrast to the hypergraph, the complexes themselves cannot be reconstructed from this figure. This may lead to different results when computing network properties such as the k-core, a
measure that is often used to identify the core proteome [4],[5]. In a graph, the k-core is the maximal node-induced sub-graph in which all nodes have a degree (defined as the number of edges a node
participates in) equal to or larger than k. The maximum core of a graph corresponds to the highest k where the graph has a non-empty k-core. The maximum k-core of the graph in Figure 1A is a 3-core
consisting of the nodes {A,B,C,D}. A k-core can be defined analogously for (Sperner) hypergraphs, where k corresponds to the number of hyperedges each node participates in [5]. The
maximum k-core of the hypergraph in Figure 1A is a 2-core consisting of {A,C,E}. Thus, as one would intuitively expect, the maximum k-core of the hypergraph ranks A, C, and E as most important—in
contrast to the graph model, whose maximum k-core would weight B and D stronger than E.
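The hypergraph k-core is straightforward to compute by iterative peeling. In the sketch below, the three complexes of Figure 1A are reconstructed from the article's numbers (C1 = {A,B,C,D}, C2 = {A,E}, C3 = {C,E}, which reproduce both the stated k-cores and the minimal hitting sets quoted later); node degrees are counted in hyperedges restricted to the surviving nodes:

```python
# Figure 1A complexes, reconstructed from the article's numbers:
complexes = [{"A", "B", "C", "D"},  # C1
             {"A", "E"},            # C2
             {"C", "E"}]            # C3

def k_core(edges, k):
    """Repeatedly delete nodes lying in fewer than k of the
    (node-restricted) hyperedges; return what survives."""
    nodes = set().union(*edges)
    while True:
        restricted = [e & nodes for e in edges]
        low = {v for v in nodes if sum(v in e for e in restricted) < k}
        if not low:
            return nodes
        nodes -= low

print(sorted(k_core(complexes, 2)))  # -> ['A', 'C', 'E']
print(sorted(k_core(complexes, 3)))  # -> []
```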
Figure 1. Examples of undirected (A,B) and directed (C,D) hypergraphs arising in the context of biological networks analysis.
Detailed explanations are given in the text.
Another application of undirected hypergraphs is minimal hitting sets (MHSs), also known as generalized vertex covers or hypergraph transversals [6],[7]. For example, in a given hypergraph model of a
PPI network, an interesting problem related to experimental design [5] is to determine minimal (irreducible) subsets of bait proteins that would cover or “hit” all complexes in a minimal way; i.e.,
no proper subset of an MHS would hit all complexes. In Figure 1A, the corresponding MHSs would be {A,E},{A,C},{B,E},{C,E},{D,E}. MHSs are also relevant for computing intervention strategies [8],[9].
An example: Assume that a network (which is in our example a directed graph) contains feedback loops, and a given selection of them is to be disrupted with appropriate interventions. This is
equivalent to computing MHSs in a hypergraph, where V is the ground set of interactions (here: edges) and E is the set of targeted feedback loops, i.e., each hyperedge contains the involved
interactions of one feedback loop. Figure 1B shows a simple interaction graph (left) containing three feedback loops, thereof two being negative (Figure 1B, middle). There are five MHSs for
disrupting the negative feedback loops: Two of them remove only one edge, whereas the other three cut two edges. Even though they require two interventions, the MHSs {3,4} and {3,5} will be preferred
if the only positive feedback loop in the network, constituted by edges 1,2,7,6, is to be kept functional. In a very similar way, one may compute MHSs of a “target set” of elementary modes, revealing
intervention strategies in metabolic networks [8],[10].
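For toy instances, the MHSs can be enumerated by brute force. A sketch (the complexes C1 = {A,B,C,D}, C2 = {A,E}, C3 = {C,E} are reconstructed from the article's numbers; real instances need the specialized algorithms cited below):

```python
from itertools import combinations

def minimal_hitting_sets(edges):
    """Brute-force enumeration -- fine for toy instances,
    exponential in the size of the ground set in general."""
    ground = sorted(set().union(*edges))
    hits = [set(s) for r in range(1, len(ground) + 1)
            for s in combinations(ground, r)
            if all(set(s) & e for e in edges)]
    # keep only inclusion-minimal hitting sets
    return [h for h in hits if not any(g < h for g in hits)]

# Figure 1A complexes, reconstructed from the article's numbers.
complexes = [{"A", "B", "C", "D"}, {"A", "E"}, {"C", "E"}]
for h in minimal_hitting_sets(complexes):
    print(sorted(h))  # -> the five MHSs listed in the text
```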
Hypergraphs are also closely related to the concept of independence systems. An independence system I = (V,U) is a collection U of subsets of a ground set V in which, for each set u ∈ U, all subsets of u are part of the collection. Any Sperner hypergraph H = (V,E) can be extended to an independence system I = (V,U) in which V is still the set of vertices and U contains all hyperedges of E plus all
subsets of these hyperedges. The hyperedges of the original hypergraph are then the maximal independent sets (also called bases) of the independence system I . For example, the family of sets of the
independence system induced by the protein complex hypergraph in Figure 1A would contain the three protein complexes (the maximal independent sets) plus all subsets of each complex. Consider now the
following problem: Each protein is assigned a weight representing, for instance, the molecular weight of the protein. We could ask for the complex of maximal weight. Can we find such a complex
without examining all complexes, i.e., all maximal independent sets? If not, how good are approximations that we can find quickly? These questions can be answered by the theory of independence
systems using methods from discrete optimization and combinatorics [11],[12]. The most prominent type of independence system is that of a matroid [13]. Optimization problems on matroids are of low
complexity because the simple greedy algorithm (taking in each step the locally optimal choice) always finds a globally maximal independent set. Coming back to the optimization problem of finding the
heaviest protein complex in Figure 1A, assume the (molecular) weights are as follows: A = 1, B = 2, C = 3, D = 4, E = 5. A greedy strategy (operating on the vertices) would first select protein E
because it has the highest weight. This reduces the search space to complex C[2] and C[3]. For the next protein we choose C because its molecular weight is larger than that of A. The algorithm
finishes at that point as it has found a maximal independent set (complex C[3]) whose weight is 8, which is apparently not the optimum (note that this is not due to the larger size of complex C[1];
choosing A = 8, B = 1, C = 1, D = 9, E = 8, the greedy algorithm would deliver the four-protein complex C[1], although the true optimum is then the two-protein complex C[2]). The reason that the
greedy algorithm fails in this simple example is that the independence system spanned by the complex hypergraph is not a matroid.
Given how frequently greedy-type algorithms on hypergraphs are applied as heuristics in practice, it appears important to study the deviation of the hypergraph under consideration from being a
matroid [13]. A recent study on algorithms for measuring phylogenetic diversity underlines this point [14].
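The greedy failure described above can be replayed in a few lines. A sketch (complexes reconstructed from the article's numbers as C1 = {A,B,C,D}, C2 = {A,E}, C3 = {C,E}, with the molecular weights from the text):

```python
# Figure 1A complexes (reconstructed from the article's numbers)
# and the molecular weights used in the text.
complexes = {"C1": {"A", "B", "C", "D"}, "C2": {"A", "E"}, "C3": {"C", "E"}}
weights = {"A": 1, "B": 2, "C": 3, "D": 4, "E": 5}

def greedy_heaviest_complex(complexes, weights):
    """Vertex greedy: repeatedly add the heaviest protein that still
    fits inside at least one complex."""
    chosen = set()
    while True:
        feasible = [v for v in weights
                    if v not in chosen
                    and any(chosen | {v} <= c for c in complexes.values())]
        if not feasible:
            return chosen
        chosen.add(max(feasible, key=weights.get))

best = greedy_heaviest_complex(complexes, weights)
print(sorted(best), sum(weights[v] for v in best))  # -> ['C', 'E'] 8
# The true optimum is C1 with weight 10 -- greedy fails because the
# induced independence system is not a matroid.
opt = max(complexes.values(), key=lambda c: sum(weights[v] for v in c))
print(sorted(opt), sum(weights[v] for v in opt))    # -> ['A', 'B', 'C', 'D'] 10
```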
Directed Hypergraphs
The definition of directed hypergraphs is similar to that of undirected hypergraphs, D = (V,A), but each hyperedge a ∈ A—here also called a hyperarc—is assigned a direction, implying that one has to define where it starts and where it ends. Directed hypergraphs allow us to connect several start nodes (the tail T) with several end nodes (the head H). A hyperarc is thus defined as a = (T,H), with T and H being subsets of the vertices V. Again, directed graphs are special cases of directed hypergraphs where both T and H contain exactly one node, limiting their scope to 1:1 relationships. In contrast, directed hypergraphs can represent arbitrary n:m relationships.
Typical examples are (bio)chemical reactions, which are often bi-molecular, such as the example A+B→C+D. The tail T of this hyperarc consists of the reactants A and B, whereas the head H contains the products C and D. However, for an exact description of stoichiometric reactions we need to include the stoichiometric coefficients (which can be different from unity) in the hypergraph model. For this purpose, one adds to each hyperarc two functions c_T: T→N and c_H: H→N, assigning the stoichiometric coefficients for the nodes in T and H, respectively. Each hyperarc a then reads a = (T, c_T, H, c_H). This completes the description of a stoichiometric network, which is in practice often conveniently described by a stoichiometric matrix (Figure 1C): The columns correspond to the reactions, i.e., hyperarcs, and the rows to the nodes, i.e., metabolites with their stoichiometric coefficients [15]. Reactants can be distinguished from the products by the negative sign of their stoichiometric coefficients.
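For the single reaction A+B→C+D, assembling the stoichiometric matrix is mechanical. A minimal sketch (the data layout is mine, not from the article):

```python
# Rows = metabolites, columns = reactions; tail (reactant) coefficients
# enter with a negative sign, head (product) coefficients with a positive one.
metabolites = ["A", "B", "C", "D"]
reactions = [({"A": 1, "B": 1},    # c_T: tail coefficients
              {"C": 1, "D": 1})]   # c_H: head coefficients

N = [[0] * len(reactions) for _ in metabolites]
for j, (c_tail, c_head) in enumerate(reactions):
    for m, c in c_tail.items():
        N[metabolites.index(m)][j] = -c
    for m, c in c_head.items():
        N[metabolites.index(m)][j] = c

print(N)  # -> [[-1], [-1], [1], [1]]
```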
Directed hypergraphs can be drawn as shown in the example in Figure 1C. For simplifying drawing and analysis, directed hypergraphs are often converted (Figure 1C, right) either to directed substrate
graphs (similar to the graph in Figure 1A) or to directed bipartite graphs. In the latter, both reactions and metabolites are represented as two different types of nodes, and edges exist only from
metabolites to reactions, or vice versa. In contrast to the simple graph projection used in the substrate graph, the bipartite graph still reflects the original information from the hypergraph. This
representation can be used to determine a number of relevant topological network properties using graph-theoretical techniques [16]. However, even in bipartite graphs, graph-theoretical methods may
not be appropriate when analyzing functional properties that require an explicit consideration of the AND connections between reactants and products. For example, as one can easily verify, removing
reaction R1 from the reaction network in Figure 1C implies that a continuous production of E from A alone would not be possible anymore. However, a path from A to E still exists in the bipartite
graph (via nodes R2, D, and R3), which might suggest that this was still possible. Techniques of the popular constraint-based analysis of metabolic networks [15] directly operate on the
stoichiometric matrix and therefore take the hypergraphical nature of metabolic networks explicitly into account. Using a prominent example from the central metabolism (production of sugars from
fatty acids), a recent contribution illustrates that non-functional pathways might be detected in metabolic networks when paths in the underlying graph representation are interpreted as valid routes
[17]. A widely used concept for pathways in hypergraphical reaction networks is based on elementary (flux) modes, which are minimal functional sub-networks able to operate in steady state [18].
Elementary modes are better suited for studying functional aspects of metabolic networks than simple paths in the graph representation. However, it comes at the expense of higher computational
efforts. For example, the calculation of the elementary mode with the smallest number of reactions involved is much harder (NP-hard [19]) than the easy problem of computing shortest paths in graphs.
Furthermore, some problems can be safely studied in the graph representation. An example from Figure 1C: The shortest “influence” path along which a perturbation in the concentration of metabolite A
can spread over the network and affect the concentration of node E involves two steps (reactions R2 and R3) and can be deduced from both graph representations. Even if reaction R1 is absent, this
path would be valid if we assume that the concentration of B is non-zero at the beginning. What would not be possible with R2 and R3 alone, as discussed above, is a continuous production of E when A
is provided as a substrate.
Another application of directed hypergraphs in computational biology is the representation of logical relationships in signaling and regulatory networks. Interaction graphs (signed directed graphs)
are commonly used topological models for causal relationships and signal flows in cellular networks. For example, in Figure 1D, species A and B have a positive and C a negative influence on the
activation level of D. However, due to the 1:1 relationships, we cannot decide which combinations of input signals of D will eventually activate D itself. With additional information, a refined
hypergraph representation might be constructed as in the right part of Figure 1D: The hyperarc connecting A and B with D expresses a logical AND, whereas the (simple) red hyperarc from C to D
indicates an alternative way to activate D, namely if the inhibiting species C is not active. Hence, this hypergraph expresses the Boolean function “D gets activated if A AND B are active OR if C is
inactive”. In fact, any Boolean network can be represented by a directed hypergraph [9], which can be advantageous when analyzing biologically relevant network properties [9],[20],[21]. Again, a
correct analysis of network function and dysfunction, e.g., which knock-out combinations guarantee an inactivation of D in Figure 1D, requires the explicit consideration of AND relationships properly
captured by hypergraphs.
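The Boolean semantics of Figure 1D can be encoded directly, with each hyperarc acting as an AND clause and the collection of hyperarcs OR-ed together. A sketch (data layout and names are illustrative):

```python
# Each hyperarc into D is an AND clause: (activators that must be ON,
# inhibitors that must be OFF); the hyperarcs themselves are OR-ed,
# encoding "D gets activated if A AND B are active OR if C is inactive".
hyperarcs_D = [({"A", "B"}, set()),  # A AND B
               (set(), {"C"})]       # NOT C

def active(arcs, state):
    return any(all(state[v] for v in pos) and not any(state[v] for v in neg)
               for pos, neg in arcs)

print(active(hyperarcs_D, {"A": True, "B": True, "C": True}))    # -> True
print(active(hyperarcs_D, {"A": False, "B": True, "C": True}))   # -> False
print(active(hyperarcs_D, {"A": False, "B": False, "C": False})) # -> True
```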
Algorithmic Considerations
The concept of hypergraphs provides such a rich modeling framework that algorithms necessarily will be problem-specific, and will differ in complexity from similar algorithms for graphs. Clearly,
since graphs are special cases of hypergraphs, algorithms for hypergraphs are at least as hard as its specialized implementations in the graph case. Generally, when discussing algorithms in graphs
and hypergraphs, one has to distinguish between two types of problems. The first type encompasses algorithms determining a particular (e.g., optimal) solution. One example, as noted above, are
shortest-path algorithms for graphs that are of low complexity (and thus applicable in large-scale networks) and which can also be used to find the connected components or to determine spanning trees
in a hypergraph. This is due to the fact that the graph representation as in Figure 1C captures all necessary information for these questions. If hyperedges are weighted, however, the shortest-path
problems are hard ones in general, unless assumptions about the structure of the edge weight function are made: If each edge is weighted by its cardinality, the shortest-path problem is NP-hard, but
if the weight function is additive, the problem can be solved using a modified Dijkstra algorithm [22]. On the other hand, problems that are computationally easy for graphs can be hard for
hypergraphs: Finding a maximal matching in a bipartite graph, i.e., determining a set of edges with maximal weight so that each node is contained in exactly one of the edges, is polynomial
time–solvable. Even checking whether a hypergraph is bipartite, i.e., can be partitioned into two sets of nodes so that no hyperedge is contained in either of them, is NP-hard [23].
The second type comprises enumeration problems, such as computing all paths and cycles in a graph or all minimal hitting sets in a hypergraph. These problems typically require enormous
computational effort and are often limited to networks of moderate size. For example, the hardness of computing the minimal hitting sets (transversal of a hypergraph) is an open question in
complexity theory [11]. The theoretically fastest currently known algorithm is quasi-polynomial [24], used successfully, e.g., in [12], whereas variants of Berge's method [6] are often faster in
practice [10]. In general, it turns out that the particular topology of cellular networks renders enumeration problems often feasible where one would expect infeasibility in random networks with
comparable size (see, e.g., [10],[25]).
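For very small instances, the minimal hitting sets of a hypergraph can be enumerated by brute force. The Python sketch below is purely illustrative; it does not implement Berge's method or the quasi-polynomial algorithm mentioned above, both of which are needed for realistic network sizes.

```python
from itertools import chain, combinations

def minimal_hitting_sets(hyperedges):
    """Enumerate all minimal hitting sets (the transversal hypergraph).

    Brute force over vertex subsets in order of increasing size, so it
    is only usable on tiny instances. Because subsets are visited
    smallest-first, any candidate containing an already-found hitting
    set is non-minimal and can be skipped.
    """
    vertices = sorted(set(chain.from_iterable(hyperedges)))
    found = []
    for k in range(len(vertices) + 1):
        for cand in combinations(vertices, k):
            s = set(cand)
            if any(m <= s for m in found):
                continue  # contains a smaller hitting set: not minimal
            if all(s & set(e) for e in hyperedges):
                found.append(s)  # hits every hyperedge
    return found
```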
Network Statistics in Hypergraphs
With the increasing availability of large-scale molecular interaction graphs such as PPI or gene regulatory networks, more and more researchers have begun asking not only for single specific elements
of a graph but instead for its statistical properties or significant building blocks. Examples are the neural network of C. elegans, which satisfies the small-world property, implying shorter mean
shortest paths and higher clustering coefficients than one would expect in random networks [26], and the PPI network of yeast, which may be modeled using a scale-free topology and whose node
connectivity is correlated with essentiality of the corresponding protein [27]. Key novelties in these approaches are that properties of the graphs are now interpreted as statistical distributions,
which can be correlated with other variables and tested for significance within an appropriate class of random graphs [28],[29]. In the following, we will first briefly outline some existing
extensions of graph statistics to hypergraph statistics and corresponding random models and afterward indicate applications in computational biology. We will focus on undirected hypergraphs, although
extensions to directed ones are possible.
The degree d(v) of a vertex v ∈ V of an undirected hypergraph H = (V,E) is the number of hyperedges that contain v. Similarly, the degree d′(e) of a hyperedge e ∈ E is the number of vertices of that
hyperedge. If G is a graph, then d′(e) = 2 for every edge. In the more general hypergraph setting, however, we can consider distributions both of vertex and hyperedge degrees. We can ask for mean degrees or more
general properties of the distributions. In social network analysis, this has already been done: For instance, an actor–movie hypergraph obeys power-law distributions in both degrees whereas an
author–publication hypergraph shows a power law only in the number of co-authored papers, but not in the author degree [30]—which is simply due to the fact that the number of authors on a paper is
relatively limited.
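Both degree notions are straightforward to compute. The following Python sketch (an illustration, not code from the article) collects the two distributions for a hypergraph given as a list of vertex sets:

```python
from collections import Counter

def hypergraph_degrees(hyperedges):
    """Vertex degrees d(v) and hyperedge degrees d'(e) of an undirected
    hypergraph given as a list of vertex sets.

    d(v) counts the hyperedges containing v; d'(e) = |e|. For an
    ordinary graph, every hyperedge degree equals 2.
    """
    vertex_deg = Counter()
    for e in hyperedges:
        vertex_deg.update(e)  # each hyperedge contributes 1 to each member
    edge_deg = [len(e) for e in hyperedges]
    return dict(vertex_deg), edge_deg
```

From these two distributions one can then check, e.g., for power-law behavior in either degree, as in the actor-movie and author-publication examples above.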
The natural next step in defining hypergraph statistics is to correlate vertex and hyperedge connectivity, a major ingredient for determining, e.g., the small-world property known from the graph case
[26]. Here, the commonly used graph clustering coefficient may be extended. For this, let N(v) denote the neighborhood of a vertex v, defined as the set of hyperedges that contain v. Then the
(hypergraph) clustering coefficient cc defined for a pair of vertices (u,v) is given by cc(u,v) = |N(u) ∩ N(v)| / |N(u) ∪ N(v)|, which quantifies the overlap between the two neighborhoods. By analogy, it can be defined for hyperedges as well, and, by
averaging over all vertices, a univariate clustering coefficient may be defined. In the author–publication hypergraph, clustering coefficients of both vertices and hyperedges are higher than expected
by chance [30]. Another proposal for clustering coefficients in hypergraphs can be found in [31]. In addition to such local measures, we may also ask for global or semi-global properties. A common
question in the graph case is to identify clusters, often denoted as communities, within the graph. Various methods have been proposed in this context, with normalized cut [32] and graph modularity
[33] being two of the most popular ones, resulting in applications such as the search for modular structures, ideally protein complexes, in PPI networks [34]. The former method has already been
extended to hypergraphs [35].
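The pairwise clustering coefficient can be sketched in a few lines. The Jaccard-style neighborhood overlap used below is one common choice and is assumed here for illustration; the formula actually used in a given study may differ in detail.

```python
def clustering_coefficient(hyperedges, u, v):
    """Pairwise hypergraph clustering coefficient as neighborhood overlap.

    N(x) is the set of hyperedges containing x (indexed by position in
    the input list); cc(u, v) = |N(u) & N(v)| / |N(u) | N(v)| is a
    Jaccard-style overlap, assumed here as one common variant.
    """
    n_u = {i for i, e in enumerate(hyperedges) if u in e}
    n_v = {i for i, e in enumerate(hyperedges) if v in e}
    union = n_u | n_v
    return len(n_u & n_v) / len(union) if union else 0.0
```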
In order to test for significance of certain structures, e.g., network motifs [36] or scaling structures [26],[27], good null models are important. Such null models describe random occurrences of
structures. One typically wants to keep some statistics of the network fixed while at the same time randomly sampling from its representational class. This results in the notion of random graphs with
certain additional properties, such as Erdős-Rényi [37] or Barabási-Albert [38] random graphs. Extensions of random models, in particular to hypergraphs, would focus on generative models, which increasingly find
applications at least in the graph case [26],[39]. In the context of hypergraphs, first models have already been proposed [40].
What could be potential biological applications of hypergraph statistics? Given the fact that in gene regulatory networks statistical properties are decisive [27], it stands to reason that if one
wants to combine two types of regulations or interactions, e.g., gene and microRNA regulation, the resulting hypergraph ought to be analyzed from a hypergraph statistics point of view. Another
example is the human–disease network [41], consisting of disease genes and related diseases. Often, analysis and visualization are done on the projected versions, either onto diseases or genes.
However, node statistics or motif detection [36] may be performed in the hypergraph itself. The latter is already implemented, e.g., in FANMOD [42], a motif-finding tool ready to deal with n-partite
networks. Finally, we want to mention a hypergraph analysis of a mammalian protein complex hypergraph acquired from the CORUM database [43]. The hypergraph shows scale-free behavior in both vertex
degree and hyperedge degree distribution [44]. As illustrated schematically in Figure 2, the authors then built a random hypergraph, in which each node and each edge still had the same degrees as in
the original hypergraph, but where any higher-order node correlations such as the clustering coefficients were destroyed. By using this hypergraph null model, the authors were able to show that
certain large protein complexes with low mean protein length would not be expected by chance. Altogether, hypergraph statistics can be easily applied to, e.g., networks of interactions between nodes
of two types, and first examples already show promising results.
Figure 2. Generating a hypergraph null model by rewiring.
Choose two distinct hyperedges and two different vertices, one contained only in the first hyperedge and the other only in the second, and swap them. Clearly this operation keeps both degree distributions fixed. After a certain number of
iterations, the thus-generated Markov chain produces independent samples of the underlying random hypergraph with given degree distributions. In the figure, this is illustrated using the in-this-case
simpler-to-visualize bipartite version. The gray double-arrows indicate edges to be swapped. Each of the three swaps, (A,H[2])–(C,H[3]), (B,H[1])–(E,H[3]), and (B,H[3])–(D,H[1]), does not change the
vertex and edge degrees. Significance analysis of the CORUM protein complex hypergraph was done in [44] using this idea.
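The rewiring step can be sketched as follows. This is an illustrative Python implementation of the swap idea, not the code used in [44]; in practice one would also track how many swaps are needed for the Markov chain to mix.

```python
import random

def rewire(hyperedges, swaps, seed=0):
    """Degree-preserving rewiring of a hypergraph (edge-swap null model).

    Repeatedly pick two distinct hyperedges and one vertex from each,
    where each chosen vertex lies in only one of the two edges, and
    swap them. This keeps both the vertex and the hyperedge degree
    sequences fixed while destroying higher-order correlations.
    """
    rng = random.Random(seed)
    edges = [set(e) for e in hyperedges]
    done = 0
    while done < swaps:
        i, j = rng.sample(range(len(edges)), 2)
        only_i = list(edges[i] - edges[j])  # swappable vertices of edge i
        only_j = list(edges[j] - edges[i])  # swappable vertices of edge j
        if not only_i or not only_j:
            continue  # this pair admits no valid swap; try another
        a, b = rng.choice(only_i), rng.choice(only_j)
        edges[i].remove(a); edges[i].add(b)
        edges[j].remove(b); edges[j].add(a)
        done += 1
    return edges
```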
To summarize, hypergraphs generalize graphs by allowing for multilateral relationships between the nodes, which often results in a more precise description of biological processes. Hypergraphs thus
provide an important approach for representing biological networks, whose potential has not been fully exploited yet. We therefore expect that applications of hypergraph theory [6],[22] in
computational biology will increase in the near future.
FT thanks Florian Blöchl and SK is grateful to Regina Samaga and Axel von Kamp for helpful comments during the preparation of the manuscript.
Workshop: Impagliazzo’s Worlds — June 3-5, 2009
Tentative program for the workshop on
"Complexity and Cryptography: Status of Impagliazzo's Worlds"
June 3-5, Princeton
Talks are in Room 006 in the Friend Center; lunch and the rump session are in the convocation room of the Friend Center; the banquet is in Palmer House.
Parking: You can park in lots 3 and 21 using this permit, see campus map. There are also metered parking spots on Olden St and Prospect Ave.
Below are streaming versions of the videos; you can also download them via this link
Wednesday June 3rd.
10:30 Breakfast, coffee etc...
11:15-11:30 Opening words
11:30-12:30 Russell Impagliazzo: 5 worlds of problems hi-res video
12:30-2:30 Lunch
2:30-3:30 Chris Peikert: Public-Key Cryptosystems from the Worst-Case Shortest Vector Problem
hi-res video slides
3:30-4 Coffee break
4-5 Craig Gentry: A Homomorphic Public Key Cryptosystem hi-res video
6-9pm Banquet at Palmer House
Thursday, June 4th.
9 Breakfast, coffee etc..
9:30-10:30 Iftach Haitner: Inaccessible Entropy hi-res video slides
10:30-11 Coffee break
11-12 Luca Trevisan: On Algorithmica vs Heuristica
12-1:30 Lunch
1:30-2:30 Dimitris Achlioptas: Algorithmic Phase Transitions in Constraint Satisfaction Problems hi-res video slides
2:30-3:30 Uri Feige: Dense subgraphs of random graphs hi-res video
3:30-4 Coffee Break
4-5 Benny Applebaum: Cryptography from Average-Case Hardness hi-res video slides
Dinner on your own.
7-9pm Rump/open problems session schedule
Dodis Goyal Vadhan Ostrovski Naor Mahmoody-Ghidary Chung Prabhakaran Trevisan Haitner Wee Rothblum Gordon Viola Xiao Peikert Barak
Friday, June 5th.
9 Breakfast, coffee etc..
9:30-10:30 Scott Aaronson: Impagliazzo's Worlds in Arithmetic Complexity hi-res video slides
10:30-11 Coffee break
11-12 Valentine Kabanets: Direct Product Testing hi-res video slides
12-1:30 Lunch
1:30-2:30 Moni Naor: How Fair Can a Coin Toss Be? hi-res video
2:30-3:30 Yuval Ishai: Efficiency vs. assumptions in secure computation hi-res video slides
3:30-4 Coffee break
4-5 Amit Sahai: A New Protocol Compiler hi-res video slides (keynote)
Russell Impagliazzo : Five Worlds of Problems
"A personal view on average-case complexity" introduced five possible worlds as
metaphors for the outcomes of fundamental questions in complexity concerning
whether and to what extent intractable problems exist.
In this talk, I will try to put forward some concrete open problems addressing these issues.
I will emphasize those problems I believe are both within reach,
in that there is no fundamental reason they should be difficult, but whose solutions
would cast light on which of the five worlds is the case and what these worlds look like.
Chris Peikert: Public-Key Cryptosystems from the Worst-Case Shortest Vector Problem
We construct public-key cryptosystems that are secure assuming the
worst-case hardness of approximating the minimum distance on
$n$-dimensional lattices to within small poly(n) factors. Prior
cryptosystems with worst-case connections were based either on the
shortest vector problem for a *special class* of lattices (Ajtai and
Dwork, STOC 1997; Regev, J.~ACM 2004), or on the conjectured hardness
of lattice problems for *quantum* algorithms (Regev, STOC 2005).
Our main technical innovation is a reduction from variants of the
shortest vector problem to corresponding versions of the "learning
with errors" problem; previously, only a quantum reduction of this
kind was known. As an additional contribution, we construct a natural
*chosen ciphertext-secure* cryptosystem having a much simpler
description and tighter underlying worst-case approximation factor
than prior schemes.
Craig Gentry: A Homomorphic Public Key Cryptosystem
We propose a fully homomorphic encryption scheme – i.e., a scheme that
allows one to evaluate circuits over encrypted data without being able
to decrypt. Our solution comes in three steps. First, we provide a
general result – that, to construct an encryption scheme that permits
evaluation of arbitrary circuits, it suffices to construct an encryption
scheme that can evaluate (slightly augmented versions of) its own
decryption circuit; we call a scheme that can evaluate its (augmented)
decryption circuit bootstrappable.
Next, we describe a public key encryption scheme using ideal lattices
that is almost bootstrappable. Lattice-based cryptosystems typically
have decryption algorithms with low circuit complexity, often dominated
by an inner product computation that is in NC1. Also, ideal lattices
provide both additive and multiplicative homomorphisms (modulo a
public-key ideal in a polynomial ring that is represented as a lattice),
as needed to evaluate general circuits.
Unfortunately, our initial scheme is not quite bootstrappable – i.e.,
the depth that the scheme can correctly evaluate can be logarithmic in
the lattice dimension, just like the depth of the decryption circuit,
but the latter is greater than the former. In the final step, we show
how to modify the scheme to reduce the depth of the decryption circuit,
and thereby obtain a bootstrappable encryption scheme, without reducing
the depth that the scheme can evaluate. Abstractly, we accomplish this
by enabling the encrypter to start the decryption process, leaving less
work for the decrypter, much like the server leaves less work for
the decrypter in a server-aided cryptosystem.
Iftach Haitner: Inaccessible Entropy
We put forth a new computational notion of entropy, which measures the
(in)feasibility of sampling high entropy strings that are consistent
with a given protocol. Specifically, we say that the i'th round of a
protocol (A, B) has _accessible entropy_ at most k, if no
polynomial-time strategy A^* can generate messages for A such that the
entropy of its message in the i'th round is greater than k
when conditioned both on prior messages of the protocol and on prior
coin tosses of A^*. We say that the protocol has _inaccessible entropy_
if the total accessible entropy (summed over the rounds) is noticeably
smaller than the real entropy of A's messages, conditioned only on
prior messages (but not the coin tosses of A). As applications of
this notion, we
* Give a much simpler and more efficient construction of statistically
hiding commitment schemes from arbitrary one-way functions.
* Prove that constant-round statistically hiding commitments are
necessary for constructing constant-round zero-knowledge proof systems
for NP that remain secure under parallel composition (assuming the
existence of one-way functions).
Luca Trevisan: On Algorithmica vs Heuristica
Dimitris Achlioptas: Algorithmic Phase Transitions in Constraint Satisfaction Problems
For many Constraint Satisfaction Problems by now we have a good
understanding of the largest constraint density for which solutions
exist. At the same time, though, all known polynomial-time algorithms
for these problems stop finding solutions at much smaller densities. We
study this phenomenon by examining how the different sets of solutions
evolve as constraints are added. We prove in a precise mathematical
sense that, for each problem studied, the barrier faced by algorithms
corresponds to a phase transition in that problem’s solution-space
geometry. Roughly speaking, at some problem-specific critical density,
the set of solutions shatters and goes from being a single giant ball
to exponentially many, well-separated, tiny pieces. All known
polynomial-time algorithms work in the ball regime, but stop as soon as
the shattering occurs. Besides giving a geometric view of the solution
space of random CSPs our results provide novel constructions of one-way functions.
Joint work with Amin Coja-Oghlan.
Uri Feige: Dense subgraphs of random graphs
see pdf file
Benny Applebaum: Cryptography from Average-Case Hardness
We construct a public key cryptosystem based on the assumption that
it is hard to solve n^{1.41} random 3-sparse equations modulo 2 in n
variables, where an n^{-0.2} fraction of the equations are noisy.
This assumption seems different than previous assumptions used to
construct public key systems. In particular, to our knowledge, it is the
first noisy equations type public key system with noise magnitude larger
than 1/sqrt{n}. We also show that our assumption is not refuted
by certain natural algorithms such as Lasserre SDPs, low-degree polynomials, and AC0 circuits,
and, using Feige's 3XOR Principle, that the refutation variant of
this assumption with the same parameters is implied by the hardness
of refuting random 3CNFs with O(n^{1.41}) clauses.
We also construct a public key cryptosystem based on the hardness of
the above problem with fewer equations - as low as c*n for a large
constant c - at the expense of making an additional assumption about
a planted dense subgraph problem in random 3-uniform hypergraphs.
Joint work with Boaz Barak and Avi Wigderson.
Scott Aaronson: Impagliazzo's Worlds in Arithmetic Complexity
In this talk, I'll examine Impagliazzo’s worlds through the "funhouse
mirror" of arithmetic computation over finite fields. The motivation
is not only to gain insight into the ordinary Boolean worlds, but also
to create a Natural Proofs theory for arithmetic circuits, which would
explain our failure to prove superpolynomial lower bounds for the
Permanent. I'll prove two main results. Firstly, in a reasonable
model of arithmetic computation over "small" finite fields F---where
"small" means |F|<=2^poly(n)---one-way functions, pseudorandom
generators, and pseudorandom functions exist if and only if they exist
in the ordinary Boolean world. The proof uses a shifted quadratic
character problem due to Boneh and Lipton as a "bridge" between the
Boolean and arithmetic worlds. Secondly, over "large" finite fields
F—where "large" means |F|>2^poly(n)---there's a meaningful sense in
which "P!=NP implies the existence of one-way functions": that is,
Heuristica, Pessiland, and Minicrypt all collapse to a single world.
I'll end by speculating about the possibility of a Natural Proofs
theory for low-degree arithmetic circuits, and proposing a concrete
candidate for a low-degree arithmetic pseudorandom function.
Joint work with Andrew Drucker
Valentine Kabanets: Direct Product Testing
Moni Naor: How Fair Can a Coin Toss Be?
Coin-flipping protocols allow mutually distrustful parties to
generate a common unbiased random bit, guaranteeing that even if
one of the parties is malicious, it cannot significantly bias the
output of the honest party.
A classical result by Cleve from STOC 1986 showed that without
simultaneous broadcast, for any two-party coin-flipping protocol
with r rounds, there exists an efficient adversary that can bias
the output of the honest party by Omega(1/r). However, the best
previously known protocol only guarantees O(1/sqrt(r)) (based on
Blum's protocol) and Cleve and Impagliazzo have shown that this is
optimal in the fail-stop model (where the adversary is limited to
early stopping but has unbounded computational power).
We establish the optimal tradeoff between the round complexity and
the maximal bias of two-party coin-flipping protocols. Under
standard assumptions, we show that Cleve's lower bound is tight:
we construct an r-round protocol with bias O(1/r). We make use of
recent progress by Gordon, Hazay, Katz and Lindell regarding fair
protocols (STOC 2008).
Unlike Blum's coin flipping protocol that requires only the
existence of any one-way function, our protocol (like the Protocol
of Gordon et al) requires "cryptomania" style assumptions. One of
the main interesting questions is whether this is essential.
Joint work with Tal Moran and Gil Segev
Yuval Ishai: Efficiency vs. assumptions in secure computation
The talk will survey the state of the art in the complexity of secure computation, highlight
connections with the topics of the workshop, and discuss the role of
nonstandard assumptions in some of the current frontiers.
Amit Sahai: A New Protocol Compiler
One of the most fundamental goals in cryptography is to design
protocols that remain secure when adversarial participants can engage
in arbitrary malicious behavior. In 1986, Goldreich, Micali, and
Wigderson presented a powerful paradigm for designing such protocols:
their approach reduced the task of designing secure protocols to
designing protocols that only guarantee security against
"honest-but-curious" participants. By making use of zero-knowledge
proofs, the GMW paradigm enforces honest behavior without compromising
secrecy. Over the past two decades, this approach has been the
dominant paradigm for cryptographic protocol design.
In this talk, we present a new general paradigm / protocol compiler
for secure protocol design. Our approach also reduces the task of
designing secure protocols to designing protocols that only guarantee
security against honest-but-curious participants. However, our
approach avoids the use of zero-knowledge proofs, and instead makes
use of multi-party protocols in a much simpler setting - where the
majority of participants are completely honest (such multi-party
protocols can exist without requiring any computational assumptions).
Our paradigm yields protocols that rely on Oblivious Transfer (OT) as
a building block. This offers a number of advantages in generality
and efficiency.
In contrast to the GMW paradigm, by avoiding the use of zero-knowledge
proofs, our paradigm is able to treat all of its building blocks as
"black boxes". This allows us to improve over previous results in the
area of secure computation. In particular, we obtain:
* Conceptually simpler and more efficient ways for basing
unconditionally secure cryptography on OT.
* More efficient protocols for generating a large number of OTs using
a small number of OTs.
* Secure and efficient protocols which only make a black-box use of
cryptographic primitives or underlying algebraic structures in
settings where no such protocols were known before.
This talk is based primarily on joint works with Yuval Ishai
(Technion and UCLA) and Manoj Prabhakaran (UIUC).
Roadmap to Computer Algebra Systems Usage for Algebraic Geometry
I've decided it's time to start learning how to use a computer to do calculations... I've used Singular to some small extent so far, but I want to start relying on computer algebra systems more.
Which computer algebra system is best for what, and what is the easiest/most fun(?) way to learn how to deal with them?
I should mention that I do (arguably) Arithmetic Geometry. So, ideally I should invest my time in learning something that is capable of making the abstract existence theorems in that field explicit
(see for example: How to get explicit unramified covers of an elliptic curve?, or another example is normalizations).
P.S. I don't have access to Magma. How powerful is their "online calculator", and is it worthwhile to learn Magma just to use the online version?
tag-removed ag.algebraic-geometry arithmetic-geometry
2 Some arithmetic geometers I know use Mathematica, and appear to lead happy, productive lives. I have no idea whether it would work for you. – Igor Rivin Jan 15 '11 at 3:52
1 Sage can handle quite a bit, though for hardcore routines it typically calls to Magma (so if you do not have this installed, they will not work in Sage either). On that note, one of the postdocs
with my supervisor uses Sage and the online Magma calculator in combination, and seems happy with it. – mathic Jan 15 '11 at 6:43
I like Macaulay 2. It's quite good for commutative algebra and there are several packages to deal with various algebro-geometric problems. – Baptiste Calmès Jan 15 '11 at 11:04
If you ever get seriously involved in computer calculations, chances are that you will end up using a variety of programs. Symbolic computation is extremely heavy-duty, and for almost any
non-trivial task, performance varies wildly from software to software, with no single program being best for everything. – Thierry Zell Jan 16 '11 at 2:43
10 Mathic's comment above about Sage calling to Magma for "hardcore stuff" is utterly and completely false. – William Stein Jan 16 '11 at 3:24
2 Answers
Your question is "Which computer algebra system is best for what, and what is the easiest/most fun(?) way to learn how to deal with them?"
• Regarding community, I think Sage (http://sagemath.org) is the best CAS, since Sage is completely open and free, and there are about a dozen mailing lists, and thousands of
subscribers and messages a month. There is also a forum like mathoverflow for Sage: http://ask.sagemath.org/questions/. Also, Sage builds heavily on Singular, so it may feel
familiar. See this 1-page article I published in the Notices for more about motivation: http://www.ams.org/notices/200710/tx071001279p.pdf For a sense of the capabilities of Sage,
skim the reference manual: http://sagemath.org/doc/reference/. There is also an optional package of more unstable code for arithmetic geometry here: http://code.google.com/p/
• Regarding raw interpreted speed, it's been claimed that the Magma interpreter is often faster than the Sage interpreter (=Python). However, Sage has Cython http://cython.org, which
can produce code that is much faster than anything one can produce using the Magma (or any other CAS) interpreter.
• Magma is the only existing CAS that has well-developed capabilities for computing with function fields of transcendence degree 1 over $\mathbf{F}_p$, i.e., the function field
analogue of algebraic number theory. Chris Hall and I are working on something similar for Sage right now, but this will take a long time.
• Magma is also the only CAS that can compute fairly general spaces of Hilbert Modular Forms (again, work is also under way to add this to Sage, but it is a nontrivial project). Someday you might care about this.
• Regarding elliptic curves, Sage has much more related to $p$-adic $L$-functions and Heegner points, but Magma has 3 and 4 descent.
The answer to your question: "Is it worth learning Magma?" is definitely still yes, since there are still many algorithms today in arithmetic geometry that are only implemented in Magma,
and available nowhere else (definitely not in Macaulay2, Singular, Mathematica, Sage, etc.). It will only take a few days, and you will have a better sense of what is possible. The exact
same argument applies to Sage as well. Learn both. For Sage, you basically should: 1. learn Python, and 2. go through the Sage tutorial, which takes 2-3 hours.
Another point: Sage and Magma have a huge overlap in functionality related to arithmetic geometry, but the overlap in code bases is almost zero (and in many cases the people who
implemented the algorithms in both systems are disjoint too). Thus there is some nontrivial value in comparing the output of both systems. See, e.g., this trac ticket about
implementation of a function for computing all integral points on an elliptic curve for a nice example of this http://trac.sagemath.org/sage_trac/ticket/3674 This ticket also illustrates
how the code that goes into Sage is all publicly peer reviewed (unlike this case with Magma).
I use Macaulay2 for standard computations in commutative algebra/algebraic geometry, like Gröbner bases, graded free resolutions, Tor/Ext groups etc. There are also a lot of add-on
packages you can import for working with say, intersection theory, toric varieties and convex geometry. M2 is easy to learn and runs quite smoothly, especially using the interface in
In addition to the M2 documentation, which is very complete, there is also a really nice book, Computations in algebraic geometry with Macaulay 2, available freely online.
Event Detail Information
Date Feb 21, 2013
Time 4:00 pm
Location 245 Altgeld Hall
Sponsor Department of Mathematics
Contact Steve Bradlow
Event type colloquia
James Fill (Johns Hopkins University) will present "Distributional Convergence for the Number of Symbol Comparisons Used by Quicksort." Abstract: We will begin by reviewing the operation of the
sorting algorithm QuickSort. Most previous analyses of QuickSort have used the number of key comparisons as a measure of the cost of executing the algorithm. In contrast, we suppose that the n
independent and identically distributed (iid) keys are each represented as a sequence of symbols from a probabilistic source and that QuickSort operates on individual symbols, and we measure the
execution cost as the number of symbol comparisons. Assuming only a mild "tameness" condition on the source, we show that there is a limiting distribution for the number of symbol comparisons after
normalization: first centering by the mean and then dividing by n. Additionally, under a condition that grows more restrictive as p increases, we have convergence of moments of orders p and smaller.
In particular, we have convergence in distribution and convergence of moments of every order whenever the source is memoryless, i.e., whenever each key is generated as an infinite string of iid
symbols. This is somewhat surprising: Even for the classical model that each key is an iid string of unbiased ("fair") bits, the mean exhibits periodic fluctuations of order n.
Naturally occurring orderings
There are many orderings that naturally occur in interesting but seemingly unrelated circumstances. Here are some examples:
1. The volume spectrum of orientable hyperbolic 3-manifolds has order type $\omega^\omega$.
2. Ordinals that play important roles in Conway's $\mathbf {On_2}$, most notably $\omega^{\omega^\omega}$, the algebraic closure of $2$. See Lenstra's papers 1 2, Conway's ONAG, and Lieven's blog
3. The set of fusible numbers has order type $\epsilon_0$ (quite likely but not proven, see my note).
4. The Sharkovsky ordering of natural numbers, which does not have the order type of an ordinal.
5. There are proof-theoretic ordinals, which I know little about.
Do you know any other examples or see any connection among aforementioned examples? Most of the examples above are ordinals, but other interesting examples are welcome.
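As a side note on example 4, the Sharkovsky ordering is easy to make computational. Here is a small sketch (my own, purely illustrative) of a key function realizing the ordering $3 \prec 5 \prec 7 \prec \cdots \prec 2\cdot3 \prec 2\cdot5 \prec \cdots \prec 2^2\cdot3 \prec \cdots \prec 2^3 \prec 2^2 \prec 2 \prec 1$:

```python
def sharkovsky_key(n):
    """Key such that m precedes n in the Sharkovsky order iff key(m) < key(n)."""
    a = 0
    while n % 2 == 0:        # factor out n = 2^a * m with m odd
        n //= 2
        a += 1
    if n == 1:               # powers of two come last, in decreasing order
        return (1, -a)
    return (0, a, n)         # otherwise: order by the power of 2, then by the odd part

assert sharkovsky_key(3) < sharkovsky_key(5) < sharkovsky_key(6)
assert sharkovsky_key(7) < sharkovsky_key(8)   # every non-power-of-two precedes 8
assert sharkovsky_key(8) < sharkovsky_key(4) < sharkovsky_key(2) < sharkovsky_key(1)
```

Sorting any finite set of naturals by this key lays it out in Sharkovsky order, which makes it visibly not the order type of an ordinal (the powers of two sit at the top in reverse order).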
gm.general-mathematics ordinal-numbers
That's a good question! – Todd Trimble♦ Jun 12 '11 at 14:41
Not certain if this is appropriate for an answer so I'll leave it as a comment for now. Here's another nice-but-surprising way to get $\omega^\omega$: Let $f(n)$ denote the smallest number of 1's needed to write $n$ using any combination of addition and multiplication; e.g., $f(7)=6$, as the shortest way to write 7 is $7=(1+1+1)(1+1)+1$. For any $n$, $f(n)\ge 3\log_3 n$. So subtract off this lower bound and consider $d(n)=f(n)-3\log_3 n$. Then the set of all values of $d$ is a well-ordered set of real numbers, with order type $\omega^\omega$. Afraid I can't give a proper reference as we haven't published this yet! :) – Harry Altman Jun 13 '11 at 0:18
@Harry: your results are definitely interesting and also arise from a natural number theory question, so I'd like to vote it up if you post it as an answer :) By the way, two more examples have become known to me recently: the order type of the set of Pisot numbers, and the $\sigma$-ordering on braid groups. – Junyan Xu Jun 15 '11 at 5:38
2 Answers
Alright, I'll put my comment as an answer and hopefully get this off the no-upvoted-answers queue. :)
Here's another nice-but-surprising way to get $\omega^\omega$: Let $\|n\|$ denote the smallest number of 1's needed to write $n$ using any combination of addition and multiplication; e.g., $\|7\|=6$, as the shortest way to write 7 is $7=(1+1+1)(1+1)+1$. (This is known as the "integer complexity" of $n$; it's sequence A005245.)
Now, for any $n$, we have the lower bound $\|n\|\ge 3\log_3 n$. So subtract this off and consider $\delta(n):=\|n\|-3\log_3 n$. Then the set of all values of $\delta$ is a well-ordered subset of $\mathbb{R}$, with order type $\omega^\omega$.
For a proof, I refer you to my preprint: http://arxiv.org/abs/1310.2894
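For anyone who wants to experiment, $\|n\|$ is easy to compute by dynamic programming. A sketch of my own (not code from the preprint):

```python
import math

def complexity(N):
    """f[n] = minimal number of 1's to write n with + and * (A005245), for n <= N."""
    f = [0] * (N + 1)
    for n in range(1, N + 1):
        best = n                                      # n = 1+1+...+1
        for a in range(1, n // 2 + 1):
            best = min(best, f[a] + f[n - a])         # n = a + (n-a)
        for d in range(2, int(n**0.5) + 1):
            if n % d == 0:
                best = min(best, f[d] + f[n // d])    # n = d * (n/d)
        f[n] = best
    return f

f = complexity(100)
assert f[7] == 6                                      # 7 = (1+1+1)(1+1)+1
delta = [f[n] - 3 * math.log(n, 3) for n in range(1, 101)]
assert min(delta) >= -1e-9                            # ||n|| >= 3 log_3 n (up to float error)
```

Sorting the distinct values of `delta` gives an initial glimpse of the well-ordered set, though of course the order type $\omega^\omega$ only reveals itself asymptotically.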
Hrbacek, following Ballard, has recently put a certain partial ordering called $\sqsubseteq$ on absolutely everything in order to do nonstandard analysis à la Nelson (i.e. internal set theory).
Let $\mathscr{L}$ be the language of ZFC (and Tarski-Grothendieck if you insist). We say a well-formed formula of $\mathscr{L}$ is an $\in$-formula. We assert that ZFC holds for all
well-formed $\in$-formulae.
Now we throw in $\sqsubseteq$ to $\mathscr{L}$ to get a bigger language $\mathscr{HB}.$ Let us write $x \sqsubseteq_\alpha y$ as an abbreviation of $x \sqsubseteq \alpha \vee x \sqsubseteq y.$ Suppose we have a well-formed formula $P$ of $\mathscr{HB}.$ We write $P^\alpha$ for the replacement of every instance of $\sqsubseteq$ with $\sqsubseteq_\alpha.$ Let us also write $x \sqsubset y$ for $x \sqsubseteq y \wedge y \not\sqsubseteq x,$ and $2^A_{\mathrm{fin}}$ for the set of all finite subsets of $A.$ We also write $(\forall u \sqsubseteq v) P(u,v)$ for $(\forall u)(u \sqsubseteq v \implies P(u,v)),$ and similarly for $\exists,$ and for $\in$ as well.
Then Hrbacek's GRIST is the following condition on $\sqsubseteq,$ with four axiom schemata:
Relativization condition on $\sqsubseteq$: $\sqsubseteq$ is a total dense preordering with minimal element $\emptyset$ and no maximal element; i.e. the conjunction of
1. Preordering: $(\forall u,v,w)(((v\sqsubseteq u \wedge w \sqsubseteq v) \implies w \sqsubseteq u) \wedge u \sqsubseteq u);$
2. Totality: $(\forall u,v)(u \sqsubseteq v \vee v \sqsubseteq u);$
3. Minimality of $\emptyset$: $(\forall u)(\emptyset \sqsubseteq u);$
4. Illimitability: $(\forall u) (\exists v) (u \sqsubset v);$
5. Density: $(\forall u,v) (u \sqsubset v \implies (\exists w)(u \sqsubset w \sqsubset v)).$
Axiom schemata, in which we use words so as not to have our eyes completely glaze over:
For any well-formed formula $P$ of $\mathscr{HB}$ depending on finitely many variables,
1. Transfer: for all $u \sqsubseteq v$ and $x_1,\ldots, x_n \sqsubseteq u,$
$P^u(x_1,\ldots,x_n) \iff P^v(x_1,\ldots,x_n)$
2. Standardization: for all $u \sqsupset \emptyset$ and for all $A, x_1, \ldots, x_n,$ there are $v \sqsubset u$ and $B \sqsubset v$ such that, for every $w$ with $v \sqsupseteq w \sqsupset u,$
$(\forall y \sqsubseteq w)(y \in B \iff y \in A \wedge P^w(y,x_1,\ldots,x_n)).$
3. Idealization: For all $A \sqsubset v$ and all $x_1,\ldots,x_n,$
$(\forall a \in 2^A_{\mathrm{fin}})\big([a\sqsubset v \implies (\exists y)(\forall x \in a) P^v(x,y,x_1,\ldots,x_n)] \iff [(\exists y)(\forall x \in A)[x \sqsubset u \implies P^v(x,y,x_1,\ldots,x_n)]]\big)$
4. Granularity: For all $x_1,\ldots,x_k,$ if $(\exists u) P^u(x_1,\ldots,x_k),$ then
$(\exists u)[P^u(x_1,\ldots,x_k) \wedge (\forall v)(v \sqsubset u \implies \neg P^v(x_1,\ldots,x_k))]$
Creep Modeling in a Composite Rotating Disc with Thickness Variation in Presence of Residual Stress
International Journal of Mathematics and Mathematical Sciences
Volume 2012 (2012), Article ID 924921, 14 pages
Research Article
Department of Mathematics, Punjabi University, Punjab Patiala 147 002, India
Received 30 March 2012; Revised 4 August 2012; Accepted 5 September 2012
Academic Editor: Frank Werner
Copyright © 2012 Vandana Gupta and S. B. Singh. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and
reproduction in any medium, provided the original work is properly cited.
Steady-state creep response in a rotating disc made of Al-SiC (particle) composite having linearly varying thickness has been carried out using isotropic/anisotropic Hoffman yield criterion and
results are compared with those using von Mises yield criterion/Hill's criterion ignoring difference in yield stresses. The steady-state creep behavior has been described by Sherby's creep law. The
material parameters characterizing difference in yield stresses have been used from the available experimental results in literature. Stress and strain rate distributions developed due to rotation
have been calculated. It is concluded that the stress and strain distributions are affected by the thermal residual stress in an isotropic/anisotropic rotating disc, although the effect of residual stress on creep behavior in an anisotropic rotating disc is observed to be lower than in an isotropic disc. Thus, the presence of residual stress in a composite rotating disc with varying thickness needs attention when designing a disc.
1. Introduction
Residual stress significantly affects the engineering properties of materials and structural components, notably fatigue life, distortion, dimensional stability, corrosion resistance, brittle fracture, and so
forth. For that reason, the residual stress analysis is an important stage in the design of parts and structural elements. The thermal residual stresses induced due to thermal mismatch between the
metal matrix and the ceramic reinforcement in metal matrix composite may impart plastic deformation to the matrix. Thermal mismatch strains also may quite often crack the matrix resulting in a
relaxation of the residual stresses. Presence of thermal residual stresses can induce the asymmetry in the tensile and compressive yield stresses of the composite. Residual stresses may be reduced or
eliminated by annealing, by plastic deformation, or simply by leaving the piece at room temperature for enough time. Because of its influence on the properties, the residual stress in composites has been
the subject of several studies, both experimentally and analytically. Bhatnagar et al. [1] have performed steady-state creep analysis of orthotropic rotating discs having constant thickness, linearly
varying thickness, and hyperbolically varying thickness. They have used Norton’s power law to describe creep behavior of the disc material and concluded that by selecting an optimum profile for the
disc, the stress and strain rate in the disc may be reduced. Arsenault and Taya [2] investigated the magnitude of the thermal residual stresses by determining the difference of the yield stresses
between tension and compression resulting from the thermal residual stresses. Mishra and Pandey [3] have claimed that Sherby’s constitutive creep model works better than Norton’s creep law to
describe the creep behavior of aluminum matrix composites. Pandey et al. [4] have studied the steady-state creep behavior of Al-SiCp composites under uniaxial loading condition in the temperature
range between 623K and 723K for different combinations of particle sizes (1.7μm, 14.5μm, and 45.9μm) and with varying particle content (10vol%, 20vol%, and 30vol%) and found that the
composite with finer particle size has better creep resistance than that containing coarser ones. Davis and Allison [5] investigated that the mismatch in the coefficient of thermal expansion between
the SiC particle and the aluminum alloy metal matrix gave rise to a high density of dislocation both at and near the reinforcement/matrix interface. The enhanced expansion of the matrix induced
plastic deformation during cooling with an associated increase in the density of dislocations. Hu and Huang [6] proposed an analytical method to predict the influence of residual stress on the
elastic-plastic behavior of a general composite. The model incorporates the microstructural parameters like fiber shape, orientation, distribution, and volume fraction. The computed results show that
the influence of residual stress on the macroscopic properties depends closely on the microstructures of the composite. Jahed and Shirazi [7] investigated loading and residual stresses and associated
strains and displacements in thermoplastic rotating discs at elevated temperatures. Orcan and Eraslan [8] investigated the distribution of stress, displacement, and plastic strain in a rotating
elastic-plastic solid disc whose thickness varies as a power function. The analysis is based on Tresca's yield condition. By employing a variable thickness disc, the plastic limit angular velocity
increases and the magnitude of stresses and deformations in the disc reduces. Singh and Ray [9] proposed a new yield criterion for residual stress, which at appropriate limits reduces to Hill
anisotropic and Hoffman anisotropic yield criterion, and carried out analysis of steady-state creep in a rotating disc made of Al-SiCw composite using this criterion and compared the results obtained
using Hill anisotropic yield criterion ignoring difference in yield stresses. Singh and Ray [10] studied the effect of thermal residual stress on the steady-state creep behavior of a rotating disc
made of composite using isotropic Hoffman yield criterion while describing the creep by Norton’s power law. They concluded that the tensile residual stress significantly affects the strain rates in
the disc when compared with the strain rate in the disc without residual stress. Gupta et al. [11] have analyzed steady-state creep in isotropic aluminum silicon carbide particulate rotating disc.
The creep behavior has been described by Sherby’s law. The authors concluded that the tangential as well as radial stress distribution in the disc does not vary significantly for various combinations
of material parameters and operating temperatures. Moreover the tangential as well as radial strain rates in the disc reduce significantly with reducing particle size, increasing particle content,
and decreasing operating temperature. Jahed et al. [12] observed that the use of variable thickness disc helps in minimizing the weight of disc in aerospace applications. There are numerous
applications for gas turbine discs in the aerospace industry such as in turbojet engines. These discs normally work under high temperatures while subjected to high angular velocities. Sayman [13]
investigated elastic-plastic and residual stresses in thermoplastic composite laminated plates under linear thermal loading and also carried out an elastic-plastic, thermal stress analyses on
thermoplastic composite disc reinforced with steel fibers under uniform temperature distribution and concluded that plastic yielding expands both around inner and outer surfaces and that plastic flow
is highest at the inner surface. Singh [14] has performed creep analysis in an anisotropic composite disc rotating at 15,000rpm and undergoing steady-state creep at 561K following Norton’s power
law. The presence of anisotropy leads to significant reduction in the tangential and radial strain rates over the entire disc and helps in restraining creep response both in the tangential and in the
radial directions. Xuan et al. [15] have studied the time-dependent deformation and fracture performance of multimaterial systems and structures at elevated temperature. They developed a microregion
deformation measuring technique which allows the direct measurement of the full creep strain fields of multimaterial system at higher temperature. Chamoli et al. [16] have studied the effect of
anisotropy on the stress and strain rates and concluded that the anisotropy of the material has a significant effect on the creep of a rotating disc. The creep behavior is described by Sherby's law. Singh
and Rattan [17] have investigated the stress distributions and the resulting creep deformation in isotropic rotating disc having constant thickness and made of silicon carbide particulate reinforced
aluminium base composite in presence of thermal residual stress. It is concluded that the presence of the tensile residual stress affects the distribution of stresses and strain in the disc with
constant thickness. Chen et al. [18] have investigated the effects of material gradients on the creep stress and strain of a pressurized tank, which is assumed to be made of functionally graded
materials. They have concluded that the magnitude of the creep strain is influenced by the elastic modulus distribution as well as the creep property distribution inside the functionally graded
materials and achieved some fundamental knowledge of the materials distribution to reduce the maximum creep stress/strain level inside the functionally graded materials tank. Gupta and Singh [19]
have studied the effect of anisotropy on the stress and strain rates in composite disc made of anisotropic material (6061Al-30% vol SiC[p]) and concluded that the anisotropy of the material has a
significant effect on the creep of a rotating disc with varying thickness.
In this paper, the steady-state creep has been investigated for composite rotating disc made of material 6061Al base alloy containing 20vol% of SiC (particle). The analysis has been done with/
without thermal residual stresses for isotropic/anisotropic disc of linearly varying thickness. The creep behavior has been described by Sherby’s constitutive model.
2. Mathematical Formulation
Consider a thin orthotropic composite disc of 6061 Al- SiC[p] of density and rotating at a constant angular speed radian/sec. The thickness of the disc is assumed to be and and be inner and outer
radii of the disc, respectively. Let and be the moment of inertia of the disc at inner radius and outer radius and , respectively. and are the area of cross-section of disc at inner radius and outer
radius and , respectively. Then For the purpose of analysis of the disc the following assumptions are made.(1)Material of disc is orthotropic and incompressible.(2)Elastic deformations are small for
the disc and therefore they can be neglected as compared to creep deformation.(3)Axial stress in the disc may be assumed to be zero as thickness of disc is assumed to be very small compared to its
diameter.(4)The composite shows a steady-state creep behavior, which may be described by following Sherby’s law [20]: where , where , , , , , , , , be the effective strain rate, effective stress, the
stress exponent, threshold stress, a constant, lattice diffusivity, the subgrain size, the magnitude of burgers vector, Young’s modulus. The values of creep parameters and are described by the
following regression equations as a function of the particle size and the percentage of dispersed particles apart from the temperature , which are extracted from the available experimental results of
Pandey et al. [4]: The different material combinations in the composite are conceptually replaced by an equivalent monolithic material that has the yielding and creep behavior similar to those
displayed by the composite. Taking reference frame along the principal directions of , and , the generalized constitutive equations for an anisotropic disc under multiaxial stress condition are given
as where the effective stress, , is given by where , , and are anisotropic constants of the material. , , and , , are the strain rates and the stresses, respectively, in the direction , and . is the
effective strain rate, is the effective stress, and , are uniaxial compression and tensile yield stresses, respectively. For biaxial state of stress , the effective stress is Using (2.5) and (2.9), (
2.5) can be rewritten as Similarly from (2.6) From the material’s incompressibility assumption, it follows that where is the ratio of radial and tangential stresses and is the radial deformation
Dividing (2.10) by (2.11) where Integrating and taking limit to on both sides where is the radial deformation rate at the inner radius. Dividing (2.15) by and equated to (2.11) where Substituting
from (2.9) to (2.16), it gives where The equation of equilibrium for a rotating disc with varying thickness can be written as Integrating (2.21) within limits to and using (2.1) and (2.2)
Substituting from (2.18) into (2.2) Using (2.22) and (2.23), (2.18) becomes Integrating (2.21) within limits to and using (2.1) Thus the tangential stress and radial stress are determined by (2.24)
and (2.25). Then strain rates , and are calculated from (2.10), (2.11), and (2.12).
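Section 2 describes the creep behavior by a threshold-stress law with stress exponent 8. As a hedged illustration (assuming the common threshold-stress power-law form $\dot{\bar\varepsilon} = [M(\bar\sigma - \sigma_0)]^n$; the parameter values below are made up, not the paper's fitted ones), such a law can be evaluated as:

```python
def strain_rate(sigma, M=0.05, sigma0=30.0, n=8):
    """Threshold-stress power law: effective creep rate [M*(sigma - sigma0)]^n.
    Below the threshold stress sigma0 the creep rate is taken as zero.
    M, sigma0 (MPa), and n are illustrative values only."""
    if sigma <= sigma0:
        return 0.0
    return (M * (sigma - sigma0)) ** n

assert strain_rate(25.0) == 0.0                    # below threshold: no creep
assert strain_rate(60.0) > strain_rate(50.0) > 0   # strongly stress-sensitive (n = 8)
```

With a stress exponent of 8, a small change in effective stress produces a large change in strain rate, which is why the stress redistribution computed in Section 3 matters so much for the strain-rate distributions.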
3. Solution Procedure
The stress distribution is evaluated from the previous analysis by iterative numerical scheme of computation (Figure 1). In the first iteration, it is assumed that over the entire disc radii.
Substituting for in (2.25) the first approximation value of , that is, , is obtained. The first approximation of stress ratio, that is, , is obtained by dividing by which can be substituted in (2.13)
to calculate first approximation of , that is, . Now one carries out the numerical integration of from limits of to and uses this value in (2.17) to obtain first approximation of , that is, . Using
this and in (2.19) and (2.20), respectively, and are found, which are used in (2.18) to find second approximation of , that is, . Using for in (2.25), second approximation of , that is, , is found
and then the second approximation of , that is, , is obtained. The iteration is continued till the process converges and gives the values of stresses at different points of the radius grid.
For rapid convergence 75 percent of the value of obtained in the current iteration has been mixed with 25 percent of the value of obtained in the last iteration for use in the next iteration, that
is, .
The strain rates are then calculated from (2.10), (2.11), and (2.12).
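The 75/25 mixing described above is a standard under-relaxation of a fixed-point iteration. A minimal generic sketch (the function `g` below is a toy stand-in, not the disc stress equations):

```python
import math

def relaxed_fixed_point(g, x0, mix=0.75, tol=1e-10, max_iter=10_000):
    """Iterate x <- mix*g(x) + (1-mix)*x until successive values converge.
    Mixing the current iterate with the previous one (75%/25%, as in the text)
    damps oscillations and helps the iteration converge."""
    x = x0
    for _ in range(max_iter):
        x_new = mix * g(x) + (1 - mix) * x
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    raise RuntimeError("did not converge")

# Toy example: the fixed point of cos(x)
x = relaxed_fixed_point(math.cos, 1.0)
assert abs(x - math.cos(x)) < 1e-8
```

In the disc computation the same idea is applied to the tangential stress profile over the radial grid at each iteration, rather than to a single scalar.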
4. Numerical Computations and Discussions
For the sake of computation of a rotating disc made of SiC[p] reinforced 6061Al matrix composite (Figure 2), we assume that the disc is subjected to an angular velocity of 15,000rpm and we choose
particle size μm, particle content , and temperature . For the anisotropic material and isotropic material , , creep parameters have been taken as reported in Gupta and Singh [19]. The creep
behavior of the material is described by threshold stress-based creep law by assuming a stress exponent of 8. The inner radii and the outer radii of all the discs are taken as 31.75mm and 152.4mm,
respectively. A computer program based on the analysis presented in this paper has been developed to obtain the steady-state creep response of the composite discs with linearly varying thickness in
presence of residual stress and obtained results are compared to the disc without residual stress to analyze the importance of residual stress. For the analysis, the tensile residual stress is taken
as 32MPa, as observed by Singh and Rattan [17].
For the isotropic/anisotropic discs, linearly varying thickness, that is, , , has been taken, where is the thickness of disc (mm) and the thickness is assumed to be of the form and , where is the
slope of a disc.
Using this expression for thickness, (2.1) becomes The creep analysis in a rotating disc made of Al-SiC (particle) composite having linearly varying thickness has been carried out using isotropic/
anisotropic Hoffman yield criterion of yielding and results are compared with those using von Mises yield criterion/Hill’s criterion of yielding ignoring difference in yield stresses, that is, .
Figure 3 shows the tangential stress in an isotropic/anisotropic rotating disc with linearly varying thickness in presence of residual stress and the results are compared with those for without
residual stress. It is concluded that in the isotropic/anisotropic discs, the tangential stress is a little lower in the region near the inner radius and slightly higher in the region near the outer radius
in the presence of residual stress as compared to the disc without residual stress. It is also noted that the effect of residual stress on tangential stress distribution is lesser in an anisotropic
rotating disc of linearly varying thickness compared to an isotropic disc of linearly varying thickness. Also the tangential stress becomes more uniform in the presence/absence of residual stress in
an anisotropic disc compared to an isotropic disc.
Figure 4 shows the variation of radial stresses along the radius of the isotropic/anisotropic rotating disc in presence/absence of residual stress. It is observed that in presence of residual stress,
the radial stress developing due to rotation is slightly lesser than the radial stress of an isotropic/anisotropic disc without residual stress, although the change in the magnitude of radial stress
distribution is very small in the isotropic/anisotropic discs with linearly varying thickness due to presence of residual stress. As one moves from the inner radius to the outer radius of the disc,
the radial stress increases from zero and reaches a maximum near the middle region of the disc with linearly varying thickness and then starts decreasing towards the outer region.
Figure 5 shows that in the presence of tensile residual stress, the tangential strain rates increase significantly in both the isotropic/anisotropic discs compared to the discs without residual stress.
Also the difference in the tangential strain rate caused due to presence and absence of residual stresses (i.e., the residual effect) goes on increasing with radial distance and the extent of
increase in difference is maximum in the region near the outer radius in both the isotropic/anisotropic discs having linearly varying thickness. Secondly, it is also noticed that variation in
magnitude due to residual stress in an anisotropic disc is smaller compared to that for an isotropic disc. In an isotropic/anisotropic disc with/without residual stress, the tangential strain rates
are highest at the inner radius and then decrease continuously, when one moves towards the outer radius of the disc. The trend of variation of tensile strain rate in tangential direction remains the
same in an isotropic/anisotropic disc in the presence/absence of residual stress, but the magnitude can be reduced in an anisotropic disc.
Figure 6 shows the radial strain rate in an isotropic/anisotropic rotating disc with linearly varying thickness in the presence of residual stress; the results are compared with those without residual stress. Comparing the discs of isotropic/anisotropic material, it is concluded that the radial strain rate, which is compressive in the absence of residual stress, becomes tensile at the middle of the disc in the presence of residual stress. It is also noticed that the difference in strain rate caused by the presence and absence of residual stresses is
smaller in an anisotropic disc as compared to that for an isotropic disc. Another point to be observed is that the magnitude of radial strain rate firstly increases rapidly with radial distance and
then starts decreasing. It reaches a minimum before increasing again towards the outer radius in both the isotropic/anisotropic discs with residual/without residual stress.
From the preceding discussion, it can be concluded that residual stress may cause significant distortion in an isotropic/anisotropic rotating disc having linearly varying thickness, but the magnitude of
distortion can be reduced by selecting anisotropic disc.
5. Conclusion
The previous results and discussion lead to the following conclusions.
(1) The presence of the thermal residual stress does not significantly affect the distribution of stress developed due to rotation in the isotropic/anisotropic disc having linearly varying thickness, but it affects the strain rates significantly.
(2) The magnitudes of variation of the tangential stress and the radial stress obtained in the anisotropic disc with/without residual stress are relatively smaller compared to those for the isotropic rotating disc.
(3) In the isotropic/anisotropic disc, the tangential strain rate is higher in the presence of thermal residual stress. Also, the extent of difference in creep caused by the presence/absence of the residual stress increases as one moves towards the outer radius of the disc, although the magnitude of this difference is smaller in the anisotropic disc compared to that in the isotropic disc.
(4) In the presence of residual stress in the isotropic/anisotropic disc, the nature of the radial strain rate changes from compressive to tensile, particularly in the middle region of the disc with linearly varying thickness, as compared to the disc without residual stresses. Further, the magnitude of the residual stress effect in the anisotropic disc is significantly lower than that for the isotropic disc.
(5) For designing a rotating disc with linearly varying thickness operating at elevated temperature, the presence of residual stress needs attention from the point of view of steady-state creep rate. However, the effect of residual stress on the steady-state creep rate in the anisotropic disc is observed to be significantly lower than that observed in the isotropic disc.
1. N. S. Bhatnagar, M. P. S. Kulkarni, and V. K. Arya, "Steady-state creep of orthotropic rotating disks of variable thickness," Nuclear Engineering and Design, vol. 91, no. 2, pp. 121–141, 1986.
2. R. J. Arsenault and M. Taya, "Thermal residual stress in metal matrix composite," Acta Metallurgica, vol. 35, no. 3, pp. 651–659, 1987.
3. R. S. Mishra and A. B. Pandey, "Some observations on the high-temperature creep behavior of 6061 Al-SiC composites," Metallurgical Transactions A, vol. 21, no. 7, pp. 2089–2090, 1990.
4. A. B. Pandey, R. S. Mishra, and Y. R. Mahajan, "Steady state creep behaviour of silicon carbide particulate reinforced aluminium composites," Acta Metallurgica et Materialia, vol. 40, no. 8, pp. 2045–2052, 1992.
5. L. C. Davis and J. E. Allison, "Residual stresses and their effects on deformation," Metallurgical Transactions A, vol. 24, no. 11, pp. 2487–2496, 1993.
6. G. K. Hu and G. L. Huang, "Influence of residual stress on the elastic-plastic deformation of composites with two- or three-dimensional randomly oriented inclusions," Acta Mechanica, vol. 141, no. 3, pp. 193–200, 2000.
7. H. Jahed and R. Shirazi, "Loading and unloading behaviour of a thermoplastic disc," International Journal of Pressure Vessels and Piping, vol. 78, no. 9, pp. 637–645, 2001.
8. Y. Orcan and A. N. Eraslan, "Elastic-plastic stresses in linearly hardening rotating solid disks of variable thickness," Mechanics Research Communications, vol. 29, no. 4, pp. 269–281, 2002.
9. S. B. Singh and S. Ray, "Newly proposed yield criterion for residual stress and steady state creep in an anisotropic composite rotating disc," Journal of Materials Processing Technology, vol. 143-144, no. 1, pp. 623–628, 2003.
10. S. B. Singh and S. Ray, "Modeling the creep in an isotropic rotating disc of Al-SiCw composite in presence of thermal residual stress," in Proceedings of the 3rd International Conference on Advanced Manufacturing Technology (ICAMT '04), pp. 766–770, Kuala Lumpur, Malaysia, May 2004.
11. V. K. Gupta, S. B. Singh, H. N. Chandrawat, and S. Ray, "Steady state creep and material parameters in a rotating disc of Al-SiCP composite," European Journal of Mechanics, A/Solids, vol. 23, no. 2, pp. 335–344, 2004.
12. H. Jahed, B. Farshi, and J. Bidabadi, "Minimum weight design of inhomogeneous rotating discs," International Journal of Pressure Vessels and Piping, vol. 82, no. 1, pp. 35–41, 2005.
13. O. Sayman, "Stress analysis of a thermoplastic composite disc under uniform temperature distribution," Journal of Thermoplastic Composite Materials, vol. 19, no. 1, pp. 61–77, 2006.
14. S. B. Singh, "One parameter model for creep in a whisker reinforced anisotropic rotating disc of Al-SiCw composite," European Journal of Mechanics, A/Solids, vol. 27, no. 4, pp. 680–690, 2008.
15. F. Z. Xuan, J. J. Chen, Z. Wang, and S. T. Tu, "Time-dependent deformation and fracture of multi-material systems at high temperature," International Journal of Pressure Vessels and Piping, vol. 86, no. 9, pp. 604–615, 2009.
16. N. Chamoli, M. Rattan, and S. B. Singh, "Effect of anisotropy on the creep of a rotating disc of Al-SiCp composite," International Journal of Contemporary Mathematical Sciences, vol. 5, no. 11, pp. 509–516, 2010.
17. S. B. Singh and M. Rattan, "Creep analysis of an isotropic rotating Al-SiCp composite disc taking into account the phase-specific thermal residual stress," Journal of Thermoplastic Composite Materials, vol. 23, no. 3, pp. 299–312, 2010.
18. J. J. Chen, K. B. Yoon, and S. T. Tu, "Creep behavior of pressurized tank composed of functionally graded materials," Journal of Pressure Vessel Technology, vol. 133, no. 5, Article ID 051401, 2011.
19. V. Gupta and S. B. Singh, "Modeling anisotropy and steady state creep in a rotating disc of Al-SiCp having varying thickness," International Journal of Scientific & Engineering Research, vol. 2, no. 10, pp. 1–12, 2011.
20. O. D. Sherby, R. H. Klundt, and A. K. Miller, "Flow stress, subgrain size, and subgrain stability at elevated temperature," Metallurgical Transactions A, vol. 8, no. 6, pp. 843–850, 1977.