content
stringlengths 86
994k
| meta
stringlengths 288
619
|
|---|---|
Evaluation of time profile reconstruction from complex two-color microarray designs
• We are sorry, but NCBI web applications do not support your browser and may not function properly.
More information
Evaluation of time profile reconstruction from complex two-color microarray designs
As an alternative to the frequently used "reference design" for two-channel microarrays, other designs have been proposed. These designs have been shown to be more profitable from a theoretical point
of view (more replicates of the conditions of interest for the same number of arrays). However, the interpretation of the measurements is less straightforward and a reconstruction method is needed to
convert the observed ratios into the genuine profile of interest (e.g. a time profile). The potential advantages of using these alternative designs thus largely depend on the success of the profile
reconstruction. Therefore, we compared to what extent different linear models agree with each other in reconstructing expression ratios and corresponding time profiles from a complex design.
On average the correlation between the estimated ratios was high, and all methods agreed with each other in predicting the same profile, especially for genes of which the expression profile showed a
large variance across the different time points. Assessing the similarity in profile shape, it appears that, the more similar the underlying principles of the methods (model and input data), the more
similar their results. Methods with a dye effect seemed more robust against array failure. The influence of a different normalization was not drastic and independent of the method used.
Including a dye effect such as in the methods lmbr_dye, anovaFix and anovaMix compensates for residual dye related inconsistencies in the data and renders the results more robust against array
failure. Including random effects requires more parameters to be estimated and is only advised when a design is used with a sufficient number of replicates. Because of this, we believe lmbr_dye,
anovaFix and anovaMix are most appropriate for practical use.
Microarray experiments have become an important tool for biological studies, allowing the quantification of thousands of mRNA levels simultaneously. They are being customarily applied in current
molecular biology practice.
In contrast to the Affymetrix based technology, for the two-channel microarray technology assays, mRNA extracted from two conditions is hybridised simultaneously on a given microarray. Which
conditions to pair on the same array is a non trivial issue and relates to the choice of the "microarray design". The most intuitively interpretable and frequently used design is the "reference
design" in which a single, fixed reference condition is chosen against which all conditions are compared. Alternatively, other designs have been proposed (e.g. a loop design). From a theoretical
point of view, these alternative designs usually offer, at the same cost, more balanced measurements in the number of replicates per condition than a common reference design. They are thus, based on
theoretical issues, potentially more profitable [1,2]. For instance, a loop design would outperform the common reference design when searching for differentially expressed genes [3]. However, the
drawback of such alternative design is that the interpretation of the measurements becomes less straightforward. More complex analysis procedures are needed to reconstruct the factor of interest
(genes being differentially expressed between two particular conditions, a time profile, etc.), so that the practical usefulness of a design depends mainly on how well analysis methods are able to
retrieve this factor of interest from the data. Such analysis would require removing systematic biases from the raw data by the appropriate normalization steps and combining replicate values to
reconstruct the factor of interest.
When focusing on profiling the changes in gene expression over time, the factor of interest is the time profile [1,2]. For such time series experiments, the "reference design", where, for instance,
time point zero is chosen as the common reference has a straightforward interpretation: for each array, the genes' mean ratio between replicates readily represents the changes in expression of that
gene relative to the first time point. However, when using an alternative design, such as an interwoven design, mean ratios represent the mutual comparison between distinct (sometimes consecutive)
time points. A reconstruction procedure is needed to obtain the time profile from the observed ratios [3-5].
Several profile reconstruction methods are available for complex designs. They all rely on linear models, and for the purpose of this study, we subdivided them in "gene-specific" and "two-stage"
methods. Gene-specific profile reconstruction methods apply a linear model to each gene separately. The underlying linear model is usually only designed for reconstructing a specific gene profile
from a complex design, but not for normalizing the data. As a result, normalized log-ratios are used as input to these methods (see 'Methods'). Examples of these methods are described by Vinciotti,
et al. (2005) [3] and Smyth, et al. (2004) (Limma) [4]. Two-stage profile reconstruction methods on the other hand, first apply a single linear model to all data simultaneously, i.e. the model is
fitted to the dataset as a whole. These models use the separate log-intensity values for each channel, as spot effects are explicitly incorporated. They return normalized absolute expression levels
for each channel separately, which can then be used to reconstruct the required time profile by a second-stage gene-specific model. An example of such two-stage method is implemented in the MAANOVA
package [6].
So far, comparative studies focused on the ability of different methods to reconstruct "genes being differentially expressed" from different two-color array based designs [7-9] or the ratio
estimation between two particular conditions [5]. In this study, we aimed at performing a comparative study focusing on the time profile as the factor of interest to be reconstructed from the data.
We compared to what extent five existing profile reconstruction methods (lmbr, lmbr_dye, limmaQual, anovaFix, and anovaMix; see 'Methods' for details) were able to reconstruct similar profiles from
data obtained by two channel microarrays using either a loop design or an interwoven design. We assessed similarities between the methods, their sensitivity towards using alternative normalizations
and their robustness against array failure. A spike-in experiment was used to assess the accuracy of the ratio estimates.
Assessing the influence of the used methodology on the profile reconstruction
We compared to what extent the different methods agreed with each other in 1) estimating the changes in gene expression relative to the first time point (i.e. the log-ratios of each single time point
and the first time point) and 2) in estimating the overall gene-specific profile shapes. Results were evaluated using two test sets, each of which represents a different complex design.
The first dataset was a time series experiment consisting of 6 time points measured on 9 arrays using an interwoven design (Figure (Figure1a).1a). This design resulted in three replicate
measurements for each time point, with alternating dyes. As a second test, a smaller loop design was derived from the previous dataset by picking the combination of five arrays that connect five time
points in a single loop (Figure (Figure1b).1b). A balanced loop is obtained with two replicates per condition, for which each condition is labeled once with the red and once with the green dye (see
Experimental microarray designs used in this study. Circles represent samples or time points, and arrows represent a direct hybridization between two samples. The arrows point from the time point
labeled with Cy3 to the time point labeled with Cy5. (a) ...
The balance with respect to the dyes (present in the loop design) ensures that the effect of interest is not confounded with other sources of variation. In this study, the effect of interest
corresponds to the time profile. The replication (as present in the interwoven design) improves the precision of the estimates and provides the essential degrees of freedom for error estimation [2].
Moreover, the interwoven design not only has more replicates, but also increases the possible paths to join any two conditions in the design. As they have different characteristics, using both
datasets allows us to assess the reconstruction process under two different settings, while the RNA preparations for both designs are the same.
Effect of profile reconstruction methods on the ratio estimates
We first assessed to what extent the different methods agreed with each other in estimating similar log-ratios for each single gene at each single time point. To this end, we calculated the overall
correlation per time point between the gene expression ratios estimated by each pair of two different methods. Table Table11 gives the results for all mutual comparisons between the methods tested
for the loop design. Irrespective of which two methods were compared, the correlation between the estimated ratios was high on average, ranging from 0.94 to 0.98 (Table (Table1,1, mean column).
Moreover, this high average correlation is due to a high correlation of all individual ratios throughout the complete ratio range (see Additional file 1), with only a few outliers (genes for which a
rather different ratio estimate was obtained, depending on the method used). Note that for the loop design, there was no difference between the results of lmbr and lmbr_dye due to the balanced nature
of this design (see 'Methods' section).
Pairwise correlation between ratios estimates for the loop design
For this loop design the ratio estimates T3/T1 or T4/T1 obtained by each of the different methods are on overall more correlated than estimates of respectively T5/T1 and T6/T1. As can be expected,
direct estimates, i.e. estimates of a ratio for which the measurements were assessed on the same array (see Figure Figure1b:1b: ratios T3/T1 and T4/T1) are more consistent than indirect estimates,
i.e. the measurements used to obtain the estimates were assessed on different arrays (see Figure Figure1b:1b: ratios T5/T1 and T6/T1). A similar observation was already made by Kerr and Churchill
(2001) [2], and Yang and Speed (2002) [10]. For a loop design, both the ANOVA (two-stage) [2] and the gene-specific methods [10], have trouble estimating ratios between conditions not measured on the
same array (indirect estimates). The larger the loops (the longer the paths) between indirectly measured pairs of conditions, the less precise estimates will be.
For the interwoven design, the correlation between ratio estimates, obtained by any pair of two different methods was even higher, with values ranging from 0.95 to 0.99 (see Additional file 2). For
this unbalanced design, the ratio estimates for the lmbr_dye and the lmbr methods were no longer exactly the same. The difference in consistency between direct and indirect ratio estimates was not
obviously visible for this design.
Effect of profile reconstruction methods on the profile shape
A high average correlation between the ratio estimates obtained by the different methods at each single time point is a first valuable assessment. However, it is biologically more important that
gene-specific profiles reconstructed by the different methods exhibit the same tendency over time. Therefore, we also compared to what extent profile shapes estimated by each of the methods differed
from each other. This was done by computing the mean similarity between profile estimates obtained by any combination of two methods (Table (Table22).
Mean similarity between profiles for both the interwoven and the loop design
Figure Figure22 shows a few illustrative examples of profiles estimated by the different methods. For the ribosomal gene "L22" (Figure (Figure2a),2a), irrespective of the method, highly similar
profiles were obtained. However, for the MGC85244 gene (Figure (Figure2c),2c), the observed degree of similarity between profiles derived by each of the different methods is much lower, especially
for the last two time points.
Examples of reconstructed profiles for two representative genes from the interwoven design. For the gene-specific methods (based on log-ratios estimate) the ratios are expressed relative to T1 and
the ratio T1/T1 is set to zero. For the two-stage (ANOVA) ...
Table Table22 summarizes the results of the profile comparison expressed as average profile similarities across all genes. The similarity was computed with the cosine similarity measure after mean
centering the profiles (see 'Methods'). It ranges from -1 (anti-correlation) to 1 (perfect correlation), 0 being no correlation. Also here, the overall correlation between different methods was not
drastically different. From this table, it appears that the more similar the underlying principles of the used methods (both the model and the input data) are, the more correlated their results.
Indeed, correlations between profiles estimated by either limmaQual and lmbr (both gene-specific models without dye effect), or anovaMix and anovaFix (both two-stage models) are high. The most
divergent correlations are observed when comparing a gene-specific method (more specifically lmbr, or limmaQual) with a two-stage method (anovaFix or anovaMix). When using lmbr_dye on the interwoven
design, it behaves somewhere in between: although it is a single gene model, it includes a dye effect just like the two-stage models. This does not apply for the loop design due to its dye-balance
(lmbr and lmbr_dye give the same results for balanced designs; see 'Methods').
Differences in the input data (log-ratio versus log-expression values) and alterations in the underlying model (including a dye or random effect) are confounded in affecting the final result.
Therefore, in order to assess in more detail the specific effect of including either a dye or a random effect in the model, we compared results between methods that share the same input data.
To assess the influence of including a dye effect on profile estimation, we compared the results of the gene-specific methods (see Table Table2,2, the first two rows). Including a dye effect
(present in lmbr_dye but not in limmaQual and lmbr) has a strong effect under the unbalanced interwoven design (seen as decrease in correlation between lmbr_dye and the other single gene methods).
For the loop design this effect is non-existent because of the loop design's balance with respect to the dyes (see 'Methods').
The mere impact of including a random effect in the model can be assessed by comparing results of anovaFix and anovaMix. Indeed, they both contain the same input data, the same normalization
procedure, and the same model except for the random effect. Seemingly, inclusion of the random effect has a higher influence on the loop design than on the interwoven design.
Usually in a microarray experiment, an important proportion of the genes does not change its expression significantly under the conditions tested (global normalization assumption), exhibiting a
"flat" profile. We wondered whether removing such flat genes, with a noisy profile would affect the similarity in profile estimation between the different methods. Indeed, because the cosine
similarity with centering only measures the similarity in profile shape, regardless of its absolute expression level, the higher level of similarity we observe between the methods might be due to a
high level of random correlation between the "flat" profiles. Therefore, we applied a filtering procedure by removing those genes for which the profile variance over the different time points was
lower than a certain threshold (a range of threshold values going from 0.2–0.4 was tested). The similarity was assessed for any pair of profile estimates corresponding to the same gene if at least
one of the two profiles passed the filter threshold (Table (Table33 for the variance threshold of 0.4, results for the other thresholds can be found in the supplementary information, see Additional
file 3).
Mean similarity between profiles after filtering for both the interwoven and the loop design
Overall, the results obtained with each of the different variance thresholds confirmed the observations of Table Table2:2: 1) the more similar the models and input data, the more similar the methods
behaved (two-stage methods differed most from limmaQual followed by lmbr in estimating the gene profiles), 2) including a dye effect has a pronounced effect in an interwoven design (in a loop design
there is no distinction due to the balance with respect to the dyes; see 'Methods'), 3) including a random effect has most influence on the loop design. In addition, it seems that, the more flat
profiles are filtered from the dataset, the more similar the results obtained by each of the different methods become.
The effect of array failure on the profile reconstruction
In practice, when performing a microarray experiment some arrays might fail with their measurements falling below standard quality. When these bad measurements are removed from the analysis, the
complete design and the results inferred from it will be affected. Here we evaluated this issue experimentally by simulating array defects. In a first experiment, the interwoven design (dataset 1)
was considered as the original design without failure. We tested 9 different possible situations of failure, by each time removing a single array from the design, resulting in 9 reduced datasets. The
same test was performed with the loop design (dataset 2).
We compared for each of the different profile reconstruction methods the mean similarity between the ratios obtained either with the full dataset or with each of the reduced datasets (9 comparisons).
Table Table44 summarizes the results for the interwoven design, and Table Table55 for the loop design.
Assessing the effect of array failure on estimated ratios for the interwoven design
Assessing the effect of array failure on estimated ratios for the loop design
For the interwoven design (Table (Table4),4), it appears that in general removing one array from the original design did not really affect the ratio reconstruction. For all methods, ratio estimates
tend to be more affected when an array measuring the reference time point was removed (T1) (Table (Table4).4). Overall the two-stage methods, and in particular anovaMix, seemed most robust against
array failure, while limmaQual was most sensitive (Table (Table4).4). Methods including a dye effect were more robust against array failure. Similar results were obtained when the effect of array
failure was assessed on the similarity in profiles (see Additional file 4).
For the loop design, the situation was quite different (Table (Table5).5). Note that here, the lmbr_dye and limmaQual methods were not used for profile reconstruction as the reduced datasets did not
contain sufficient information for estimating all the model parameters. For both lmbr_dye and limmaQual, the linear models lose their main differing characteristics compared to lmbr (see 'Methods'
section). For all remaining methods removing one array from the design affected the results considerably more than was the case for the interwoven design. Two-stage methods were the most robust, but
in this design anovaMix performs slightly worse than anovaFix. The lmbr method turned out to be very sensitive to array failure, giving a mean profile similarity around 0.2, indicating no correlation
between profiles estimated with and without array failure (see Additional file 5).
Note that overall, all methods seem to be more robust to array failure under the interwoven design than under the loop design. This is to be expected as the latter design contains more replicates.
Consistency of the methods under different normalization procedures
In the previous section we compared profiles and ratio estimates obtained by the different methods after applying default normalization steps. However, other normalization strategies are possible,
and could potentially affect the outcome. To assess the influence of using alternative normalization procedures, we compared profiles reconstructed from data normalized with 1) print tip Loess
without additional normalization step (the default setting for anovaMix and anovaFix as used throughout this paper), 2) print tip Loess with a scale-based normalization between arrays [11], and 3)
print tip Loess with a quantile-based between array normalization [12,13] (the default normalization for lmbr, lmbr_dye, and limmaQual as used throughout this paper).
Table Table66 shows, for each of the different methods, the mean similarity between reconstructed profiles derived from differently normalized datasets. Overall, the influence of the normalization
was not drastic. More importantly, the influence of the additional normalization steps seemed independent of the method used (similar influences were observed for all methods). When assessing the
similarity in ratio estimates instead of profile estimates, similar results were obtained (data not shown).
Effect of additional normalization procedures on estimating gene profiles from both the interwoven design and the loop design
Accuracy of estimation
So far we only assessed to what extent changes in the used methodologies or normalization steps affected the inferred profiles. This, however, does not give any information on the accuracy of the
methods, i.e., which of these methods is able to best approximate the true time profiles. Assessing the accuracy is almost impossible as usually the true underlying time profile is not known.
However, datasets that contain external controls (spikes) could prove useful in this regard. Spikes are added to the hybridisation solution in known quantities, so that we have a clear view of their
actual profile. In the following analysis, we used a publicly available spike-in experiment in attempt to assess the accuracy of each of the profile reconstruction methods [14]. For the technical
details of this dataset we refer to 'Methods' and Table Table77.
Concentration (copies per cell) of the control clones spiked
As lmbr and lmbr_dye and limmaQual gave exactly the same results using this balanced design, we further assessed to what extent lmbr, anovaFix and anovaMix agreed with each other. Fig. Fig.33 shows
the effect of using different spike concentrations as reference points for ratio estimation. Panels A through C reflect decreasing reference concentrations. The choice of reference has little effect
on the shape of the profile (as indicated by consistent relationships between the different estimates). However, Fig. Fig.33 illustrates that 1) lower reference concentrations (intensities)
introduce a bias in the profile (true ratio's are consistently underestimated), 2) irrespective of the concentration of the reference ratio's derived for the lower expression values of the test are
nearly identical, and thus uninformative. Both observations can be attributed to the lower saturation characteristics of microarray data (low concentrations do not generate signals that are
distinguishable from the background). Although not as complex as the previously used loop or interwoven designs, the spiked-in design illustrates that this lower saturation effect, an inherent
property of microarray data, can distort estimated profiles: interpretation of ratios with lower signals for test or reference should be done with care.
Spike-in expression ratio estimates. Reconstructed expression ratio estimates of spikes 7 (+ markers) and 8 (x markers) are plotted for lmbr/lmbr_dye/limmaQual (solid line), anovaFix (dotted line),
and anovaMix (dashed line). Concentrations (cpc; copies ...
In this study, we evaluated the performance of five methods based on linear models in estimating gene expression ratios and reconstructing time profiles from complex microarray experiments. From a
theoretical viewpoint, two major differences can be distinguished between the methods selected for this study: 1) differences related to alterations in the input data: the selected two-stage methods
make use of the log-intensity values while the gene-specific methods use log-ratios, 2) differences related to the model characteristics: some of the models include an explicit dye effect (lmbr_dye,
anovaFix and anovaMix) or an explicit random effect (anovaMix).
Although Kerr [5] assumed that observed differences in estimates obtained by different models are due to the differences in model characteristics, rather than to the input data, we cannot clearly
make this distinction. Indeed, the way the error-term is modeled influences the statistical inference and hence the use of log-intensities or log-ratios does cause a difference between models [5].
However, when focusing on results obtained between methods with similar input data, we can assess, to some extent, the effect of different model specificities. In the following sections, some of
these effects are discussed more in detail.
The inclusion of the dye effect
In general we observed that, gene-specific methods without dye effects, and two-stage models with dye effect behaved more similar with each other than when they were compared among each other.
Lmbr_dye (a gene-specific model with dye effect) is situated somewhere in between when the design is unbalanced with respect to the dyes. Indeed, the gene-specific models lmbr and limmaQual contain a
combination of log-ratios plus an error term. However, when adding a dye effect to these models as is the case of lmbr_dye, the formulations and estimations converge with those of the two-stage ANOVA
models for unbalanced designs.
Originally, Vinciotti, et al. (2005) [3] and Wit, et al. (2005) [15] added the dye effect for purposes of data normalization when one is working with non-normalized data. From our results, we also
noted a practical advantage of including a dye effect even with normalized data. The fact that adding a dye effect showed pronounced differences for a dye-unbalanced design indicates that, despite
the data being normalized, there are still dye-related inconsistencies in the data that might -partially- be compensated for by including a dye effect. Moreover, models with dye effects seemed more
robust in estimating log-ratios from a design disturbed by array failure. Therefore, when working with unbalanced designs, it is advisable to include a dye effect, not only for the two-stage ANOVA
models, as was also suggested by Wolfinger (2001) [16], Kerr (2003) [5], and Kerr and Churchill (2001) [2], but also for gene-specific models based on log-ratios.
Mixed models versus Fixed models
Several studies advise the users to model the spot-gene or array-gene effects as random variables [9,16]. We observed that under the loop design (with 5 arrays), profiles estimated by anovaMix and
anovaFix diverged. We also noticed that, for the loop design anovaMix had a lower capacity than anovaFix to handle array failures. For the interwoven design with 9 arrays these effects were less
pronounced. The loop design used in our study does not contain a sufficient number of replicates to allow for reliable estimation of the spot-gene effect when using a mixed ANOVA model. As a result,
ratios and time profiles estimated by anovaMix than anovaFix are less reliable for an experiment with few replicates
The effect of using alternative normalization steps on the methods' performance
We tested the influence of using additional normalization steps. Differently normalized data give different results, but the effects were not dramatic. Moreover, they had the same influence on all
methods, indicating that all methods were equally sensitive to changes in the normalization.
Accuracy of estimated ratios
Based on spike-in experiments for two-channel microarrays, we could also assess to what extent the estimated ratios approximated the true ratios (i.e., the accuracy of the estimated ratios). We
observed that all five tested linear methods generated biased estimations, consistently overestimating changes in expression relative to a reference with low mRNA-concentration. These results were
independent of the method used (gene-specific or two-stage) or of the number of effects included the model.
On average the correlation between the estimated ratios was high, and all methods more or less agreed with each other in predicting the same profile. The similarity in profile estimation between the
different methods improved with an increasing variance of the expression profiles.
We observed that when dealing with unbalanced designs, including a dye effect, such as in the methods lmbr_dye, anovaFix and anovaMix, seems to compensate for residual dye related inconsistencies in
the data (despite an earlier normalization step). Adding a dye effect also renders the results more robust against array failure. Including random effects requires more parameters to be estimated and
is only advised when a design is used with a sufficient number of replicates.
Conclusively, because of their robustness against imbalances in the design and array failure, we believe lmbr_dye, anovaFix and anovaMix are most appropriate for practical use (given a sufficient
number of replicates in case of the latter).
Microarray data
The first dataset used in this study was a temporal Xenopus tropicalis expression profiling experiment. The array used consisted of 2999 oligos of 50 mers, corresponding to 2898 unique X. tropicalis
gene sequences and negative control spots (Arabidopsis thaliana probes, blanks and empty buffer controls). Each oligo was spotted in duplicate on each array in two separated grids. On each grid,
oligonucleotides were spotted in 16 blocks of 14 × 14 spots. Pairs of duplicated oligo's on the two grids of the same gene sequence were treated as replicates during analysis, corresponding to a
total of 2999 different duplicated measurements (a few oligos were spotted multiple times on the arrays). MWG Biotech performed oligonucleotide design, synthesis and spotting. X. tropicalis gene
sequences were derived from the assembly of public and in-house expressed sequence tags. The temporal expression of X. tropicalis during metamorphosis was profiled at 6 time points, using an
experimental design consisting of 9 arrays. Each time point was measured three times, with alternating dyes as shown in Figure Figure1a.1a. This interwoven design was used as a first test set.
From this original design a second test set containing a smaller loop design was derived by picking the combinations of five arrays that connect five time points in a single loop (Figure (Figure1b)
1b) and with the first time point as a reference. This results in a balanced loop design
A publicly available spike-in experiment [17] was used as a third test set. This dataset contains 13 spikes-in, or control clones spiked with known concentrations. The control clones were spiked at
different concentrations for each of the 7 conditions (Table (Table77).
The microarray design used for the spike-in experiment was a common reference design, with dye swap for each condition, and the concentrations of spikes ranges from 0 to 10,000 copies per cellular
equivalent (cpc), assuming that the total RNA contained 1% poly(A) mRNA and that a cell contained on average 300,000 transcripts. This concentration range covered all biologically relevant transcript
Probes preparation and microarray hybridization
10 μg of total RNA were used to prepare probes. Labeling was performed with the Invitrogen SuperScript™ Indirect cDNA labeling system (using polyA and random hexamers primers) using the Amersham Cy3
or Cy5 monofunctional reactive dyes. Probe quality was assessed on an agarose minigel and quantified with a Nanodrop ND-1000 spectrophotometer. Dye quantities were equilibrated for hybridization by
the amount of fluorescence per ng of cDNA. The arrays were hybridized for 20 h at 45°C according to the manufacturers protocol (QMT ref). Washing was performed in 2× SSC 0.1% SDS at 42°C for 5' and
then twice at room temperature in 1× SSC, 0.5× SSC each time for 5'. Arrays were scanned using a GenePix Axon scanner.
Microarray normalization
The raw intensity data were used for further normalization. No background subtraction was performed. Data were log-transformed and the intensity dependent dye or condition effects were removed by
using a local linear fit loess on these log-transformed data (Printtiploess command with default settings as implemented in the limma BioConductor package [18]). As this loess fit not only normalizes
the data but also linearizes them, applying it before profile reconstruction is a prerequisite as all linear models used for profile reconstruction assume non linearities to be absent from the data.
For the gene-specific methods (lmbr, lmbr_dye and limmaQual), Loess corrected log-ratios (per print tip) were subjected to an additional quantile normalization step [4,12] as suggested by Vinciotti
et al. (2005) [3] in order to improve the intercomparability between arrays. It equalizes the distribution of probe intensities for each array in a set of arrays. For the two-stage profile
reconstruction methods (anovaFix and anovaMix), corrected log-intensities for the red (R[CORR]) and green (G[CORR]) channels were calculated from the Loess corrected log-ratios (M[CORR]; no
additional quantile normalization was done for the two-stage methods) and mean absolute intensities (A) as follows: R[CORR ]= (A + M[CORR])/2, and G[CORR ]= (A - M[CORR])/2.
Used profile reconstruction methods
Available R implementations (BioConductor [19]) of the presented methods were used to perform the analyses.
Gene-specific methods based on log-ratios
Gene-specific profile reconstruction methods apply a linear model on each gene separately. The goal is to estimate the true expression differences between the mRNA of interest and the reference mRNA,
from the observed log-ratios. The presented models assume that the expression values have been appropriately pre-processed and normalized [3,20]. The three selected gene-specific models for this
study are:
1) lmbr, the linear model described by Vinciotti et al. (2005) [3]:
An observation y[jk ]is the log-ratio of condition j and condition k. For each gene a vector of n observations y = (y[1],...,y[n]) can be represented as
where X is the design matrix defining the relationship between the values observed in the experiment and a set of independent parameters μ = (μ[12], μ[13],...,μ[1T ]representing true expression
differences, and ε is a vector of errors. The parameters in μ are arbitrarily chosen this way for estimation purposes; all expression differences between other conditions i and k can be calculated
from these parameters as μ[ik ]= μ[1k ]- μ[1i]. The goal is to obtain estimates of the true expression differences $μ^$ separately for each gene. Given the assumptions behind the linear model, the
least squares estimator for μ is [3]
2) lmbr_dye, an extension of lmbr including a general dye effect:
The previous model can be extended to include a gene-specific dye effect [3]
where D is a vector of n times a constant value representing the gene-specific dye effect δ. Alternatively, one could write y = X[D]μ[D ]+ ε where X[D ]is the design matrix X with an extra column of
ones, and μ = (μ[12], μ[13],...,μ[1T], δ). Note that in the case of dye-balanced designs, the addition of a dye effect will not yield any different estimators for the contrasts of interest. In a
balanced design, each column of X will have an equal amount of 1's and -1's. I.e. the ith column of X, corresponding to the true expression difference μ[1i], reflects how condition i was measured an
equal number of times with both dyes. As such, the positive and negative influences of the dye effect will cancel each other out in the estimation of true expression differences. The use of lmbr_dye
will thus only render different results compared to lmbr when using it to analyze unbalanced experiments.
In order to estimate all parameters, the matrix X[D ]must be of full rank. If the column representing the dye effect is not linearly independent, the matrix is rank deficient. This situation occurs
for example when an array is removed from the loop design used in this paper. In this case, there are an infinite number of possible least squares parameter estimates. Since we expect a single set of
parameters, a constraint must be applied (this is done on the dye effect) in which case the true expression estimates are the same as for lmbr.
Lmbr and lmbr_dye were implemented in the R language using the function 'lm' for linear least squares regression.
3) limmaQual, the Limma model [4,20,21] including an array quality adjustment:
The quality adjustment assigns a low weight to poor quality arrays, which can be included in the inference. The approach is based on the empirical reproducibility of the gene expression measures from
replicated arrays, and it can be applied to any microarray experiment. The linear model is similar to the model describes by Vinciotti et al., (2005) but the variance of the observations y includes
the weight term. In this case, the weighted least squares estimator of $μ^$ is [20]:
where Σ is the diagonal matrix of weights.
The weights in the limmaQual model are the inverse of estimated array variances, down weighting the observations of lower quality arrays in order to decrease the power to detect differential
expression. The method is restricted for use on data from experiments that include at least two degrees of freedom. When testing the array failure in case of the loop design, there is no array level
replication for two of the conditions so the array quality weights can not be estimated: the Limma function returns a perfect quality for all the arrays (in this case Σ is a diagonal matrix of 1's).
The fit is by generalized least squares allowing for correlation between duplicate spots or related arrays, implemented in an internal function (gls.series) of the Limma package.
Two-stage methods based on the log-intensity values
The selected methods correspond to ANOVA (Analysis of variance) models. They can normalize microarray data and provide estimates of gene expression levels that are corrected for potential confounding
Since the global methods are computationally time-consuming, we selected two-stage methods that apply a first stage on all data simultaneously and a second stage on a gene by gene level. These models
use partially normalized data as input (i.e., the separate log-intensity values for each channel), as spot effects are explicitly incorporated. They return normalized absolute expression levels for
each channel separately (i.e. no ratios), which can then be used to reconstruct the required time profile.
4) anovaFix, two-stage ANOVA with fixed effects [6]:
We denote the loess-normalized log-intensity data by y[ijkgr ]that represents the measurement observed in the array i, labeled with the dye j, representing the time point k, from gene g and spot r.
The first stage is the normalization model:
y[ijkgr ]= μ + A[i ]+ D[j ]+ AD[ij ]+ r[ijkgr]
where the term μ captures the overall mean. The other terms capture the overall effects due to arrays (A), dyes (D) and labelling reactions (AD). This step is called "normalization step" and it
accounts for experiment systematic effects that could bias inferences made on the data from the individual genes. The residual of the first stage is the input for the second stage, which models the
gene-specific effects:
r[ijkgr ]= G + SG[r ]+ DG[j ]+ VG[k ]+ ε[ijkgr]
Here G captures the average effect of the gene. The SG effect captures the spot-gene variation and we used it instead of the more global AG array-gene effect. The use of this effect obviates the need
for intensity ratios. DG captures specific dye-gene variation and VG (variety-gene) is the effect of interest, the effects due to the time point measured. The MAANOVA fixed model computes least
squares estimators for the different effects.
5) anovaMix, two-stage ANOVA with mixed effects [6,16]:
The model applied is exactly the same as anovaFix, but in this case the SG effect was treated as a random variable, meaning that if the experiment were to be repeated, the random spot effects would
not be exactly reproduced, but they would be drawn from a hypothetical population. A mixed model, where some variables are treated as random, allows for including multiple sources of variation.
We used the default method to solve the mixed model equation, the REML (restricted maximum likelihood) method. Duplicated spots were treated as independent measurements of the same gene. For MAANOVA
and Limma packages the option to do so is available, for lmbr and lmbr_dye duplicated spots were taken into account by the design matrix.
Profile reconstruction
Applying the gene-specific methods mentioned above results in estimated differences in log-expression between a test and a reference condition or in log-ratios. To reconstruct from the different
designs a time profile, the first time point was chosen as the reference. A gene-specific reconstructed profile thus consists of a vector which contains as entries ratios of the measured expression
level of that gene at each time point except the first, relative to its expression value at the first time point. For instance, for the loop design shown in Table Table11 the profile contains 4
In contrast to the gene-specific methods, two-stage methods estimate the absolute gene expression level for each time point rather than log-ratios. In this case, for the loop design shown in Table
Table1,1, the profile contains 5 gene expression levels.
Comparison of profile reconstruction
To assess the influence of using different methodologies on the profile reconstruction, the following similarity measures were used to compare the consistency in reconstructing profiles for the same
gene between the compared methods:
1. Overall similarity in the estimated ratios: we assessed the similarity between the estimations of each single ratio of the time profile generated by two methods using the Pearson correlation.
Since two-stage methods estimate gene expression levels (variety-gene effect in the model) instead of log-ratios, we converted these absolute values into log-ratios by subtracting from the absolute
expression levels estimated for each of the conditions the estimated level of the first time point (the reference).
2. Profile shape similarity: the profile shape reflects the expression behaviour of a gene over time. For each single gene, we computed the mutual similarity between profile estimates obtained by any
combination of two methods. To make profiles consisting of log-ratios obtained by the gene-specific methods comparable with the profiles estimated by the two-stage methods, we extended the log-ratios
profile by adding as a first time point a zero. This represents the contrast between the expression value of the first time point against itself in log-scale (Figure (Figure22).
Profile Similarity
The mutual similarity was computed as the cosine similarity, which corresponds to the angle between two vectors representing genes i and j with profiles P[i ]and P[j].
All profiles were mean centered, i.e. data have been shifted by the mean of the profile ratios to have an average of zero for each gene, prior to computing the cosine similarity. With centered data,
the cosine similarity can also be viewed as the correlation coefficient, and it ranges between -1 (opposite shape) and 1 (similar shape), 0 being no correlation. The cosine similarity only considers
the angle between the vectors focusing on the shape of the profile. As a result, it ignores the magnitude of ratios of the profiles, resulting in relatively high similarities for false positives
(i.e. "flat profiles", genes that do not change their expression profile over time, but for which the noise profile corresponds by chance to other gene profiles).
No variance normalization was performed on the profiles to preserve their shape. Instead of normalizing by the variance, the profiles were filtered using the standard deviation.
Filtering flat profiles
Constitutively expressed genes or genes for which the expression did not significantly change over the conditions were filtered by removing genes of which the variance in expression over different
conditions was lower than a fixed threshold (the following range of thresholds was tested: 0.1, 0.2, 0.4). A pair wise similarity comparison was made for all profile estimates (corresponding to the
same gene) that were above the filtering threshold in at least one of the two methods compared. Similar results were obtained when applying as a filter that all profile estimates had to be above the
filtering threshold in both methods compared (data not shown).
Authors' contributions
AF performed the analysis and wrote the manuscript. RT and NP were responsible for the microarray hybridization. NP and GB critically read the draft. KE contributed to the analysis. KM coordinated
the work and revised the manuscript. All authors read and approved the final manuscript.
Supplementary Material
Additional file 1:
Plot of corresponding ratios estimated by two linear methods. Comparison of corresponding ratios estimated by lmbr and anovaMix using the loop design. The line indicates the identity between both
methods and most of the points are situated near this identity line.
Additional file 2:
Pairwise correlation between ratios estimated for the interwoven design. The table shows the pairwise correlation between ratios estimated by each pair of methods (columns 1 and 2) for the interwoven
design. The ratios correspond to the change in expression compared to the first time point. The last column corresponds to the mean correlation of the 5 estimations.
Additional file 3:
Mean similarity between profiles using different filtering thresholds. Values in the table correspond to the similarity between any two methods, expressed as the mean profile similarity of the genes.
Results are shown for both the interwoven and loop design using different filtering thresholds. Since the loop design is balanced with respect to the dyes, the results for lmbr and lmbr_dye were the
same (see 'Methods' section), which is why they are not treated differently. A) No filtering applied, similarity is assessed for all 2999 profile estimates, B) a filtering threshold (SD) is used on
all profiles estimated by each of the methods, a pairwise similarity comparison is made for all profile pairs (corresponding to the same gene) estimated by each of the two methods compared, for which
at least one profile is above the filtering threshold (SD >0.1, 0.2, 0.4 respectively).
Additional file 4:
Effect of array failure for the interwoven design. The table shows the effect of array failure in reconstructing profiles from an interwoven design. Profile similarities were assessed using the
cosine similarity. The different methods for which the influence of the failure was assessed are represented in the columns. Each row shows the mean cosine similarity between the corresponding
profiles estimated from the complete design and those obtained from a defect design (where one array was removed compared to the complete design). Mean: shows the overall mean similarity for a given
Additional file 5:
Effect of array failure for the loop design. The table shows the effect of array failure in reconstructing profiles from a loop design. Profile similarities were assessed using the cosine similarity.
The different methods for which the influence of the failure was assessed are represented in the columns. Each row shows the mean cosine similarity between the corresponding profiles estimated from
the complete design and those obtained from a defect design (where one array was removed compared to the complete design). Mean: shows the overall mean similarity for a given method.
We thank Dr. Filip Vandenbussche for his comments on earlier versions of this manuscript. AF was partially supported by a grant French embassy – CONICYT Chili. This work was also partially supported
by 1) IWT: SBO-BioFrame, 2) Research Council KULeuven: EF/05/007 SymBioSys, IAP BioMagnet.
• Glonek GF, Solomon PJ. Factorial and time course designs for cDNA microarray experiments. Biostatistics (Oxford, England) 2004;5:89–111. [PubMed]
• Kerr MK, Churchill GA. Experimental design for gene expression microarrays. Biostatistics (Oxford, England) 2001;2:183–201. [PubMed]
• Vinciotti V, Khanin R, D'Alimonte D, Liu X, Cattini N, Hotchkiss G, Bucca G, de Jesus O, Rasaiyaah J, Smith CP, Kellam P, Wit E. An experimental evaluation of a loop versus a reference design for
two-channel microarrays. Bioinformatics (Oxford, England) 2005;21:492–501. doi: 10.1093/bioinformatics/bti022. [PubMed] [Cross Ref]
• Smyth GK. Linear models and empirical bayes methods for assessing differential expression in microarray experiments. Stat Appl Genet Mol Biol. 2004;3:Article3. [PubMed]
• Kerr MK. Linear models for microarray data analysis: hidden similarities and differences. J Comput Biol. 2003;10:891–901. doi: 10.1089/106652703322756131. [PubMed] [Cross Ref]
• MAANOVA software http://www.jax.org/staff/churchill/labsite/software/anova
• Tsai CA, Chen YJ, Chen JJ. Testing for differentially expressed genes with microarray data. Nucleic acids research. 2003;31:e52. doi: 10.1093/nar/gng052. [PMC free article] [PubMed] [Cross Ref]
• Marchal K, Engelen K, De Brabanter J, Aert S, De Moor B, Ayoubi T, Van Hummelen P. Comparison of different methodologies to identify differentially expressed genes in two-sample cDNA microarrays.
J Biological Systems. 2002;10:409–430. doi: 10.1142/S0218339002000731. [Cross Ref]
• Cui X, Churchill GA. Statistical tests for differential expression in cDNA microarray experiments. Genome biology. 2003;4:210. doi: 10.1186/gb-2003-4-4-210. [PMC free article] [PubMed] [Cross Ref
• Yang YH, Speed T. Design issues for cDNA microarray experiments. Nature reviews. 2002;3:579–588. [PubMed]
• Smyth GK, Speed T. Normalization of cDNA microarray data. Methods (San Diego, Calif. 2003;31:265–273. [PubMed]
• Yang YH, Thorne NP. Normalization for two-color cDNA microarray data. In: DR G, editor. Science and Statistics: A Festschrift for Terry Speed, IMS Lecture Notes-Monograph Series. Vol. 40. 2003.
pp. 403–418.
• Bolstad BM, Irizarry RA, Astrand M, Speed TP. A comparison of normalization methods for high density oligonucleotide array data based on variance and bias. Bioinformatics (Oxford, England) 2003;
19:185–193. doi: 10.1093/bioinformatics/19.2.185. [PubMed] [Cross Ref]
• Hilson P, Allemeersch J, Altmann T, Aubourg S, Avon A, Beynon J, Bhalerao RP, Bitton F, Caboche M, Cannoot B, Chardakov V, Cognet-Holliger C, Colot V, Crowe M, Darimont C, Durinck S, Eickhoff H,
de Longevialle AF, Farmer EE, Grant M, Kuiper MT, Lehrach H, Leon C, Leyva A, Lundeberg J, Lurin C, Moreau Y, Nietfeld W, Paz-Ares J, Reymond P, Rouze P, Sandberg G, Segura MD, Serizet C, Tabrett
A, Taconnat L, Thareau V, Van Hummelen P, Vercruysse S, Vuylsteke M, Weingartner M, Weisbeek PJ, Wirta V, Wittink FR, Zabeau M, Small I. Versatile gene-specific sequence tags for Arabidopsis
functional genomics: transcript profiling and reverse genetics applications. Genome research. 2004;14:2176–2189. doi: 10.1101/gr.2544504. [PMC free article] [PubMed] [Cross Ref]
• Wit E, Nobile A, Khanin R. Near-optimal designs for dual-channel microarray studies. Applied Statistics. 2005;54:817–830.
• Wolfinger RD, Gibson G, Wolfinger ED, Bennett L, Hamadeh H, Bushel P, Afshari C, Paules RS. Assessing gene significance from cDNA microarray expression data via mixed models. J Comput Biol. 2001;
8:625–637. doi: 10.1089/106652701753307520. [PubMed] [Cross Ref]
• Allemeersch J, Durinck S, Vanderhaeghen R, Alard P, Maes R, Seeuws K, Bogaert T, Coddens K, Deschouwer K, Van Hummelen P, Vuylsteke M, Moreau Y, Kwekkeboom J, Wijfjes AH, May S, Beynon J, Hilson
P, Kuiper MT. Benchmarking the CATMA microarray. A novel tool for Arabidopsis transcriptome analysis. Plant physiology. 2005;137:588–601. doi: 10.1104/pp.104.051300. [PMC free article] [PubMed] [
Cross Ref]
• Yang YH, Dudoit S, Luu P, Lin DM, Peng V, Ngai J, Speed TP. Normalization for cDNA microarray data: a robust composite method addressing single and multiple slide systematic variation. Nucleic
acids research. 2002;30:e15. doi: 10.1093/nar/30.4.e15. [PMC free article] [PubMed] [Cross Ref]
• Gentleman RC, Carey VJ, Bates DM, Bolstad B, Dettling M, Dudoit S, Ellis B, Gautier L, Ge Y, Gentry J, Hornik K, Hothorn T, Huber W, Iacus S, Irizarry R, Leisch F, Li C, Maechler M, Rossini AJ,
Sawitzki G, Smith C, Smyth G, Tierney L, Yang JY, Zhang J. Bioconductor: open software development for computational biology and bioinformatics. Genome biology. 2004;5:R80. doi: 10.1186/
gb-2004-5-10-r80. [PMC free article] [PubMed] [Cross Ref]
• Ritchie ME, Diyagama D, Neilson J, van Laar R, Dobrovic A, Holloway A, Smyth GK. Empirical array quality weights in the analysis of microarray data. BMC bioinformatics. 2006;7:261. doi: 10.1186/
1471-2105-7-261. [PMC free article] [PubMed] [Cross Ref]
• Smyth GK, Michaud J, Scott HS. Use of within-array replicate spots for assessing differential expression in microarray experiments. Bioinformatics (Oxford, England) 2005;21:2067–2075. doi:
10.1093/bioinformatics/bti270. [PubMed] [Cross Ref]
Articles from BMC Bioinformatics are provided here courtesy of BioMed Central
• SWISS MADE: Standardized WithIn Class Sum of Squares to Evaluate Methodologies and Dataset Elements[PLoS ONE. ]
Cabanski CR, Qi Y, Yin X, Bair E, Hayward MC, Fan C, Li J, Wilkerson MD, Marron JS, Perou CM, Hayes DN. PLoS ONE. 5(3)e9905
• Meta Analysis of Gene Expression Data within and Across Species[Current Genomics. 2008]
Fierro AC, Vandenbussche F, Engelen K, Van de Peer Y, Marchal K. Current Genomics. 2008 Dec; 9(8)525-534
See all...
Your browsing activity is empty.
Activity recording is turned off.
See more...
|
{"url":"http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2265676/?tool=pubmed","timestamp":"2014-04-17T12:44:09Z","content_type":null,"content_length":"135700","record_id":"<urn:uuid:1c4b7355-bea2-439a-bdf2-132d8881933d>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00542-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Cheyney Calculus Tutor
Find a Cheyney Calculus Tutor
...I completed math classes at the university level through advanced calculus. This includes two semesters of elementary calculus, vector and multi-variable calculus, courses in linear algebra,
differential equations, analysis, complex variables, number theory, and non-euclidean geometry. Writing is a big part of the curriculum in both business and law schools, and I wrote extensively in
12 Subjects: including calculus, writing, geometry, algebra 1
...A continuation of Algebra 1 (see course description). Use of irrational numbers, imaginary numbers, quadratic equations, graphing, systems of linear equations, absolute values, and various
other topics. May be combined with some basic geometry. Emphasis on the ideas that lie behind dates, facts and documents.
32 Subjects: including calculus, English, geometry, biology
...Further, having studied classical languages, I have a very good linguistic sense and could thus tutor English language and composition. Aside from my primary strengths in mathematics,
philosophy, and English language, I am also competent in some other subject areas, specifically literature, clas...
26 Subjects: including calculus, English, reading, writing
...I scored a 5 on AP chemistry and for physics I scored a 5 on mechanics and a 4 on electricity and magnetism. GMAT I just took the GMAT in October 2013 and scored a 770 (99th percentile). I can
teach both the quantitative and the verbal section. Computer Programming As an electrical engineering graduate I have significant experience in programming.
15 Subjects: including calculus, chemistry, physics, algebra 1
...Where does this fit in to other subjects you've taken before? What will this course not include? -Provide a thorough understanding of basic concepts. If you understand the foundations of a
subject very well, you can keep your head in a reasonable place when things get complicated. -Work on the same level as my students.
25 Subjects: including calculus, chemistry, physics, writing
|
{"url":"http://www.purplemath.com/cheyney_pa_calculus_tutors.php","timestamp":"2014-04-17T15:28:40Z","content_type":null,"content_length":"24018","record_id":"<urn:uuid:c509ccfe-0ce4-4038-a474-a745c2021eea>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00068-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Vectors continued
March 24th 2013, 05:49 PM #1
Junior Member
Feb 2013
Charlotte, NC
Vectors continued
How could I describe how vectors can be used in geometric representation such as lines, line segments and planes to someone that is math illiterate?
Follow Math Help Forum on Facebook and Google+
|
{"url":"http://mathhelpforum.com/advanced-algebra/215453-vectors-continued.html","timestamp":"2014-04-20T01:08:08Z","content_type":null,"content_length":"28021","record_id":"<urn:uuid:3a0c270a-2019-458f-9c83-a5367d35c034>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00269-ip-10-147-4-33.ec2.internal.warc.gz"}
|
I was preparing a quiz for Perl programmers, and wanted a question on the difference between arrays and lists. So the question was : what do you get in
my @tab = ( 1, 3, 6..9 );
my $x = @tab;
my $y = ( 1, 3, 6..9 );
where I expected
to be 6 (the number of items) and
to be 9 (the last element), as documented in
. But before distributing the quiz I thought it was worth a test ... and indeed it was! To my surprise,
turns out to be an empty string. It took me some time to find out why : with
my $y = ( 1, 3, 6, 7, 8, 9 );
the result is 9 indeed. But with
my $y = ( 1, 3, 6..9 );
the interpreter does not build the range in list context, and then keep the last element. What happens instead is that it throws away the beginning of the list (there is even a warning about that),
and then evaluates
my $y = 6..9;
context; where 6..9 is no longer a range, but a flip-flop operator. Now since the left-hand side is true, that operator is supposed to be true, right ? Well, no, because 6 is a constant, and in that
tells us that the flip-flop is "considered true if it is equal (
) to the current input line number (the
variable)". So $y ends up being an empty string, while
my $six = 6; my $nine = 9; my $y = $six..$nine;
would yield 1E0!
I couldn't be
nasty to the interviewed programmers, so in the end that question will not be part of the quiz.
Several years ago I complained that object accessors prevent you from using some common Perl idioms; in particular, you couldn't take a slice of several attributes within an object.
Now I just discovered the Want module; this is great for playing with lvalue subroutines. With the help of this module, I was finally able to use slices of method calls within an object : see https:/
/metacpan.org/module/Method::Slice . There are some caveats in comparison with a usual hash slice, but nevertheless the main point is there : you can extract some values :
my @coordinates = mslice($point, qw/x y/);
or even assign to them:
(mslice($point, qw/x y/)) = (11, 22);
This was written just for fun ... but in the end I might use it in real code, because in some situations, I find slices to be more compact and high-level than multiple assignment statements.
For Data::Domain I needed a way to test if something is a Perl class. Since UNIVERSAL is the mother of all classes, it seemed to make sense to define the test as
defined($_) && !ref($_) && $_->isa('UNIVERSAL')
Other people did it through $_->can('can') or UNIVERSAL::can($_, 'can'). Both ways used to work fine, but this is no longer true since June 22, 2012 (bleadperl commit 68b4061) : now just any string
matches these conditions.
At first I found this change a bit odd, but somehow it makes sense because any string will answer to the 'can' and 'isa' methods. Also, bless({}, 'Foo') and 'Foo' now return consistent answers for ->
isa() and for ->can(), which was not the case before. So let's agree that this change was a good step.
But now it means that while every object or class is a UNIVERSAL, the reverse is not true : things that are UNIVERSAL are not necessarily objects or classes. Hum ... but what "is a class", really ?
Moose type 'ClassName' defines this through Class::Load::is_class_loaded, which returns true if this is a package with a $VERSION, with an @ISA, or with at least one method. By this definition, an
empty package is not a class. However, perldoc says that a class is "just a package", with no restriction.
So after some thoughts I ended up with this implementation :
defined($_) && !ref($_) && $_->can($_)
This returns true for any defined package, false otherwise, and works both before and after June 22.
Thoughts ?
Perl hashes are not ordered, so one is not supposed to make assumptions about the key order. I thought I did not ... but Perl 5.17.6 showed me that I was wrong !
About two weeks ago I started receiving report about test failures which were totally incomprehensible to me. Since I work on Windows, I had no bleadperl environment, so it was hard to guess what was
wrong just from the test reports. Andreas König kindly opened a ticket in which he spotted that the problem was related to a recent change in bleadperl : now Perl not only makes no guarantee about
the key order, it even guarantees that the key order will be different through several runs of the same program!
This convinced me of investing some time to get a bleadperl environment on my Windows machine : VMware player + a virtual Unbutu + perlbrew did the job perfectly. Now I could start working on the
The point were I was making an implicit assumption was a bit nasty, so I thought it was worth writing this post to share it : the code in SQL::Abstract::More more or less went like this :
my $ops = join "|", map quotemeta, keys %hash;
my $regex = qr/^($ops)?($rest_of_regex)/;
See the problem ? The regex starts with an alternation derived from the hash keys. At first glance one would think that the order of members in the alternation is not important ... except when one
member is a prefix of the other, because the first member wins. For example, matching "absurd" against qr/^(a|ab|abc)?(.*)/ is not the same as qr/^(abc|ab|a)?(.*)/ : in one case $1 will contain 'a',
in the other case it will contain 'ab'.
To fix the problem, the code above was rewritten to put the longest keys first, and everything is fine again.
This is to announce new modules on CPAN:
a) module Data::Domain, written a couple of years ago for checking input from Web forms, now has number of new functionalities in v1.02 : new builtin domains 'Nat', 'Handle', 'Regexp', 'Obj',
'Class', 'Ref', new properties like -isweak, -readonly, etc., and new experimental support for checking method calls and coderef calls. Also, it now relies on Scalar::Does (for having a nice uniform
way of checking what references "do" either as builtin Perl operations, or through overloaded methods), and on Sub::Exporter (for allowing clients to rename the imported functions). If you need to
check deep datastructures against complex constraints, possibly with contextual dependencies or even recursive rules, then this module may be of interest to you.
b) the new module Test::InDomain is a wrapper around Data::Domain for writing automated tests; it sits more or less in the same niche as Test::Deep, but with a different API and some differences in
I hope it might be useful to some of you. Have a look, and let me know of any comments/suggestions.
Perl's "smart match" operator came with Perl 5.10 in 2007 (as a matter of fact, it was released during the French Perl Workshop 2007, and this is where I also learned about this new feature). I
immediately thought : "Wow, this is great, it is going to dramatically change my way of programming!".
Unfortunately, our infrastructure at work still remained Perl 5.8 for several years, and by the time I was at last able to make use of smart match, I had nearly forgotten all its virtues. I have a
couple of lines of code with "given/when" statements, but that's mostly it.
However, after having recently attended the excellent class by Damian Conway on Advanced Perl Programming Techniques, smart match came back to my mind. I know, it's been criticized for being too
complex in some of its matching rules; but the basic rules are great and yield more readable and more flexible code.
Here are some examples that I'm refactoring right now :
List membership
Instead of
if ($data =~ /^(@{[join '|', @array]})$/) {...}
use List::MoreUtils qw/any/;
if (any {$data eq $_} @array) {...}
if ($data ~~ @array) {...}
No need to test for undef
Instead of
my $string = $hashref->{some_field} || '';
if ($string eq 'foo') {...}
if ($hashref->{some_field} ~~ 'foo') {...}
No need to test for numish
Instead of
use Scalar::Util qw/looks_like_number/;
my $both_nums = looks_like_number($x)
&& looks_like_number($y);
if ($both_nums ? $x == $y : $x eq $y) {...}
if ($x ~~ $y) {...}
For info, the slides of my talks at the French Perl Workshop 2012 are now online :
|
{"url":"http://ldami.blogspot.fr/","timestamp":"2014-04-19T17:02:33Z","content_type":null,"content_length":"68084","record_id":"<urn:uuid:b2037057-b16d-42e9-aa1e-f4595bbbbbce>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00371-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Numbers which are not exactly calculable by any method.
Is there any way to prove that a real number exists which is not calculable by any method?
For example, you could have known irrational and/or transcendental numbers like e or π. You could have e^x where x is any calculable number, whether it be by infinite series with hyperbolic/normal
trigonometric functions and an infinite number of random terms, and use that as the upper limit for an integral of whatever other type of function or combination of functions.
Is there a possible way to prove that there exists any real number that is not equal to any combination of functions (apart from 0/0)?
It's easy to prove such number
(the site micromass links to shows that the set of all "computable numbers" is countable while the set of all real numbers is uncountable. In a very real sense "almost all number are not computable".
I don't believe, however, it is possible to give an
of such a number- in fact the very naming of such a number would probably be a description of how to compute it!
|
{"url":"http://www.physicsforums.com/showthread.php?p=3900823","timestamp":"2014-04-17T03:51:19Z","content_type":null,"content_length":"35991","record_id":"<urn:uuid:63dfea1d-a467-4159-951b-bdaf0d06a746>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00363-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Summary: Congruence.
The Congruence Axiom.(Cong) There is a group
of permutations of the set of points such that
M B
and such that if (Hi, Ri), i = 1, 2, are flags then there is one and only one M such that
[H1] = H2 and [R1] = R2.
We will call the members of M motions
Definition. Congruence of sets of points. We say the sets X and Y of points are congruent and
X Y
if there exists M such that [X] = Y in which case we say X can be moved to Y (by the motion
Theorem. We have
(i) If X is a set of points then X X.
(ii) If X and Y are sets of points and X Y then Y X.
(iii) If X, Y and Z are sets of points, X Y and Y Z then X Z.
Remark. Equivalently, congruence is an equivalence relation on the family of sets of points.
Proof. This follows directly from the fact that the set of motions is a group.
|
{"url":"http://www.osti.gov/eprints/topicpages/documents/record/145/3039925.html","timestamp":"2014-04-21T10:05:46Z","content_type":null,"content_length":"7861","record_id":"<urn:uuid:d688dd90-f381-4146-b5ca-ffd6410ed615>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00476-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Spectrum of a relativistic accretion
disk around a spinning black hole
Spectrum of a relativistic accretion disk around a spinning black hole
BHSPEC is a spectral fitting model which has been implemented as a additive table model for the Xspec Spectral Fitting Package. It consists of a large grid of tabulated artificial spectra which are
used to fit data via interpolation. Currently, the model is comprised of multiple files which cover different, but overlapping, regions of parameter space.
Our method for generating the artificial spectra is described in detail in Davis et al. (2005, ApJ, 621, 372) and Davis & Hubeny (submitted to ApJ). The spectra are based on fully relativistic
accretion disk models similar to the KERRBB model already implemented in Xspec. The main difference between KERRBB and BHSPEC is the treatment of the emission at disk surface. KERRBB utilizes a
color-corrected blackbody prescription with either isotropic emission or an analytic limb darkening profile. BHSPEC uses stellar atmospheres-like calculations of disk annuli to self-consistently
calculate the vertical structure and radiative transfer. The spectra from these calculations are tabulated in the parameters of interest and then used to calculate the spectra at a given radius via
The BHSPEC model is parameterized as follows: (note that parameters and their ordering may vary slightly from file-to-file)
• log(M/Msun), the mass of the black hole in units of solar masses.
• log(L/Ledd), the luminosity of the accretion disk as a fraction of the Eddington luminosity for completely ionized H (Ledd=1.3*10^38 (M/Msun) erg/s. Here L=eta*Mdot*c^2 where the efficiency, eta,
is determined by the position of the last stable circular orbit in the Kerr spacetime.
• cos(i), the cosine of the inclination of the accretion disk relative to the plane of the sky.
• a/M, the dimensionless spin of the black hole.
• log(alpha), the dimensionless variable in the Shakura & Sunyaev (1973) prescription for the midplane accretion stress tau_{r phi}=alpha*P, where P is the total pressure at the disk midplane.
Currently the model is tabulated for only two values of alpha: 0.1 and 0.01. We recommend keeping this parameter fixed at log(alpha)=-1 or -2, the values for which the spectra were tabulated.
• K, the model normalization. K is related to the source distance D via K=(10 kpc/D)^2
The table model files available are bhspec.fits.gz and bhspec2.fits.gz. One first file considers black hole spins (a/M) extending from 0 - 0.8 for alpha=0.1 and 0.01 (alpha is the standard 'anamalous
viscocity' parameter). The other file covers a larger range of spin (a/M from 0 - 0.99), but only for alpha=0.01. This partitioning is needed because the disk annuli of high spin, low alpha models do
not converge. Keith Arnaud, Lab. for High Energy Astrophysics, NASA/Goddard Space Flight Center
Xspec Home Page
HEASARC Home | Observatories | Archive | Calibration | Software | Tools | Students/Teachers/Public
Last modified: Thursday, 09-Feb-2006 10:28:52 EST
|
{"url":"http://heasarc.gsfc.nasa.gov/docs/software/lheasoft/xanadu/xspec/models/bhspec.html","timestamp":"2014-04-25T09:52:03Z","content_type":null,"content_length":"19757","record_id":"<urn:uuid:024b7052-5695-4fc6-ba60-7f045ce89ce5>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00309-ip-10-147-4-33.ec2.internal.warc.gz"}
|
MATLAB:Please Help with My Fixed Point Iteration Program
April 18th 2010, 09:30 AM
MATLAB:Please Help with My Fixed Point Iteration Program
Hi all, I am trying to write a Fixed Point Iteration program but when I enter in the command line it kept giving me an error message. Can you please look over my program and tell me what might
have gone wrong? Thank you very much!
First, I defined a function in a new M-File:
function y=FUN3(x)
Then, I opened up another M-File and wrote the program:
function Xs = FixedIterationRoot3(FUN3,Xest,imax)
syms x FUN3 FunDer3
FunDer3 = diff(FUN3) % To find g'(x)
if abs(subs(FunDer3,x,Xest))>=1 % to check if g'(x) diverges
fprintf('not valid')
for i=2:imax
When I enter in the command line:
>> xSolutions=FixedIterationRoot3('FUN3',-3,10)
it gave me error message:
syms x FUN3 FunDer3
??? Output argument "Xs" (and maybe others) not assigned during call
"C:\Users\Jane\Documents\MATLAB\FixedIterationRoot 3.m>FixedIterationRoot3".
Can you please help me fix something that is wrong or suggest an alternate method for Fixed Point Iteration?
I GREATLY APPRECIATE YOUR TIME!
April 18th 2010, 02:14 PM
Hi all, I am trying to write a Fixed Point Iteration program but when I enter in the command line it kept giving me an error message. Can you please look over my program and tell me what might
have gone wrong? Thank you very much!
First, I defined a function in a new M-File:
function y=FUN3(x)
Then, I opened up another M-File and wrote the program:
function Xs = FixedIterationRoot3(FUN3,Xest,imax)
syms x FUN3 FunDer3
FunDer3 = diff(FUN3) % To find g'(x)
if abs(subs(FunDer3,x,Xest))>=1 % to check if g'(x) diverges
fprintf('not valid')
for i=2:imax
When I enter in the command line:
>> xSolutions=FixedIterationRoot3('FUN3',-3,10)
it gave me error message:
syms x FUN3 FunDer3
??? Output argument "Xs" (and maybe others) not assigned during call
"C:\Users\Jane\Documents\MATLAB\FixedIterationRoot 3.m>FixedIterationRoot3".
Can you please help me fix something that is wrong or suggest an alternate method for Fixed Point Iteration?
I GREATLY APPRECIATE YOUR TIME!
Do not use FUN3 for both a symbolic object and a .m function.
April 18th 2010, 02:31 PM
I need help with that problem too! ECM 6?
April 18th 2010, 02:33 PM
April 18th 2010, 02:54 PM
so I fixed my program:
function Xs = FixedIterationRoot3(FUN3,Xest,imax)
syms x
FunDer3 = diff('FUN3');
if abs(subs(FunDer3,x,Xest))>=1
fprintf('this function does not converge')
for i=2:imax
Apparently, this gives me the same error. I've also noticed that if I do this :
syms x
FunDer3 = diff('FUN3');
it doesn't give me the right derivative, it gives me the number 1. How can I fix this to get the symbolic derivative?
|
{"url":"http://mathhelpforum.com/math-software/139859-matlab-please-help-my-fixed-point-iteration-program-print.html","timestamp":"2014-04-16T11:08:55Z","content_type":null,"content_length":"9297","record_id":"<urn:uuid:b6f81eaa-b28c-4ef6-9675-153c928274ef>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00555-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Continuous Random Variables: Lab I
1. For each part below, use a complete sentence to comment on how the value obtained from the experimental data (see part III) compares to the theoretical value you expected from the distribution in
section II. (How it is reflected in the corresponding data. Be specific!)
a. minimum value:
b. first quartile:
c. median:
d. third quartile
e. maximum value:
f. width of IQR:
V - Plotting the Data and Interpreting the Graphs.
1. What does the probability graph for the theoretical distribution look like? Draw it here and label the axis.
2. Use Minitab to construct a histogram a using 5 bars and density as the y-axis value. Be sure to attach the graphs to this lab.
a. Describe the shape of the histogram. Use 2 - 3 complete sentences. (Keep it simple. Does the graph go straight across, does it have a V shape, does it have a hump in the middle or at either end,
etc.? One way to help you determine the shape is to roughly draw a smooth curve through the top of the bars.)
b. How does this histogram compare to the graph of the theoretical uniform distribution? Draw the horizontal line which represents the theoretical distribution on the histogram for ease of
comparison. Be sure to use 2 – 3 complete sentences.
3. Draw the box plot for the theoretical distribution and label the axis.
4. Construct a box plot of the experimental data using Minitab and attach the graph.
a. Do you notice any potential outliers? _________
If so, which values are they? _________
b. Numerically justify your answer using the appropriate formulas.
c. How does this plot compare to the box plot of the theoretical uniform distribution? Be sure to use 2 – 3 complete sentences.
|
{"url":"http://cnx.org/content/m17348/1.2/","timestamp":"2014-04-21T02:30:48Z","content_type":null,"content_length":"41878","record_id":"<urn:uuid:e62318fe-1339-4005-a194-1f75355c6f64>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00098-ip-10-147-4-33.ec2.internal.warc.gz"}
|
What Math Do People Who Want to Become Doctors Need to Take in High School & College? | The Classroom | Synonym
Students who want to become doctors should take pre-calculus, AP calculus and calculus 1 and 2 in high school and college.
Students who are interested in becoming doctors have to complete a four-year bachelor's degree, a four-year medical school degree, and two to seven years of internship and residency work depending on
the kind of specialty they are interested in pursuing as physicians. While students cannot major in pre-med, and have to choose a concrete major such as biology or chemistry for their bachelor's
degrees, they do have to fulfill certain pre-med requirements prior to enrolling in medical school.
Pre-Med Requirements
Students interested in becoming doctors while they are still in high school often choose to take classes that will help them succeed in pre-med courses in college. Requirements vary, but most medical
schools require prospective applicants to have one year of general biology with a laboratory, one year each of general and organic chemistry with laboratories, one year of general physics with
laboratory, one year of calculus and a semester of microbiology. High school students can prepare for this coursework by taking high school-level equivalents of these courses. In particular, many
high schools offer students introductory and advanced placement courses in biology, chemistry, physics and calculus.
General High School Math
Students who want to become doctors should take all available math courses in high school. In chronological order, most high schools and many colleges offer students courses in pre-algebra, algebra
1, algebra 2 and trigonometry or pre-calculus. Each of these courses require students to have a passing grade in the previous courses. In other words, the prerequisites for trigonometry or
pre-calculus are pre-algebra and algebra 1 and 2. These courses introduce students to topics such as factoring, polynomials, functions and graphing, and give them a stronger foundation for the study
of calculus.
AP Calculus
Many high schools also offer students the opportunity to take calculus. Calculus is the study of the rate of change of functions and high school calculus courses are typically divided into advanced
placement calculus AB and advanced placement calculus BC. Advanced placement courses are considered to be equivalent to college courses, and students can get college credit for their high school
coursework if they pass the advanced placement exam in the class, typically offered at the end of the year. Taking one or both of these courses in high school gives students a thorough introduction
to this difficult subject, even if they do not end up taking or passing the advanced placement exam.
Calculus 1 and 2
Most medical schools expect students to take one or two semesters of calculus. Most students who take advanced placement calculus in high school still take calculus in college to get a better
understanding of the subject. However, advanced placement calculus is not a prerequisite for calculus 1, and students who take pre-calculus in high school can go straight into calculus 1 in college,
if they pass the placement exam. Both calculus 1 and 2 cover topics such as derivatives and integrals in one dimension, also known as single variable calculus. Students interested in becoming doctors
typically complete these courses in the first two years of their undergraduate careers.
Style Your World With Color
Photo Credits
• Pixland/Pixland/Getty Images
|
{"url":"http://classroom.synonym.com/math-people-want-become-doctors-need-high-school-college-3609.html","timestamp":"2014-04-20T18:39:53Z","content_type":null,"content_length":"34966","record_id":"<urn:uuid:da5c8e33-ea18-4305-92a3-2bf3f437f8ba>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00322-ip-10-147-4-33.ec2.internal.warc.gz"}
|
[plt-scheme] Y Combinator Exercise
From: John Clements (clements at brinckerhoff.org)
Date: Wed Nov 16 18:20:40 EST 2005
On Nov 16, 2005, at 12:59 PM, Matthias Felleisen wrote:
> In the monograph on reduction semantics [Felleisen & Flatt], we had
> some sample calculations using the Y combinator along the lines of
> what John points out. Use reductions to show ((Y mk-!) 2) is 2.
> Then show them that the fixpoint equation helps. Reprove the first
> one and wow it all gets easier. Then show them that you need a
> calculus (=) and that reductions (->) are no longer enough for
> this. Puzzle: can you design a cbv Y-like combinator that doesn't
> use 'backwards' reductions. -- Matthias
Okay, now I'm confused. Isn't cbv Y _itself_ a Y-like combinator
that doesn't use 'backwards' reductions?
-------------- next part --------------
A non-text attachment was scrubbed...
Name: smime.p7s
Type: application/pkcs7-signature
Size: 2430 bytes
Desc: not available
URL: <http://lists.racket-lang.org/users/archive/attachments/20051116/5884c507/attachment.p7s>
Posted on the users mailing list.
|
{"url":"http://lists.racket-lang.org/users/archive/2005-November/010385.html","timestamp":"2014-04-21T08:15:05Z","content_type":null,"content_length":"6341","record_id":"<urn:uuid:2c38efe0-cd0b-493b-b8e3-d70ca872da43>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00019-ip-10-147-4-33.ec2.internal.warc.gz"}
|
HowStuffWorks "What are amps, watts, volts and ohms?"
The three most basic units in electricity are voltage (V), current (I, uppercase "i") and resistance (r). Voltage is measured in volts, current is measured in amps and resistance is measured in
A neat analogy to help understand these terms is a system of plumbing pipes. The voltage is equivalent to the water pressure, the current is equivalent to the flow rate, and the resistance is like
the pipe size.
There is a basic equation in electrical engineering that states how the three terms relate. It says that the current is equal to the voltage divided by the resistance.
Let's see how this relation applies to the plumbing system. Let's say you have a tank of pressurized water connected to a hose that you are using to water the garden.
What happens if you increase the pressure in the tank? You probably can guess that this makes more water come out of the hose. The same is true of an electrical system: Increasing the voltage will
make more current flow.
Let's say you increase the diameter of the hose and all of the fittings to the tank. You probably guessed that this also makes more water come out of the hose. This is like decreasing the resistance
in an electrical system, which increases the current flow.
Electrical power is measured in watts. In an electrical system power (P) is equal to the voltage multiplied by the current.
The water analogy still applies. Take a hose and point it at a waterwheel like the ones that were used to turn grinding stones in watermills. You can increase the power generated by the waterwheel in
two ways. If you increase the pressure of the water coming out of the hose, it hits the waterwheel with a lot more force and the wheel turns faster, generating more power. If you increase the flow
rate, the waterwheel turns faster because of the weight of the extra water hitting it.
On the next page, we'll talk more about electrical efficiency.
|
{"url":"http://science.howstuffworks.com/environmental/energy/question501.htm","timestamp":"2014-04-16T10:10:24Z","content_type":null,"content_length":"115831","record_id":"<urn:uuid:16a4acec-0ebb-4cad-90bc-116c8af1e655>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00030-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Mechanics Question
April 15th 2012, 09:49 AM #1
Apr 2012
Great Britain
Mechanics Question
A body of mass 5kg is attached to one end of a nonlinear spring which does not
obey Hooke’s law. The other end of the spring is fixed, and no other forces act on the
particle, which moves in one dimension. The force exerted by the spring is given in
newtons by
F = 8tan^-1(x/3)ex
where x is the extension of the spring beyond its natural length measured in metres,
and ex is a unit vector pointing in the direction of increasing x.
The spring is extended by a distance of 4 m; the body is held at rest for a moment
and then released. By considering the potential V (x) corresponding to the force F, or
otherwise, determine the speed at which the body is moving when it reaches the point
x = 0.
[Hint: integration by parts may be useful in order to determine V (x).]
ex is really e and a small x slightly below the e but i couldnt figure out how to insert here.
I havent a clue where to start nor do any of my classmates! Help!
Last edited by ly667; April 15th 2012 at 09:58 AM.
Re: Mechanics Question
To get started find x when mass hangs freely We know in this position F=5g So solve 5g=8tan^-1(x/3)
Re: Mechanics Question
A body of mass 5kg is attached to one end of a nonlinear spring which does not
obey Hooke’s law. The other end of the spring is fixed, and no other forces act on the
particle, which moves in one dimension. The force exerted by the spring is given in
newtons by
F = 8tan^-1(x/3)ex
where x is the extension of the spring beyond its natural length measured in metres,
and ex is a unit vector pointing in the direction of increasing x.
The spring is extended by a distance of 4 m; the body is held at rest for a moment
and then released. By considering the potential V (x) corresponding to the force F, or
otherwise, determine the speed at which the body is moving when it reaches the point
x = 0.
[Hint: integration by parts may be useful in order to determine V (x).]
ex is really e and a small x slightly below the e but i couldnt figure out how to insert here.
I havent a clue where to start nor do any of my classmates! Help!
you sure the spring force equation is not $F = -8 \arctan\left(\frac{x}{3}\right)$ ? (force should be opposite to displacement)
$\frac{dV}{dx} = -F$
$V = 8 \int \arctan\left(\frac{x}{3}\right) \, dx$
integrating by parts ...
$u = \arctan\left(\frac{x}{3}\right) , dv = dx$
$du = \frac{3}{9+x^2} \, dx , v = x$
$8 \int \arctan\left(\frac{x}{3}\right) \, dx = 8\left[x \cdot \arctan\left(\frac{x}{3}\right) - \int \frac{3x}{9+x^2} \, dx \right]$
$V = 8\left[x \cdot \arctan\left(\frac{x}{3}\right) - \frac{3}{2} \log(9+x^2) + C \right]$
since $V(0) = 0$ ...
$V = 8\left[x \cdot \arctan\left(\frac{x}{3}\right) - \frac{3}{2} \log(9+x^2) + \frac{3}{2} \log(9) \right]$
$V = 8\left[x \cdot \arctan\left(\frac{x}{3}\right) + \frac{3}{2} \log \left(\frac{9}{9+x^2} \right) \right]$
set $V(4) = \frac{1}{2} mv^2$ to find the speed of the mass as it passes equilibrium
Re: Mechanics Question
I was taking it to be moving vertically so the forces were F upwards and 5g downwards. On reading it again I realize it's moving along an x axis
Re: Mechanics Question
No it is definitely just tan!
Re: Mechanics Question
Re: Mechanics Question
Sorry I thought you meant in me typing out the question. What about the other value in that given equation, the ex?
Re: Mechanics Question
$e_x$ is a unit vector in the direction of the spring's displacement. The negative sign in front of the force expression makes the force vector opposite in direction to the given unit vector.
April 15th 2012, 10:21 AM #2
Senior Member
Mar 2012
Sheffield England
April 15th 2012, 10:42 AM #3
April 15th 2012, 11:03 AM #4
Senior Member
Mar 2012
Sheffield England
April 15th 2012, 02:23 PM #5
Apr 2012
Great Britain
April 15th 2012, 02:48 PM #6
April 15th 2012, 02:59 PM #7
Apr 2012
Great Britain
April 15th 2012, 03:10 PM #8
Apr 2012
Great Britain
April 15th 2012, 03:54 PM #9
|
{"url":"http://mathhelpforum.com/advanced-math-topics/197316-mechanics-question.html","timestamp":"2014-04-18T12:32:15Z","content_type":null,"content_length":"59815","record_id":"<urn:uuid:68e232eb-79d1-4545-977b-c34d197a53b1>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00109-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Russell's Real Paradox: The Wise Man Is a Fool
Review of Bertrand Russell: A Life
By Caroline Moorehead
Viking, New York, 1993, 596 pages, $30.00
Writing to Lucy Donnelly in 1940 after an anxious moment, Bertrand Russell confessed: I used, when excited, to calm myself by reciting the three factors of a^3 + b^3 + c^3 — 3abc; I must revert to
this practice. I find it more effective than thoughts of the Ice Age or the goodness of God.
Reading this confession in Moorehead's excellent biography, I wondered just where Russell had picked up these factors. Was it as a young student, cramming for admission to Cambridge? I wondered
whether he knew that this expression is the determinant of the 3 x 3 circulant matrix whose first row is [a, b, c]? And did he know that the factors, linear in a, b, c are the three eigenvalues of
the matrix? Did he know that this factorization was historically the seed that, watered by Frobenius, grew into the great subject of group representation theory?
I conjecture that he did not. To Russell, the algebraic expression was a mantra. He saw mathematics as the stabilizing force in the universe; it was the one place where absolute certainty reigned. In
search of this certainty, groping for it, he said, as one might grope for religious faith, he devoted the first half of a very long life to an attempt to establish the identity of mathematics and
I first heard of Russell as an undergraduate. I did a chapter of Principia Mathematica, his masterwork, written (1910-1913) with Alfred North Whitehead, as a reading period assignment in a course in
mathematical logic. At that time Russell was a celebrity, front-page news, having left the dots and epsilons and the "if-thens" of logic far behind. He had been appointed to a professorship of
philosophy at CCNY in 1940, and almost immediately a charge of immorality was laid against him. It hit the papers. I, together with most undergraduates, sided with John Dewey, Albert Einstein, and
Charlie Chaplin as they rushed in to defend Russell's right to teach epistemology.
When I learned more about the man, I admired and tried to emulate his courage to stand alone, his compassion for humanity, his sharp wit, his graceful literary style and willingness to write
comprehensibly. I admired the fact that he had the guts to quit working on a topic that had exhausted him.
To readers of SIAM News, Russell is probably best known as the author (1901) of Russell's paradox: What is the existential status of the set of sets that are not members of themselves? Contradiction
and inconsistency lurk in the shadow of this innocent construction. Russell bit his nails over it for years. In an attempt to deal with it, he came up with his Theory of Types, a cluttered
arrangement that satisfied few people. Even today, people tinker with the Zermelo-Fraenkel axioms of set theory in an attempt to deal with the quandary.
But let me make up my own Who's Who entry.
Bertrand Arthur William Russell, 3rd Earl Russell (1872-1970), was the grandson of Lord John Russell, twice prime minister of England. He was a mathematician, a logician, an agnostic, and a sometime
Cambridge don. Abandoning mathematics in 1914, he became a philosopher, a pacifist, an advocate of free love, a social activist, a soapbox orator, a pamphleteer, a theoretician of education, and the
operator of the Beacon Hill School for children (1927-32). He wrote dozens of books, ranging from deep technical stuff to easily understood popularizations of science, philosophy, and politics, and
treatises on how to live one's life sensibly. When he was jailed in 1918 for pacifism, my wife's aunt brought him fresh peaches. In 1940, the threat of his dismissal from the faculty at CCNY on the
immorality charge became a cause celebre.
In 1967, he masterminded the International War Crimes Tribunal that met in Stockholm and "tried" Lyndon Johnson and other Americans for their brutal treatment of the Vietnamese. He was named a Fellow
of the Royal Society in 1908 and received the Order of Merit in 1948 and the Nobel Prize for Literature in 1950.
So much for the externals. As regards the internals, I believe that no one has summed them up better than Beatrice Webb did in 1902:
Bertrand Russell's nature is pathetic in its subtle absoluteness: faith in an absolute logic, absolute ethic, absolute beauty and all of the most refined and rarified type. . . the uncompromising
way in which he applies these frightens me for his future and for the future of those who love him and whom he loves. Compromise, mitigation, mixed motive, phases of health of body and mind,
qualified statements, uncertain feelings, all seem unknown to him.
Russell was 30 when Beatrice Webb wrote these words. He lived to be 98. His life was strewn with hearts he broke and relationships he terminated. Former wives, children, friends, lovers were "dropped
down an oubliette," someone said, and left there to rot. Webb's description, I think, remained valid throughout his life.
We have in Russell a verification of the comic, prototypical mathematician of the movies. If one puts credence in the theories of hemispheric specialization of the brain, one could say confidently
that Russell was half-brained: The left hemisphere worked overtime producing symbols by the bushel, while the right hemisphere seemed to be atrophied. The famous art critic and historian Bernard
Berenson, his sometime brother-in-law, remarked on how little the visual image meant to Russell. Aldous Huxley stressed this imbalance in his novel Crome Yellow (1922). Huxley's character Scogan, a
bitter caricature of Russell, says of himself:
While I have a certain amount of intelligence, I have no aesthetic sense; while I possess the mathematical faculty, I am wholly without the religious emotion. . . . Life would be richer, warmer,
brighter, altogether more amusing, if I could feel them. . . . I must have gone on looking at pictures for ten years before I would honestly admit to myself that these merely bored me.
Most people are fusions of contradictions, but surely Russell, the man who sought a contradiction-free mathematics, was himself the "Rolls-Royce" of contradictions. The extent to which this is true,
as emphasized by Caroline Moorehead in her book, has become crystal clear with the availability of archival material.
Russell was a pacifist who hated other pacifists for thinking that human nature was essentially benign. He thought at an early age that "human actions like planetary motions could be calculated if we
had sufficient skill" but resisted being "tracked" himself: "I don't think I could stand the academic atmosphere now [1919], having tasted freedom and the joys of a dangerous life."
While he claimed that the "world of pure reason knows no compromise, no practical limitations," Russell exhibited, at least in the minds of many people who seemed to be reasonable, the chaos of
unreason. According to a friend of mine who tried to make a video profile of him at 92, Russell asserted that reason had its limitations: "Beware of rational argument," he said; "you need only one
false premise in order to prove anything you please by logic."
He was a skeptic who claimed to have found the truth. As late as 1937, when he completed a second edition of Principles of Mathematics (to be distinguished from Principia Mathematica), he still
asserted that logic and mathematics were identical. (In sympathy, but not in defense, I agree that it must be a bitter experience to have reached three score and ten years only to find that one's
lifelong vision has collapsed.)
Russell was an aristocrat who was frequently broke. He was a liberal who expressed his ethnic and class prejudices openly and made no attempt to rid himself of them. He loved humanity in the abstract
and championed individual rights, but he had absolutely no idea that real individuals had feelings.
He wrote books on how to live properly, although he totally mucked up three marriages. (The documents relating to his divorce from Dora Black, his second wife, occupy five large boxes in the
archives.) His last and fourth marriage, undertaken at the age of 80, seems to have fared much better. He hated America and all it represented, but at one point talked of settling here permanently.
He excoriated American professors for their shallowness and then sent his children to American universities. He milked America for its daughters, its accolades, its salary offers, its royalties, and
for the security it afforded him from Hitler's army from 1938 to 1944. In political cartoons, he was depicted as the wisest man on earth who was at the same time the complete fool.
Russell's impact on mathematics was enormous. Principia Mathematica, a key technical advance between the work of Frege and Gödel, led later to much of the field of theoretical computer science,
including AI. It established that most--possibly all--mathematical proofs could be expressed as the formal manipulation of symbols, verifiable, at least in theory, automatically. The only way to
establish this was actually to have done it for some large collection of proofs. Gödel's incompleteness theorem, for example, would hardly have carried much punch without a previous demonstration
that the techniques now shown to be incomplete seemed to suffice for all known mathematical proofs.
By establishing that many mathematical concepts can be reduced to set theory, Principia has had a great impact on the presentation of mathematics. The tendency of mathematical writers to give
definitions in the often questionable form "a garble is a six-tuple $" is surely traceable to Principia.
Caroline Moorehead wisely avoided deep explanations of Russell's work in logic and in philosophy. Yet, from reading her book, I gained confirmation of a long-held feeling that mathematics derives
from specific people, and that the particular psychological flavor of a particular individual can infuse that individual's mathematical creations. My own antiplatonic, antilogicist soul, nourished
independently of Gödel's theorem, laughed heartily at Moorehead's reproduction of a proof sheet from Principia:
"Additional Errata to Volume I," and in the margins, in Russell's handwriting, errata to the errata to the errata.
In the social line, Russell's so-called immoralities have regrettably become standard operating procedures in the Western world, and legal practice, at least in the U.S., has favored individual
freedom at the expense of social cohesion. What with computers, social and economic algorithms, and the counterclaim that happiness comes through the lack of constraints, we are living for better and
for worse, with wisdom and with folly, in a world that Russell helped make.
Philip J. Davis, emeritus professor of applied mathematics at Brown University, is an independent writer, scholar, and lecturer based in Providence, Rhode Island. His e-mail address is
Reprinted from SIAM NEWS Volume 27-6, July 1994 (C) 1994 by Society for Industrial and Applied Mathematics All rights reserved.
|
{"url":"http://cs.nyu.edu/faculty/davise/personal/PJDBib/Russell.html","timestamp":"2014-04-16T19:56:02Z","content_type":null,"content_length":"11701","record_id":"<urn:uuid:e0dd7925-a3df-4db5-993d-0ea13c8c0633>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00526-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Distance along the surface of an oval between two points
March 14th 2009, 02:12 PM #1
Mar 2009
Distance along the surface of an oval between two points
Hi All...
I am looking for a formula to calculate the distance between 2 points on the outside of an oval.
I made a small example image.
I have the X,Y value of both A and B.. so in this example they are:
A.x = -1.9
A.y = -2.8
B.x = 2.0
B.y = 2.5
I also have the radius of the oval... we can call R and equals 9.42 (should note I made that up, but I know that the radius of a 6x6 circle)
I am trying to find a way to calculate the distance from A to B clock wise along the surface of the oval, and not necessarily the shorted distance.
I found a formula for calculating the distance for two lat and log and tried to adapt that, but it didn't work... here is what I had:
acos(sin(A.x) * sin(B.x) + cos(A.x) * cos(B.x) * cos(B.y-A.y)) * R
Honestly.. this type of math is above my head, but I really need the answer so I am giving it all I have... been reading up on cos and sin and know its like the x and y of my A and B... but
everything I find talks about how to calculate out the angle from the center to get the x and y.. which I already have ....
Hope someone can help.. Please
Last edited by skidx; March 14th 2009 at 04:32 PM.
It's more complicated than you might think...
The formula for the length of part (L) of a curve is $L = \int_a^b \sqrt{1+\left(\frac{dy}{dx}\right)^2}\,dx$
The equation of an oval, more correctly called an ellipse, is $\frac{x^2}{a^2} + \frac{y^2}{b^2} = 1$
Rearranging gives $y = \pm \sqrt{b^2 - \frac{b^2x^2}{a^2}}$
This becomes difficult to work with as an ellipse is not the graph of a function. If one point is below the x-axis and the other above then you have to use both square roots separately, because
an ellipse is the joint graph of those two square roots above...
March 14th 2009, 10:38 PM #2
Dec 2008
Auckland, New Zealand
|
{"url":"http://mathhelpforum.com/geometry/78687-distance-along-surface-oval-between-two-points.html","timestamp":"2014-04-18T01:55:53Z","content_type":null,"content_length":"33960","record_id":"<urn:uuid:19f0d044-a254-4e29-94d9-26857cd0951a>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00028-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Posts by
Total # Posts: 10
A polynomial f(x) of degree 3 leaves a remainder of 6 when divided by (x+1) and a remainder of 12 when divided by (x-2). Given that the coefficient of x^3 is 2 and (x+2) is the factor of the
polynomial. Show that f(x)= 2x^3+3x^2-5x+2
which of the following statement is correct? a)multiple alleles for the same gene are always co dominant. b)when there are only two alleles of a particular gene,one alleles is always recessive to the
other. c)in different indivual, the identities of dominant and recesive allel...
need math on 3. From the balanced Equation (6) and the results from step # 2 calculate the number of moles of acid titrated. (6) is CH3 COOH (aq) NaOH (aq) → Na+(aq) + CH3COO- (aq) + H2O results from
#2 is 7.2*10^-3 L
from Balanced CaCO3+2HCL-CaCL2+H2O+CO2 and 7.2*10^-3 L Calculate moles titrated
From the balanced equation for tums and .0072mL calculate no. moles titrated.
How much gravitational force is exerted on a cement block that has a mass of 18 kg?
A person 12,000 km from the center of earth has a mass of 70 kg and weighs 182 N. What is the acceleration due to gravity at this altitude?
Is the answer to a football player has a mass of 80 kg equals 176N in weight ?
What is the mass of an automobile that weighs 9800 N
4.8*10^-9 =x^2/.350
|
{"url":"http://www.jiskha.com/members/profile/posts.cgi?name=Everett","timestamp":"2014-04-21T11:24:48Z","content_type":null,"content_length":"7695","record_id":"<urn:uuid:e04be27b-d2c7-4422-8b83-6cbad754c0d7>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00162-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Distributed Feedback Semiconductor Lasers
Home > eBooks
> Distributed Feedback Semiconductor Lasers
Distributed Feedback Semiconductor Lasers
MyBook is a cheap paperback edition of the original book and will be sold at uniform, low price.
Buy print edition
□ + Show details -
Hide details
□ + Show details -
Hide details
□ p. 1 –36 (36)
This chapter has set the scene for the operation and fabrication of semiconductor lasers, emphasising why the authors see lasers with grating reflectors as so important. The earliest lasers
were examined, showing that a key requirement is to confine the photons and electrons to the same physical region. A geometrical factor Γ, known as the confinement factor, expresses the
degree to which this has achieved. The simplest requirements for lasing were put forward so that the reader understands the importance, just as in electronic feedback oscillators, of the
round-trip gain and phase in achieving a stable oscillation. The importance of single-mode lasers for modern optical communication along silica fibres over many tens of kilometres was noted.
A variety of different types of laser were outlined, and in particular it was observed that to gain a stable single mode the favoured technology is to incorporate some form of Bragg grating.
The fundamentals of gratings were introduced and the elements of different laser designs with gratings buried in their structures were presented.
□ + Show details -
Hide details
□ p. 37 –75 (39)
This chapter discusses the spontaneous emission and optical gain in semiconductor lasers. Spontaneous emission which initiates lasing, optical gain which is essential to achieve lasing, and
other processes involved in lasing all use quantum processes at the level of single atoms and electrons within the lasing material. Without simplifications, the physics and mathematics
necessary to describe such atomic systems fully is too complicated, certainly for the level of this book. The simplification and approximations must, however, be done in a way which is
adapted to the requirements of semiconductor lasers, and the limitations must be understood. The results and implications of quantum physics are discussed here but there are only illustrative
outlines of any derivations.
□ + Show details -
Hide details
□ p. 76 –96 (21)
Principles of modelling guided waves is presented. The mechanisms and mathematics required for optical guiding in semiconductor-slab guides is discussed. Many modes are permitted and the
effective refractive index changes with increasing mode number from a value close to the largest value of refractive index down to a value close to the smallest value of refractive index of
the different layers.
□ + Show details -
Hide details
□ p. 97 –127 (31)
This chapter presents two basic classical methods of modelling mathematically the operation of semiconductor lasers and shows that they are, indeed, just different aspects of the same physics
of energy conservation and are wholly compatible with one another. The first method applies the concepts of photon/electron particle exchange where one discusses the rate of absorption and
emission of photons along with the rate of recombination of holes and electrons, ensuring at each stage that there is a detailed balance between photon generation and electron/hole
recombination leading to particle conservation and energy conservation. This is the standard rate equation approach which is robust and well researched but can be difficult to apply when
there are strong nonuniformities, and even more difficult when the phase of the electromagnetic field is important. For distributed-feedback lasers, both the phase of the field and
nonuniformities are important and so one has to abandon the photon-rate equation in favour of an approach based on interactions between electromagnetic fields and the electric dipoles in an
active optical medium. The electromagnetic field analysis is essential when the refractive index/permittivity changes periodically inside the laser. The chapter then concludes with an
analysis of the coupled-mode equations which determine how a fraction of the forward-travelling field is coupled into the reverse-travelling field with a medium which has a periodic
permittivity, i.e. the waveguide contains a Bragg grating.
□ + Show details -
Hide details
□ p. 128 –164 (37)
This chapter starts with a more physical derivation of the coupled-mode equations and moves on to new features such as the eigenmodes for the analytic solutions, the influence of grating
parameters on the dispersion diagram and the 'stopband' of nonpropagating frequencies.
□ + Show details -
Hide details
□ p. 165 –208 (44)
These more advanced problems of the static design of lasers are outlined here and the chapter ends with a discussion on some results of modelling the dynamic performance of DFB lasers,
considering problems associated with carrier transport into quantum wells. The dynamic performance of DFB lasers highlights yet further the problems that have already been met with a uniform
grating in a uniform DFB laser.
□ + Show details -
Hide details
□ p. 209 –251 (43)
This chapter presents the basics of large-signal time-domain modelling using the travelling-wave time/distance nonlinear partial differential equations of the laser. The field patterns and
electron densities in the laser are computed permitting electron-photon interactions to be visualised. The lasers are excited by random 'spontaneous noise' which leads to outputs which are
never precisely the same from run to run. However, the random output is not normally a major drawback but the time-domain modelling of low frequency noise can require excessively long times
of computation.
□ + Show details -
Hide details
□ p. 252 –303 (52)
This chapter discusses mathematical modeling as a key component in the design of devices and systems in optoelectronics projects. One thrust of the book has been to lay out the physical and
mathematical modelling techniques for the electromagnetic and electronic interactions within distributed feedback lasers to give better explanations and to facilitate new designs. Optical
systems where arrays of devices may be interconnected are one such important area and are discussed. Novel concepts such as that of the push-pull laser, tunable lasers with Bragg gratings,
and surface-emitting lasers are all candidate areas for applications of the modelling principles given in this text. The future for mathematical modelling, coupled with a sound physical under
standing, is bright, extensive and assured.
□ + Show details -
Hide details
□ p. 304 –312 (9)
This appendix provides a summary of plane-wave interactions at dielectric interfaces and a summary of special cases which are of importance in laser-diode design, in particular, indicating
one reason why TE modes are slightly more strongly reflected from a cleaved facet than TM modes.
□ + Show details -
Hide details
□ p. 313 –323 (11)
This appendix gives a systematic approach to providing a program for solving propagation, confinement factor and far-field emission when electromagnetic waves are guided by slab waveguides.
Almost arbitrary numbers of layers can be computed with complex refractive indices. Here, by examining a five-layer system, the way forward to a semi-automated method for an arbitrary number
of layers with complex refractive indices is demon strated. Multilayer guides are particularly relevant in discussing DFB lasers where the longitudinally periodic profile of the permittivity
has lateral variations across the waveguide which can be taken into account by having a series of layers with different patterns of permittivity, giving an average permittivity and an average
periodic component which can be calculated using the programs.
□ + Show details -
Hide details
□ p. 324 –328 (5)
The group velocity is the velocity of a wave packet (i.e. the velocity of energy) that is centred on a central carrier frequency f = ω/2π. The group velocity is different from the phase
velocity for two reasons: (i) the waveguide changes the propagation coefficient as a function of frequency; and (ii) the material permittivity changes with frequency so that the propagation
coefficient in the material changes with frequency. This appendix illustrates these two roles using a symmetric three-layer waveguide so that one can appreciate the physics through putting
numbers into an analytic solution.
□ + Show details -
Hide details
□ p. 329 –340 (12)
This appendix provides a more detailed account of the classic rate equation analyses of appropriately uniform lasers and shows the first-order effects of carrier transport into the active
quantum-well region, and the effects of spontaneous emission and gain saturation on damping of the photon-electron resonance. The appendix ends with large-signal rate equations showing the
influence of four key parame ters which shape the main features of the large-signal response.
□ + Show details -
Hide details
□ p. 341 –351 (11)
Electromagnetic energy exchange phenomena is presented. The rate equations derived from the particle balance are consistent with Maxwell's equations and the group velocity appears in the
travelling-field equations.The parameter χ is called the susceptibility with l+χ=ε[r] giving the relative permittivity. In electro-optic material, it is more correcdy considered as a tensor
so that the polarisation is not necessarily in exactly the same direction as the applied field, but in this work it is adequate to treat χ as a scalar. The random polarisation P[spont] gives
rise to spontaneous polarisation currents.
□ + Show details -
Hide details
□ p. 352 –356 (5)
This appendix gives the detailed calculations for finding the steady state fields in a uniform DFB laser with uniform gain, and thereby finding the threshold conditions. The results are
essential if one wishes to make comparisons with the numerical algorithms to estimate the accuracy of these algorithms. The appendix also provides a tutorial on the use of Pauli matrices for
coupled differential equations.
□ + Show details -
Hide details
□ p. 357 –365 (9)
The Kramers-Krδnig relationships provide fundamental rules for the relationships between the real and imaginary parts of the spectrum of any real physically realisable quantity when expressed
as a complex function of frequency. One cannot, for example, design the optical gain spectrum to be any desired function of frequency without discovering that the phase spectrum is then
closely prescribed. Frequently, such connections appear to be abstract and mathemat ically based. This appendix looks at three different ways of discovering these relationships which should
help the reader to understand the fundamental nature and physics of the Kramers-Kronig relationships. The appendix includes at the end a collection of approximations for the real refractive
indices of relevant laser materials.
□ + Show details -
Hide details
□ p. 366 –371 (6)
The relative-intensity noise (RIN) is an important quantity in determining whether lasers are acceptable for use in optical-commu nication systems. Its analytic study can require extensive
algebra, but in this appendix the emphasis is on the physical significance of RIN and simulations using time-domain modelling to estimate its value for DFB lasers.
□ + Show details -
Hide details
□ p. 372 –385 (14)
This appendix first considers thermal and quantum noise and the ideal signal-to-noise power ratio that can be measured using a 100%-efficient photodetector. An ideal optical amplifier
followed by an ideal detector is then considered. At the output of this ideal amplifier, the signal-to-noise-power ratio depends on the spontaneous emission and it is argued that this ratio
has to be the same as the signal to-noise-power ratio for the ideal direct detection of optical fields at the input.
□ + Show details -
Hide details
□ p. 386 –391 (6)
Laser optoelectronic packaging of DFB-laser chips is reported. Thermal properties, laser monitoring by photodiode, package-related backreflection and fiber coupling is presented.
□ + Show details -
Hide details
□ p. 392 –395 (4)
This appendix contains tables to provide summary of material and device parameters used for large-signal dynamic modelling of uniform-grating DFB laser.
□ + Show details -
Hide details
□ p. 399 –404 (6)
Programs written in MATLAB 4 that has been designed. Before obtaining the programs from the World Wide Web, users are recommended to create, within their main MATLAB directory, a subdirectory
(or folder), say laser, with further sub-directories dfb, diff, fabpero, fftexamp, filter, grating, slab, spontan. Within MATLAB, the effective initiation file is the M-file called matlabrc.m
and this should be backed up for future reference into a second file, say matlabrb.m.
□ + Show details -
Hide details
□ p. 405 (1)
Related content
|
{"url":"http://digital-library.theiet.org/content/books/cs/pbcs010e","timestamp":"2014-04-18T13:19:41Z","content_type":null,"content_length":"105911","record_id":"<urn:uuid:13fa33f4-1125-44d5-882d-0198af996a3b>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00411-ip-10-147-4-33.ec2.internal.warc.gz"}
|
More Rectangles and One Good Read
Jesus. I'd rather stick needles in my eyes than read another page of Fifty Shades. I'd heard all this buzz about it since summer, yet no one had mentioned just how awful this book is! Someone had
loaned me the set, so I was looking forward to reading them over break. But sentences like
I feel him. There.
I groan... how can I feel this there?
My inner goddess looks like someone snatched her ice cream.
... make me want to kick someone. Since when does the word THERE get italicized so damn often?!
I'm sorry, but I felt it was my PSA for the Holidays to mention this. However, I just started reading this
on Ramanujan, and I highly recommend it because thankfully it has restored my sanity and faith in the written word.
About them rectangles. Curmudgeon just posted this
Painter's Puzzle
yesterday on Christmas Day — what a nice gift for us!
A painting contractor knows that 12 painters could paint all of the school classrooms in 18 days.
They begin painting. After 6 days of work, though, 4 people were added to the team. When will the job be finished?
Students typically read this as a proportion problem: 12 painters can do it in 18 days, so 1 painter can do it in 1.5 days. Except... hmmm, no.
Edward Zaccaro uses what he calls the "Think One" strategy in his
to solve this type of problem. I guess mine is the same idea, except I draw
. Shocking. :)
This is how I normally teach my kids, and what responses I hope to [S:beat it out of:S] get from them.
Kids are terrified of fractions already
. Teaching them to solve this problem — or any of the
work problems
— using rational equations will only confirm how much they dread the blessed fractions. Sure, I'll get to the equations, but I just wouldn't
with them.
Another common problem — that I'll use rectangles to help my kids — goes something like this:
In a state with 10% sales tax, someone buys an article marked "50% discount.” When the price is worked out, does it matter if the tax is added first, and then the discount taken off, or if the
discount is taken off, and then the tax added?
(A quick search for this type of problem yielded this "best answer" that was a total
Tax, then discount:
Discount, then tax:
Heya, I'd love to send the Ramanujan book (from Amazon) to the first person to email me at fawnpnguyen at gmail dot com. [1:49, Robin S. from PA will be receiving the book!] [3:42, Elaine W. from VT
is also getting the book.]
So, there! Hope you're enjoying your break.
• December 26, 2012 4:00 PM Gordon wrote:
Hope you are enjoying your holiday, too! Thanks for the mind jogger...
Reply to this
1. December 26, 2012 4:30 PM fawnnguyen wrote:
Hey Gordon! How are you? Will I see you at the Red Rock this year? Bummer that it'll be a shorter conference.
Reply to this
• December 26, 2012 7:41 PM Robert Kaplinsky wrote:
I like your approach but I wonder how exactly this would roll out in your class. For example, I assume you give them the problem and let them have an opportunity to solve it on their own. I also
completely see how you are asking questions to help them explain the reasoning behind the rectangles. When do you transition from their solutions to your solution? Do you immediately show them
your way?
For me, my usual goal is to let them come up with different ways to solve the problem and then ask questions to facilitate a conversation around how the different methods are connected.
Unfortunately more often then not, students either come up with no ways or one way to solve it. Often I have an elegant way of approaching the problem, but I find that if I introduce my way,
students will often abandon their methodology for my own. Then for some students it becomes a game of, "If I wait long enough, Mr. Kaplinsky will just show me how to do it."
Reply to this
1. December 27, 2012 1:15 AM fawnnguyen wrote:
Hi Robert. Thank you for your thoughtful comments.
"I assume you give them the problem and let them have an opportunity to solve it on their own." Absolutely. I hope this is reflected in all the teaching posts that I share.
"When do you transition from their solutions to your solution?" I hope my solution will always show up dead last, if I show it at all. Like in the most recent lesson Equilateral Triangles, I
never shared with the kids any solution until after it was all over, and only to show how the different solutions compared. Last year, a 6th grader solved a puzzle in a much more elegant way
than I had. I want the kids to teach me and teach each other.
"... let them come up with different ways to solve the problem and then ask questions to facilitate a conversation around how the different methods are connected." The norm for my class is:
individual work, then small group work, then whole-group share. Again, I hope many of my posts show this. The structure I use for group work is similar to the "Five Practices" that I wrote
about here.
"Unfortunately more often then not, students either come up with no ways or one way to solve it." My algebra class does have these tendencies more than my geometry or 6th grade. When they
have no way of solving a problem, I think we have to ask ourselves all these questions: 1) was the problem level appropriate and does it have multiple ways for solving it, 2) was there enough
individual think/struggle time (because getting kids in small groups too early might invite social talk since no one really had anything to contribute), 3) what guiding questions can we ask
to help the groups along, what is that one piece of information we could steer them toward to get the ball rolling.
Normally when only one group has something, I'll ask this group to do a quick share, just a hint, and when this group knows that they can only give a hint or ask a guiding question, they love
it and try hard to craft a good question.
"Then for some students it becomes a game of..." Right, kids are very smart and will do this. But I'm way more stubborn. Funny, before break I did a problem with the geometry kids, but time
was running out, so I told them we needed to move forward and I needed to give them one piece of information (NOT the answer), four kids actually stepped outside because they didn't want to
hear this info yet. I really believe it's a culture that I've worked hard to develop, and this takes time. I hope the culture in my class include: "struggling is a sign of learning" and "if
it's easy, it's not worth doing" and "don't bother asking Mrs. Nguyen because she ain't talking."
The rectangles should be a new strategy for all of my 6th graders and about half of my 8th graders, so I go through this process via guided questioning, then lots of practice on their own,
and then some more. The toughest part for them is to figure out what size rectangle to draw, but it's a great learning process of testing and revising. When we return in January, I'll be
doing this type of problem with the kids, so I'll share how it goes.
Thanks again, Robert!
Reply to this
• December 27, 2012 7:59 AM Robert Kaplinsky wrote:
Thank you for the very detailed response Fawn. All of these are good ideas. I hadn't seen the Five Practices post and I will have to check out that book as well.
I appreciate your insight. Some of it validates what I am already doing and some of it forces me to reflect on what I am doing and why I am doing it.
Reply to this
• January 2, 2013 12:57 PM Sue VanHattum wrote:
My cousins were reading that book this summer. They seemed to love it, but I had no desire to go there. (Straight erotica not being of much interest to me...) I really enjoyed the Ramanujan book
That is the most realistic work problem I've ever seen. (Bookmarking.)
Reply to this
1. January 2, 2013 8:57 PM fawnnguyen wrote:
Holy cow, I'm loving the Ramanujan book!! It's an oddly and deeply moving biography. Almost halfway through, wish I'd started it earlier during break. Thank you, Sue.
Reply to this
Recent Comments
|
{"url":"http://fawnnguyen.com/2012/12/26/20121226.aspx?ref=rss","timestamp":"2014-04-16T13:33:25Z","content_type":null,"content_length":"57496","record_id":"<urn:uuid:7f87ebed-c1cd-48e8-bd85-3158def35cd5>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00035-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Matter, Energy, and Radiation Hydrodynamics
Section 3.0 Matter, Energy, and Radiation Hydrodynamics
Nuclear Weapons Frequently Asked Questions
Version 2.17: 5 December 1997
COPYRIGHT CAREY SUBLETTE
This material may be excerpted, quoted, or distributed freely
provided that attribution to the author (Carey Sublette) and
document name (Nuclear Weapons Frequently Asked Questions) is
clearly preserved. I would prefer that the user also include the
URL of the source.
Only authorized host sites may make this document publicly
available on the Internet through the World Wide Web, anonymous FTP, or
other means. Unauthorized host sites are expressly forbidden. If you
wish to host this FAQ, in whole or in part, please contact me at:
This restriction is placed to allow me to maintain version control.
The current authorized host sites for this FAQ are the High Energy
Weapons Archive hosted/mirrored at:
and Rand Afrikaans University Engineering hosted at:
Back to Main Index
3.0 Matter, Energy, and Radiation Hydrodynamics
This section provides background in the fundamental physical phenomena that govern the design of nuclear weapons, especially thermonuclear weapons. This section does not address nuclear physics which
are introduced in Section 2, and discussed further in Section 4. It addresses instead the behavior of matter at high densities and temperatures, and the laws controlling its flow.
Although the reader may be able to follow the discussions of physics and design in Section 4 without it, familiarity with the principles discussed here is essential for genuine insight into the
design of nuclear weapons. Since the same principles tend to crop up repeatedly in different contexts, and it is inconvenient to explain basic physics while addressing engineering considerations, I
provide an overview of the non-nuclear physical principles involved below. Section 2 provides a discussion of the nuclear reactions involved.
Readers with a grounding in physics will find much of this discussion too elementary to be of interest. Please skip anything you are already familiar with.
3.1 Thermodynamics and the Properties of Gases
Thermodynamics concerns itself with the statistical behavior of large collection of particles, a substantial quantity of matter for example. A literal reading of the term "thermodynamics" implies
that the topic of discussion is the motion of heat. In fact, thermodynamics specifically addresses the condition of thermal equilibrium, where the motion of heat has ceased. The principle motion of
interest is the randomized motion of the particles themselves, which gives rise to the phenomenon called "heat". It is this equilibrium condition of random motion that can be accurately characterized
using statistical techniques. It is important to realize that the topic of thermodynamics is the study of the properties of matter at least as much as it is of heat.
There are two very important principles of thermodynamics, called the First and Second Laws of Thermodynamics. These are often stated in the form:
1. The total energy in the universe is constant;
2. The entropy of the universe is always increasing.
A more useful statement of the First Law in practical situations is to say that the change in total energy of a system is equal to the work done on the system, plus the heat added to the system. The
Second Law states that the amount of heat in a closed system never decreases. The implications of these laws are discussed further below.
3.1.1 Kinetic Theory of Gases
The gaseous state is the simplest form of matter to analyze. This is fortunate, since under the extreme conditions encountered in chemical and nuclear explosions, matter can usually be treated as a
gas regardless of its density or original state.
The basic properties of a gas can be deduced from considering the motions of its constituent particles (the kinetic theory). The pressure exerted by a gas on a surface is caused by the individual
molecules or atoms bouncing elastically off that surface. This pressure is equal to the number of molecules striking the surface per unit of time, multiplied by the average momentum of each molecule
normal (i.e. at right angles) to the surface. The number of impacts per second is proportional (~) to the particle density of the gas (rho, particles per unit volume), and how fast the molecules are
traveling (the average molecular velocity v):
Eq. 3.1.1-1
The average momentum is proportional to v times the mass of the particles (m). The pressure is thus:
Eq. 3.1.1-2
P ~ rho*v*v*m.
Actually we can state that:
Eq. 3.1.1-3
P = rho*v*v*m/3
since the contribution of molecular velocity normal to the surface in three dimensions is 1/3 of the squared magnitude.
Since v*v*m/2 is the particle kinetic energy (KE_p), we can also say:
Eq. 3.1.1-4
P = rho*KE_p*(2/3)
That is, pressure is proportional to the average particle kinetic energy and the particle density, or equal to two-thirds of the total kinetic energy, KE, in a given volume of gas (the kinetic energy
density). This is usually expressed as:
Eq. 3.1.1-5
P = 2/3(KE/V), or
PV = 2/3 KE,
where P is the pressure.
Now the thing we call temperature is simply the average kinetic energy of the particles of a gas. A constant of proportionality is used to convert kinetic energy, measured in joules or ergs, into
degrees Kelvin (K). Together these considerations give us the Ideal Gas Law:
Eq. 3.1.1-6
PV = NkT, where
P = pressure, V = volume, N = number of particles, k = Boltzmann's constant (1.380 x 10^-16 erg/degree K), and T = temperature. N/V is of course the particle density (designated n).
The constant factor 2/3 was absorbed by Boltzmann's constant. As a result, if we want to express the average particle kinetic energy of a gas at temperature T we must say:
Eq. 3.1.1-7
KE_p = 3/2 kT
An ideal gas (also called a perfect gas) is one in which there are no interactions (that is, repulsive or attractive forces) between atoms. For such a gas, the Ideal Gas Law holds true. The simplest
case of a perfect gas is a perfect monatomic gas, one in which all of the energy in the gas is in the form of particle motion (i.e. the particles themselves do not absorb any energy). This is the
only case we have considered so far. Helium or argon are examples of ideal monatomic gases to a very good approximation (they are monatomic, and attractive forces only become significant close to
their liquefaction temperatures).
Molecular or polyatomic gases, ones in which the particles are molecules of two or more atoms, can absorb energy through rotation and vibration. Such gases are not monatomic, but they are still
ideal. Under some conditions gases can absorb energy internally by other processes, like ionization, which violate ideal gas behavior. When conditions are such that attractive forces become
significant (near liquid or solid condensation points) the ideal gas law also breaks down.
Perfect monatomic gases are of special interest to us here, not only because they are particularly simple to analyze, but because under many extreme physical regimes all matter tends to behave like a
perfect monatomic gas (kinetic energy dominates other forms of energy present).
3.1.2 Heat, Entropy, and Adiabatic Compression
Simply put, heat is the random motion of the particles in matter. In common usage we talk about something with a higher temperature as being "hotter". However temperature is not a universal measure
of the thing we call heat. Suppose we take a container of a perfect gas, and we squeeze it and reduce its volume. To squeeze it and compress the gas we must do work which, by the First Law of
Thermodynamics, is added to the internal energy of the gas. Since this is a perfect gas, all of the added energy appears as kinetic energy. That is, the temperature goes up. But have we actually
added heat to make it hotter?
The answer is no. We can get the energy back in the form of work, by letting it expand back to its original volume. The temperature will also drop back to the original state. This compression process
(called adiabatic compression) is reversible since we can return to the original state.
To increase the temperature of the container of gas without changing its volume, we must place it in contact with something that is hotter. The heat diffuses from the hotter object to the container.
As the gas in the container warms, the hotter object grows cooler.
How can we return the gas to its original state? We must place it in contact with something that is colder than the original gas temperature. The heat then diffuses to the colder object. Although the
gas in the container is now in its original state, the whole system is not. The hotter object is cooler, the colder object is warmer. This process is irreversible (we say "entropy of the system has
Temperature is a measure of heat in a gas only at constant volume. The generalized measure of heat is entropy. Entropy is defined as the ratio of the total energy of a system to its temperature. As
heat is added to a system this ratio increases. Work done on the system leaves the ratio unchanged.
Adiabatic compression is compression where the entropy is constant (no heat is added or removed). If flows of heat occur, then the process is non-adiabatic and causes irreversible change.
3.1.3 Thermodynamic Equilibrium and Equipartition
I have just talked about heat flowing from hotter objects to colder ones. This process implies that a system of objects tends to move to a state where all of the objects are at the same temperature.
When this occurs, heat ceases to flow. Such a state is called "thermodynamic equilibrium", and all systems tend to evolve toward this equilibrium naturally. The faster heat can flow in the system,
the faster this equilibrium is reached.
The idea of thermodynamic equilibrium is extremely general. It applies not only to "objects" - physically separate parts of a system - but all parts of a system - separate or not.
For example in a mixture of particles of different types, different gas molecules say, each type of particle will be in equilibrium with the others. That is, they will have the same temperature - the
same average kinetic energy. If each type of particle has a different mass from the others, then each must also have a unique average velocity for the kinetic energies of each type to be equal. One
implication of this is that when a gas becomes ionized, the electrons knocked loose become separate particles and will come into thermodynamic equilibrium with the ions and un-ionized atoms. Since
they are much lighter than atoms or ions, their velocities will be much higher.
We have also already applied the equilibrium principle in deriving the Ideal Gas Law. The total kinetic energy was divided equally among the three spatial directions of motion, e.g. they were in
equilibrium with each other. These spatial directions are called the "degrees of freedom" of a monatomic perfect gas. Since the kinetic energy of a particle in such a gas is 3kT/2, each degree of
freedom accounts for kT/2 energy per particle. This is also true of polyatomic gases, which have additional degrees of freedom (e.g. from vibration and rotation). Each available degree of freedom
will have kT/2 energy when in equilibrium. This is the theorem of equipartition of energy.
The actual number of available degrees of freedom in a polyatomic gas may vary significantly with temperature due to quantum-mechanical considerations. Each degree of freedom has a characteristic
energy of excitation, and if the value of kT/2 is not large enough then the excitation of a given state will be negligible.
3.1.4 Relaxation
To reach equilibrium between different particles and different degrees of freedom in a system, the different parts of the system must be able to exchange energy. The rate of energy exchange
determines how long it takes to establish equilibrium. The length of this equilibrating period is called the relaxation time of the system. A complex system will typically have several relaxation
times for different system components.
The farther a degree of freedom is from equilibrium, the faster it will converge toward the equilibrium state. Conversely, as it approaches equilibrium, the rate of convergence declines. This is
expressed by the standard relaxation equation:
Eq. 3.1.4-1
dE/dt = (E_eq - E)/t_relax
where E is the measure of the current energy of the degree of freedom (avg. kinetic energy, temperature, number of particles excited, etc.), E_eq is the equilibrium value, and t_relax is the
relaxation time.
The solution of this linear differential equation shows us that the difference between the current state and the equilibrium state declines exponentially with time:
Eq. 3.1.4-2
E = E_init*Exp[-t/t_relax] + E_eq*(1 - Exp[-t/t_relax])
Over each time interval t_relax, the difference E - E_eq declines by a factor of 1/e. Although according to this equation complete equilibrium is never formally reached, over a finite (usually small)
number of relaxation intervals the difference from equilibrium becomes undetectable.
What determines the value of t_relax? This is determined by how frequently a member of a degree of freedom can be expected to undergo an energy exchange event, and how effective that event is in
transferring energy.
For particles of similar mass, a single collision can transfer essentially all of the kinetic energy from one particle to the other. The relaxation time for bringing two populations of particles with
different kinetic energies into equilibrium is thus the average time between collisions. In air at normal temperatures and pressures, this time is about 0.1 nanoseconds. At higher densities and
temperatures, the distances traveled between collisions is shorter, and the velocities are higher, so the time is correspondingly shorter.
If colliding particles have greatly different masses, then the efficiency of each collision in exchanging energy is reduced by a factor equal to the mass ratio. In the case of electrons and ions,
since electrons are lighter than nucleons by a factor of 1836 (about) this ratio is 1/(1836*A), where A is the atomic mass. Unless the temperature of the electrons is much colder than that of the
ions though, the actual relative relaxation rate is much higher than this would indicate because of the high velocities of the light electrons. If they are not too far from equilibrium, the actual
relaxation time ratio between electrons and ions, and ions alone is about equal to the square root of the mass ratio: 1/(1836*A)^0.5.
3.1.5 The Maxwell-Boltzmann Distribution Law
So far we have talked about the average velocity and kinetic energy of a particle. In reality, no particle will have exactly the average energy. Even if we created a system in which every particle
initially had exactly the same energy (all were average), within a single relaxation interval the energy would be dramatically redistributed. Within a few more intervals a stable continuous energy
distribution would be established.
Statistical mechanics shows that the actual equilibrium distribution of particle energies can be described by the distribution law worked out first by Maxwell and refined by Boltzmann. The function
creates a roughly bell-shaped curve, with the peak (most probable) energy at kT. The function declines exponentially away from the peak, but never (formally) goes to zero at any energy greater than
zero, so small numbers of both very fast and very slow particles are present in an equilibrium gas.
The Maxwell-Boltzmann distribution for energy is:
Eq. 3.1.5-1
dN/dE = N*2*Pi*(1/(Pi*kT))^(3/2) Exp(-E/kT) E^(1/2)
where N is the number of particles present. Integrating the above equation over a given energy range gives the number of particles in that range.
Most of the terms in the above equation are simply normalizing factors to make the integral come out right (after all, integrating from zero energy to infinity must equal N). The factor that actually
determines the distribution law is called the Boltzmann factor: Exp(-E/kT). This distribution factor applies to any system of particles where each energy state is equally important in a statistical
sense (that is, no statistical weight is applied to any energy state). A gas where this is true (like the gases treated by classical kinetic theory) can be called a Boltzmann gas. There are two other
types of gases that follow different distribution laws which will be discussed later - the Bose gas and the Fermi gas.
Below is a plot for the Maxwell-Boltzmann particle density distribution, dN, with kT=1. The peak value of dN is at particle energy kT/2, but since the energy density distribution is proportional to
dN*E, the peak of the energy density distribution is actually 3kT/2.
Figure 3.1.5-1. Maxwell-Boltzmann Particle Density Distribution for Energy
3.1.6 Specific Heats and the Thermodynamic Exponent
The Ideal Gas Law describes the properties of gases with respect to temperature, that is the kinetic energy of motion. How do we describe the properties of a gas with respect to the total internal
energy? In the case of a monatomic gas this is easy of course, since the kinetic energy is the total internal energy:
PV = NkT = 2/3 KE
How should we handle polyatomic gases? Before we can do this we need some convenient way of measuring the thermodynamic properties of different gases.
As I have explained above, an ideal gas with additional degrees of freedom has a larger internal energy than does an monatomic gas at the same temperature. This internal energy, designated U, is also
proportional to the (absolute) temperature (this is another common way of expressing the concept of ideal gas). This allows us to establish a constant for each gas that describes how much thermal
energy is required to raise its temperature a fixed amount. This constant is called the specific heat.
There are actually two commonly used specific heat definitions for gases, the specific heat at constant volume (c_v) and the specific heat at constant pressure (c_p). C_v measures the amount of
energy required to raise the temperature in a sealed, fixed volume container. In such a container heating also causes the pressure to rise. C_p measures the amount of energy required to raise the
temperature of a gas that is allowed to expand sufficiently to maintain constant pressure.
These two specific heats are not independent. In fact, the ratio between them is fixed by the number of degrees of freedom of the gas. This gives us the constant that we use for describing the
thermodynamic properties of a gas - the thermodynamic exponent. This constant is represented by the lower case Greek letter gamma. It is defined by:
Eq. 3.1.6-1
gamma = c_p/c_v
and is equal to 5/3 for a monatomic gas. The thermodynamic exponent has many other names such as the adiabatic index, adiabatic exponent, isentropic exponent, and the polytropic exponent.
Recall that the internal energy density of a monatomic gas (KE/V) is given by:
P = 2/3 (KE/V)
Since KE is also the total internal energy we can say:
Eq. 3.1.6-2
P = 2/3 U/V
The factor 2/3 happens to be equal to gamma minus 1: 5/3 - 1 = 2/3. This is a special case of a law valid for all perfect gases. We can thus write the general law:
Eq. 3.1.6-3
P = (gamma - 1)*(U/V)
Why is gamma called the thermodynamic or adiabatic exponent? It is because of the following relationship that describes the state of matter undergoing adiabatic compression:
Eq. 3.1.6-4
P(V^gamma) = constant
The constant is determined by the gases' original entropy. This is sometimes called the polytropic law.
The thermodynamic exponent determines the compressibility of a gas. The larger the value of gamma, the more work is required to reduce the volume through adiabatic compression (and the larger the
increase in internal energy). An infinitely compressible gas would have an exponent of 1.
The thermodynamic literature often uses P-V diagrams that plot pressure (P) versus volume (V). A plot of the adiabatic function on a P-V diagram produces an adiabatic curve, also called an isentropic
curve since every point on the curve has the same entropy. In contrast, isothermal curves lie below the adiabatic curve with increasing pressure (assuming they start at the same P-V state) since a
gas must lose entropy to maintain the same temperature. Curves where entropy is increased with increasing pressure lie above (these are Hugoniot curves which will be discussed further in connection
with shock waves).
Gamma for a gas is related to the number of degrees of freedom (n_f) by:
Eq. 3.1.6-5
gamma = (2/n_f) + 1
Thus a monatomic perfect gas is 2/3 + 1 = 5/3 as noted above. A diatomic gas has a maximum of 7 degrees of freedom, but only some of them may be excited at a given temperature, with more states being
excited at higher temperatures.
If a gas releases energy during compression, thus adding additional kinetic energy, (due to a chemical reaction for example) then it will have a higher value of gamma.
Some example values of gamma are given in the table below.
│Table 3.1.6-1. Examples of Thermodynamic Exponents for Gases │
│Material │Exact Value│Approx. Value │
│Detonating Explosive Gas Mixture │- │2.5-3.0 │
│Perfect Monatomic Gas │5/3 │1.667 │
│Air │7/5 │1.400 │
│Photon Gas │4/3 │1.333 │
│Diatomic Gas (fully excited) │9/7 │1.286 │
│Infinitely Compressible Gas │1/1 │1.000 │
3.1.7 Properties of Blackbody Radiation
The equipartition of energy in an equilibrium system also extends to radiant energy present in the system. Photons are emitted and absorbed continually by matter, creating an equilibrium photon gas
that permeates it. This photon must have the same temperature as the rest of the system also.
The energy distribution in an equilibrium photon gas is determined by quantum mechanical principles known as Bose-Einstein statistics. Photons belong to a class of particles called bosons that, by
definition, obey these statistics. A key feature of bosons is that they prefer to be in the same energy state as other bosons. A photon gas is thus an example of a Bose gas. The distribution factor
for Bose-Einstein statistics is: 1/(Exp(E/kT) - 1).
This fact gives rise to an energy distribution among the particles in a photon gas called the blackbody spectrum which has a temperature dependent peak reminiscent of the Maxwell-Boltzmann
distribution. The term "blackbody" refers to the analytical model used to derive the spectrum mathematically which assumes the existence of a perfect photon absorber or (equivalently) a leakless
container of energy (called in German a "hohlraum").
The kinetic theory of gases can be applied to a photon gas just as easily as it can to a gas of any other particle, but we need to make a few adjustments. From Eq. 3.1.1-1 we had:
P ~= rho*v*momentum
Which gave us Eq. 3.1.1-3:
P = rho*v*v*m/3
once we had substituted m*v to represent the momentum of a particle. Since photons have zero mass, we must use a different expression to express the momentum of a photon. This is given by:
Eq. 3.1.7-1
momentum_photon = E_photon/c
where E_photon is the photon energy, and c is the photon velocity (i.e. the speed of light, 2.997 x 10^10 cm/sec). It is interesting to compare this to an equivalent expression for massive particles:
momentum = 2*KE/v. Substituting Eq. 3.1.7-1, and the photon velocity, into Eq. 3.1.1-3 give us:
Eq. 3.1.7-2
P_rad = rho*c*(E_photon/c)/3 = rho*E_photon/3
Since rho*E_photon is simply the energy density of the photon gas, we can say:
Eq. 3.1.7-3
P_rad = (U_rad/V)/3
From Eq. 3.1.6-3 it is clear that:
Eq. 3.1.7-4
gamma_rad = 1 + 1/3 = 4/3
We can relate the energy density of a blackbody to the thermal radiation emissions (energy flux) from its surface (or from a window into an energy container). Assuming the energy field is isotropic,
the flux is simply the product of the energy density and the average velocity with which the photons emerge from the radiating surface. Of course all of the photons have a total velocity equal to c,
but only photons emitted normal to the surface (at right angles to it) emerge at this velocity. In general, the effective velocity of escape is Cos(theta)*c, where theta is the angle between the
light ray and the normal vector. Now the fraction of a hemisphere represented by a narrow band with width d_theta around theta is Sin(theta)*d_theta. Integrating this from zero to 90 degrees gives
(in Mathematica notation): (U_rad/V)*Integrate[c*Cos(theta)*Sin(theta),{theta,0,90}] = (U_rad/V)*c/2 Since the flux is isotropic, half of it is flowing away from the surface. We are only concerned
with a the flux flowing out so we must divide it by another factor of two. This gives:
Eq. 3.1.7-5
S = c*(U_rad/V)/4
where S is the flux (emission per unit area).
At equilibrium the radiation energy density is determined only by temperature, we want then to have a way for relating a temperature T to U_rad/V. Using Eq. 3.1.7-3 and a mathematically precise
statement of the second law of thermodynamics, it is easy to show that U_rad/V is proportional to T^4. The standard constant of proportionality, called the Stefan-Boltzmann constant and designated
sigma, is defined so that:
Eq. 3.1.7-6
U_rad/V = (4*sigma/c)*T^4
This is a convenient way of formulating the constant, because it allows us to say:
Eq. 3.1.7-7
S = sigma*T^4
Eq. 3.1.7-7 is known as the Stefan-Boltzmann Law. The Stefan-Boltzmann constant is derived from Planck's constant and the speed of light. It has the value 5.669 x 10^-5 erg/sec-cm^2-K, with T in
degrees K. Equation 3.1.6-3 of course becomes:
Eq. 3.1.7-8
P_rad = ((4*sigma)/(3*c))*T^4
It can easily be seen from the Stefan-Boltzmann Law that the amount of radiant energy present varies dramatically with temperature. At room temperature it is insignificant, but it grows very rapidly.
At sufficiently high temperatures, the energy present in the blackbody field exceeds all other forms of energy in a system (which is then said to be "radiation dominated"). The average photon energy
is directly proportional to T, which implies the photon density varies as T^3. In radiation dominated matter we can expect the number of photons present to be larger than the number of all other
particles combined.
If both ordinary particles and photons are present, we have a mixture of Boltzmann and Bose gases. Each contribute independently to the energy density and pressure of the gas mixture. Since the
kinetic energy pressure for a perfect gas is:
PV = NkT -> P = nkT
and the kinetic energy is:
Eq. 3.1.67-9
P = 2/3 KE/V = 2/3 U_kin/V -> U_kin/V = 3/2 nkT
we have:
Eq. 3.1.7-10
P_total = nkT + ((4*sigma)/(3*c))*T^4
Eq. 3.1.7-11
U_total/V = 3/2 nkT + ((4*sigma)/c)*T^4
We can calculate the relative kinetic and radiation contributions to both pressure and energy at different particle densities and temperatures. For example in hydrogen at its normal liquid density,
radiation energy density is equal to the kinetic energy of the ionized gas at 1.3 x 10^7 degrees K.
The energy distribution of the radiation field with photon energy is given by Planck's Law, which is usually stated in terms of photon frequency instead of energy. The energy of a photon of frequency
nu is simply:
Eq. 3.1.7-12
E_phot = h*nu
where nu is in hertz (c/wavelength), and h is Planck's constant (6.62608 x10^-27 erg-sec). Planck's law (aka the Planck function) is usually given as:
Eq. 3.1.7-13
dE/dnu = ((8*Pi*h*nu^3)/c^3) * (1/(Exp((h*nu)/kT) - 1))
where dE/dnu the energy density/frequency derivative. The last factor in the equation is of course the Bose-Einstein distribution factor. Integrating over a range of nu gives the energy density in
that frequency range. For our purposes, it is often more convenient to express the energy density in terms of photon energy rather than frequency:
Eq. 3.1.7-1e
dE/dE_phot = ((8*Pi*E_phot^3)/(h^3 c^3)) * (1/(Exp(E_phot/kT) - 1))
The Planck distribution always has its peak (the maximum spectral power) at h*nu_max = 2.822 kT, while 50% of the energy is carried by photons with energies greater than 3.505 kT, and 10% of the
energy is above 6.555 kT. Most of the energy in the field is thus carried by photons with energies much higher than the average particle kinetic energy.
Below is a diagram of the Planck function at a temperature of 1 KeV, plotting the spectral energy density against the photon energy in KeV.
Figure 3.1.7-1. Blackbody Spectrum
3.2 Properties of Matter
I have already discussed one state of matter - gases - at some length. In this section I shift to the application of thermodynamic principles to other states of matter, and discuss some properties
that are not strictly thermodynamic in nature.
3.2.1 Equations of State (EOS)
An equation of state (EOS) provides a complete description of the thermodynamic properties of a substance; i.e. how the density, pressure, and internal energy of a substance relate to each other. The
Ideal Gas Law is a special case of an equation of state for gases. The generalized gas EOS given previously:
P = (gamma - 1)*(U/V)
expands the ideal law to all gases if an appropriate value of gamma is chosen.
Such a simple law is not really adequate for real substances over widely varying conditions. Even with comparatively simple substances such as gases, the effective value of gamma can change. As
molecular gases increase in temperature more degrees of freedom may become excited, the gases may disassociate into atoms, and the atoms may become ionized. All of these processes drive down the
value of gamma by absorbing energy that would otherwise appear as kinetic motion. By considering the regime of interest, we can usually choose a suitable value of gamma to permit the use of the
simple gas equation. More sophisticated approaches are to provide terms for each mechanism that contributes to the total internal energy.
3.2.2 Condensed Matter
The term "condensed matter" refers to two of the three common states of matter: solids and liquids. It describes the fact that the matter is not gaseous, it has condensed to a compact form bound
together by interatomic attractive forces. At zero pressure (or atmospheric pressure, which is the same thing for practical purposes) condensed matter exists in equilibrium. The negative pressure
generated by the binding forces is exactly balanced by positive forces generated by the mutual repulsion of the outer electron shells (Coulomb repulsion) and the thermal motion of the atoms.
Condensed matter thus does not expand to infinitely low density under zero pressure like a gas, it has a definite zero pressure density.
Another important difference between condensed matter and gases is the strength of the internal repulsive forces. Coulomb repulsion is much stronger than the kinetic forces produced by thermal motion
in gases under ordinary conditions, which agrees with the common experience that a brick is harder to compress than air.
If the thermal energy in matter is low enough, the position of atoms are held in fixed positions by Coulomb repulsion - it is a solid. When the thermal vibrations become sufficiently energetic, the
atoms break free from fixed lattice positions and can move around and the solid melts.
3.2.3 Matter Under Ordinary Conditions
The operative definition of "ordinary conditions" I am using here are the conditions under which condensed matter exists. It will be shown below that regardless of its composition or initial state,
at sufficiently extreme conditions of pressure or temperature matter ceases to be condensed and tends to behave like a perfect gas.
There are standard definitions of ordinary conditions: STP or Standard Temperature and Pressure (0 degrees C temperature, 760 mm Hg pressure); or 0 degrees K and zero pressure. The conditions of
normal human experience do not deviate much from STP, and the properties of most substances under these conditions are abundantly documented in numerous references. For our purposes "ordinary
conditions" extends up to temperatures of a few tens of thousands of degrees C, and pressures in the order of several megabars (millions of atmospheres). For comparison the conditions in the
detonation wave of a powerful high explosive do not exceed 4000 degrees C and 500 kilobars; the pressure at the center of the Earth is approximately 4 megabars.
Under our "ordinary conditions" the thermal energy of matter remains below both the binding and compressive energies. In this range matter is not appreciably ionized. Its mechanical strength is small
compared to the pressures of interest, and can usually be neglected.
Since increasing pressure also strengthens the repulsive forces between atoms by forcing them closer together, the melting point goes up as well. In the megabar range matter remains solid even at
temperatures of 20-30,000 degrees C. However it usually does not matter whether the condensed state is liquid or solid, the energy absorbed in melting being too small to notice compared to the
compressive and thermal energies.
Some materials undergo abrupt phase changes (discontinuous changes in structure and density) in this realm. When phase changes occur with escalating pressure, atoms suddenly rearrange themselves into
lower energy configurations that are denser. For example iron undergoes a phase change at 130 kilobars. The transformation of delta phase plutonium alloys into the denser alpha phase at pressures of
a few tens of kilobars is of particular significance.
Despite these differences and added complexities, we can still produce reasonable approximations for condensed matter equations of state using a "Gamma Law" similar to the gas law:
Eq. 3.2.3-1
P + P_0 = (gamma - 1)*U*(rho/rho_0)
where P is the compression pressure at the state of interest, P_0 is the internal pressure at STP (or some other reference state), rho is the density of the state of interest, and rho_0 is the
reference density. Note that P_0 is exactly balanced by the negative binding pressure under reference conditions.
This gives us an adiabatic law for condensed matter:
Eq. 3.2.3-2
(P + P_0)*(rho_0/rho)^gamma = constant = P_0
Another useful relationship is the equation for internal energy per unit mass (E) rather than energy per volume (U):
Eq. 3.2.3-3
E = U/rho_0 = (P + P_0) / ((gamma - 1)*rho)
The value of gamma, the "effective thermodynamic exponent", must be determined from experimental data. Unfortunately the value of gamma is not constant for condensed matter, it declines with
increasing density and pressure. It is virtually constant below 100 kilobars, but the decline is in the range of 15-30% at 2 megabars. Although the rate of decline varies with substance, the low
pressure value still gives a reasonable indication of compressibility of a substance at multi-megabar pressures. A common assumption in high pressure shock work is that the product of (gamma-1) and
density is constant:
Eq. 3.2.3-4
(gamma_0 - 1)*rho_0 = (gamma - 1)*rho;
an approximation which seems to work fairly well in practice. Using this approximation, Eq. 3.2.3-3 becomes particularly convenient since E varies only with P.
The thermodynamic exponent is usually represented in the literature of solid state or condensed matter physics by the "Gruneisen coefficient" designated with the upper case Greek letter GAMMA. The
relationship between them is:
Eq. 3.2.3-4
GAMMA = gamma - 1
Representative values of P_0, density_0, and low pressure gamma for some materials of particular interest are given below:
│Table 3.2.3-1. Gamma-Law Equations of State for Selected Materials │
│Material │Density_0 │Gamma │P_0 (kilobars) │
│Gold │19.24 │4.05 │510 │
│Aluminum │2.785 │3.13 │315 │
│Copper │8.90 │3.04 │575 │
│Detonating High Explosive │- │3.0 │- │
│Uranium │18.90 │2.90 │547 │
│Tungsten │19.17 │2.54 │1440 │
│Beryllium │2.865 │2.17 │604 │
│Lithium │0.534 │2.1 │- │
│Zirconium │6.49 │1.771 │580 │
│Perfect Monatomic Gas │- │1.667 │- │
3.2.4 Matter At High Pressures
As pressures continue to increase above several megabars, the electronic structure of the atom begins to break down. The Coulomb forces become so strong that the outer electrons are displaced from
the atomic nuclei. The material begins to resemble individual atomic nuclei swimming in a sea of free electrons, which is called an electron gas. This gas is governed by quantum mechanical laws, and
since electrons belong to a class of particles called fermions (which obey Fermi-Dirac statistical laws), it is an example of a Fermi gas.
In contrast to the Bose-Einstein gas of photons, where particles prefer to be in the same energy state, fermions cannot be in the same energy state. Even at absolute zero, the particles in a Fermi
gas must have non-zero energy. The distribution factor for Fermi statistics is: 1/(Exp(E/kT) + 1).
If all of the electrons are in their lowest energy state, which means the gas is cold (no additional thermal energy), it is said to be Fermi degenerate. A fully degenerate state is the lowest energy
state that a Fermi gas can be in. A degenerate Fermi gas is characterized by the Fermi energy, the highest energy state in the gas. This is given by:
Eq. 3.2.4-1
E_Fermi = 5.84 x 10^-27 (n^(2/3)) erg = 3.65 x 10^-15 n^(2/3) eV
where n is the electron density (electrons/cm^3). The average electron energy is:
Eq. 3.2.4-2
E_Favg = 3/5 E_Fermi
and the pressure produced, the Fermi pressure, is:
Eq. 3.2.4-3
P_Fermi = 2/3 n*E_Favg = 2/5 n*E_Fermi = 2.34 x 10^-33 (n^(5/3)) bars
Note that relationship between the average energy and the pressure is precisely the same as that for a classical perfect gas.
When the average electron energy exceeds the binding energy of electrons in atoms, then the electrons behave as a Fermi gas. If only some of the outer electrons are loosely bound enough meet this
criterion, then only these electrons count in determining the electron density in the equations above, the remainder continue to be bound to the atomic nuclei.
The Fermi energy of a gas is sometimes characterized by the "Fermi Temperature" (or degeneracy temperature). This is defined as T_Fermi such that:
Eq. 3.2.4-4
kT_Fermi = E_Fermi
This is not the actual temperature of the gas. Its significance is that if the kinetic temperature is substantially lower than T_Fermi then the kinetic energy is small compared to the Fermi energy
and the gas can be treated reasonably well as if it were completely degenerate ("cold").
To illustrate these ideas here are some examples: Uranium at twice normal density (37.8 g/cm^3) would have a Fermi energy of 156 eV, and a pressure of 895 megabars. This is much higher than the real
pressure required to achieve this density (5.0 megabars), and indicates that the uranium is not a Fermi gas at this pressure.
A pressure of 100,000 megabars corresponds to a Fermi energy of 1034 eV, and an average energy of 621 eV. The average energy is about the same as the ionization energy for uranium's 38th ionization
state. Thus we can expect about 41% of uranium's electrons to dissociate at this pressure, and contribute to the electron gas density (1.5 x 10^26 electrons/cm^3). This gives a density estimate of
1560 g/cm^3.
Deuterium at 1000 times normal liquid density (159 g/cm^3) is a true Fermi gas. It has E_Fermi = 447 eV (T_Fermi = 5.2 million degrees K), far higher than its ionization energy (13.6 eV), and P_Fermi
= 12,500 megabars. What this says is that at least 12.5 gigabars of pressure is required to achieve the stated density, and that as long as the entropy increase during compression keeps the
temperature below 5 million degrees, the gas can be considered cold and the compression process will be efficient. Pressures between 10 and 100 gigabars are representative of conditions required to
create fusion in thermonuclear weapons.
A useful rule-of-thumb about electron density in various materials can be obtained by observing that most isotopes of most elements have a roughly 1:1 neutron/proton ratio in the nucleus. Since the
number of electrons is equal to the number of protons, we can assume that most substances contain a fixed number of electrons per unit mass: 0.5 moles/gram (3.01 x 10^23 electrons). This assumption
allows us to relate mass density to the Fermi gas pressure without worrying about chemical or isotopic composition.
The stable isotopes of most light elements follow this rule very closely, for two that are commonly used as fuel in thermonuclear weapons (Li-6 and D) it is exact. Very heavy elements contain
somewhat fewer electrons per gram, by 25-30%. The largest deviations are the lightest and heaviest isotopes of hydrogen: 1 mole/gram for ordinary hydrogen, and 0.333 moles/gram for tritium.
Since the only way a cold Fermi gas can acquire additional energy is in the form of electron kinetic energy, when the thermal energy is substantially above T_Fermi, then the kinetic energy dominates
the system and the electrons behave like a classical Boltzmann gas.
Thus as the electronic shells of atoms break down, the value of gamma approaches a limiting value of 5/3 with respect to the total internal energy, regardless of whether it is thermal or quantum
mechanical in nature.
The total pressure present is the sum of the Fermi pressure, the kinetic pressure of the Boltzmann gas consisting of the nuclei and non-degenerate electrons, and the pressure of the photon Bose gas.
Similarly, the energy density is the sum of the contributions from the Fermi, Boltzmann, and Bose gases that are present.
Now when electrons are stripped from atoms through thermal ionization, we also have an electron gas which is technically a Fermi gas. We rarely consider thermally ionized plasmas to be Fermi gases
though, because usually the electron densities are so low that the thermal energy is much greater than the Fermi energy.
An important consequence of this is the phenomenon of "ionization compression". At STP most condensed substances have roughly the same atom density, on the order of 0.1 moles/cm^3; the densities can
vary considerably of course due to differing atomic masses. By the rule of thumb above, we can infer that electron densities also roughly mirror mass densities.
If two adjoining regions of STP condensed matter of different electron density are suddenly heated to the same extremely high temperature (high enough to fully ionize them) what will happen?
Since the temperature is the same, the radiation pressure in both regions will be the same also. The contribution of the particle pressure to the total pressure will be proportional to the particle
density however. Initially, in the un-ionized state, the particle densities were about the same. Once the atoms become ionized, the particle densities can change dramatically with far more electrons
becoming available for dense high-Z materials, compared to low density, low-Z materials. Even if the system is radiation dominated, with the radiation pressure far exceeding the particle pressures,
the total pressures in the regions will not balance. The pressure differential will cause the high-Z material to expand, compressing the low-Z material.
The process of ionization compression can be very important in certain thermonuclear systems, where high-Z materials (like uranium) are often in direct contact with low-Z materials (like lithium
It is interesting to note that when matter is in a metallic state, the outermost electrons are so loosely bound that they become free. These electrons form a room-temperature plasma in the metal,
which is a true Fermi gas. This electron plasma accounts for the conductance and reflectivity of metals.
3.2.4.1 Thomas-Fermi Theory
A widely used approximate theory of the high pressure equation of state was developed in 1927-1928 that ignores the electron shell structure of matter entirely. Called the Thomas-Fermi (TF) theory,
it models matter as a Fermi gas of electrons with a Boltzmann gas of nuclei evenly distributed in it, using a statistical description of how the electron gas behaves in the electrostatic field.
The Thomas-Fermi theory includes only the repulsive forces of the electron gas, and the thermal pressure, and ignores the attractive forces that hold solid matter together. It is thus a good
approximation of matter only at high enough pressures that repulsive forces dominate. Fortunately experimental EOS data is available at pressures extending into this range (several megabars). Various
adjustments to TF theory have been proposed to extend its range of application, such as the Thomas-Fermi-Dirac (TFD) model that includes attractive forces (others exist - Thomas-Fermi-Kalitkin, etc.)
TF theory was employed at Los Alamos together with the existing high-pressure EOS data (at that time only up to hundreds of kilobars) to perform the implosion calculations for the plutonium bomb.
Elements with high electron densities (which, from the above rule-of-thumb, is more or less equivalent to elements with high mass densities) are described reasonably well by the Thomas-Fermi model at
pressures above about 10 megabars.
3.2.5 Matter At High Temperatures
If the thermal or kinetic energy of the atoms in a substance exceeds the binding and compressive energies, then regardless of pressure it becomes a gas. In highly compressed condensed matter, this
occurs at several tens of thousands of degrees C. When the kinetic energy substantially exceeds the combined energies of all other forms of energy present, matter behaves as a perfect gas.
At sufficiently high temperatures, the outer electrons of an atom can become excited to higher energy levels, or completely removed. Atoms with missing electrons are ions, and the process of electron
removal is called ionization. Then number of electrons missing from an atom is its ionization state. Excitation and ionization occurs through collisions between atoms, collisions between atoms and
free electrons, and through absorption of thermal radiation photons. When all of the atoms have become ionized, then matter is said to be "fully ionized" (in contrast the phrase "completely ionized"
usually refers to an atom that has had all of its electrons removed).
The energy required to remove an unexcited electron is called the ionization energy. This energy increases with each additional electron removed from an atom due to the increase in ionic charge, and
the fact that the electron may belong to a shell closer to the nucleus. The ionization energy for the first ionization state is typically a few electron volts. Hydrogen has one of the highest first
ionization energies (13.6 eV), but most elements have first ionization energies of 4 - 10 eV. The energy required to remove the last electron from a plutonium atom (the 94th ionization state) in
contrast is 120 KeV. The first and last ionization energies for some elements common in nuclear weapons are:
│Table 3.2.5. Representative First and Last Ionization Energies │
│Element │First Ionization (eV) │Last Ionization (eV) │
│Hydrogen │13.598 │- │
│Lithium │5.39 │3rd: 122.4 │
│Beryllium │9.32 │4th: 217.7 │
│Oxygen │13.61 │8th: 871.1 │
│Uranium │6 │92nd: 115,000 │
│Plutonium │6.06 │94th: 120,000 │
There is a simple law for computing the ionization energy of the last electron (the Zth ionization state for atomic number Z):
Eq. 3.2.5-1
E_i_Z = Z^2 * 13.6 eV
For other ionization states, the other electrons bound to the nucleus provide partial screening of the positive charge of the nucleus and make the law more complex.
The energy required to excite an electron is less than the associated ionization energy (E_i). An excited electron is more easily removed from an atom, the energy required being exactly the
difference between the unexcited ionization energy and the excitation energy. Under high pressures, excited electrons are subjected to strong Coulomb forces which tend to remove them from the atom.
Also frequent atom-atom, atom-electron, and atom-photon interactions will tend to ionize the more weakly bound excited electron. Even if it is not removed, electrons tend to drop back to their ground
state after awhile with the emission of a photon. Excitation is thus unimportant in dense, high pressure gases.
The average ionization state of a gas depends on the ionization energy for each ionization state, the temperature of the gas, and the density. At a temperature T, the average particle is kT. If this
value is larger than the ionization energy of an electron attached to an atom, then an average collision will remove it (for hydrogen E_i corresponds to T = 158,000 degrees K). We can thus expect the
average ionization state to be at least equal to i, where i is the greatest ionization state with ionization energy less than or equal to kT. In fact ionization can be appreciably greater than this,
with higher states for denser gases at the same temperature due to more frequent collisions. If the gas density is comparable to the density of condensed matter, then the energy of i is typically in
the order of 3kT - 4kT. At the highest temperatures in fission weapons (50 to 100 million degrees K), uranium and plutonium can achieve ionization states of 80 to 85.
Ionization is a statistical process so often a mixture of ion states is present in varying proportions. At the densities and temperatures encountered here though, the effective spread in ionization
states is quite narrow, and we can assume that there will effectively be only one ionization state at a given temperature.
Because kT = 120 KeV at 1.4 billion degrees K, complete ionization of these atoms would normally be expected only in the most extreme conditions of thermonuclear weapons (densities of 200-500, and
temperatures of 300-350 million degrees), if at all. Note however, that at these extreme densities the atomic shells breakdown and matter exists as a Fermi gas rendering ionization moot.
Every electron dislodged from an atom becomes an independent particle and acquires its own thermal energy. Since the law E = NkT does not distinguish types of particles, at least half of the thermal
energy of a fully ionized gas resides in this electron gas. At degrees of ionization greater much than 1, the thermal energy of the atoms (ions) becomes unimportant.
Since an ordinary (as opposed to a quantum) electron gas can absorb energy only through kinetic motion, it is a perfect gas with gamma equal to 5/3. Although an electron gas is perfect, there are two
processes that tend to drive the effective value of gamma down below 5/3 when considering large increases in internal energy. First, it should be apparent that ionization (and excitation) absorbs
energy and thus reduces gamma. The second effect is due simply to the increase in N with increasing ionization state. The larger the number of free electrons, each sharing kT internal energy, the
larger the sink is for thermal energy. This second effect tends to overwhelm the absorption of ionization energy as far as determining the total internal energy of the gas, but ionization has very
important effects on shock waves (discussed below). These effects are especially pronounced in regimes where abrupt increases in ionization energy are encountered (e.g. the transition from an
un-ionized gas to a fully ionized gas; and the point after the complete removal of an electron shell, where the first electron of a new shell is being removed).
At high temperatures and ionization states (several millions of degrees and up), where large amounts of energy are required to dislodge additional electrons, both of these ionization effects can
often be ignored since the net increase in electron number (and absorption of ionization energy) is small even with large increases in internal energy, and the energy gap between successive electron
shells becomes very large.
At very high temperatures, the effect of radiant energy must be taken into account when evaluating the equation of state. Since the energy present as a blackbody spectrum photon gas increases as the
fourth power of temperature, while the kinetic energy increases approximately proportionally with temperature (it would be strictly proportional but for the increase in N through ionization), at a
sufficiently high temperature the radiation field dominates the internal energy of matter. In this realm the value of gamma is equal to that of a photon gas: 4/3.
3.3 Interaction of Radiation and Matter
Photons interact with matter in three ways - they can be absorbed, emitted, or scattered - although many different physical processes can cause these interactions to occur. For photons with thermal
energies comparable to the temperatures encountered in thermonuclear weapons, the only significant interactions with matter are with electrons. The mechanisms by which photons and electrons interact
are conveniently divided into three groups: bound-bound, bound-free, and free-free. Bound-bound refers to interactions in which both the initial and final states of the electron involved are bound to
an atom. Bound-free describes interactions in which one state is bound to an atom, and the other is a free electron (it doesn't matter whether it is the initial or final state). Free-free
interactions are ones in which the electron remains free throughout.
Now each mechanism of photon interaction can operate in a forward or reverse direction. That is, a process that absorbs a photon can also operate in reverse and cause photon emission (making
absorption and emission two sides of the same coin); or a scattering process that removes energy from a photon can also add energy. This principle is called microscopic reversibility.
Frequently, when solving practical problems, we would rather not consider each individual mechanism of interaction by itself. It is often preferable to have coefficients that describe the overall
optical properties of the medium being considered. Thus we have absorption coefficients, which combine all of the individual absorption mechanisms, emission coefficients, and scattering coefficients.
Normally we just talk about absorption and scattering coefficients (designated k_a and k_s) because we can combine the absorption and emission processes, which offset each other, into a single
coefficient (calling this combined coefficient an "absorption coefficient" is just a matter of convenience). Henceforth, "absorption coefficient" will include both absorption and emission unless
otherwise stated. To characterize the optical properties of the medium (also called the "opacity") with a single number we can use the "total absorption coefficient" (also called the attenuation or
extinction coefficient), which is the sum of the absorption and the scattering coefficients.
Since, with few exceptions, the cross section of each mechanism of interaction varies with photon energy, the optical coefficients vary with photon energy as well. If we are dealing with a
monoenergetic flux of photons (like laser beams) then we need to have absorption and scattering coefficients for that particular photon frequency. If the flux contains photons of different energies
then we must compute overall coefficients that average the spectral coefficients over the spectral distribution.
The process of averaging opacity over the photon frequency spectrum is straightforward only if we can assume that all photon emissions are spontaneous (independent of other photons). This is valid if
the medium is optically thin, that is, much less than an absorption mean free path in extent. This type of mean free path is called the Planck mean free path. If the medium is optically thick (much
larger in extent than the absorption mean free path) then this assumption is not valid. Due to the quantum statistical behavior of photons, the presence of photons increases the likelihood of photon
emission to a value above the spontaneous level. This effect ("stimulated emission") is responsible for both the existence of the blackbody spectrum, and of the phenomenon of lasing. When this effect
is taken into account during the averaging process, the result is known as the Rosseland mean free path.
In addition to the total absorption coefficient k, the opacity of a medium can be measured using the mean opacity coefficient K. K is a density normalized measure, given in units of cm^2/g (that is
total absorption cross section per gram), while k is the actual opacity of the medium at its prevailing density and is given in units of cm^-1. Although K is normalized for mass density, it is often
more informative for our purposes to express it in terms of particle or atom density (cm^2/mole). The coefficients K and k are related to the total photon mean free path (l_phot) by:
Eq. 3.3-1
l_phot = 1/(K*rho) = 1/k
3.3.1 Thermal Equilibrium
In a system in thermal equilibrium the principle of detailed balancing also applies. This principle holds that each process is exactly balanced by its exact opposite so that the net state of the
system is unchanged.
We have already seen that in an equilibrium system, the intensity of the radiation field and its spectrum are governed by the blackbody radiation laws which hold without reference to the actual
mechanisms of photon emission and absorption. This indicates that in a fundamental sense, these mechanisms are basically irrelevant to determining the state of the radiation field in the system. The
rates at which they occur are governed by quantum principles so that they always generate a blackbody field. If the optical coefficients as a function of photon energy are known then computing the
overall coefficients across the whole radiation spectrum is straightforward.
Now having said this, a little qualification is in order. The mechanisms of absorption and emission can produce local features in the blackbody spectrum. For example, a strong absorption line can
create a narrow gap at a particular frequency. The energy missing in this gap will be exactly balanced by the increased intensity in the remainder of the spectrum, which will retain the same relative
frequency-dependent intensities of the ideal black body spectrum.
A second caveat is that the blackbody spectrum only applies to systems in thermal equilibrium. Specific mechanisms can dominate non-equilibrium situations, and can occur without significant
counterbalance by the reverse process. Laser emission and fluorescent emission are common examples of non-equilibrium processes.
Although the specific interaction mechanisms in an equilibrium system do not affect the radiation spectrum, they still affect the optical coefficients because the photon-matter interaction cross
sections, and thus the spectral coefficients, do depend on the mechanisms involved. The physics of these mechanisms is often extremely complex (especially the bound-bound and bound-free processes),
even the process of developing simplifying approximations is hard. It is thus often very difficult to determine what the values of the optical coefficients should be.
If a system approximates thermal equilibrium then the blackbody spectrum can be used to compute overall optical coefficients for the medium. The coefficients vary with temperature, not only because
the peak of the blackbody spectrum varies with temperature, but because the interaction mechanisms are usually temperature dependent also (and may be density dependent as well).
Approximate thermal equilibrium occurs where the change in the radiation field is gradual with respect to both distance and time (these are necessary and sufficient). Gradual change with distance
means that the change in energy density is relatively small over the distance l_phot. Gradual change with time means that the energy density does not change much over a radiation relaxation period.
Radiation relaxation is usually so fast that this last condition is of little importance. Since typically the spectral mean free path changes with frequency, it is possible for only a portion of the
spectrum to be in equilibrium. If the portions of the spectrum that are not in equilibrium make only a small contribution to the energy density, they can be ignored.
The conditions for thermal equilibrium exist in the interior (at least one optical thickness from the surface) of an optically thick body where the transport process is dominated by scattering. This
situation also ensures the existence of local thermodynamic equilibrium (LTE), that is the radiation field and matter are in thermodynamic equilibrium at every point in the medium. LTE guarantees
that the spectral distribution of the flux will be a blackbody spectrum, and further that the spectrum at any point will be determined by the temperature at that point.
3.3.2 Photon Interaction Mechanisms
It is useful to survey the mechanisms by which matter and energy interact in more detail, to gain a deeper understanding of the factors that affect opacity. In some systems of interest (such as fully
ionized matter) the coefficient can be calculated directly from the underlying interaction mechanisms, in others it may be extremely difficult and require experimental validation.
If fact often it is not practical to calculate opacity values directly from basic principles. In these cases they must be determined by direct measurements in the systems of interest. It is also
possible to estimate upper and lower bounding values, but these are often very loose.
3.3.2.1 Bound-Bound Interactions
These interactions occur when an electron bound to an atom (or ion) moves between energy levels as a result of photon capture or emission. Capture raises the electron to a higher energy level,
emission causes the electron to drop to a lower one. The phenomena known as fluorescence is the process of capturing a photon, followed by the emission of a lower energy photon as the excited
electron drops back to its ground state through some intermediate energy absorbing process.
Photon capture by this mechanism requires that that the photon energy correspond exactly to the energy difference between the energy level an electron is currently occupying, and a higher one. The
capture cross section for a photon meeting this criteria is extremely large, otherwise it is zero. Quantum uncertainty gives the absorption line a narrow finite width, rather than the zero width an
infinite precision match would indicate.
In principle any atom has an infinite number of possible energy levels. In practice, at some point the binding energy to the atom is so weak that the electron is effectively free. The hotter and
denser matter gets, the lower is the energy level (and the greater is the binding energy) where this occurs.
In hot dense gases that are not completely ionized (stripped of electrons) line absorption contributes significantly to the opacity of the gas, and may even dominate it.
Fermi gases resemble completely ionized gases in that no electron is bound to a nucleus. In both cases bound-bound transitions between atomic quantum energy levels cannot occur. Fermi gases have
quantum energy levels of their own however, and bound-bound transition between these energy levels are possible.
3.3.2.2 Bound-Free Interactions
The process in which a bound electron absorbs a photon with energy at least equal to its binding energy, and thereby is dislodged from the atom is called the photoelectric effect. This process can
occur with any photon more energetic than the binding energy. The cross section for this process is quite significant. When matter is being heated by a thermal photon flux (that is, the matter is not
in thermal equilibrium with the flux), photoelectric absorption tends to dominate the opacity.
The reverse process by which atoms capture electrons and emit photons is called radiative electron capture. The balance between these two processes maintains the ionization state of an equilibrium
3.3.2.3 Free-Free Interactions
There are two principal mechanisms by which photons interact with free electrons. These are a photon emission process called bremsstrahlung (and its reverse absorption process), and photon
scattering. Naturally, these are the only processes that occur in a completely ionized gas (atoms are completely stripped). In a highly ionized gas (all atoms are ionized, and most but not all of the
electrons have been removed) these processes often dominate the opacity. Theoretically free-free interactions are much easier to describe and analyze than the bound-bound and bound-free processes.
3.3.2.3.1 Bremsstrahlung Absorption and Emission
The term "bremsstrahlung" is German and means "slowing down radiation". It occurs when an electron is slowed down through being scattering by an ion or atom. The momentum, and a small part of the
energy, is transferred to the atom; the remaining energy is emitted as a photon. Inverse bremsstrahlung (IB) occurs when a photon encounters an electron within the electric field of an atom or ion.
Under this condition it is possible for the electron to absorb the photon, with the atom providing the reaction mass to accommodate the necessary momentum change. In principal bremsstrahlung can
occur with both ions and neutral atoms, but since the range of the electric field of an ion is much greater than that of a neutral atom, bremsstrahlung is a correspondingly stronger phenomenon in an
ionized gas.
In a bremsstrahlung event, we have:
m_e*d_v_e = m_i*d_v_i
due to the conservation of momentum (m_e and d_v_e are the electron mass and velocity change, m_i and d_v_i are for the ion). Which gives us:
m_i/m_e = d_v_e/d_v_i
Since m_i/m_e (the mass ratio between the ion and electron) is 1836*A (where A is the atomic mass of the ion), the velocity change ratio is 1/1836*A. Kinetic energy is proportional to m*v^2, so the
kinetic energy change for the ion is only about 1/1836*A of the energy gained or lost by the electron. Bremsstrahlung/IB is thus basically a mechanism that exchanges energy between photons and
electrons. Coupling between photons and ions must be mediated by ion-electron collisions, which requires on the order of 1836*A collisions.
Unlike the bound-bound and bound-free processes, whose macroscopic cross section is proportional to the density of matter, the bremsstrahlung/IB cross sections increase much faster with increasing
density. It thus tends to dominate highly ionized, high density matter.
The absorption coefficient k_v from bremsstrahlung (assuming a Maxwellian electron velocity distribution at temperature T) in cm^-1 is:
Eq. 3.3.2.3.1-1
k_v = 3.69 x 10^8 (1 - Exp(-h*nu/k*T))(Z^3 * n_i^2)/(T^0.5 * nu^3)
Eq. 3.3.2.3.1-2
k_v = 2.61 x 10^-35 (1 - Exp(-pe/k*T))(Z^3 * n_i^2)/(T^0.5 * pe^3)
where Z is the ionic charge, n_i is the ion density, T is the electron temperature (K), and nu is the photon frequency, h is Planck's constant (6.624 x 10^-27 erg cm^2/sec), k is Boltzmann's
constant, pe is photon energy (eV), and the other units are CGS.
We can compute the effective overall absorption coefficient by averaging across the frequency spectrum:
Eq. 3.3.2.3.1-3
k_1 = 6.52 x 10^-24 (Z^3 * n_i^2)/(T^3.5)
The absorption mean free path (in cm)is simply 1/k_1:
Eq. 3.3.2.3.1-4
l_1 = 1.53 x 10^23 (T^3.5)/(Z^3 * n_i^2)
The total amount of energy emitted per cm^3 per sec (assuming electron and photon thermal equilibrium) is:
Eq. 3.3.2.3.1-5
e = 1.42 x 10^-27 Z^3 n_i^2 T^0.5 ergs/cm^3-sec
The mean free path formulae given above is based on the assumption that all photon emissions are spontaneous (i.e. the medium is optically thin). Compensating for stimulated emission gives us the
Rosseland mean free path for bremsstrahlung, which is longer than the spontaneous value by a constant factor:
Eq. 3.3.2.3.1-6
l_R = 4.8 x 10^24 (T^3.5)/(Z^3 * n_i^2)
3.3.2.3.2 Scattering
All of the processes described so far are absorption and emission processes which involve photon destruction and creation, and necessarily exchange substantial amounts of energy between matter and
the radiation field. Photon scattering does not, in general, involve significant exchanges in energy between particles and photons. Photon direction and thus momentum is changed, which implies a
momentum and energy change with the scattering particle. But photon momentum is usually so small that the energy exchange is extremely small as well.
3.3.2.3.2.1 Thomson Scattering
The only way a photon can interact with an electron in the absence of a nearby atom or ion is to be scattered by it. In classical physics this scattering process (which is due to the electron acting
as a classical oscillator) cannot change the photon energy and has a fixed cross section known as the Thomson cross section, which is 6.65 x 10^-25 cm^2. This is multiplied by the electron density to
obtain the scattering coefficient k_s. The scattering mean free path is then:
Eq. 3.3.2.3.2.1-7
l_s = 1.50 x 10^24 /(Z * n_i)
Referring to Eq. 3.3.3.1-3, above we can see that at a sufficiently high temperature, the bremsstrahlung absorption coefficient may become smaller than the Thomson coefficient, which will then tend
to control radiation transport.
3.3.2.3.2.2 Compton Scattering
When the photon energy becomes comparable to the electron rest-mass (511 KeV), the photon momentum is no longer negligible and an effect called Compton scattering occurs. This results in a larger
energy-dependent scattering cross section. Compton scattering transfers part of the energy of the photon to the electron, the amount transferred depends on the photon energy and the scattering angle.
511 KeV is an energy much higher than the typical thermal photon energies encountered in our domain of interest. At high thermonuclear temperatures (35 KeV) a significant proportion of photons in the
upper end of the Planck spectrum will undergo this process.
In a hot gas, where electrons have substantial kinetic energy, inverse Compton scattering also occurs which transfers energy from the electron to the photon.
3.3.3 Opacity Laws
As noted earlier, the actual opacity of a material is the sum of the absorption and scattering coefficients:
Eq. 3.3.3-1
k_t = k_a + k_s
The total photon mean free path is then:
Eq. 3.3.3-2
l_phot = 1/k_t = 1/(k_a+k_s) = 1/((1/l_a)+(1/l_s)) = (l_a*l_s)/(l_a+l_s)
The absorption and scattering coefficients, k_a and k_s, are in turn the sum of all of the component absorption and scattering processes described in the subsections above. In some cases, not all of
these are present to a significant extent, but in others there may be many contributors. In performing photon transport calculations though, it is very undesirable to individually model and compute
each interaction process. These processes tend to be rather complicated in form and, at worst, are almost intractably difficult to compute. Simulations that include photon transport as but one
component would be exorbitantly expensive computation-wise if these processes were always included explicitly. And it is completely impossible to conduct analytical studies of transport processes
without introducing radical simplifications.
Formulating laws of simpler form is essential to the practical study of photon transport. These laws are approximations that describe the dependency of the total opacity on temperature and density in
the range of interest. They are not universally valid and care must be taken to ensure that the law chosen is actually applicable in the range where it is to be applied.
A popular form of an opacity law is:
Eq. 3.3.3-3
K = (K_0 * rho^C * T^-m) + K_1
where K, K_0, and K_1 are the opacities in cm^2/g, rho is density, T is temperature, and C and m are constants. According to this law, at high temperatures the opacity converges to K_1, while as T
approaches zero the opacity goes to infinity (and thus the MFP drops to zero).
Opacity laws are simplest when we can assume complete ionization of the material. For an optically thick medium we can easily derive these factors from Equation 3.3.2.3.1-6 and 3.3.2.3.1-7: m = 3.5;
C= 1; and
K_0 = 2.1x10^-25 * Z^3 * (Avg/A)^2;
K_1 = 6.65x10^-25 * Z * (Avg/A);
where Z is the atomic number, Avg is Avogadro's Number, A is the atomic mass. The first term is due entirely to bremsstrahlung, and the second term (K_1) is due to Thomson scattering.
For fusion fuels, usually mixtures of hydrogen, lithium, and helium, complete ionization can be assumed due to their relatively low ionization temperatures, and this law is applicable. Higher Z
materials that may be used in the construction of nuclear weapons, like carbon, aluminum, iron, lead, tungsten, and uranium must take into account the effects of bound electrons. For these higher Z
materials an m value of 1.5 to 2.5 is typical. For very high Z materials, like uranium, an m value of 3 is commonly assumed. Since absorption through bound electrons is due to photon-atom (or
photon-ion) interactions, for a given ionization state opacity should be roughly proportional to atom (ion) density giving us C = 0. Strict proportionality doesn't occur since the ionization state is
affected by density.
The appropriate values of K_0 for high Z materials at the densities and temperatures of interest are not readily available however. In the United States, opacities for elements with Z > 71 are
still classified. Even for elements with Z < 71, for which the opacities are not classified, U.S. government data has not yet been released, and little information from other sources appears
available in the public domain.
Parameters for the opacity law can be derived for some elements with Z up to 40 from figures published in Los Alamos Report LAMS-1117. This document covers temperatures (0.5 - 8 KeV) and densities
(1-30 g/cm^3 for carbon, 3.5-170 g/cm^3 for copper) in the range of interest for radiation implosion systems.
│Table 3.3.4-1. Opacity Law Parameters │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │
│Element │Z │m │C │K_0 │K_1 │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │
│Carbon │6 │4.9 │0.92 │0.074 │0.22 │Aluminum│13│5.0│0.93│1.4│0.21│Iron│26│2.9│0.62│6.6│0.15│Copper│29│2.7│0.55│7.0│0.14│Zirconium│40│1.8│0.72│1.2│0.15│
Although some general trends seem apparent in the table above, they cannot be extrapolated easily to much higher Z values. We can see that zirconium disrupts trends in C and K_0 indicated by Z=6
through Z=29. As we ascend the periodic table and electron shells are successively filled, and new ones started, abrupt changes in ionization behavior, and hence opacity will occur.
3.3.4 Radiation Transport
The transport of radiation through matter often alters the physical state of matter in significant ways. Matter absorbs and emits radiation which leads to it heating up and cooling down. The
existence of significant radiation transport implies the existence of energy density and temperature gradients, which in turn imply the existence of pressure gradients. Given sufficient time,
pressure gradients may lead to significant fluid flow, and corresponding changes in the density of matter. Problems that include both the motion of matter and radiation are the domain of radiation
hydrodynamics, which is discussed in another subsection below. This subsection generally treats the properties of matter as being static (not affected by the transport process), although the effects
of heating are considered under radiation heat conduction.
When a ray of photons pass through a layer of matter, the initial flux is attenuated according to the law:
Eq. 3.3.4-1
I_x = I_0 * Exp[-x/l_phot]
The flux attenuates exponentially, decreasing to 1/e over a thickness equal to the photon MFP. The ratio x/l_phot is called the "optical thickness", usually designated "tau". If the optical
coefficient is not constant through a layer, then its optical thickness may be found simply be integrating k_t along the photon path.
The energy flux actually deposited in a thin layer of thickness dx located at x_1 is:
Eq. 3.3.4-2
I_x = k_a*dx*I_x_1 = k_a*dx*I_0*Exp[-x_1/l_phot]
since only the absorption cross section actually deposits energy in the layer.
This model of pure attenuation assumes that photon emission by the layer is not significant. If photon emission does occur then instead of merely attenuating towards zero, the photon flux will
converge to an equilibrium value, e.g. the radiation field and matter will relax to state of equilibrium. The photon MFP provides the distance scale required for establishing equilibrium, and
accordingly is sometimes called the "relaxation length". Since the velocity of photons is c (the speed of light), the relaxation time is l_phot/c.
An additional limitation of this model which remains even when photon emission is taken into account is that it treats only a "ray" of photons - a beam in which all photons travel in the same
direction. In practice, there is always an angular distribution of the photon flux around the net direction of radiation flow. When the flux drops off rapidly as the angle from the net direction of
flow increases, then the "ray" model remains a good approximation.
When there is a substantial flux at large angles from the net direction of flow, then other models of photon transport are needed. Photon fluxes with large angular spreads inevitably arise when the
flux travels through a medium that is several optical thicknesses across, since photons are quickly scattered in all directions. Two important approaches to approximating such situations are
radiation diffusion and radiation heat conduction.
A final limitation should be noted concerning the "ray" model. For a beam of photons originating outside of the optically thin medium and travelling through it, the spectral distribution of the flux
can be anything at all - a monoenergetic flux like a laser beam, a Boltzmann distribution, etc. A single absorption and scattering cross section can only be used if the spectral cross sections are
appropriately weighted for the flux spectrum.
3.3.4.1 Radiation Diffusion
Radiation diffusion (based on the application of diffusion theory) describes photon transport where the energy density gradient is small, i.e. in conditions of approximate local thermal equilibrium
(LTE). This situation exists in the interior (at least one optical thickness from the surface) of any optically thick body.
The rate of energy is determined by the diffusion equation:
Eq. 3.3.4.1-1
S = -((l_phot*c)/3) * Grad(U)
where S is the energy flux (flow rate), c is the speed of light, U is the energy density, and Grad(U) is the energy density gradient (the rate at which energy density changes with distance along the
vector J). Due to the gradual change in state in diffusion, if the thickness of the body is x then the flux can be approximated by:
Eq. 3.3.4.1-2
S = ((l_phot*c)/x) * U
That is, the flux is inversely proportional to the optical thickness.
During diffusion, photons are scattered repeatedly and advance in a random walk, travelling much farther than the straight-line distance. In the diffusion process a photon undergoes on the order of
(x/l_phot)^2 interactions which the material while advancing the distance x. The characteristic diffusion time t_diff is thus (x^2/(c*l_phot)). When the physical scale of a system is much larger than
the photon MFP, the photon field is effectively trapped and is always in local equilibrium with the state of the material, and it is possible for the material to flow faster than the photon flux
itself. In the case of stars, where l_phot is on the order of a millimeter, but x is on the order of a million kilometers, it can take millions of years for photons created in the interior to diffuse
to the surface (they are absorbed and reemitted many times during the journey of course)!
It is interesting to note that the Rosseland mean free path is equal to the spectral mean free path when the energy of photons is given by:
hv = 5.8 kT
This means that when a body is optically thick, and radiant energy transport occurs through radiation diffusion, the main energy transport is due to photons with energies much higher than the average
particle kinetic energy. On the other hand when a body is optically thin, and energy is radiated from the entire volume, then most of the energy is transported by photons with energies only
moderately higher than the average kinetic energy.
This phenomenon is almost always true to some extent, and can greatly affect radiation transport. If the MFP increases with energy (as it does with bremsstrahlung in optically thick bodies), then
energy transport is due to very energetic photons which may only comprise a small fraction of the radiation field. On the other hand, the converse may be true. This is the case during early fireball
growth in nuclear explosion, when only the radiation in the normal optical range can escape to infinity from the X-ray hot fireball.
3.3.4.2 Radiation Heat Conduction
This approximation of radiation transport is very closely related to the diffusion approximation, and like diffusion applies when LTE exists. The principle difference between radiation heat
conduction and diffusion is that the conduction equations are based on temperature gradients, rather than energy density gradients. This is important because at LTE, the temperature of matter and
radiation are equal, and the physical state and thermodynamic properties of matter are a function of temperature. If we want to take into account the effects of radiation heating on matter then we
must treat the problem in terms of temperature.
3.3.4.2.1 Linear Heat Conduction
Before considering radiation heat conduction, it is helpful to first look at ordinary heat conduction - the familiar conduction process encountered in everyday life.
The ordinary conductive heat flux is given by:
Eq. 3.3.4.2.1-1
S = -kappa * Grad(T)
where S in the flux, kappa is the coefficient of thermal conductivity of the material, and Grad(T) is the temperature gradient. Since the conductive flux is proportional to the temperature gradient
this is called linear heat conduction.
The basic behavior of linear heat conduction can be observed by considering a uniform medium at zero temperature bounded by a flat surface on one side. If an arbitrarily thin layer at this surface
were instantly heated by the application of a pulse of energy E (in erg/cm^2), how would the heat flow through the medium? Initially the temperature at the surface is some finite (but arbitrarily
high) temperature T_0. At the boundary of this heated layer the temperature gradient is infinite since it drops from T_0 to zero degrees in zero distance. Since the flux (rate of flow) is
proportional to the temperature gradient, the rate of flow is infinite as well. At any point where a non-zero temperature is adjacent to the unheated medium this will be true. Heat thus instantly
diffuses throughout the medium, establishing a temperature profile (and temperature gradient) that is highest at the heated surface and drops to zero only at infinity. The rate of flow drops with the
temperature gradient though, so at great distances the actual flow is extremely small. The temperature profile assumes a Gaussian distribution in fact, with nearly all the thermal energy located in a
region "close" to the surface.
Figure 3.3.4.1-1. Profile of Linear Heat Conduction
What does "close" mean? The scale of the region is determined by the elapsed time since the temperature jump was introduced, and the thermal diffusivity of the medium (which is usually designated by
chi). The thermal diffusivity is given by:
Eq. 3.3.4.2.1-2
chi = kappa/(rho * c_p)
where kappa is the thermal conductivity of the medium, rho is its density, and c_p is the specific heat at constant pressure. The thermal diffusivity of a gas is approximately given by:
Eq. 3.3.4.2.1-3
chi = (l_part * v_mean)/3
where l_part is the particle (molecule, atom, etc.) collision mean free path, and v_mean is the mean thermal speed of the particles.
The actual temperature distribution function is:
Eq. 3.3.4.2.1-4
T = Q * Exp[(-x^2)/(4*chi*t)] / (4Pi * chi * t)^0.5
where Q is in deg/cm and is given by:
Eq. 3.3.4.2.1-5
Q = E/(rho*c_p)
If the process occurs at constant pressure, or:
Eq. 3.3.4.2.1-6
Q = E/(rho*c_v)
Q provides a scale factor for the heat profile since it is the surface energy density, divided by the ability of an infinitely thin layer to absorb heat.
if constant volume is maintained (c_v is the specific heat at constant volume). From this we can see that the depth of the region where most of energy is concentrated is on the order of:
Eq. 3.3.4.2.1-7
x ~ (4*chi*t)^(1/2)
At a depth of a few times this value the temperature becomes very small. The depth x can thus be taken as a measure of the depth of the penetration of heat.
Of course the prediction of infinite heat propagation velocity (even at infinitesimally small energy flows) is not physically real. In a gas it is actually bounded by the ability of "hot" particles
to travel several times the MFP without undergoing collisions, and is thus limited to the velocity of the particles present, and is related to the local velocity of sound (the thermal energies and
temperatures involved here are so small that they are utterly insignificant however).
3.3.4.2.2 Non-Linear Heat Conduction
We can easily derive a temperature gradient based equation for radiation transport from the radiation diffusion equation Eq. 3.3.4.1-1 by substituting the blackbody radiation energy density
expression (see Eq. 3.1.7-6) for U:
Eq. 3.3.4.2.2-1
S = -((l_phot*c)/3) * Grad(U)
= -((l_phot*c)/3) * Grad((4*sigma*T^4)/c)
To put this equation in the form of Eq. 3.3.4.2.1-1 we need only compute the appropriate value for kappa:
Eq. 3.3.4.2.2-2
kappa = ((l_phot*c)/3) * dU/dT = ((l_phot*c)/3) * c_rad
= (16 * l_phot * sigma * T^3)/3
We can see immediately from the above equation that radiation heat conduction is indeed non-linear: kappa has two components that vary with T. The first component is the photon mean free path,
l_phot, which is determined by the (temperature dependent) opacity and drops to zero at zero temperature. The second component, dU/dT, is the radiation specific heat (also designated c_rad):
Eq. 3.3.4.2.2-3
c_rad = dU/dT = (16 * sigma * T^3)/c
which varies as the third power of T and thus also drops to zero at T=0. One immediate consequence of this is that as T approaches zero, so does the thermal conductivity of the medium. The
instantaneous permeation of a medium by heat energy as predicted by the linear conduction model thus does not occur, in non-linear conduction there is always a definite boundary between heated and
unheated matter that advances at a finite speed.
The existence of a well defined boundary with a finite speed means that radiation heat conduction creates a "thermal wave" (also called a Marshak wave, or a radiation diffusion wave) with the
boundary marking the edge of thermal wave front, and the finite speed being the wave velocity. On the other hand, since radiation conduction is mediated by photons which travel at the speed of light,
the actual bounding velocity of thermal waves is much higher than classical kinetic heat conduction which is limited by the speed of sound. The temperature profile of a thermal wave differs markedly
from the Gaussian distribution of linear heat conduction. In a thermal wave the temperature in the heated zone is essentially constant everywhere, except near the front where is rapidly drops to
zero. The thickness of the zone where this temperature drop occurs is the wave front thickness (see Figure 3.3.4.2.2-1).
Figure 3.3.4.2.2-1. Profile of a Thermal (Marshak) Wave
Before we can progress further in studying thermal waves we must obtain expressions for kappa and chi that incorporate the opacity of the medium. We are particularly interested in an expression that
is valid at low temperatures, since this is what determines the rate of advance of the heated zone. At low temperatures the opacity becomes very large, so we can neglect the constant factor K_1 in
3.3.-1. The photon MFP is the reciprocal of the opacity-density product, so we can represent l_phot by:
Eq. 3.3.4.2.2-4
l_phot = 1/((K_0 * rho^C * T^-m) * rho) = A * T^m, m > 0
where A is a constant given by:
Eq. 3.3.4.2.2-5
A = 1/(K_0 * rho^(C+1))
Substituting Eq. 3.3.4.2.2-4 into Eq. 3.3.4.2.2-2 give us:
Eq. 3.3.4.2.2-6
kappa = (16 * A * T^m * sigma * T^3)/3 = (16 * A * sigma * T^n)/3 = B*T^n,
where n = m + 3, and B is a constant given by:
Eq. 3.3.4.2.2-7
B = (16 * A * sigma)/3 = (16*sigma) / 3*(K_0 * rho^(C+1))
In a fully ionized gas where bremsstrahlung dominates n = 6.5, in the multiply ionized region it is usually in the range 4.5 < n < 5.5.
For radiation conduction the expression for chi (Eq. 3.3.4.2.1-2) must use c_v rather than c_p:
Eq. 3.3.4.2.2-8
chi = kappa/(rho * c_v) = ((l_phot*c)/3) * c_rad/(rho*c_v)
= (B * T^n)/(rho * c_v) = a * T^n,
where a is a constant:
Eq. 3.3.4.2.2-9
a = B/(rho * c_v)
Note that chi now includes both the specific heat of matter and the specific heat of radiation since two fluids are involved in the transport process.
We can now give an expression for the temperature profile in a thermal wave:
Eq. 3.3.4.2.2-10
T = ((n*v/a) * dx)^(1/n)
This equation gives T where dx is the distance behind the wave front, where n and a are as defined above, and v is the front velocity. This equation indicates that the slope dT/dx is extremely steep
when dx is near zero, then flattens out and eventually increases extremely slowly. The scale of the front thickness where this transition occurs is related to the value of (n*v/a). If there some
bounding temperature for T, T_1, then we can express the effective front thickness (dx_f) by:
Eq. 3.3.4.2.2-11
dx_f = (a * T_1^n)/(n*v) = chi[T_1]/(n*v)
Determining exact values for T_1 and v (the position of the front, x_f, as a function of time) require a more complete statement of the problem conditions, but a qualitative expression of the
relationship between x_f, time, and the scale parameters Q and a, can be given:
Eq. 3.3.4.2.2-12
x_f ~= (a * Q^n * t)^(1/(n+2)) = (a * Q^n)^(1/(n+2)) * t^(1/(n+2))
For the front velocity v:
Eq. 3.3.4.2.2-13
v = dx_f/dt ~= x_f/t ~= [(a*Q^n)^(1/(n+2))]/[t^((1/(n+2))-1)]
~= (a*Q^n)/X_f^(n+1)
These relations show that the front slows down very rapidly. With the parameters a and Q being fixed, the penetration of the front is the (n+2)th root of the elapsed time. If n~=5 then x_f ~ t^(1/7),
and v ~ t^(-6/7) ~ x_f^(1/6). That is, the distance travelled is proportional to the seventh root of the elapsed time, and the velocity is the inverse sixth root of distance. Since the total quantity
of heat (determined by Q) is fixed, the temperature behind the wave front is a function of how far the front has penetrated into the medium. The driving temperature drops rapidly as the wave
penetrates deeper, leading to a sharp reduction in thermal diffusivity (chi), and thus in penetration speed.
It should be noted that if Eq. 3.3.4.2.2-12 is reformulated so that the (temperature dependent) parameter chi replaces Q we get (to within a constant multiple) the same law for the penetration of
heat that was seen in linear heat conduction (Eq. 3.3.4.2.1-7):
Eq. 3.3.4.2.2-14
x ~ (chi*t)^(1/2)
A specific situation of interest where Q is fixed is the instantaneous plane source problem described for linear heat conduction - an instantaneous pulse of thermal energy applied to a plane bounding
the medium of conduction. In this case an exact solution to this problem is possible, but the result includes a term that is a very complicated function of n (see Zel'dovich and Raizer, Physics of
Shock Waves and High-Temperature Hydrodynamic Phenomena, for details). Below is the solution equation with an exponential approximation used to simplify it.
Eq. 3.3.4.2.2-15
x_f = Exp[0.292 + 0.1504*n + 0.1*n^2]*(a * Q^n * t)^(1/(n+2))
The only difference between this equation and Eq. 3.3.4.2-13 is the addition of the function of n.
Another interesting case is the problem of radiation penetration into a plane with constant driving temperature. This situation arises physically when the plane surface is in thermal equilibrium with
a large reservoir of heat behind it. Examples where this situation include certain phases of fission bomb core disassembly, and certain phases of radiation implosion in thermonuclear weapons. Since
the peak driving temperature is constant, the value of chi remains constant, and we get immediately from Eq. 3.3.4.2.2-14:
Eq. 3.3.4.2.2-16
x ~ (chi*t)^(1/2) ~ (a * T^n * t)^(1/2)
Under these conditions the radiation wave still decelerates at a respectable pace, but nearly as fast as in the single pulse example. An estimate of the constants for this equation are available for
temperatures around 2 KeV (23 million degrees K) when the material is cold uranium from the declassified report LAMS-364, Penetration of a Radiation Wave Into Uranium. This is a set of conditions of
considerable interest to atomic bomb disassembly, and of even greater interest to radiation implosion in thermonuclear weapons. When x is in cm, T is in KeV, and t is in microseconds, the equation
Eq. 3.3.4.2.2-17
x = 0.107 * T^3 * t^(1/2)
For example, after 100 nanoseconds at 2 KeV the wave will have penetrated to a depth of 0.27 centimeters. Note that this equation implies that for uranium n=6.
If the driving temperature actually increases with time, instead of simply remaining constant, then the radiation wave's propensity to slow down with time and distance will become even more reduced.
If we continue to magnify the rate of temperature increase with time, then at some point the radiation wave velocity will remain constant, or even increase with time. As one example of this
situation, we can consider the case where the temperature behind the wave front is driven by a constant flux S into the medium. In this situation the temperature behind the wave constantly increases.
The temperature (averaged across the heated zone) is approximately:
Eq. 3.3.4.2.2-18
T ~ [(S * x_f)/(a * c_v * rho)]^(1/(n+1))
~ [S^2/(a * c_v * rho)]^(1/(n+2)) * t^(1/(n+2))
and the heated zone thickness is:
Eq. 3.3.4.2.2-19
x_f ~ [(S^n * (a * c_v * rho)]^(1/(n+2)) * t^((n+1)/(n+2))
For n=5, x_f~t^(6/7), T~t^(1/7)~x_f^(1/6), and v~t^(-1/7). This is a very slowly decelerating radiation wave.
An important question is "Under what conditions can the assumption of a stationary medium of conduction be made?". We explicitly assumed that the process of heating the material did not change its
density, nor set in motion through the pressure gradients created. We have already seen that a constant density and temperature, there will be a characteristic pressure produced through particle
kinetic energy and radiation pressure. We will see in the following section, Section 3.4 Hydrodynamics, that a given pressure and density will produce a pressure wave, which may be a shock wave, with
a constant velocity. If the radiation front is decelerating, then at some point, if the medium is thick enough, and the heat transmission process is uninterrupted, this pressure wave must overtake
the radiation front. When this occurs, we move into the realm of radiation hydrodynamics proper, discussed in detail in Section 3.5 Radiation Hydrodynamics.
3.4 Hydrodynamics
Hydrodynamics (also called gas or fluid dynamics) studies the flow of compressible fluids, and the changes that occur in the state of the fluid under different conditions. Radiation gas dynamics adds
the energy transfer effects of thermal radiation. This becomes significant only when the gas is hot enough for radiation transport to be appreciable compared to the energy transport of fluid flow.
Before dealing with the effects of radiation, it is necessary to review the basics of conventional hydrodynamics.
3.4.1 Acoustic Waves
Any local pressure disturbance in a gas that is not too strong will be transmitted outward at the speed of sound. Such a spreading disturbance is called an acoustic wave. This speed, designated c_s,
is a function of gas pressure, density (rho) and gamma; or equivalently by the temperature and R (the universal gas constant per unit mass):
Eq. 3.4.1-1
c_s = (gamma*pressure/rho)^0.5 = (gamma*R*T)^0.5
Since we have previously dealt with k (Boltzmann's constant), n (particle density), and rho (mass density) but not R is useful to note that:
Eq. 3.4.1-2
R = k*n/rho = k*m_part
where m_part is the average mass of a particle.
If the local pressure disturbance is cyclical in nature, creating alternating high and low pressures, a series of acoustic waves is produced that evokes the sensation of sound. The pressure of sound
waves is usually quite weak compared to ambient air pressure, for example even a 130 dB sound (an intensity that inflicts pain and auditory damage on humans) causes only a 0.56% variation in air
3.4.2 Rarefaction and Compression Waves
A disturbance that decreases gas pressure is called a rarefaction wave, a disturbance that increases pressure is a compression wave. With either type of acoustic wave the change in state of the gas
is essentially adiabatic.
When a gas is adiabatically expanded the speed of sound decreases, when it is adiabatically compressed c_s increases. The speed of sound thus decreases behind the leading edge of a rarefaction wave,
and increases behind the leading edge of a compression wave. Because of this rarefaction waves tend to stretch out as the propagate, each portion of the disturbance progressively lagging farther and
farther behind the leading edge which propagates at the speed of the undisturbed gas. Lagging portions of compression waves tend to catch up with the leading edge since they propagate at higher
speeds through the disturbed (compressed) gas. The pressure profile of the compression wave thus becomes steeper and steeper. When a portion of the wave catches up with the leading edge, the slope at
that point becomes infinitely steep, i.e. a sudden pressure jump occurs. A wave that causes an instantaneous pressure increase is called a shock wave.
3.4.3 Hydrodynamic Shock Waves
Compression waves are fundamentally unstable. They naturally tend to steepen in time and eventually (if they propagate long enough) will become infinitely steep: a shock wave. On the other hand,
rarefaction waves are stable and a rarefaction shock (a sudden pressure drop) is impossible.
3.4.3.1 Classical Shock Waves
The idealized representation of shock waves used in shock wave theory assumes the pressure jump at the front of the wave is a mathematical discontinuity. When a particle of matter passes through an
ideal shock front it undergoes an instantaneous change in state. The pressure it is subjected to increases, it is compressed to higher density, its temperature increases, and it acquires velocity in
the direction of shock wave travel.
A classical shock wave is distinguished from non-classical shocks by the assumption that the only process of energy transport or change of state present is the shock compression itself.
A diagram of such a shock wave is given below. The subscript 0 designates the initial unshocked state, and 1 designates the shock compressed state.
Figure 3.4.3.1-1. Classical Shock Wave
There is a fundamental difference between continuous compression waves and shock waves. The change of state in a compression wave is always adiabatic no matter how steep the pressure profile. It can
be shown theoretically that in an instantaneous pressure jump adiabatic compression is impossible, there must be an increase in entropy.
Of course truly instantaneous changes cannot occur in reality. So, just how sudden is this shock transition really? It turns out that the shock transition occurs over a distance which is on the order
of the molecular collision mean free path in the substance propagating the shock wave. That is to say, only one or two collisions are sufficient to impart the new shocked state to matter passing
through the shock front. In condensed matter, this path length is on the order of an atomic diameter. The shock front is thus on the order of 1 nanometer thick! Since shock waves propagate at a
velocity of thousands of meters per second, the transition takes on the order of 10^-13 seconds (0.1 picoseconds)!!
Physically the change in entropy in real shock waves is brought about by viscous forces and heat conduction connected with the discrete nature of matter at the scale of the shock front thickness.
Since even very fast continuous compression occurs many orders of magnitude slower than shock compression, with a compression front thickness correspondingly thicker, viscosity is almost always
negligible (thus validating the idea of adiabatic compression).
The shock velocity D is always supersonic with respect to the unshocked matter. That is, D is always higher than the speed of sound c_s for the material through which the shock is propagating.
The shock velocity is always subsonicwith respect to the compressed material behind the front. That is, D - u_p is always less than the speed of sound in the shock compressed material. This means
that disturbances behind the shock front can catch up to the shock wave and alter its behavior.
Obviously a shock wave does work (expends energy) as it travels since it compresses, heats, and accelerates the material it passes through. The fact that events behind the shock front can catch up to
it allows energy from some source or reservoir behind the shock wave (if any) to replenish the energy expended (a fact of great significance in detonating high explosives).
Mathematically it is often simpler to view a shock front as stationary, and assume that matter is flowing into the shock front at a velocity u_0 equal to -D, and that it is compressed and decelerated
to u_1 by passing through the front (u_1 = u_p - D). This is actually the natural way of viewing shock waves in certain situations, like the supersonic flow of air into a nozzle.
It is also often convenient to use the "specific volume" instead of density to simplify the mathematics. Specific volume v_0 is defined as 1/rho_0; v_1 = 1/rho_1.
There is a simple relationship between u_0, u_1, and the density ratio between the shocked and unshocked material (rho_1/rho_0). It can easily be understood by considering the requirement for the
conservation of mass. Since matter cannot accumulate in the infinitely thin shock front, the mass flow rate on either side must be the same. That is:
Eq. 3.4.3.1-1
u_0*rho_0 = u_1*rho_1
which gives:
Eq. 3.4.3.1-2
u_0/u_1 = rho_1/rho_0 = D/(D - u_p)
We can gain some intuitive idea of the behavior of a shock wave by examining some limiting cases, very weak and very strong shocks. When we talk about the strength of a shock wave a number of
different measures of "strength" can be used. In ordinary gases (i.e. not condensed matter) the pressure ratio p_1/p_0, or the Mach number (D/c_s) are convenient. In either case a very weak shock was
a value not much higher than 1, a very strong shock has a value of 10-100 (easy to reach for a gas at atmospheric pressure).
Since the "initial pressure" in condensed matter cannot be measured directly, we use the absolute pressure increase instead of pressure ratios for measuring shock strength. Absolute shock velocity,
or more rarely Mach number, is also used. To reach similar limiting "strong shock" conditions for condensed matter, shock velocities of >30 km/sec, and pressures of 10-100 megabars are required, and
are beyond the reach of chemical explosives (maximum values are about 400 kilobars and 9 km/sec).
A very weak shock wave travels only very slightly faster than sound, it creates a small pressure increase behind the front, imparts a small velocity to the material it passes through, and causes a
extremely small entropy increase. The velocity of sound behind a weak shock is about the same as it is in front. The behavior of a weak shock wave is thus not much different from an acoustic wave (in
fact an acoustic wave can be considered to be the limiting case of a weak shock).
As shock strength increases, the proportions by which the energy expended by the wave is divided between compressive work, entropic heating, and imparting motion also changes. The density of matter
crossing the shock front cannot increase indefinitely, no matter how strong the shock. It is limited by the value of gamma for the material. The limiting density increase is:
Eq. 3.4.3.1-3
density_ratio = rho_1/rho_0 = v_0/v_1 = (gamma + 1)/(gamma - 1)
Consequently, for a perfect monatomic gas the limit for shock compression is 4. Material with higher gammas (e.g. essentially all condensed matter) has a lower compressive limit. In regimes where
abrupt increases in ionization energy absorption occur (initial ionization, initial electron removal from a new shell), the effective gamma can be driven down to the range of 1.20 to 1.25, leading to
density increases as high as 11 or 12. As shocks continue to increase in strength above such a regime the kinetic excitation again dominates, and the value of gamma rises back to 5/3 (until the next
such regime is encountered).
There is also a limiting ratio between the shock velocity D and u_p:
Eq. 3.4.3.1-4
u_p = D*(density_ratio - 1)/density_ratio
For a perfect gas this is 3/4 of the shock velocity.
In a strong shock, the limiting partition of energy between kinetic energy and internal energy (compression plus entropy) in the compressed material is 50/50. Since the degree of compression (and
thus compressive work) has a fixed limit, entropy must increase without limit. In an extremely strong shock (one where the limiting ratios are approached), almost all of the energy expended goes into
kinetic energy and entropic heating.
There are three basic equations, called the Rankine-Hugoniot shock relations (jump conditions), that taken together define the classical shock transition. They are easily derived from conservation
laws, in fact one of them has already been given (using the conservation of mass):
Eq. 3.4.3.1-5a Rankine-Hugoniot Relation 1a
u_0*rho_0 = u_1*rho_1
Eq. 3.4.3.1-5b Rankine-Hugoniot Relation 1b
D*rho_0 = (D - u_p)*rho_1
The mass entering the shock front in a unit of time dt (equal to A*dt*u_0*rho_0) undergoes a velocity change of u_0-u_1, and thus a momentum change of change A*dt*u_0*rho_0*(u_0-u_1). Conservation of
momentum requires that the momentum change be balanced by the force acting over the time dt (momentum is force*time). This force is (p_1 - p_0)*A*dt. We can then derive the second shock relation:
Eq. 3.4.3.1-6 Rankine-Hugoniot Relation 2
p_1 - p_0 = u_0*rho_0*(u_0-u_1) = D*rho_0*u_p
The kinetic energy entering the shock front in a unit of time dt is equal to mass*(u_0^2)/2, or A*dt*(u_0^3)*rho_0/2. The conservation of energy requires that the energy exiting the front be the
same, divided between the kinetic energy leaving the front and the change in internal energy. The kinetic energy leaving the shock front is obviously mass*(u_1)^2/2, and the energy converted to
internal energy must then be mass*(u_0^2 - u_1^2)/2. Dividing by the mass we get the third shock relation:
Eq. 3.4.3.1-7 Rankine-Hugoniot Relation 3a
E_1 - E_0 = 1/2 (u_0^2 - u_1^2) = 1/2 (2*u_p*D - u_p^2)
where (E_1 - E_0) is the change in internal energy per unit mass. Another useful form of this relation, obtained by making the appropriate substitutions, is:
Eq. 3.4.3.1-8 Rankine-Hugoniot Relation 3b
E_1 - E_0 = 1/2 (p_1 + p_0)(v_0 - v_1)
R-H relations 1 and 2 are independent of the material being shocked. Solving the third relation requires having an equation of state for the material. The form given in Eq. 3.4.3.1-8 is convenient
when we are dealing with traditional equations of state, which are expressed in terms of pressure and volume. For an EOS suitable for use with Eq. 3.4.3.1-7, see Subsection 3.4.3.3 below (Linear
Equation of State for Shock Compression).
If we use the gas law EOS, we can write an expression for specific volume as a function of the initial state of the material (p_0, v_0), gamma, and the shock pressure (p_1):
Eq. 3.4.3.1-9
v_1 = [p_0*v_0*(gamma + 1) + p_1*v_0*(gamma - 1)]/
[p_0*(gamma - 1) + p_1*(gamma + 1)]
Plotted on the a P-V diagram (pressure vs. volume, see 3.1.5 above) this function defines a Hugoniot curve. This curve is the locus of all possible shock states (produced by shocks of varying
strength) from a given initial state. The shock velocity D happens to be equal to the slope of the straight line (called the Rayleigh Line) connecting state 0 and state 1 on the Hugoniot. Relation 1b
can then give the particle velocity u_p.
For convenience, D can be calculated from:
Eq. 3.4.3.1-10
D^2 = v_0^2 * (p_1 - p_0)/(v_0 - v_1)
Or, if we are dealing with a gas we can include the equation of state and omit v_1 explicitly:
Eq. 3.4.3.1-11
D^2 = (p_1(1 + gamma) - p_0(1 - gamma))v_0/2
The Hugoniot curve lies above the isentropic curve defined by:
Eq. 3.4.3.1-12
p_1*(v_1^gamma) = p_0*(v_0^gamma),
although the two curves are very close together for weak shocks.
Below are curves for isentropic compression, and the shock Hugoniot for a perfect gas with gamma = 5/3. Note that the Hugoniot is very close to the isentrope at low pressures and volume changes, but
diverges rapidly as V/V_0 approaches 0.25 (the limiting shock compression).
Figure 3.4.3.1-2. Isentrope and Hugoniot Curves for a Perfect Gas
Since force*distance defines work done, and any two points in the P-V diagram define changes in pressure and volume, the area under a curve connecting two points gives the energy change involved in
moving between the two states. Expressed in differential form this is:
Eq. 3.4.3.1-13
dE = -p * dv
Adiabatic changes move along the isentropic curve, and the energy change is the area below it. Shock jumps however move instantaneously between the two points, and the energy change must be
calculated by the finite difference form:
Eq. 3.4.3.1-14
delta E = -(p_1 + p_0)/2 * delta v.
This is the area under the Rayleigh Line connecting the two states. The difference between this area and the area under the isentropic curve passing through state 0 and p_1 gives the increase in
entropy due to shock compression.
Here is an example of some actual shock Hugoniot data. It is a plot of the full Los Alamos Scientific Laboratory Shock Hugoniot data set for uranium (from LASL Shock Hugoniot Data, ed. S.P. Marsh,
1980) supplemented with a few higher pressure data points found in Skidmore, I.C. and Morris, E., "Experimental Equation of State Data for Uranium and its Interpretation in the Critical Region" (
Thermodynamics of Nuclear Materials, 1962). The pressure is in kilobars, and goes up to 6.45 megabars. For convenience (mine), the orientation of the plot differs from that traditionally used for
Hugoniot curves (such as the perfect gas Hugoniot above). The raw data is plotted with two approximation functions:
Eq. 3.4.3.1-15
For p < 165 kilobars
V/V_0 = 1 - 0.00059916*p
For 165 kilobars < p < 6.45 megabars
V/V_0 = 0.68632 + 0.28819*Exp[-p/600] - 0.000027021*p
(with pressure p in kilobars) that fits it quite accurately.
Figure 3.4.3.1-3. LASL Data Set for Uranium
3.4.3.2 Detonation Waves
A detonation wave is a shock wave in an explosive that is supported by the chemical energy released by reactions triggered by the change of state at the shock front. The ability of an explosive to
support a detonation wave is the definition of "high explosive" (HE). Low explosives, like black powder ("gun powder") and smokeless powder, merely burn very fast without producing true shock waves.
The chemical reaction in a high explosive takes much longer than the shock transition. The time scale is typically between 1 nanosecond and 1 microsecond, and is thus 1,000 to 1,000,000 times slower
than the shock jump. The thickness of the reaction zone thus ranges from on the order of several microns to several millimeters. The ability for the energy released by the high explosive reaction
products to reach the detonation front, and thus maintain it, is due to the fact that shock front propagation is subsonic with respect to the shocked explosive.
A given explosive has a characteristic detonation velocity, D. This is the steady state velocity of a detonation wave travelling through the explosive. Representative values for the explosives of
major interest in the design of nuclear weapons are given below, density and form are given for each measured velocity:
│ Table 3.4.3.2-1. Detonation Velocities of Explosives │
│Explosive │D (m/sec)│Density/Form│
│HMX (cyclotetramethylene tetranitramine) │9110 │1.89/pressed│
│RDX/cyclonite (cyclotrimethylenetrinitramine) │8700 │1.77/pressed│
│PETN (pentaerythritol tetranitrate) │8260 │1.76/pressed│
│Composition B (63 RDX/36 TNT/1 wax) │7920 │1.72/cast │
│TATB/TATNB (triamino trinitrobenzene) │7760 │1.88/pressed│
│DATB/DATNB (diamino trinitrobenzene) │7520 │1.79/pressed│
│TNT (trinitrotoluene) │6640 │1.56/cast │
│Baratol (76 barium nitrate/24 TNT) │4870 │2.55/cast │
│Boracitol (60 boric acid/40 TNT) │4860 │1.59/cast(?)│
│Plumbatol (70 lead nitrate/30 TNT) │4850 │2.89/cast │
The detonation velocity is closely related the energy density of the explosive. Generally the more powerful the explosive, the higher the detonation velocity. Not surprisingly, higher detonation
velocities correspond to higher detonation pressures.
Shock waves travelling at velocities higher than the steady detonation velocity (called a strong detonation) will tend to slow down with time to the steady state value (called a Chapman-Jouget, or
CJ, detonation). Shock waves that are slower than the detonation velocity (a weak detonation) will tend to accelerate (as long as they are strong enough to initiate the explosion process to begin
Besides the energetic reactions of the explosive, expansion of the compressed reaction product gases behind the detonation front also affect the detonation velocity. If the detonation wave is
propagating down a cylinder of explosive, gases can expand out to the side immediately behind the wave front. This robs energy from the detonation wave and tends to slow it down. The narrower the
cylinder, the greater the reduction in detonation velocity. If the cylinder is narrow enough, the detonation front cannot be sustained and it dies out entirely. In most explosives the drop in
velocity becomes significant when the diameter is in the range of 1-5 cm, with failure occurring with diameters in the range of 0.25-1.5 cm. A notable exception to this is PETN, which does not suffer
velocity decreases until the diameter is below 0.2 cm, with failure at only 0.08 cm (HMX does nearly as well). For this reason PETN is used to make a flexible detonating cord called "primacord". The
velocities in the table above were generally measured using 2.5 cm cylinders, and thus may not represent the maximum CJ detonation velocities.
Some insight into the detonation process can be gained by considering a detonation beginning at the rear surface of an unconfined block of explosive which is wide enough for the edge effect to be
negligible. Also assume the detonation front is a plane wave (not curved). The explosive is accelerated in the forward direction by passing through the detonation front, in other words, it gains
forward momentum. The conservation of momentum requires that there be a balancing rearward flow. This is brought about by the expansion of the detonation gases out the rear surface of the explosive
(a process known as escape).
The expansion of the reaction products (and a corresponding drop in pressure) actually begins as soon as the explosive passes through the shock front. As expansion proceeds with increasing distance
from the shock front, the pressure steadily drops, and the internal energy of the gases is progressively converted to kinetic energy directed opposite to the shock wave direction of travel. The rear
surface of the reaction zone (called the CJ surface) is defined by the point where the expansion has accelerated the gases rearward enough that they are travelling at exactly the local speed of sound
relative to the shock front (energy released behind this point cannot reach the front). Behind this surface the gases (which are still moving forward) continue to expand and decelerate until a point
is reached where they are motionless, from this point on the gases move rearward with increasing velocity.
The region of decreasing pressure behind the shock front is sometimes called the Taylor wave.
In a plane detonation propagating through a solid high explosive, the thickness of the zone of forward moving gases grows linearly with detonation wave travel, and is about half the distance the
detonation wave has traversed since initiation.
Figure 3.4.3.2-1. Detonation Wave Profile
3.4.3.3 Linear Equation of State for Shock Compression
There is a specialized but very useful equation of state that describes the behavior of materials under shock compression. This is called the Linear EOS or the Linear Hugoniot. This EOS describes a
very simple relationship between the velocity of the shock wave, and the particle velocity behind the shock:
Eq. 3.4.3.3-1
D = C + Su_p
where D is the shock velocity, u_p is the particle velocity, C is the speed of sound in the material at zero pressure, and S is a constant for the particular material.
Conservation of mass requires that the degree of compression, rho/rho_0, be equal to:
Eq. 3.4.3.3-2
rho/rho_0 = D/(D - u_p)
Substituting the Linear EOS for U_p, with appropriate rearrangements gives us:
Eq. 3.4.3.3-3
rho/rho_0 = S/(S + C/D - 1)
This implies that the limiting shock compression as D becomes large compared to C is:
Eq. 3.4.3.3-4
rho/rho_0 = S/(S - 1)
The Linear EOS turns out to be quite accurate for virtually all materials under a wide range of conditions, reaching into extreme shock pressures. That this is so is really quite surprising. A wide
variety of physical processes occur as materials are shocked from STP to extreme pressures and temperatures. In particular, it will be recalled from section 3.2.2 that the compressibility of the
material (indicated by gamma) steadily declines with pressure. No strong theoretical basis exists to explain the linear relationship, or the value of S.
A recent LANL report on this (General Features of Hugoniots, by J.D. Johnson, LA-13137-MS, April 1996) shows that the linear range extends up to the astonishing pressure of ~5 gigabars, and shock
velocities of ~200 km/sec.
A correspondence can be drawn though between the limiting density given by the linear EOS, and the limiting density of shock compression in a perfect gas:
Eq. 3.4.3.3-5
rho/rho_0 = S/(S - 1) = (gamma + 1)/(gamma - 1)
Which implies:
Eq. 3.4.3.3-6
gamma = 2S - 1, or S = (gamma + 1)/2
If we use the value of gamma for a perfect monatomic gas, 5/3, we get:
Eq. 3.4.3.3-7
S = 4/3 = 1.333
Interestingly, the value of S for the elements varies from about 0.9 to 1.9 (most between 1.1 and 1.6), with the distribution centered around 1.33.
Deviations in the linear relationship occur when significant phase changes occur. These show up as kinks in the slope, but affect the overall trend only slightly.
│Table 3.4.3.3-1. Linear Equation of State Parameters │
│Material │C (m/sec) │S │rho_0 │
│Li-6 Deuteride │6590 │1.09 │0.789 │
│Beryllium │7990 │1.124 │1.845 │
│Lithium │? │1.15 │0.534 │
│Uranium Hydride │2190 │1.21 │10.92 │
│Tungsten │4040 │1.23 │19.24 │
│Thorium │2180 │1.24 │11.68 │
│Perfect Monatomic Gas │- │1.333 │- │
│Uranium │2510 │1.51 │18.93 │
│Lead │2050 │1.52 │11.34 │
│Gold │3260 │1.56 │19.24 │
│Mercury │1800 │1.75 │13.54 │
│Iron │3770 │1.9 │7.84 │
│Plutonium │2300 │? │19.80 │
3.5 Radiation Hydrodynamics
Many elements of radiation hydrodynamics of have already been discussed: the properties of blackbody radiation and photon gases, and the mechanisms of interaction between radiation and matter. This
section focuses specifically on radiating fluid flow, in particular the limiting case of compressive flows - shock waves, and on how radiation can give rise to hydrodynamic flow.
3.5.1 Radiative Shock Waves
Classical shock waves assume the only process that affects the state of the shocked material is the compression jump at the shock front. In an extremely strong shock the temperature of the material
behind the front becomes high enough to radiate significantly, and the thermal radiation transport of energy must be taken into account. The key distinguishing feature of all radiative shock waves is
that the radiation of the shocked gas preheats the matter ahead of the kinetic shock jump. Conditions in radiative shock waves are always extreme enough to allow us to treat the shocked material as a
gas, and usually a perfect gas.
Classical shocks can be quite luminous of course, but the actual energy flux from radiation is insignificant compared to the hydrodynamic energy flux, resulting in negligible preheating. In air and
most other gases the preheating from a classical shock wave is virtually zero because the gas is also transparent to the emitted radiation.
High explosives are incapable of generating radiative shocks. Radiative shock waves arise in collisions at celestial velocities (meteors, comets, etc.), astrophysics (stellar dynamics), and nuclear
Radiative shock waves can be conveniently divided into four classes in order of increasing strength:
1. Subcritical Shocks
2. Critical Shocks
3. Supercritical Shocks
4. Radiation Dominated Shocks
Ionization effects occur in radiative shock regimes. Initial ionization occurs in the subcritical region. Additional shell ionization effects occur in the supercritical region, and (depending on the
element involved) perhaps in the subcritical region as well.
3.5.2 Subcritical Shocks
In these shocks the preheating is relatively small. More precisely, the temperature of the preheated gas is always less than the final (equilibrium) temperature of the gas behind the shock front. An
illustrative shock temperature profile is given below.
Figure 3.5.2-1. Subcritical Shock Wave
We can see that a small amount of preheating occurs immediately ahead of the shock. The thickness of the preheating layer is approximately equal to the average photon mean free path in the cold gas.
The shock jump heats the gas further, creating a temperature spike just behind the front. This layer of superheated gas cools by radiation to the final shock temperature, creating the preheated layer
in the process. The superheated gas layer is also about one photon mean free path in thickness.
In air at STP, radiation transport begins to be significant when the shocked gas temperature reaches some 50,000 degrees K. This corresponds to a shock velocity of 23 km/sec (Mach 70), and a
preheating temp of 6300 K. The preheating layer will be only 1 mm or so thick.
Subcritical radiative shocks are strong enough to achieve compressions close to the limiting value of (gamma + 1)/(gamma - 1); a factor of 4 for perfect monatomic gases. In strong subcritical shocks
in low-Z gases (e.g. air) an effective gamma value of 1.25 is typical due to energy absorption in ionization, resulting in a density increase of 9.
The temperature increase in the preheating layer increases the gas pressure above its initial state, and the resulting pressure gradient generates a weak compression wave in front of the shock. The
density increase ahead of the shock front is always small in subcritical shocks. For example, a Mach 70 shock in air results in a density increase on the order of 1% ahead of the front. Most of the
compression occurs in the shock jump, but it continues to increase as the superheated gas cools to the final temperature (accompanied by a proportionality much smaller pressure increase as cooling
The relationship between the temperature and density at different points in a radiative shock wave, and the final temperature and density, is given by the general law:
Eq. 3.5.2-1
T/T_final = eta(1- eta)/(eta_final * (1 - eta_final))
where T_final is the final temperature; eta is the volume ratio V_shock/V_0 (the reciprocal of the density ratio); and eta_final is final (and maximum) value of shock compression for the gas in
question. This shows that when the preheating is small, so is the compression. When the shock jump compresses the gas to a density close to the final value, the peak temperature is not much higher
than the final one.
These effects increase as shock strength comes closer to the critical value (a shock temp of 285,000 K for air at STP). When the final shock temp is 100,000 K (D = 41 km/sec), preheating reaches
25,000 K; at 150,000 K (D = 56.5 km/sec), preheating is 60,000 K; at 250,000 K (D = 81.6 km/sec), preheating is 175,000 K. Above about 90,000 K the preheating layer begins thick enough that it
obscures the superheated shock in visible light, an observer in front of the shock sees only the lower temperature preheated layer.
hen the radiation energy density can be neglected, T_final can be calculated from the equation:
Eq. 3.5.2-2
T_final = (2*(gamma - 1)*rho*D^2)/(n*k*(gamma + 1)^2)
where n is the particle density.
Applying the above equation to regimes where strong ionization effects occur presents problems. Ionization increases the particle density, and absorbs additional energy thus driving down the
effective value of gamma. If pre-shock values of gamma and n are used in these cases the calculated value of T_final may be greatly overestimated. In ionization regimes closed form equations are not
satisfactory and numerical methods are required for exact answers. It also fails close to critical shock conditions when the radiation energy intensity can no longer be neglected. This failure occurs
around the point where T_final reaches 70% of T_critical.
Overall, subcritical shocks are not notably different from classical shocks. The energy transport is still dominated by the shock compression, and the equations governing the shock behavior deviate
little from the classical equations.
3.5.3 Critical Shock Waves
At critical strength, the radiative preheating equals the final shock temperature. At this point the radiation energy flux equals that of the hydrodynamic energy flux. Above this point we enter the
regime of radiation hydrodynamics proper. The increasing dominance of radiation transport now begins to approximate the conditions of thermal waves (radiation diffusion or Marshak waves) discussed in
Section 3.3.4.2.2 Non-Linear Heat Conduction. Here the driving temperature for the Marshak wave is provided by the superheated layer behind the shock front, and the Marshak wave itself is the
preheating layer in front of the shock.
Figure 3.5.3-1. Critical Shock Wave
In air initially at STP, critical strength and is reached at a temperature of 285,000 K (T_critical). This corresponds to a shock velocity of 88.1 km/sec. It is this temperature and shock condition
that marks the moment of "hydrodynamic separation" encountered during the cooling of the early fireball of a nuclear explosion, where the hydrodynamic shock takes over from radiation flux in driving
the fireball expansion, and causes the brightness of the fireball to fall from its first peak to a temporary minimum.
Determining the exact value of the critical shock velocity and temperature generally requires numerical techniques.
The thickness of the preheated layer becomes greater than a mean free path as shocks near critical strength since the energy flux from the preheated layer becomes so large that it contributes to
preheating air even further ahead. The density increase ahead of the shock front is still small.
3.5.4 Supercritical Shock Waves
In these shocks the principal transport of energy is carried out by radiation through the leading Marshak wave. Nearly all of this energy transport is completed before the arrival of the hydrodynamic
shock front. The compression wave in the preheated region is still relatively weak however, virtually all of the compression occurs as matter crosses the shock front. Recall though, that in shocks of
this strength the energy expended in compression is negligible compared to that expended in heating and shock acceleration.
Figure 3.5.4-1. Supercritical Shock Wave
The preheating temperature is the same as the final temperature in all supercritical shocks. As shock intensity increases above critical strength the preheated layer becomes progressively thicker,
while the superheated layer becomes thinner. So in a supercritical shock the preheating layer is typically several free paths thick, while the superheated shock layer is less than one free path
thick. This is because of the T^4 relationship with radiation flux and radiation density, which causes the shock superheated layer to cool extremely rapidly. The maximum temperature behind the shock
wave is:
Eq. 3.5.4-1
T_shock = T_final * (3 - gamma)
or 4/3 final for a perfect gas. Since the preheating zone is optically thick, energy transport ahead of the shock front is governed by radiation heat conduction.
Since the shock front temperature is constant, we have here the situation of radiation transport with constant driving temperature governed by equations Eq. 3.3.4.2.2-16 and 3.3.4.2.2-17 (3.3.4.2.2
Non-Linear Heat Conduction). According to these laws, the Marshak wave decelerates rapidly as it moves ahead of the driving temperature front. Because of this there must be a point where the Marshak
wave velocity has slowed to exactly the same velocity as the shock front, this point determines the thickness of the preheating layer.
In air, a shock with a final temperature of 750,000 K has a preheated layer some 14 mean free paths thick (about 2 m). At 500,000 K the layer is only 3.4 MFPs thick (40 cm).
It is a supercritical shock that is responsible for the first flash of a nuclear explosion. Even though the fireball is only a few meters across in this phase (for a kiloton range explosion), and
even though air is transparent only to an extremely small fraction of the emitted photons, this is still sufficient to make the early fireball as bright (or brighter) at a distance as the later
fireball at its maximum size.
3.5.5 Radiation Dominated Shock Waves
In extremely large shocks the radiation pressure and energy density exceeds the kinetic pressure and energy of the gas. At this point we basically have a shock in a photon gas. We might say that the
gas particles (ions and electrons) are just "along for the ride". Accordingly, it is the properties of a photon gas (gamma = 4/3) that dominates the situation. The maximum shock compression is thus:
(4/3 + 1)/(4/3 - 1) = 7.
In uranium at normal density (18.9), as might be encountered in the core or tamper of an implosion bomb near maximum power output, the point where radiation and kinetic energy densities are the same
is reached at 4.8 x 10^7 degrees K. In a lithium deuteride Fermi gas at a density of 200 (typical of compressed fusion fuel) this point is 1.35 x 10^8 K.
In radiation dominated shocks the preheating effect is so large that the one hallmark of classical shock waves - the sudden jump in pressure and density at the hydrodynamic shock front - diminishes
and completely disappears in a sufficiently strong shock. This point is finally reached when the ratio between radiation pressure and gas pressure is 4.45. This corresponds to a temperature of 1.75 x
10^8 degrees K in U-238 (density 18.9), or 2.2 x 10^8 K in LiD (density 200).
Since fission bomb core temperatures can exceed 100 million degrees, and fusion fuel can exceed 300 million degrees, we can expect radiation dominated shock waves to be the normal means of energy
transport through matter within a bomb. The temperatures can be expected to cool to being simply a supercritical shock about the time it escapes the bomb casing.
The absence of a sudden pressure jump indicates that the hydrodynamic component of a radiative shock wave has completely disappeared. In this case the radiation dominated shock wave is a pure Marshak
wave setting in motion hydrodynamic flow.
Figure 3.5.5-1. Radiation Dominated Shock Wave (Marshak Wave)
The shock velocity relationship Eq. 3.4.3.1-9 given for classical shock waves: D^2 = v_0^2 * (p_1 - p_0)/(v_0 - v_1) is still valid. By replacing specific volumes with initial density and eta_max;
and substituting the sum of kinetic pressure and the blackbody radiation pressure for (p_1 - p_0) we get:
Eq. 3.5.5-1
D_rad^2 = ((4*sigma/3*c)*T^4 + nkT)/(rho*(1 - eta_max))
= ((gamma + 1)*((4*sigma/3*c)*T^4 + nkT))/(2*rho)
where k is the Boltzmann's constant (1.380 x 10^-16 erg/degree K), sigma is the Stefan-Boltzmann constant (5.669 x 10^-5 erg/sec-cm^2-K), and T is temperature (degrees K), c is the speed of light,
and n is particle density. Since this equation includes the combined energy densities of thermal radiation and particle kinetics, it can be used for other radiative shocks as well.
The speed of sound in radiation dominated matter is:
Eq. 3.5.5-2
c_s_rad = [gamma*((4*sigma/3*c)*T^4 + nkT)/rho]^0.5
Using this shock velocity relationship we can calculate the shock velocity in a material such as uranium. At 10^8 degrees K it is 1480 km/sec; at 4.8 x 10^7 K it has slowed to 533 km/sec; at 10^7 K
it is a mere 166 km/sec. Although 1480 km/sec is very fast, being 160 times higher than the detonation velocity of the best conventional explosives, it is interesting to note that the energy density
in the nuclear explosive is some 100 million times higher. The principle reasons for this disparity are that velocity only increases with the square root of energy density, and the high mass density
of the fission explosive.
3.5.6 Thermal Waves with Hydrodynamic Flow
To review briefly, Section 3 initially treated the subjects of radiation transport and hydrodynamics separately:
• Section 3.3.4.2, Radiation Heat Conduction, discussed the process of heat transport by radiation in the absence of hydrodynamic flow;
• Section 3.4, Hydrodynamics, discussed shock hydrodynamics in the absence of radiation transport.
So far in this section we have discussed how shocks of increasing strength give rise to radiation transport as an important mechanism. Now I want to turn this around, and consider how radiation
heating can give rise to hydrodynamic shocks. This is very important when we are dealing with the heating of cold matter by a radiation flux before a shock wave has become established, or in any
situation where a radiation flux is the essential force in driving the hydrodynamic phenomena (especially if the flux varies with time).
As we have seen, the behavior of the driving temperature behind a thermal wave with respect to time determines the how the wave velocity varies with time. For a constant temperature radiation
diffusion has a marked tendency to slow with time and penetration distance, decreasing as t^-(1/2).
Now the pressure in the heated zone is the sum of the kinetic and radiation pressures (given by Eq. 3.1.7-10):
P_total = n*k*T + (4*sigma/3*c)*T^4
and the velocity of a compression wave produced by this pressure (from Eq. 3.5.5-1) is:
D = (((gamma + 1)*P_total)/(2*rho))^(1/2)
assuming as we have with other very strong shocks that the initial pressure is negligible compared to P_total, and the compression approaches its limiting value. This shows that for a constant
temperature T_0, the compression wave velocity is also constant. If the process continues long enough, the wave must eventually overtake the thermal wave front and drive ahead of it.
This shock, that now leads the original thermal wave, may very likely be itself a radiating shock wave which is preceded by a thermal wave. But the difference is this - the new temperature driving
this thermal wave (call it T_1) is different from the original temperature T_0 that created the driving pressure for the shock. The new T_1 is produced by the process of shock compression, and is
likely much cooler than T_0. Also the location of the driving temperature source is no longer fixed with respect with to the medium of conduction, it now advances with the shock wave at velocity D.
Conversely, the process of radiation conduction behind the shock front has been greatly altered also. The original thermal wave is now penetrating in a moving medium, that has been compressed and
The pressure behind the leading shock remains in equilibrium, but the physical state of the medium is not uniform. Behind the point where the shock overtook the thermal wave, the pressure is created
by a high temperature (T_0) at the original density of the medium. Ahead of this point, the density is higher, but the temperature is lower.
Now suppose that the T_0 is not constant, but is increasing with time. Depending on the rate of increase, the thermal wave may have a constant velocity, or be increasing as well. But this also
creates increasing pressure, and an increasing shock velocity. Will the shock overtake the thermal wave now? Through some sophisticated analysis techniques, general laws have been developed that
describe the conditions under which the pressure wave will overtake the thermal wave, or not.
Let the driving temperature vary by the power law:
Eq. 3.5.6-1
T_0 = constant * t^q
where q > 0 (if q=0 we have the case of constant temperature described above). From Eq. 3.3.4.2.2-14 (which shows the relationship between depth of penetration x, and chi and t) and Eq. 3.3.4.2.2-8
(which shows that chi ~ T^n)we get:
Eq. 3.5.6-2
x ~ (chi*t)^(1/2) ~ T^(n/2) * t^(1/2) ~ t^((n*q + 1)/2)
where n is the opacity-related exponent for the medium of conduction. The speed of the thermal wave is:
Eq. 3.5.6-3
dx/dt ~ x/t ~ t^((n*q - 1)/2)
If we assume that kinetic pressure dominates radiation pressure (as must be the case at least for the early phase whenever cold material is heated) then we can also say:
Eq. 3.5.6-4
D ~ T^(1/2) ~ t^(q/2)
Analysis shows (see Zel'dovich and Raizer, Physics of Shock Waves and High-Temperature Hydrodynamic Phenomena, pg. 677) that if:
(nq - 1)/2 < q/2
(showing that the shock velocity increases with time faster than thermal wave velocity), and
q < 1/(n - 1)
then the situation described above holds - the thermal wave initially runs ahead of the shock, but the shock eventually overtakes it, and leads the thermal wave for all later times.
The opposite situation exists if:
(nq - 1)/2 > q/2
(showing that the thermal wave velocity increases with time faster than shock velocity), and
q > 1/(n - 1).
In this case the shock initially leads, but is later overtaken by the thermal wave, after which the significance of the hydrodynamic motion steadily shrinks as the thermal wave moves farther and
farther ahead (i.e. the ratio of shocked material to thermal wave heated material becomes small).
In the case that:
(nq - 1)/2 = q/2,
q = 1/(n - 1)
the speeds of the shock and thermal waves increase at the same rate. Separate phases of hydrodynamic an heat conduction transport do not exist, and the medium is heated and set in motion almost
3.6 Shock Waves in Non-Uniform Systems
So far I have discussed shock waves passing through a uniform medium where physical properties like density do not change. We are necessarily interested in the behavior of shocks in non-uniform
systems since nuclear weapon design techniques include:
• Compression of one material (uranium say) by shock waves generated in another (high explosive); and
• Generation of shock waves by collisions between two materials.
3.6.1 Shock Waves at an Interface
The dimensions of the components of a nuclear weapon are small. This means that shock waves propagating in these components must eventually reach a boundary (in contrast explosion shocks in the
atmosphere or ocean may die out without encountering a boundary). Since the most important physical property of a material for determining shock behavior is density, two basic types of interfaces
• an interface where the shock enters a material of lower density; and
• an interface where the shock enters a material of higher density.
A shock wave is not affected by an interface ahead of the shock until the leading edge of the wave reaches it. In classical shocks this means that the shock propagates in a uniform manner until the
shock front is actually at the interface at which point it will undergo an instantaneous change in behavior. In radiation shocks the leading edge is the edge of the Marshak wave preheating zone. This
results in a more gradual interaction.
Two conditions that must be satisfied when a shock reaches an interface are:
• the pressures on both sides of the interface must equal; and
• the particle velocities on both sides of the interface must equal.
3.6.1.1 Release Waves
When shock waves meet a boundary with a lower density material a release wave is generated. This is a rarefaction wave, a reduction or release in pressure that converts internal energy of the shock
compressed gas into kinetic energy.
The limiting case of a release wave is the arrival of a shock at a vacuum interface, also called a free surface. We can claim that the shock disappears in any meaningful sense at this point. All we
have is a mass of compressed, moving material adjacent to a vacuum. The compressed material immediately begins to expand, converting the internal energy into even greater kinetic motion. In gases
this expansion process is called escape. The expansion in turn creates a rarefaction wave that propagates back into the shock compressed material at the local speed of sound c_s.
3.6.1.1.1 Free Surface Release Waves in Gases
Let us assume that material supporting the shock wave is a gas. Since the pressure change in a rarefaction wave is always continuous (no instantaneous pressure drops), the pressure at the leading
edge of the escaping gas is zero (in keeping with the requirement of equal pressures at the gas/vacuum interface) and the process of converting internal energy into kinetic energy is complete. The
front edge thus immediately accelerates to the final maximum velocity (escape velocity) when it reaches the free surface. Farther back in the release wave the pressure increases and the velocity
decreases. At the release wave boundary (which moves backward at c_s) the pressure and velocities are the same as in the original shock wave.
The escape velocity of a gas is equal to:
Eq. 3.6.1.1.1-1
u_escape = (2*c_s)/(gamma - 1)
If we use a frame of reference in which the unshocked gas was stationary we get:
Eq. 3.6.1.1.1-2
u_escape = (2*c_s)/(gamma - 1) + u_particle
and the rear edge of the release wave travels backwards at
Eq. 3.6.1.1.1-3
v_release = u_particle - c_s
The release wave thus stretches itself out with time at a speed of:
Eq. 3.6.1.1.1-4
((2/(gamma - 1)) + 1)*c_s
which for a perfect monatomic gas is 4*c_s.
3.6.1.1.2 Free Surface Release Waves in Solids
If the material is not a gas but a solid (and the shock has not vaporized it), the situation is a bit different. The solid cannot expand indefinitely, so expansion halts when it reaches the normal
zero pressure density. The release propagates backward at c_s as before. In metals at moderate shock strengths (1 megabar and below), this expansion causes a phenomena called "velocity doubling",
since the release wave doubles the velocity imparted by the shock. This doubling law begins to break down when shock strengths are high enough to cause substantial entropy increases.
Another important difference in solids is the existence of tensile strength. We have seen that in detonation waves that the pressure drops rapidly a short distance behind the shock front. When high
explosives are used to generate shocks in metal plates a phenomenon called "spalling" can occur due to the interaction between the tensile properties of the metal, the release wave, and this shock
pressure drop.
As the rarefaction wave moves back into the plate, it causes the pressure to drop from the shock peak pressure to zero. As it moves farther back into the plate it continues to cause pressure drops of
the same absolute magnitude. When it encounters pressures below the shock peak this means that the pressure in the plate goes negative, that is, tensile stress is produced (you can think of this as
the faster moving part of the plate pulling the slower moving part of the plate along). If the negative pressure exceeds the tensile strength of the metal, the plate will fracture. The fracture may
simply open a void in the plate, but often the velocity doubled layer will peel off entirely and fly off the front of the plate.
If the original plate was actually a stack of two or more plates, the shock propagation into the plate stack would occur as before. However when the release wave reaches the boundary between plates,
and if tensile stress exists at that point, the front plate will fly off and a new release wave will begin at the front side of the plate behind it.
3.6.1.1.3 Shock Waves at a Low Impedance Boundary
If the boundary is with a second material of lower density, rather than a true vacuum, we have a low impedance boundary. If this second material has a density much lower than the first, then the
situation is essentially the same as the free surface case as far as the first material is concerned. The pressure at the release wave front will be negligible compared to the shock pressure and the
escape velocity will be virtually the same.
The increased particle velocity at the front of the release wave acts like a piston in the second material, driving a shock wave ahead of itself. Since the particle velocity is steady behind a shock
front, and the particle velocity on both sides of the material interface must be the same, we can see that the particle velocity behind the shock front in the second material will be the same as the
velocity of the release wave front (i.e. the velocity of material ahead of a piston is the same as the piston itself). If we assume that the second material is a gas, and that the shock is strong
enough to reach the limiting compression, then the second shock wave velocity will be:
Eq. 3.6.1.1.3-3
D_2ndshock = u_escape * (gamma + 1)/2
The pressure in this shock will be negligible compared to the original shock of course.
If the density of the second material is not negligible compared to the first, the result is intermediate between the case above and the case of a shock propagating through a material of unchanging
density. The pressure at the interface drops, but not as drastically. The velocity of the release wave front is lower, and creates a shock in the second material that is slower, but with a higher
pressure. The rear of the release wave travels backwards at the same speed (c_s), but since the pressure drop at the release front is not as sharp, and it moves forward more slowly, the pressure
gradient in the release wave is not as steep.
3.6.1.2 Shock Reflection
When shock waves meet a boundary with a higher density material a shock wave is reflected back into the first material, increasing its pressure and entropy, but decreasing its velocity.
3.6.1.2.1 Shock Waves at a Rigid Interface
The limiting case of an interface with a higher density material, is an infinitely rigid boundary. With such an interface there is no shock wave transmitted into the second material, all of the
energy is reflected. This causes a doubling of pressure, and a decrease in particle velocity to zero. An interface between air and a solid (like the ground) is essentially a rigid interface.
3.6.1.2.2 Shock Waves at a High Impedance Boundary
If the second material is not enormously denser than the first, a transmitted shock is created in the second material, along with the reflected shock in the first. The shock reflection causes an
increase in pressure and decrease in particle velocity in the first material. Continuity in pressure at the interface indicates that the transmitted shock must also be higher in pressure than the
original shock, but the particle velocity (and shock velocity) will be lower.
3.6.2 Collisions of Moving Bodies
The detonation of high explosives is one way of generating shock waves. Another is high velocity collision between two materials.
When two bodies come into contact at a given velocity, the material at the colliding surface of each body must instantly change velocity so that the relative motion of the two surfaces is zero (due
to symmetry we may consider either one to be stationary, or both to be moving, at our convenience, as long as the collision velocity is the same). A pressure must be generated by the collision
exactly sufficient to accomplish this velocity transformation.
To illustrate the situation, first consider the head on collision between two moving plates of identical composition, and equal mass and velocity magnitude |V|. At the moment of contact the colliding
surfaces will come to rest, and a shock wave will begin to propagate into each plate of sufficient strength to cause a velocity change of magnitude |V|. If |V| is not too great, then the shock
velocity will be close to the acoustic speed in the plates. When the shocks reach the opposite surface of each plate, all portions of both plates will be at rest. A release wave will begin at the
plate free surfaces, resulting in velocity doubling. When the release waves reach the original impacting surfaces, both plates will have returned to their original (zero pressure) physical state, but
they will now have velocities of magnitude |V| headed in opposite directions to their original motion; they will have "bounced" off each other.
What we have just described is an elastic collision, a collision in which no kinetic energy is dissipated. This the limiting case for colliding bodies at low velocities. If the collision velocities
are below the acoustic speed in the bodies then the collision forces are transmitted by acoustic waves. If the velocities become too large then true shock waves are formed. At still higher
velocities, entropic heating and equation of state effects start to deviate from the velocity doubling rule. At this point the collisions become progressively less elastic with increasing speed.
Typically we encounter colliding bodies in physical systems in which we view one body as stationary. The impacting body travels at velocity V, and after collision shock waves propagate into the
bodies with u_p equal to V/2 (if they are of identical composition). The actual shock front velocity depends on the degree of compression, which can be found by consulting the Rankine-Hugoniot
relations, and the initial state and the EOS of the material.
3.6.2.1 Collisions of Bodies With Differing Impedance
The description above assumed that the colliding bodies had identical densities and compressibilities, which is often not true. When a low density body collides with a high density body, a higher
velocity shock is transmitted into the lower density body. If a more high compressibility body collides with low compressibility body, then the shock-mediated velocity change of the high
compressibility body will be much greater (the situation is identical to that described in 3.6.1.2.2 Shock Waves at a High Impedance Boundary, except that the high compressibility/low impedance
material has not been previously shocked).
An interesting situation occurs if a low impedance material is trapped between two high impedance bodies during a collision. This may occur if one or both high impedance bodies have low impedance
material covering their surface, or it they are not colliding in an effective vacuum and a stationary fluid is caught between them. For specificity, let us assume that a moving high impedance body
collides with a stationary body consisting of a low impedance and high impedance layer.
At the moment of collision, a relatively low pressure, high velocity shock is set up in the low impedance material, which is greatly accelerated. The colliding body in decelerated to a much smaller
extent. The shock is transmitted through the low impedance layer, and is reflected at the high impedance boundary, bringing the material to a near-halt again. The intensified reflected shock travels
back to the first body, and is reflected again. Each reflection increases the pressure of the shock, and the density of the low impedance material, and incrementally transmits momentum from the first
high impedance body to the second. After enough reflections have occurred the first will have been decelerated, and the second body accelerated sufficiently to bring them to rest with respect to each
other. In the process, the low impedance material will have been highly compressed by repeated reflected shocks.
3.6.3 Collisions of Shock Waves
Shock reflections at an interface are an example of a shock wave collision. The general case of head on shock collisions presents no additional difficulties. From the point of collision, waves must
propagate in either direction. The properties of each wave must be such that they bring the pressures and particle velocities on both sides of the collision interface into equilibrium. Symmetrical
shock collisions act exactly like a shock wave encountering an infinitely rigid interface from the point of view of both shocks.
Entirely new phenomena occur though when shock collisions are not head on. The discussion below assumes we are dealing with strong shocks and/or dense matter (the effects are somewhat different for
low density gases and moderate or weak shocks).
At very small angles, the point of collision moves very rapidly along the interface (V/Sin alpha; where alpha is the collision angle, 0 for head on). If the speed with which the collision point moves
is supersonic with respect to the material behind the reflected shocks then the only new phenomena are that the shocks are reflected at an angle, and the pressure behind the reflected shocks is
As alpha increases, the speed of travel of the collision point decreases and the reflected shock pressure increases. At a certain critical value of alpha (generally around 70 degrees) the collision
point travels at sonic velocity or less, and a new shock front is formed by the collision (a phenomenon called the Mach effect). This shock, called the Mach stem, bridges the two principal colliding
shocks, replacing the V-shaped two-shock collision with a three sided shock collision. The stem travels parallel to the direction of travel of the collision point.
At the critical angle where the Mach front is formed the pressure reaches to a maximum, which can be some six to eight times the original shock pressure. Above the critical angle, the pressure drops
again. When alpha reaches 180 degrees the two shocks are no longer colliding and the pressure drops back to its original value.
3.6.4 Oblique Collisions of Moving Bodies
A phenomenon of some importance related to the Mach effect arises when oblique collisions are between two bodies in a void, not shocks travelling through a continuous medium. In this case there is
nothing for a shock to travel through ahead of the collision point, so a Mach stem cannot form. Instead at the critical angle where the collision point is sonic or subsonic with respect to the
reflected shock, a high velocity jet of material is ejected along the collision point trajectory. This is variously called the Neumann, Monroe, hollow charge, or shaped charge effect. When the
collision is cylindrical or conical instead of planar, a jet is always created regardless of the angle of collision.
This effect is used in conventional weapons systems such as anti-tank missiles to create jets capable of penetrating thick armor. The effect can also arise in implosion systems when shock wave
irregularities occur. In this case it is very undesirable because it disrupts the implosion.
3.7 Principles of Implosion
The term "implosion", which denotes a violent inward collapse or compression, fits many different physical schemes for rapidly compressing materials to high densities. An implosion may be adiabatic
or shock induced, or both; the geometry of the compression may be one, two, or three dimensional. This subsection surveys these various implosive systems, describing their principles and properties.
3.7.1 Implosion Geometries
An compression process can be symmetric about one, two, or three spatial axes.
A good example of one dimensional compression (linear compression) is the compression of the fuel/air mixture in the cylinder of an internal combustion engine. If r_0 is the original length of the
gas column in the cylinder, and r_1 is the length after compression, then the density increase is:
Eq. 3.7.1-1
rho_1/rho_0 = r_0/r_1
That is to say, it is inversely proportional to the change in scale (the relative change in length).
Two dimensional compression (cylindrical compression) can be thought of as squeezing a tube so that its radius decreases uniformly (it doesn't get squashed flat). If r_0 denotes the original radius,
and r_1 the radius after compression then we can say:
Eq. 3.7.1-2
rho_1/rho_0 = (r_0/r_1)^2
I.e. it is inversely proportional to the SQUARE of the change in scale.
Three dimensional compression (spherical compression) can be thought of as squeezing a sphere so that its radius decreases uniformly. In this case we can say:
Eq. 3.7.1-3
rho_1/rho_0 = (r_0/r_1)^3
I.e. it is inversely proportional to the CUBE of the change in scale.
For the same change in scale, a higher dimensional implosion produces a much greater degree of compression. The relatively sluggish linear case in fact is rarely though of as being an "implosion".
The spherical implosion gives the most rapid compression and, being symmetrical in all directions, is also relatively easy to analyze theoretically. It is the most widely used geometry for implosion
in nuclear weapon designs.
3.7.2 Classes of Implosive Processes
In addition to geometry, another critical factor in characterizing implosion systems is the pressure-time curve that actually causes the compression.
At one extreme, we have gradual homogenous adiabatic compression. In this case the pressure exerted on the implosion system increases continuously, at a slow enough rate that the pressure within the
implosion system is uniform everywhere. This type of implosion does not increase entropy and allows arbitrarily high degrees of compression, if sufficient force can be exerted.
The other extreme is shock compression. Here the pressure increase is instantaneous, and entropy is always increased. There is a theoretical limit to the degree of compression achievable in this
manner, regardless of the shock pressures available.
Other important cases can be defined that lie between these extremes, depending of the pressure-time function. For example, pressure curves that increase exponentially with time (physically
unachievable, generally speaking) allow infinitely high compression in arbitrarily short length of time.
In characterizing the properties of different classes of implosions we use spherical compression as the model, due to its theoretical simplicity, and practical interest.
The reader interested in exploring classes of implosions and their properties is referred to the excellent papers by J. Meyer-ter-Vehn (and others):
Meyer-ter-Vehn, J.; Schalk, C. 1982. Selfsimilar Spherical Compression Waves in Gas Dynamics, Z. Naturforsch. 37a, 955-969.
Meyer-ter-Vehn, J. 1992. Physics of Inertial Fusion, in Proceedings of the International School of Physics "Enrico Fermi", Course CXVI, Status and Perspectives on Nuclear Energy: Fission and Fusion,
3.7.3 Convergent Shock Waves
This is the most obvious method of achieving implosion, and was the first two and three dimensional implosion system to be developed (during the Manhattan Project). The idea is to create a
cylindrical or spherical inward directed shock wave which converges on the material to be compressed.
In a convergent shock wave the flow of matter behind the shock front is also convergent (inward directed). This means that following the shock compression there is an adiabatic compression phase in
which the kinetic energy of the shocked material is converted into internal energy, causing the flow to decelerate. The pressure behind the shock front, rather than remaining constant as in a
classical shock wave, continues to rise. This increasing pressure is transmitted to the shock front, causing it to continuously strengthen and accelerate.
In theory the acceleration of the shock front continues without limit as the shock approaches the center, achieving infinite velocity, pressure, and temperature. Since the area of the shock front is
diminishing, the actual total energy in the shock front decreases, reaching zero at the center.
Although the increase in shock strength is unbounded in principle, subject only to the granularity of atomic structure, in practice the symmetry of the implosion breaks down long before then,
providing a much lower limit to the maximum degree of energy cumulation. In contrast to plane or divergent shocks, which are stable, convergent shocks are unstable and irregularities will grow with
The increase in density at the shock front is bounded of course, and so is the maximum density due to adiabatic compression. In fact, when the shock reaches the center, the density is everywhere
uniform. The temperature and pressure profile established at this point increases without limit toward the center.
The degree of shock intensification depends on the geometry (planar, cylindrical, or spherical), and the equation of state of the material. It is higher for less compressible materials.
The scaling laws make use of the parameter alpha given in the table below (assuming a gas law EOS):
│ Table 3.7.3-1. Alpha Values for Varying Gammas and Geometries │
│Gammas: │1│6/5 │7/5 │5/3 │3 │
│Cylindrical │1│0.861 │0.835 │0.816 │0.775 │
│Spherical │1│0.757 │0.717 │0.688 │0.638 │
The velocity of the converging shock front is given by:
Eq. 3.7.3-1
V = K1*r^[(alpha-1)/alpha]
where alpha is selected from the above table, and the proportionality constant K1 is determined by initial conditions (V_0 and r_0). If the value of gamma is 3 (close to that of uranium) in a
spherical implosion then alpha is 0.638. This gives us:
Eq. 3.7.3-2
V_Usphere = K1*r^(-0.567)
The shock velocity is thus approximately proportional to the inverse square root of the radius.
The pressure behind the shock front is given by:
Eq. 3.7.3-3
P = K2*r^[2*(alpha-1)/alpha]
Again K2 is established by initial conditions (p_0, r_0). For gamma=3, this is:
Eq. 3.7.3-4
P_Usphere = K2*r^(-1.134)
The pressure is thus approximately proportional to the inverse of the radius.
Side note:
The scaling laws for implosion were obtained by consulting several mutually supporting sources that analyze the theoretical properties of converging shock waves, which cross checks with published
data on converging shock waves. These laws also appear generally consistent with the known properties of the Fat Man implosion bomb.
There is a curious belief found in the public domain nuclear weapons literature that implosion shock velocity scales with the inverse square (NOT square root) of the radius, and thus the pressure
scales as the inverse FOURTH power of the radius (see for example Lovins, 1980, Nature V. 283, pg. 821). Since the area of the shock front is proportional to r^2, and the energy density in a
shock wave is proportional to the pressure, this belief leads to the astonishing (and impossible) conclusion that the TOTAL energy in the shock increases rapidly, and without limit (i.e. it is a
source of "free energy", in contravention to the laws of thermodynamics).
The origin of this belief appears to be a comment in a biography of Edward Teller (Energy and Conflict: The Life and Times of Edward Teller), judging by a statement made by De Volpi in
Proliferation, Plutonium, and Policy, 1979, pg. 300.
The time for the shock wave to reach the center from r_0, and V_0 is
Eq. 3.7.3-4
t_c = (r_0/V_0)*alpha
The location of the shock front at time t is:
Eq. 3.7.3-5
r(t) = (t_c - t)^alpha * K3
Where K3 can be calculated by:
Eq. 3.7.3-6
K3 = (r_0^(1-alpha) * V_0^alpha)/(alpha^alpha)
The compression achieved at the shock front has the classical shock compression limit: nu = (gamma + 1)/(gamma - 1).
Now the maximum degree of compression would be achieved if all of the kinetic energy were converted into internal energy, bringing the imploding gas to rest. We can make a somewhat naive estimate of
what this compression limit is by remembering that in an extremely strong shock, energy is divided equally between kinetic energy and internal energy. Then by using the gas law EOS and the adiabatic
compression law, we can calculate the additional density increase if the kinetic energy were converted to internal energy adiabatically:
Eq. 3.7.3-7
rho_2/rho_1 = 2^(1/gamma-1)
where rho_1 is the shock compressed density and rho_2 is the adiabatically compressed density.
This gives us for gamma = 5/3 a density increase of
Eq. 3.7.3-8
rho_2/rho_1 = 2^1.5 = 2.82, or rho_2/rho_0 = 11.3.
For gamma = 3 (more typical of condensed matter) the increase is:
Eq. 3.7.3-9
rho_2/rho_1 = 2^0.5 = 1.41, or rho_2/rho_0 = 2.82.
This estimate is naive because the state of zero flow is never achieved. When the shock reaches the center, the gas is still flowing inward everywhere. An accurate value for rho_2/rho_0 at this
moment, when gamma = 5/3, is 9.47.
In an extremely intense implosion shock that ionizes the material through which it passes the initial shock compression is much greater than in an ideal gas. The shocked gas acts as an ideal
monatomic gas with a gamma of 5/3. If we assume an initial compression of 12 (3 times higher than for an ideal monatomic gas) then the final density after convergence will be on the order of 30.
3.7.3.1 Convergent Shocks With Reflection
When a converging shock reaches the center of the system it should, theoretically, create a reflected shock moving outward driven by the increasing pressure toward the center. This shock is a
relatively slow one (compared to the imploding phase). The reflected shock front reverses the inflow of gas. Behind the front the gas is expanding outward, again quite slowly compared to its inward
velocity. The density reaches a new maximum at the shock front, but falls rapidly behind it reaching zero at the center. This is a consequence of the increasing entropy toward the center, but is
augmented by the fact that pressure drops behind a diverging shock front. The new density maximum is thus a local phenomenon at the shock front. Matter is actually being driven outward from the
center and for any volume around the center, the density falls continuously once the reflected shock front has passed. You can think of the implosion as having "bounced" at the center, and is now
rebounding outward.
The compression achieved at the shock front for gamma = 5/3 in spherical geometry is rho_3/rho_0 = 32.0 For 7/5 it reaches a value of 216. For cylindrical geometry and gamma = 5/3, rho_3/rho_0 is
22.8. If we get a factor of 4 initial compression enhancement due to an ionizing shock, and a gamma of 5/3 afterward, then the compression for spherical geometry will be 96, and 68.4 for cylindrical
Whether the high densities achieved at the outgoing shock front are "useful" or not depends on the characteristic length scale of the physical processes being enhanced by the compression. In a
fission reaction the scale is given by the neutron mean free path, the average distance a neutron travels between interactions, which is typically a few centimeters. If the thickness of the shock
compressed layer is less than this, the reaction will be more affected by the average density of the core than by the local density at the front. In a fusion reaction the reaction rate at a given
temperature is governed by the local density of the fuel, so the reaction rate will be enhanced by the rebounding shock.
It appears that the peak compression at the reflected shock is less than that predicted by theory and it not hard to guess why. Before this second symmetric shock phase begins the implosion has to
pass through a singularity in the center where the shock front decreases to zero size, and increases to infinite strength. Since implosion symmetry breaks down before this point, we actually get a
turbulent flow that dissipates some of the inflowing kinetic energy as heat. We thus get an expanding stagnation front with higher entropy increases, and lower compressions than theory would
The problem of non-ideal behavior near the center can be alleviated by placing a high impedance boundary around the center of the system - a small high density object - to reflect the shock back
outward before it converges to the point where asymmetry becomes severe. This also reduces the amount of energy deposited as heat at the center, which is probably an advantage, but it starts to
change the character of the implosion from the simple ideal shock implosion model.
If the main imploding mass is surrounded by a higher density imploding shell (which presumably originated the initial convergent shock) then additional compression is achieved when the outgoing shock
is reflected at this high density interface. If a high density central mass is also present a series of alternating inward and outward reflected shocks can be created that achieve very high
densities. In this case it is the kinetic energy of the decelerating high density layer (acting as a pusher/tamper) being brought to rest that is supplying the additional compressive work. Such a
system is of course much more complex than a simple convergent shock implosion.
3.7.4 Collapsing Shells
A variation of implosion that is of considerable interest is the collapse of a hollow cylindrical or spherical shell. We consider two cases here:
1. a shell initially at a uniform velocity, but not acted on by outside forces;
2. a shell initially at rest, but reacting to an external pressure.
It is apparent that if another body is located at the center of the hollow cavity, a collapsing shell can be used to create a convergent shock with reflection.
3.7.4.1 Shell in Free Fall
The first case is often referred to as a "free falling" shell, although it is not really falling, just imploding due to inertia. We assume that every particle in the shell has a velocity vector
directed towards the center of the hollow shell, and that the magnitude is uniform everywhere.
This situation is typical of a thin shell that has just been accelerated by an imploding shock wave after the velocity doubling pressure release has been completed.
As the shell implodes the area of its outer surface gets smaller. This means that the shell is either getting thicker, or it is getting denser, or both.
If it is getting thicker, then the inner surface is moving farther than the outer surface, which means that it is also moving faster than the outer surface. Since they were initially moving at the
same speed, the inner surface must be accelerating. Due to the conservation of momentum and energy, this means that the outer surface is also slowing down. This is the only situation possible in a
shell made of incompressible material.
This transfer of momentum creates a pressure gradient in the shell. Moving from the outer surface inward, the pressure increases from zero at the surface to a maximum that marks the division between
the accelerating and decelerating regions. The pressure then drops to zero again at the inner surface. This maximum pressure will be much closer to the inner surface than the outer surface. Real
materials are compressible to some extent so the pressure increase inside the shell is also accompanied by an increase in density.
3.7.4.2 Shell Collapse Under Constant Pressure
To investigate the second case, let us assume that the shell is initially at rest with no forces acting upon it. Then let a uniform external pressure be applied suddenly (we will ignore the mechanism
by which the pressure is generated). The sudden pressure jump will create an acoustic of shock compression wave that will move inward, accelerating and compressing the shell. When the wave reaches
the inner surface, a release wave will be generated that will move outward at the speed of sound. When the edge of the release wave, the point where the pressure drop begins, reaches the outer
surface it will halt (we assume it cannot propagate beyond the shell, or alter the pressure being exerted). A pressure profile will have been established in the shell, dropping from the maximum
applied pressure at the outer surface to zero at the inner surface. Similarly, a continuous density and velocity profile will exist.
If the shock was strong enough to convert the shell into a gas, then the inner surface will begin imploding at escape velocity when the shock reaches it. The leading edge of the implosion, travelling
at high speed, but with zero density and pressure will reach the center of the void far ahead of the outer surface. If the shell remains solid, then the inner surface will undergo "velocity
doubling", and drop to its zero pressure density.
The external pressure, not balanced by any internal pressure until the inner surface reaches the center, will cause the shell to continue to accelerate. The acceleration will be transmitted to the
inner surface by the pressure gradient.
3.7.5 Methods for Extreme Compression
What densities are actually achievable with real implosion systems and real materials? We must consider actual compressibility limitations, and how close physical implosion systems can get to the
ideal limiting case.
Although the value of gamma for condensed matter at zero pressure (and compression) is greater than the 5/3 of a perfect gas, this is partially offset by the fact that the value of gamma always falls
as the density of condensed matter increases (approaching 5/3 as a limit). However, extremely high shock pressures are required to approach the limiting shock compression value, even using the STP
gammas. For example the STP value of gamma for uranium (2.9) drops to 2.4 when its density has been doubled. Using the STP value we can estimate a limiting compression for uranium of ((2.9 + 1)/(2.9
- 1) = 2.05.
This degree of compression is actually achieved at a shock strength of 7 megabars, approaching the practical limits of high explosive implosion. A static pressure of 5 megabars would have been
sufficient to achieve this density through adiabatic compression. The 2 megabars of "lost compression" in the shock is due to energy expended in heating the uranium. The inefficiency of shock
compression starts to increase rapidly above this point. A 20 megabar shock compresses uranium by a factor of 2.5, comparable to 12 megabars for shockless compression. In extremely strong shock
waves, the energy of the shock is divided equally between heating and kinetic energy, with a negligible portion going into actual compression.
To achieve much higher compressions we have two options: successive shock waves, or adiabatic (i.e. non-shock induced) compression.
The description of a single shock implosion given in subsection 3.7.2 shows that adiabatic compression naturally follows the implosion shock, but the amount of compression achievable is still
limited. If much higher densities are required some method of enhancing the adiabatic compression process is required.
By dividing the pressure increase into two or more shock steps, the inefficiency of very strong shocks can be dramatically reduced. In fact, two shocks are sufficient to eliminate most of the
entropic heating for pressure increases up to many tens of megabars.
If the pressure increase is divided between a large number of weak shocks, the effect is essentially identical to adiabatic compression. In fact, true adiabatic compression can be considered the
limiting case of an infinite number of infinitely weak shocks. Conversely in very rapid adiabatic compression where the pressure gradients are steep, the inherent instability of compression waves
tends to cause the pressure gradient to break up into multiple shocks. Thus for practical purposes the multi-shock and adiabatic compression approaches are closely related, and either model can
suffice for analyzing and modelling extreme compression.
Two general styles of multi-shock implosion are possible. The first has already been described - a convergent shock system with reflection. In this system, each successive shock is going in the
opposite direction as the previous one, and requires shock drivers on each side. An alternative approach is unidirectional shock sequences. In this approach a succession of shocks travel in one
direction in "follow-the-leader" style.
In an optimal unidirectional multi-shock implosion system the shocks would be timed so that they all reach the center simultaneously (each successive shock is faster and will tend to overtake the
earlier shocks). If they overtake each other earlier, then a single intense shock will form. If they do not merge before reaching the center, then shock wave rebounds will collide with incoming
shocks, creating complicated shock reflection effects. Similarly the time pressure curve in adiabatic compression should avoid steepening into a single strong shock, or breaking up into a rebounding
shock sequence.
The maximum speed of high density compression is determined by the speed of the leading edge of the pressure gradient, and the thickness of the material to be compressed. In true adiabatic
(shockless) compression, this speed is the acoustic velocity in the uncompressed material. Since normal acoustic velocities are on the order of 1-3 km/sec, and material thicknesses are at least a few
cm, this compression time is on the order of 10-100 microseconds. And since nuclear bombs typically disassemble in a microsecond or less, it is clear that an initial strong shock with a velocity
approaching 100 km/sec is unavoidable.
Let us consider an example where the fuel being compressed is Li6D, and the total pressure increment is 10,000 megabars. The density of the fuel being compressed (designated rho_init) is 0.82, and if
the initial shock pressure (p_0)is 50 megabars then the shock velocity is roughly:
Eq. 3.7.5-1
D_0 = [p_0(gamma + 1)/2*rho_init]^0.5
= [((5 x 10^7 bars)*(10^6 dynes/cm^2/bar)*(1 + 5/3))/(2 * 0.82)]^0.5
= 9.0 x 10^6 cm/sec
Since this an ionizing shock in Li6D initially at STP, the density after initial shock compression (rho_0) may be as high as 12 or so.
From the adiabatic law:
P_1 * V_1^gamma = P_final * V_final^gamma
we get the following expression for the adiabatic compression ratio:
Eq. 3.7.5-2
(P_final/P_0)^(1/gamma) = V_0/V_final = rho_final/rho_0
If all of the remaining pressure increase is adiabatic and gamma = 5/3, then
rho_final/rho_0 = 200^(3/5) = 24.0
giving a final density of around 288.
The next question is how many discrete shocks are necessary to approach this performance. To examine this, we will assume a plane geometry for the shocks (i.e. they neither diverge or converge).
The compression ratio from a shock is:
Eq. 3.7.5-3
V_i/V_i+1 = [(gamma + 1)*P_i+1 + (gamma - 1)*P_i] /
[(gamma - 1)*P_i+1 + (gamma + 1)*P_i]
where i is the pre-shock state and i+1 is the post-shock state. If we divide the pressure jump from P_0 to P_final into n jumps with the same pressure ratio (P_r) then the ratio for a single jump is:
Eq. 3.7.5-4
P_r = P_i+1/P_i = Exp[ln(P_final/P_1) / n]
where i = 1, 2 ... n (this numbering explains why the initial shock was called the zeroth shock).
The compression ratio with each shock in the sequence is:
Eq. 3.7.5-5
rho_r = rho_i+1/rho_i = [(gamma + 1)*P_r + (gamma - 1)] /
[(gamma - 1)*P_r + (gamma + 1)]
and the total compression by this sequence is obviously:
Eq. 3.7.5-6
rho_final/rho_0 = (rho_i+1/rho_i)^n = (rho_r)^n
By computing various values of n, and assuming gamma = 5/3, for the earlier example we find:
│ Table 3.7.5-1. Total Compression Vs. Number of Shock Steps │
│ n │Density Ratio (rho_final/rho_1) │Pressure Ratio (P_r)│
│ 1│ 3.92│ 200.00│
│ 2│ 10.07│ 14.14│
│ 3│ 15.20│ 5.85│
│ 4│ 18.26│ 3.76│
│ 5│ 20.05│ 2.89│
│ 10│ 22.92│ 1.70│
│ 25│ 23.84│ 1.24│
│ 50│ 23.98│ 1.11│
│infinity│ 24.00│ 1.00│
We see from the table that on the order of five shocks can capture a large majority of the adiabatic compression. Another way of looking at it is: we are assured of losing little compression if a
continuous pressure gradient breaks up into shocks each with a pressure jump of less than 3 or so.
In this analysis I have implicitly assumed that the shock sequence is unidirectional (i.e. the compression is caused by a sequence of shocks travelling in the same direction). This need not
necessarily be the case. The implications for compression efficiency in the reflected shock described system at the end of Section 7.2.3.1 are of course the same.
In a unidirectional shock sequence, to achieve maximum compression throughout the layer each shock should be timed so that they all reach the opposite interface of the material simultaneously. If the
thickness of the material is d, then the transit time for the initial shock is:
Eq. 3.7.5-7
t_transit_0 = d/D_0
Each shock in the following sequence travels through material that has been compressed and accelerated. For the ith shock we can say:
Eq. 3.7.5-8
D_i = [[(gamma + 1)*(P_r^i)*P_0 + (gamma - 1)*(P_r^(i-1))*P_0] /
[2*(rho_r^(i-1))*rho_0]] + (1 - (1/rho_r))*D_i-1
except for the first shock in the sequence where we have:
Eq. 3.7.5-9
D_1 = [[(gamma + 1)*P_0 + (gamma - 1)*P_0] / 2*rho_0] +
(1 - ((gamma-1)/(gamma+1)))*D_0
Since the driving interface of the layer is accelerated by the initial shock, the distance required to traverse the layer is less for subsequent shocks. When combined with the increasing shock
velocities in the initial frame of reference, and multiplicative nature of the pressure increases, we can easily see that the driving pressure must rise very rapidly close to t_transit_0.
Brueckner and Jorna [Laser Driven Fusion, Reviews of Modern Physics, Vol. 46, No. 2; April 1974; pg. 350] derive an expression for how pressure must change with time using an assumption of adiabatic
compression, treating i as a continuous variable, taking gamma as 5/3, and apparently also treating d as being constant:
Eq. 3.7.5-10
P(t_i)/P_0 = [[3*(t_i/(t_transit_0 - t_i)) / (8*(5^0.5))] + 1]^5
Plotting this function of P(t_i) vs. t_i shows this quasi-asymptotic rise.
The geometry of the systems we are actually interested are not really planar, but spherical or cylindrical instead. Still, this model provides valuable insight. Since the fuel volume is proportional
to r^3 (for spherical geometry) or r^2 (for cylindrical geometry), it follows that most of the fuel mass resides at a fairly small radial distance from the surface. Half of the fuel is within 0.103
radii of the surface of a spherical mass, 0.146 radii of a cylindrical one. The effects of shock front convergence at these small radial distances is slight.
The fact that the opposite side of a fuel layer containing most of the fuel is not a free surface also relaxes the pressure history requirements. It does not matter whether the shock sequence or
pressure gradient converges to a single shock at this point since the shocks will continue onward toward the center. The pressure increase does not need to be as fast or steep. It is important to
avoid the buildup of excessively strong shocks in this outer layer of fuel, but farther in toward the center it does not matter much since inefficient compression here has little effect on the
overall compression.
How to achieve the desired pressure-time curve in a fusion bomb where the shock production system (the fission trigger) destroys itself in a very short burst of energy, is a difficult and intriguing
Possible approaches include using:
1. a radiation barrier between the primary and secondary with variable opacity to create a suitable time-dependent energy flux;
2. a pusher that transfers momentum and energy to the fuel over time by multiple reflected shocks and adiabatic compression;
3. layers of materials of different densities (shock buffers) to create multiple reflected shocks (this option can be considered a variant of the second); or
4. modulating the production of energy to provide the desired quasi- exponential pressure-time curve directly (e.g. tailoring the neutron multiplication rate, use of fusion boosting, use of multiple
staging, or some combination of these).
Finally, we should not over look the importance of ionization compression (Section 3.2.4 Matter At High Pressures). When matter is heated until it becomes highly ionized, dense high-Z material will
tend to expand and compress low-Z material due to the particle pressure of the ionized electrons. As an example the initial densities of uranium and lithium deuteride are 0.058 and 0.095 moles/cm^3
respectively. If adjacent bodies of uranium and lithium deuteride are heated until the uranium reaches its 80th ionization state (81 particles per atom, counting the ion), the lithium and deuterium
will be completely ionized, and the particle densities become
1. Uranium: 0.058 * (80 + 1) = 4.7
2. Lithium Deuteride: 0.095 * ((3 + 1) + (1 + 1)) = 0.57
a particle density ratio of 8.2. The uranium will tend to expand, compressing the lithium deuteride, until the particle densities are the same. Ionization compression can thus lead to compression
factors above 8 in this case.
3.8 Instability
Instabilities arise when force is exerted (or work is done) at a fluid interface. In unstable situations small deviations from symmetry or planarity tend to grow with time, magnifying the original
imperfections. Since in the real world perfect symmetry or plane surfaces never exist (if only due to the atomic granularity of matter), unstable situations inevitably disrupt themselves with time.
The key question is whether the initial irregularity magnitude, the rate of growth, and the length of time the unstable situation lasts are together sufficient to cause significant disruption.
3.8.1 Rayleigh-Taylor Instability
Probably the best known instability is the Rayleigh-Taylor instability. It arises when a denser fluid exerts force on a less dense one. The classic example of this is trying to float water on oil.
Even if a (nearly) perfect layer of water can be initially established on top of the oil layer, it quickly sinks through to the bottom. The water-on-oil system is not stable.
The water sinks to the bottom because, as is generally true in physical systems, the water-on-oil system evolves to a configuration of lower potential energy. The energy gained by the oil rising is
more than offset by the energy given up by the water sinking. No matter how perfect the water layer is initially, it will eventually sink to the bottom unless other stabilizing forces exist (like
surface tension).
The smoother the interface is, the slower the instability develops. By the same token it is a rapidly accelerating process. Each increase in irregularity leads to even more rapid growth. The growth
in irregularities is in fact exponential with time. Even absolutely perfect interfaces cannot prevent this from occurring since Brownian motion will create atomic scale irregularities where none
initially existed.
This issue arose during the Manhattan Project when the forces involved in atomic bomb assembly and disassembly were being considered. Atomic bombs are made of materials of different densities. Large
forces are generated by high explosive implosion, and immensely larger ones are generated by the fission energy release. Do instability problems arise that constrain the design, or limit the
efficiency of the bomb?
A typical law governing the growth of Rayleigh-Taylor instabilities is:
Eq. 3.8.1-1
P = e^t*[a*k*(rho_1 - rho_2)/(rho_1 + rho_2)]^0.5
where P is the perturbation amplification, a is acceleration, rho_1 and rho_2 are the densities of the denser and lighter liquids, t is time, k is the perturbation wave number (determined by the
scale of the perturbation relative to the scale of the whole interface), and e is the natural logarithm base. From this we can see that the exponential growth rate is affected by the relative and
absolute densities of the fluids, and the acceleration (that is to say, the force exerted across the interface).
It is evident from the above equation that for a fixed velocity increment, the amplification of irregularities is lower for higher accelerations (there is less time for growth). In the limiting case
of shock acceleration, it does not occur at all. It is only when unbalanced forces exist for significant time intervals that the instability becomes a problem.
This can occur during adiabatic compression, the collapse of thick shells, or when expansive forces are being generated by explosive reactions. The possible instability of the fissile core/tamper
interface in both Manhattan Project bomb designs caused substantial concern (later shown to be unwarranted).
Despite considerable attention paid to this potential problem, it appears that it is largely a non-issue in weapons design. Due to the extremely short time scales involved, and the dominance of shock
compression phenomena, significant instability problems do not appear to arise often. In fission bombs this is mostly a problem during the adiabatic compression phase behind the convergent shock wave
of an implosion. It can also be an important factor during the confinement and expansion of the fusion fuel in a thermonuclear secondary, where a low density fuel first halts the pusher implosion,
then drives the pusher outward.
3.8.2 Richtmyer-Meshkov Instability
Unlike Rayleigh-Taylor, which does not arise in true shock compression, Richtmyer-Meshkov instability is a phenomenon only found during shock compression (or a very steep compression wave that
approximates a true shock). This instability is caused by irregularities in an interface between two media of differing density, and the effects of shock velocity variation with density. When the
shock arrives at the interface, the portions of the transmitted shock front in the denser medium will lag behind the portions in the less dense one. If the shock is initially travelling in a less
dense medium, there will be a reflected shock as well. The reflected shock will also be irregular since the first part of the shock to encounter the interface will be reflected back before the later
parts. An initially flat shock thus gives rise to transmitted, and possibly reflected shocks, that are no longer flat, and this waviness may tend to grow with time. In a plane shock, once the shock
has finished crossing the interface, or has finished being reflected, the shock will tend to return to planarity due to the inherent stability of plane shock waves. Convergent shocks are not stable
though, so Richtmyer-Meshkov can be a serious problem in implosion systems. In addition, the shock front irregularities generate vortexes through the Helmholtz instability effect behind the front
which can lead to mixing that continues long after the shock has passed by.
Richtmyer-Meshkov instability also differs from Rayleigh-Taylor in that it will arise no matter which side of the interface the compression force arrives from. Shocks travelling from low density
regions to high density ones, and from high density regions to low density both cause this phenomenon (although the later case causes transmitted shocks only). The growth of the instability with time
is linear instead of exponential though, which tends to limit its severity.
3.8.3 Helmholtz Instability
This instability is found in systems where flow occurs parallel to the interface between two fluids (this sliding motion is also called "shear"). Slight wavy irregularities in the interface cause the
flow on either side to be diverted form a straight flow into a longer path, just as air is diverted as it flows over a curved airplane wing. By Bernoulli's principle the pressure in this region must
drop, as it does to create lift with a wing. These pressure imbalances cause the wavy irregularities to grow in amplitude. When the height of irregularities reach a certain fraction of their
wavelength they will curl, just like a breaking ocean wave. The rotational motion of the curl quickly becomes a vortex, which mixes the two fluids together. The effect of Helmholtz instability is to
create a mixing layer consisting of a series of vortices maintained and driven like roller bearings by the fluid flow on either side.
Helmholtz instability arises in implosion systems as a side effect of other processes. Mach reflection from a surface creates a region of fluid flow behind the Mach front(the slip stream), which is
subject to Helmholtz instability. Shock reflection and refraction at an irregular interface associated with Richtmyer-Meshkov instability also produces regions of fluid flow that can cause Helmholtz
instability effects.
|
{"url":"http://nuclearweaponarchive.org/Nwfaq/Nfaq3.html","timestamp":"2014-04-18T13:06:47Z","content_type":null,"content_length":"207878","record_id":"<urn:uuid:95919c1e-b5d6-44be-9c77-6b55fd675944>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00416-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Welcome to Walter's Home Page
Hi, I'm Walter Vannini. I'm a programmer and I'm based in the San Francisco Bay Area.
Some of my strengths are simulation (including Computer Aided Manufacturing, modeling Petrochemical plants, animating building Construction), modeling (meshes, spline surfaces, NURBS, core CAD
internals like boolean operations) and visualization (3d graphics, OpenGL, scenegraph API's, point clouds, stereography). I started off as a mathematician who could do a little programming and I've
evolved into a programmer who knows some mathematics. I do the kinds of things you would expect of a mathematician, like implementing algorithms, customizing algorithms for specific situations, and
creating algorithms as needed. I also do things like add modern user interface features to legacy applications, use ATL to write ActiveX components, write native windows apps, programmatically deal
with XML, and programmatically query and modify databases. I've even written an image processing server side application for interactively applying virtual makeup to uploaded jpeg images via a web
browser. A lot of strange projects were funded during the dot com crazy years, and I have my share of stories!
I'm a big C++ enthusiast, and many of the projects I've had a chance to work on over the last ten years have involved C++ in a windows environment. I'm the president of the local ACCU (Association of
C and C++ Users) chapter, and an easy way to meet me is to attend a monthly meeting. I'm the guy introducing the speaker. Before my current C++ phase, I did lots of C programming in a variety of UNIX
environments, and to this day I'm very comfortable with the command line. I've also quickly learned and quickly forgotten a bunch of languages in the process of getting various contracts done.
Before becoming a professional programmer, I was a mathematics professor. My area of expertise is differential geometry, which originally attracted me because I was studying general relativity. It
continued to attract me because it combines so many interesting areas of mathematics: multilinear algebra, complex analysis, quaternions, ordinary differential equations, partial differential
equations, calculus of variations, algebraic topology, measure theory, and group theory. And it involves the generalization of familiar three dimensional geometry to higher dimensions. My interest in
differential geometry is the main reason I chose the mathematics department of SUNY at Stony Brook, one of the world's outstanding geometry centers, as the place to get my PhD in mathematics. To stay
in touch with the latest trends I often attend the Stanford mathematics seminars and I also regularly attend the local BAMA (Bay Area Mathematical Adventures) lectures popularizing mathematics. I
expect to keep learning about surprising and beautiful links between seemingly unrelated results for a long time to come.
I've always enjoyed teaching, and am happy to share information. I prefer to do it in person, but writing is also satisfying. On this site you'll find many notes I've written up on programming and
mathematics. I'm adding more items, so be sure to check the "what's new?" page every so often.
Finally, the easiest way to contact me is via email: walterv@gbbservices.com. As I've mentioned above, there are several local groups whose meetings I regularly attend. If you're attending any of
those meetings, please feel free to say hello. I'm also the secretary of the Silicon Valley SIGGRAPH chapter , so yet another place to meet me is at the monthly meetings at Apple headquarters.
July 17 2007 Last Updated
Back to top of page
|
{"url":"http://www.gbbservices.com/walter.html","timestamp":"2014-04-18T08:16:17Z","content_type":null,"content_length":"6365","record_id":"<urn:uuid:8601f712-06df-4b4b-b5b0-5fcd0bdc893c>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00083-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Mathematics Magazine Contents—February 2013
Also in this issue: flight plans, infinite products, and power series, and a proposal to save Pi Day. —Walter Stromquist
Vol. 86, No. 1, pp.3-80.
Journal subscribers and MAA members: Please login into the member portal by clicking on 'Login' in the upper right corner.
Be Careful What You Assign: Constant Speed Parametrizations
Michelle L. Ghrist and Eric E. Lane
We explore one aspect of a multivariable calculus project: parametrizing an elliptical path for an airplane that travels at constant speed. We find that parametrizing a constant speed elliptical path
is significantly more complicated than a circular path and involves taking the inverse of the elliptic integral of the second kind. We compare our constant-speed parametrization to the standard
ellipse parametrization $$(x(t)=a\cos(wt), y(t)=b\sin(wt))$$ and generalize to parametrizing other constant-speed curves.
To purchase the article from JSTOR: http://dx.doi.org/10.4169/math.mag.86.1.003
New Infinite Products of Cosines and Viète-Like Formulae
Samuel G. Moreno and Esther M. García
In this article, the authors show that Viète's formula is only the tip of the iceberg. Under the surface, they search for curious and interesting Viète-like infinite products, rare species made of
products of nested square roots of 2, but here with some minus signs occurring inside. To explore this fascinating world, they only use the simple trigonometric identity $$\cos x=2\cos((\pi+ 2x)/4)\
cos((\pi-2x)/4)$$, combined with a recent formula by L. D. Servi.
To purchase the article from JSTOR:http://dx.doi.org/10.4169/math.mag.86.1.015
Sitting Down for Dinner at a Twin Convention
Martin Griffiths
In this article we consider a particular combinatorial scenario involving $$n$$ sets of identical twins. We show how, subject to various assumptions and conditions on the possible groupings, formulas
may be obtained in terms of $$n$$ for the number of ways in which these $$2n$$ individuals can be seated at $$k$$ tables for any fixed value of $$k$$. This is achieved by deriving recurrence
relations and subsequently employing exponential generating functions.
To purchase the article from JSTOR: http://dx.doi.org/10.4169/math.mag.86.1.026
The Lah Numbers and the $$n$$th Derivative of $$e^{1/x}$$
Siad Daboul, Jan Mangaldan, Michael Z. Spivey, and Peter J. Taylor
We give five proofs that the coefficients in the $$n$$th derivative of $$e^{1/x}$$ are the Lah numbers, a triangle of integers whose best-known applications are in combinatorics and finite difference
calculus. Our proofs use tools from several areas of mathematics, including binomial coefficients, Faà di Bruno's formula, set partitions, Maclaurin series, factorial powers, the Poisson probability
distribution, and hypergeometric functions.
To purchase the article from JSTOR:http://dx.doi.org/10.4169/math.mag.86.1.039
The Equation $$f(2x)=2f(x)f'(x)$$
A. F. Beardon
We discuss solutions of the equation $$f(2x)=2f(x)f'(x)$$, which is essentially a delay differential equation, with the boundary condition $$f'(0)=1$$, on the interval $$[0,\infty)$$. In particular,
we note that the only known solutions of this type that are positive when $$x$$ is positive are the functions $$c^{-1}\sinh(cx)$$, where $$c>0$$, and the function $$x$$.
To purchase the article from JSTOR: http://dx.doi.org/10.4169/math.mag.86.1.048
Pi Day
Mark Lynch
In 1932, Stanislaw Golab proved that, for a large class of metrics in the plane, the perimeter of the unit disk can vary from 6 to 8. Hence, the ratio corresponding to pi can vary from 3 to 4. We
illustrate this result by defining a family of metrics that can be understood easily by any student of linear algebra.
To purchase the article from JSTOR: http://dx.doi.org/10.4169/math.mag.86.1.052
Proof Without Words: Fibonacci Triangles and Trapezoids
Angel Plaza and Hans R. Walser
To purchase the article from JSTOR: http://dx.doi.org/10.4169/math.mag.86.1.055
The Scope of the Power Series Method
Richard Beals
This note examines the application of the power series method to the solution of second order homogeneous linear ODEs, and shows that it works in a straightforward way only under restrictive
conditions—in cases that reduce to the hypergeometric equation or the confluent hypergeometric equation. On the other hand, it is noted that these equations account for most "special functions."
To purchase the article from JSTOR: http://dx.doi.org/10.4169/math.mag.86.1.056
Proposals 1911-1915
Quickies Q1025-Q1028
Solutions 1886-1890
To purchase the article from JSTOR: http://dx.doi.org/10.4169/math.mag.86.1.063
To purchase the article from JSTOR: http://dx.doi.org/10.4169/math.mag.86.1.072
73rd Annual William Lowell Putnam Examination
To purchase the article from JSTOR: http://dx.doi.org/10.4169/math.mag.86.1.074
|
{"url":"http://www.maa.org/publications/periodicals/mathematics-magazine/mathematics-magazine-contents-february-2013?device=mobile","timestamp":"2014-04-16T20:38:03Z","content_type":null,"content_length":"25540","record_id":"<urn:uuid:d037e62e-fddd-4042-95e3-ebfa83c2ffb2>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00606-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Math Forum Discussions - Given a grid of squares and the 4 corner cells coordinates, find all the others
Date: Dec 10, 2012 1:03 PM
Author: Francisco Seixas
Subject: Given a grid of squares and the 4 corner cells coordinates, find all the others
Hi there,
I have a photo of a grid of squares, 8 lines and 16 columns. I have also the x,y coordinates of the cells in each corner. How can I find the coordinates of the center of all the other cells? The grid may not be aligned with the x and y axis of the image.
|
{"url":"http://mathforum.org/kb/plaintext.jspa?messageID=7935003","timestamp":"2014-04-19T02:36:48Z","content_type":null,"content_length":"1345","record_id":"<urn:uuid:995c1d4f-64b8-460b-997f-495509483806>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00029-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Calc Constraint Problem
October 29th 2007, 09:09 PM #1
Junior Member
Apr 2007
Calc Constraint Problem
I have no idea what to do! Someone please help! and break down the steps for me please!
It takes a certain power
Suppose you have a certain amount of fuel and the fuel flow required to deliver a certain power is proportional to to that power. What is the speed
you have the function $P(v)=cv^2+\frac{d}{v^2}$ which should be minimized.
You get an extreme value (minimum or maximum) of a function if the first derivative equals zero:
$P'(v)=2cv-\frac{2d}{v^3}$. Solve the equation
$2cv-\frac{2d}{v^3}=0~\implies~2cv^4-2d=0~\implies~v^4=\frac dc~\implies~v=\sqrt[4]{\frac dc}$
October 29th 2007, 11:53 PM #2
|
{"url":"http://mathhelpforum.com/calculus/21612-calc-constraint-problem.html","timestamp":"2014-04-16T04:22:55Z","content_type":null,"content_length":"37375","record_id":"<urn:uuid:2c257423-7809-4117-a852-c39fb1864e17>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00327-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Mechanical properties of carbynes investigated by ab initio total-energy calculations
Publication: Research - peer-review › Journal article – Annual report year: 2012
title = "Mechanical properties of carbynes investigated by ab initio total-energy calculations",
publisher = "American Physical Society",
author = "Castelli, {Ivano E.} and Paolo Salvestrini and Nicola Manini",
note = "©2012 American Physical Society",
year = "2012",
doi = "10.1103/PhysRevB.85.214110",
volume = "85",
number = "21",
pages = "214110",
journal = "Physical Review B (Condensed Matter and Materials Physics)",
issn = "1098-0121",
TY - JOUR
T1 - Mechanical properties of carbynes investigated by ab initio total-energy calculations
A1 - Castelli,Ivano E.
A1 - Salvestrini,Paolo
A1 - Manini,Nicola
AU - Castelli,Ivano E.
AU - Salvestrini,Paolo
AU - Manini,Nicola
PB - American Physical Society
PY - 2012
Y1 - 2012
N2 - As sp carbon chains (carbynes) are relatively rigid molecular objects, can we exploit them as construction elements in nanomechanics? To answer this question, we investigate their remarkable
mechanical properties by ab initio total-energy simulations. In particular, we evaluate their linear response to small longitudinal and bending deformations and their failure limits for longitudinal
compression and elongation.
AB - As sp carbon chains (carbynes) are relatively rigid molecular objects, can we exploit them as construction elements in nanomechanics? To answer this question, we investigate their remarkable
mechanical properties by ab initio total-energy simulations. In particular, we evaluate their linear response to small longitudinal and bending deformations and their failure limits for longitudinal
compression and elongation.
KW - Physics
KW - Generalized gradient approximation
KW - Linear carbon chains
KW - Nanotubes
KW - Polyynes
U2 - 10.1103/PhysRevB.85.214110
DO - 10.1103/PhysRevB.85.214110
JO - Physical Review B (Condensed Matter and Materials Physics)
JF - Physical Review B (Condensed Matter and Materials Physics)
SN - 1098-0121
IS - 21
VL - 85
SP - 214110
ER -
|
{"url":"http://orbit.dtu.dk/en/publications/mechanical-properties-of-carbynes-investigated-by-ab-initio-totalenergy-calculations(b3d150e6-c0f0-4e7c-9652-dbf1b59812fa)/export.html","timestamp":"2014-04-18T11:01:12Z","content_type":null,"content_length":"20566","record_id":"<urn:uuid:990ef020-aa73-4273-a141-8a0f97e7cafd>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00364-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Ardsley On Hudson Math Tutor
...I also focus on vocabulary. Once you learn the basics the entire subject becomes easier to master. I may never get you to like Chemistry, but you will pass the course.
76 Subjects: including algebra 1, algebra 2, ACT Math, calculus
...I am certified by the New Jersey Department of Education in Biological Science and have three years' tutoring experience. I am also able to tutor Algebra 1. I am available on weekends and after
3 pm weekdays during the school year.
12 Subjects: including prealgebra, algebra 1, biology, anatomy
...I graduated in 2005 and have been tutoring for more than 10 years. I am currently pursuing a second degree in physics towards a PhD. I have more than 10 years of experience with a very high
success rate.
23 Subjects: including precalculus, GMAT, linear algebra, algebra 1
...It is often the course where students become acquainted with symbolic manipulations of quantities. While it can be confusing at first (eg "how can a letter be a number?"), it can also broaden
your intellectual scope. It's a step out of the morass of arithmetic into a more obviously structured way of thinking.
25 Subjects: including calculus, statistics, discrete math, logic
...A huge part of my time is spent doing remedial work that often starts with fundamental study skills such as note taking, taking notes of notes, and using those notes to study for exams. Many
students know they should be highlighting important material when reading text books. Unfortunately much of what get highlighted is either the wrong material....or they highlight everything.
24 Subjects: including geometry, English, statistics, biology
|
{"url":"http://www.purplemath.com/ardsley_on_hudson_math_tutors.php","timestamp":"2014-04-20T21:45:11Z","content_type":null,"content_length":"23921","record_id":"<urn:uuid:338736a3-27d8-441e-9845-f56e27fbe962>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00051-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Summary: NUROWSKI'S CONFORMAL STRUCTURES FOR (2,5)-DISTRIBUTIONS
Abstract. As was shown recently by P. Nurowski, to any rank 2 maximally nonholonomic
vector distribution on a 5-dimensional manifold M one can assign the canonical conformal
structure of signature (3, 2). His construction is based on the properties of the special 12-
dimensional coframe bundle over M, which was distinguished by E. Cartan during his famous
construction of the canonical coframe for this type of distributions on some 14-dimensional
principal bundle over M. The natural question is how "to see" the Nurowski conformal
structure of a (2, 5)-distribution purely geometrically without the preliminary construction
of the canonical frame. We give rather simple answer to this question, using the notion of
abnormal extremals of (2, 5)-distributions and the classical notion of the osculating quadric
for curves in the projective plane. Our method is a particular case of a general procedure
for construction of algebra-geometric structures for a wide class of distributions, which will
be described elsewhere. We also relate the fundamental invariant of (2, 5)-distribution, the
Cartan covariant binary biquadratic form, to the classical Wilczynski invariant of curves in
the projective plane.
1. Introduction
1.1. Statement of the problem. The following rather surprising fact was discovered by
P. Nurowski recently in [8]: any rank 2 maximally nonholonomic vector distribution on a 5-
|
{"url":"http://www.osti.gov/eprints/topicpages/documents/record/200/2989879.html","timestamp":"2014-04-17T01:11:34Z","content_type":null,"content_length":"8788","record_id":"<urn:uuid:3971dc72-b579-4f3e-afaf-7906be44d1ed>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00371-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Mount Vernon Square, Washington, DC
Washington, DC 20001
Traveling Ivy League Tutor w/ Unique Approach That Works for Anyone!
...Post-undergraduate, I began as a volunteer group tutor before also becoming a private one-on-one tutor as well. I enjoy every minute of it, and it's been one of the most rewarding experiences of
my life so far, one that has inspired me to become a secondary math...
Offering 10+ subjects including algebra 1
|
{"url":"http://www.wyzant.com/Mount_Vernon_Square_Washington_DC_algebra_1_tutors.aspx","timestamp":"2014-04-18T17:27:03Z","content_type":null,"content_length":"61570","record_id":"<urn:uuid:18d67085-b12a-40d0-b6b9-7c9aabca5472>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00333-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Antiderivatives word problem!
November 28th 2010, 12:30 PM #1
Junior Member
Sep 2010
Antiderivatives word problem!
The population of a bacterial colony t hours after observation begins is found to be changing at the rate dP/dt = 200e^0.1t+150e^-0.03t. If the population was 200,000 bacteria when observations
began, what will the population be 12 hours later?
what i did:
(1/12-0)[antiderivative 200e^0.1t+150e^-0.03t]
(1/12)[(200/1.1)e^1.1t + (150/0.97)e^0.97t]
(1/12)[((200/1.1)e^13.2 + (150/0.97)e^11.64) - ((200/1.1)e^0 + (150/0.97)e^0)]
is this right?? seems like a big number!
$\displaystyle \int 200e^{0.1t}+150e^{-0.03t}~dt = 2000e^{0.1t}-\frac{100}{3}\times 150e^{-0.03t}+C$
The population of a bacterial colony t hours after observation begins is found to be changing at the rate dP/dt = 200e^0.1t+150e^-0.03t. If the population was 200,000 bacteria when observations
began, what will the population be 12 hours later?
what i did:
(1/12-0)[antiderivative 200e^0.1t+150e^-0.03t]
(1/12)[(200/1.1)e^1.1t + (150/0.97)e^0.97t]
(1/12)[((200/1.1)e^13.2 + (150/0.97)e^11.64) - ((200/1.1)e^0 + (150/0.97)e^0)]
is this right?? seems like a big number!
$\dfrac{dP}{dt} = 200e^{0.1t} + 150e^{-0.03t}$
I don't know where you get the 1 and 12 from, if you assume the time observations began is time 0 your limits are 12 and 0.
You've also got the integration wrong, when talking about exponentials we do not add 1 to the power: $\int e^{ax} = \dfrac{1}{a}e^{ax} + C$ which can be shown by taking the derivative using the
chain rule
$\displaystyle \int^{12}_0 200e^{0.1t}+150e^{-0.03t}~dt = \left[2000e^{0.1t}-\frac{100}{3}\times 150e^{-0.03t}\right]^{12}_0$
Last edited by e^(i*pi); November 28th 2010 at 12:53 PM. Reason: pinching pickslides' latex because it looks better than mine :)
$\dfrac{dP}{dt} = 200e^{0.1t} + 150e^{-0.03t}$
I don't know where you get the 1 and 12 from, if you assume the time observations began is time 0 your limits are 12 and 0.
You've also got the integration wrong, when talking about exponentials we do not add 1 to the power: $\int e^{ax} = \dfrac{1}{a}e^{ax} + C$ which can be shown by taking the derivative using the
chain rule
$\displaystyle \int^{12}_0 200e^{0.1t}+150e^{-0.03t}~dt = \left[2000e^{0.1t}-\frac{100}{3}\times 150e^{-0.03t}\right]^{12}_0$
so using your equation I sub in 12 for t and then I sub in 0 for t and subtract that from the value where t = 12? This gives me 6151.852215 so does this mean the population 12 hours later is
about 6151.85 bacteria?
add the original 200000 ... the definite integral calculates only the change in the population.
November 28th 2010, 12:49 PM #2
November 28th 2010, 12:50 PM #3
November 28th 2010, 06:04 PM #4
Junior Member
Sep 2010
November 28th 2010, 06:09 PM #5
November 28th 2010, 06:41 PM #6
Junior Member
Sep 2010
|
{"url":"http://mathhelpforum.com/calculus/164646-antiderivatives-word-problem.html","timestamp":"2014-04-20T04:06:50Z","content_type":null,"content_length":"50425","record_id":"<urn:uuid:468c3982-8aae-42e6-a1f2-fd5059c2cae3>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00278-ip-10-147-4-33.ec2.internal.warc.gz"}
|
genome sizes and number of genes summary
Tom Schneider toms at fcs260c2.ncifcrf.gov
Wed Aug 26 16:21:49 EST 1992
In article <9208252323.AA00711 at evolution.genetics.washington.edu>
joe at GENETICS.WASHINGTON.EDU (Joe Felsenstein) writes:
|Tom Schneider, in the midst of his posting on genome sizes writes
|> that the genome size is 4673600, while the number of genes is 3237.
|> This gives:
|> Rfrequency = log2(4673600/3237) = 10.5 bits per site.
|What does Rfrequency do for you? Is it intended to measure the potential
|information content per site? I would have thought that would be
|2*(4673600/3237) = 2888 bits per locus, or 2 bits per site (counting the
|four symbols A, C, G, T as equally frequent).
|I guess I should have been reading the information theory group.
|Joe Felsenstein, Dept. of Genetics, Univ. of Washington, Seattle, WA 98195
| Internet: joe at genetics.washington.edu (IP No. 128.95.12.41)
| Bitnet/EARN: felsenst at uwavm
(I'm cross posting, so I didn't edit your posting.)
Technically, Rfrequency is a measure of the reduction of entropy required for
the recognizer (the ribosome in this case) to go from the state of being
anywhere on the genome (ie, log2(G), where G = 4673600) to the state of it
being at any one of the functional sites (ie log2(gamma) where gamma = 3237).
This decrease is:
Rfrequency = log2(G) - log2(gamma)
= log2(G/gamma)
= -log2(gamma/G)
= -log2("frequency of sites in the nucleic acid")
hence the name. I originally called it Rpredicted, because it is a kind of
prediction of the information needed to locate the sites. It is NOT the
potential information content of the sites because one does not use any
sequence data to calculate it.
The measured information content of a site (ie, the sequence conservation at
the site) is called Rsequence. It is measured for the same state function, but
uses the sequences. I won't go more into that one here; there are a couple
papers on it if you are interested.
Your calculation is interesting, as you doubled the genomic size. In many
cases this is the proper thing to do, namely when one is considering binding
sites on DNA. However, ribosomes work on single stranded RNA, and in E. coli,
just about the entire genome is transcribed, but only from one strand. Hence
the number of potential ribosome binding sites is just 4.7e6.
Further, you have to take the logarithm of the number of choices made
to get your answer in bits. So you should have said:
log2(2*(4673600/3237)) = log2(2888) = 11.5 bits per site
Naturally, if you increase the search region by a factor of 2, you increase the
information needed to find the objects by 1 bit, since a bit defines the choice
between two equally likely things. Once again, this calculation does not take
into account that there are 4 bases. (That is important for Rsequence.)
The surprise comes when one compares Rfrequency to Rsequence; they are not
always the same. Therein lie lots of interesting stories... My first data
showed a number of cases where Rs~1*Rf, then one where Rs ~ 2*Rf and we just
published a case of Rs ~3*Rf! We now have a paper in press where Rs ~ Rf - 1
bit (donor splice sites have more than 1 bit less Rs than acceptors, but
acceptors equal Rf). I'll let you see if you can figure out what the biology
of these might be...
Tom Schneider
National Cancer Institute
Laboratory of Mathematical Biology
Frederick, Maryland 21702-1201
toms at ncifcrf.gov
More information about the Mol-evol mailing list
|
{"url":"http://www.bio.net/bionet/mm/mol-evol/1992-August/000275.html","timestamp":"2014-04-20T12:14:19Z","content_type":null,"content_length":"6191","record_id":"<urn:uuid:5365171c-5973-4666-9775-dde9bfe2c0d8>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00632-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Simple trigonometry problem
July 21st 2012, 01:13 PM
Simple trigonometry problem
Hello, I am new here but I’m not big on many words so I’m just going to go right to the reason i am here.
I am having a small trigonometry problem.
I have this equation
and I am supposed to solve this where x is equal to or between 0 and 2pi
What I have tried is:
Now, I see that I have done something wrong because when i tried inputting this into my calculator using the wonderful "Solve" function I get answers nothing like the answers listen in the back
of the book. And second, I cannot see any way to get any more than two solutions from this. (Not like the four solutions I am supposed to get)
Now, I am having similar problems with all the other similar problems, so I am missing something important in general here. Anyone
July 21st 2012, 01:19 PM
Re: Simple trigonometry problem
How'd you get from .7 to .75?
Divide both sides by cos 2x to get
$\frac{\sin(2x)}{\cos(2x)} = 0.7 \Rightarrow \tan(2x) = 0.7$, then solve.
July 21st 2012, 01:46 PM
Re: Simple trigonometry problem
Thanks for your help! The .7 was actually I typo, it was supposed to be .75 though the exact number is not important to my question.
But I believe what I am having trouble with is more basic.
The way I would normally solve this (From the other chapters in my textbook) I would do something like this:
This then gives me 0.32 and that is indeed one of the answers in my book.
But how can i solve this to give me all four solutions (not using a graph)?
July 21st 2012, 02:25 PM
Re: Simple trigonometry problem
Four solutions over what interval? There are, of course, an infinite number of solutions over all real number because tangent is periodic with period $\pi$. On the interval 0 to $2\pi$ we have
both 2x= .65 and 2x= 0.64+ $\pi$ which give x= 0.32 and x= 0.64+ $\pi/2$= 1.89, approximately. Because of the division by 2, if we are looking for x between 0 and $2\pi$, then we can add $2x=
0.64+ 2\pi$ so that $x= .32+ \pi= 3.46$ and $2x= 0.64+ 3\pi$ so that $x= 0.32+ 3\pi/2= 5.03$.
July 23rd 2012, 08:22 AM
Re: Simple trigonometry problem
Thank you so much for all you're help!
July 30th 2012, 01:34 AM
Re: Simple trigonometry problem
That's great, I like your way ti describe answers of this trigonometric problems, in mu opinion these simple trigonometry problems seems so difficult but when we start to digging these problems
then they become so simple like cosine law and others.
|
{"url":"http://mathhelpforum.com/trigonometry/201228-simple-trigonometry-problem-print.html","timestamp":"2014-04-23T23:13:58Z","content_type":null,"content_length":"8689","record_id":"<urn:uuid:eba6434d-d18d-4344-913c-b96d88716357>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00482-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Primitive $k$th root of unity in a finite field $\mathbb{F}_p$
up vote 1 down vote favorite
I am given a prime $p$ and another number $k$ ($k$ is likely a power of $2$). I want an efficient algorithm to find the $k$th root of unity in the field $\mathbb{F}_p$. Can someone tell me how to do
finite-fields roots-of-unity
Umm, $\mathbb{Z}/p$ is not algebraically closed ... – David Roberts Dec 5 '12 at 23:31
And when you say "the" $k^{th}$ root of unity, you know there are $k$ of them, and they are not all equal in status? – David Roberts Dec 5 '12 at 23:32
Clarification: sorry my notation isn't perfect. I meant the finite field F_p i.e., a set of integers between {0, 1, ... p-1} where p is a prime. Basically, I am doing FFT over a finite field with k
elements. Doing so requires me to first find a primitive kth root of unity in F_p, right? so, my question is how do I do it. Most descriptions of FFT assume that the primitive root is known. –
codegeek234 Dec 5 '12 at 23:40
add comment
1 Answer
active oldest votes
Presumably, you are assuming that $k$ divides $p-1$, so that there is effectively a primitive $k$th root of unity in ${\bf F}_p$, even $\phi(k)$ of them ($\phi$ is Euler's
totient function). The simplest method I know to get your hand on one is as follows.
A. Factor $k=\ell_1^{n_1}\dots \ell_r^{n_r}$ as a product of distinct prime numbers with exponents.
B. For every $i=1,\dots,r$, do the following: Take a random element $x$ in ${\bf F}_p$ and compute $x^{(p-1)/\ell_i}$ in $F_p$, until the result is different from $1$. Then set
up vote 6 down vote $a_i=x^{(p-1)/\ell_i^{n_i}}$.
C. Set $a=a_1 a_2\cdots a_r$. This is a primitive $k$th root of unity in ${\bf F}_p$.
In practice, $k$ should be a power of $2$, $k=2^n$, so that $r=1$ and you only have to repeat step B once.
4 This is the wrong website for these kind of questions. Try math.stackexchange.com. Meanwhile, here is a 256-bit prime p with p-1 divisible by 2^32.
115792089237316195423570985008687907853269984665640564039457584008797892902913 – Felipe Voloch Dec 6 '12 at 1:26
@Felipe. Should I have kept my mouth shut? – ACL Dec 7 '12 at 22:42
add comment
Not the answer you're looking for? Browse other questions tagged finite-fields roots-of-unity or ask your own question.
|
{"url":"http://mathoverflow.net/questions/115560/primitive-kth-root-of-unity-in-a-finite-field-mathbbf-p?answertab=active","timestamp":"2014-04-16T10:51:18Z","content_type":null,"content_length":"55717","record_id":"<urn:uuid:83654988-184e-4193-bba3-65477c31f3db>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00004-ip-10-147-4-33.ec2.internal.warc.gz"}
|
The RANDOMU function returns one or more uniformly-distributed, floating-point, pseudo-random numbers in the range 0 < Y <1.0.
The random number generator is taken from: "Random Number Generators: Good Ones are Hard to Find", Park and Miller, Communications of the ACM , Oct 1988, Vol 31, No. 10, p. 1192. To remove low-order
serial correlations, a Bays-Durham shuffle is added, resulting in a random number generator similar to ran1() in Section 7.1 of Numerical Recipes in C: The Art of Scientific Computing (Second
Edition), published by Cambridge University Press.
A long integer used to initialize the IDL random number generator. Both of the random number generators, RANDOMN and RANDOMU, simulate random floating point numbers using a seqence of long integers.
You can use Seed to start the sequence. IDL saves the sequence for subsequent calls to RANDOMN and RANDOMU. If Seed is a named variable, RANDOMN and RANDOMU will update it to the next long integer in
the sequence. To generate a random seed from the system time, set Seed equal to an undefined named variable.
NOTE: RANDOMN and RANDOMU use the same sequence, so starting or restarting the sequence for one starts or restarts the sequence for the other. Some IDL routines use the random number generator,
so using them will initialize the seed sequence. An example of such a routine is CLUST_WTS .
The formulas for the binomial, gamma, and Poisson distributions are from Section 7.3 of Numerical Recipes in C: The Art of Scientific Computing (Second Edition), published by Cambridge University
Set this keyword to a 2-element array, [ n , p ], to generate random deviates from a binomial distribution. If an event occurs with probability p , with n trials, then the number of times it occurs
has a binomial distribution.
Set this keyword to an integer order i > 0 to generate random deviates from a gamma distribution. The gamma distribution is the waiting time to the i th event in a Poisson random process of unit
mean. A gamma distribution of order equal to 1 is the same as the exponential distribution.
Set this keyword to the mean number of events occurring during a unit of time. The POISSON keyword returns a random deviate drawn from a Poisson distribution with that mean.
This example simulates rolling two dice 10,000 times and plots the distribution of the total using RANDOMU. Enter:
PLOT, HISTOGRAM(FIX(6 * RANDOMU(S, 10000)) + $
FIX(6 * RANDOMU(S, 10000)) + 2)
In the above statement, the expression RANDOMU(S, 10000) is a 10,000-element, floating-point array of random numbers greater than or equal to 0 and less than 1. Multiplying this array by 6 converts
the range to 0 £ Y < 6.
Applying the FIX function yields a 10,000-point integer vector with values from 0 to 5, one less than the numbers on one die. This computation is done twice, once for each die, then 2 is added to
obtain a vector from 2 to 12, the total of two dice.
The HISTOGRAM function makes a vector in which each element contains the number of occurrences of dice rolls whose total is equal to the subscript of the element. Finally, this vector is plotted by
the PLOT procedure.
|
{"url":"http://www.astro.virginia.edu/class/oconnell/astr511/IDLresources/idl_5.1_html/idl15e.htm","timestamp":"2014-04-20T23:46:25Z","content_type":null,"content_length":"8777","record_id":"<urn:uuid:7c53e830-1e84-42bf-b2bd-23ce0bda9758>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00444-ip-10-147-4-33.ec2.internal.warc.gz"}
|
What Is The Mean Number Of Rolls Of A Die Before ... | Chegg.com
What is the mean number of rolls of a die before a 1 is observed ? Either as an individual or in a small group, roll a die until a 1 is observed. Repeat this process 30 times. (a) Obtain a point
estimate of the mean number of rolls of a die before a 1 is observed (b) The population standard deviation for the number of rolls before a 1 is observed is square root of 30. Use this result to
construct a 90 % Z-interval for the mean number of rolls required before a 1 is observed. (c) The population mean number of rolls before a 1 is observed is 6. Does your interval include 6 ? What
proportion of the Z-intervals in the class includes 6? How many did you expect to include 6 ? (d) Construct a 90% T-interval for the mean number of rolls required before a 1 is observed. (e) The
population mean number of rolls before a 1 is observed is 6. Does your interval include 6? What proportion of the t-intervals in the class includes 6? How many did you expect to include 6? (f)
Compare the Z-interval with the t-interval. Which has the smaller margin of error? Why?
Statistics and Probability
|
{"url":"http://www.chegg.com/homework-help/questions-and-answers/mean-number-rolls-die-1-observed-either-individual-small-group-roll-die-1-observed-repeat--q3765909","timestamp":"2014-04-19T05:40:27Z","content_type":null,"content_length":"23052","record_id":"<urn:uuid:05e88541-7c3e-4e93-81d3-5bf3e4e6e32b>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00209-ip-10-147-4-33.ec2.internal.warc.gz"}
|
First Rule of Computation: Don't Trust Your Calculator
Somewhere along the line, we all come to trust the numbers that come out of computers. After all, an infallible device told us it was true. So, let us start with a simple example, on a ten digit
calculator enter the following:
1E12 + 1 - 1E12
If the 1 is in any position, but the last one, your calculator spits out 0. Despite the fact that the answer is very clearly 1. Using this example, and others, I try to have my students resist the
urge to enter numbers into their calculators until the very end of the problem, and they will then usually avoid such problems. Despite my advice to my students, this bit me twice in two days,
leading to a corollary to the first rule:
Just because you have a bigger, more sophisticated calculator, does not mean you can trust it!
I use Mathematica daily, and, in some sense, it is my extra large calculator. The first problem I had was the result of this innocuous looking bit of code:
The problem comes in as x goes to 0, n goes to infinity. However, f does not do this, and n times f does not do this either. In other words, f goes to zero faster than n goes to infinity, so that
when they are multiplied together the resulting limit is finite. But, I had defined both n and f as functions, so that they are evaluated separately. Therefor when I set x to 0, the whole thing blew
up in my face, as Mathematica only saw that I had infinity times something that was finite. I fixed the problem by replacing the function calls with the functions themselves, at which point
Mathematica got the hint. Thinking I was done, I ran the code again, and it blew up, again.
This time the trouble came about because of the following term in the denominator:
e[k - q]^2 - 2 e[k - q] e[k - q - qp] + e[k - q - qp]^2 - w[q]^2
Mathematica was saying that at k = q = qp this term was going to zero. This should not be zero as the first three terms cancel each other out because they equal
(e[k - q] - e[k - q - qp])^2
and w[q]^2 is not zero. However, individually each of the first three terms is about 20 orders of magnitude larger than w[q]^2, so Mathematica was just blithely ignoring it. The solution: ensure
Mathematica knows that the first three terms cancel out by using the following, instead.
(e[k - q] - e[k - q - qp])^2 - w[q]^2
It should be noted that increasing the precision of the calculation using either N[<>,100] or by increasing $MinPrecision to over 100 does not work. Only the modified form actually gives you a
non-zero answer.
So, even with an extra large calculator, you can really foul things up if you are not careful. There are two morals to this story: when performing floating point calculations, order matters, and a
careful up front analysis will often help you avoid these problems.
Similarly tagged OmniNerd content:
Thread parent sort order:
Highest Voted : Lowest Voted : Oldest : Newest
Thread verbosity:
Hardware or software limitations?
Do you think this limitation in very small or very large number manipulation comes about from hardware/software limitations? A double-precision floating point number is represented with 64 bits. Are
your numbers larger (or smaller) than what can be represented this way?
I’m noticing in MS Visual Studio that the ‘double’ type variable allows for a maximum value of 1.79769313486232E+308. But I think this falls in the same category as the 10 digit calculator example
where significant digits begin to be compromised after 2^52 as the IEEE spec shows.
(Some bits out of the 64 are used for the sign, and the exponent.)
But then I wonder how Mathmatica can calculate say 2^330 and get:
2187250724783011924372502227117621365353169430893212436425770606409952999199375923223513177023053824 — All significant digits.
Maybe it’s a software trick for these straightforward cases?
This comment thread has 3 replies.
Second Rule of Computation: Sometimes you have to trust your calculator.
I just spent a week searching for a non-existent bug in some code I wrote for a class. According to the work we had done looking at the solution analytically (the “we” includes the professor for the
class), as a parameter, t, in the equations was increased we expected the output of our code to go to one, for all values of another parameter, n. However, my code showed a distinct peak near n=0,
that I could not get rid of. Since, we all knew what the answer should be, I stripped down my code looking for numerical errors. The stripping down involved rearranging terms to eliminate any
numerical errors, extending the range of n to get rid of truncation errors, completing rewriting it in a different form, and having another student write their own version in a different language
entirely. But, the result remained consistent. So, despite this, the professor insisted that the code was wrong, as “if the system must go to 1 as t is increased, then the code cannot be correct.”
So, after losing my temper, I went back and looked closely at the original assumption. After very carefully examining the high-t limit, I concluded that my code was correct all along and we were all
incorrect. My fellow students concurred. The moral of the story: sometimes you have to trust your calculator.
Speaking of which, in science we often bend over backwards to expose the flaws in our methodologies just so that we can be absolutely sure of the correctness of our results. If I were an
experimentalist, this would mean that I would have to carefully re-examine anomalous data (re-doing the experiment if necessary) just to prove or disprove the correctness of my original result.
Computation is very similar.
The first step is always to re-examine the code, ensuring its correctness in every way possible, first. Once the code is found to be correct, then, and only then, can we say our original view may be
wrong. The second step is to show that the numerics had to be correct by analyzing the system analytically, if possible. (Usually, this step is done first, but sometimes the numerics reveals
something different.) It is this step that is often very difficult, and usually can only be accomplished in some limit.
That said, a thought occurred to me this morning: was it possible that my professor knew our original assumption was wrong, and he wanted to see if any of us would challenge it. If so, he is more
devious than I gave him credit for, and I would feel like a schmuck for losing my temper. In other words, the statement: “I’m right, but I can’t prove it” just doesn’t fly if you want to be a
theoretician. (And, no those weren’t my exact words, just what I was thinking.)
This comment thread has 2 replies.
|
{"url":"http://www.omninerd.com/articles/First_Rule_of_Computation_Don_t_Trust_Your_Calculator","timestamp":"2014-04-20T08:16:03Z","content_type":null,"content_length":"30677","record_id":"<urn:uuid:3a5d5e4f-6c4b-467b-94c5-a656574161f8>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00196-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Approximation by multiple refinable functions
Canad. J. Math. 49(1997), 944-962
Printed: Oct 1997
• R. Q. Jia
• S. D. Riemenschneider
• D. X. Zhou
We consider the shift-invariant space, $\bbbs(\Phi)$, generated by a set $\Phi=\{\phi_1,\ldots,\phi_r\}$ of compactly supported distributions on $\RR$ when the vector of distributions $\phi:=(\phi_1,
\ldots,\phi_r)^T$ satisfies a system of refinement equations expressed in matrix form as $$ \phi=\sum_{\alpha\in\ZZ}a(\alpha)\phi(2\,\cdot - \,\alpha) $$ where $a$ is a finitely supported sequence of
$r\times r$ matrices of complex numbers. Such {\it multiple refinable functions} occur naturally in the study of multiple wavelets. The purpose of the present paper is to characterize the {\it
accuracy} of $\Phi$, the order of the polynomial space contained in $\bbbs(\Phi)$, strictly in terms of the refinement mask $a$. The accuracy determines the $L_p$-approximation order of $\bbbs(\Phi)$
when the functions in $\Phi$ belong to $L_p(\RR)$ (see Jia~[10]). The characterization is achieved in terms of the eigenvalues and eigenvectors of the subdivision operator associated with the mask
$a$. In particular, they extend and improve the results of Heil, Strang and Strela~[7], and of Plonka~[16]. In addition, a counterexample is given to the statement of Strang and Strela~[20] that the
eigenvalues of the subdivision operator determine the accuracy. The results do not require the linear independence of the shifts of $\phi$.
Keywords: Refinement equations, refinable functions, approximation, order, accuracy, shift-invariant spaces, subdivision
MSC Classifications: 39B12 - Iteration theory, iterative and composite equations [See also 26A18, 30D05, 37-XX]
41A25 - Rate of convergence, degree of approximation
65F15 - Eigenvalues, eigenvectors
|
{"url":"http://cms.math.ca/10.4153/CJM-1997-049-8","timestamp":"2014-04-18T05:32:05Z","content_type":null,"content_length":"35090","record_id":"<urn:uuid:d86cc22a-1e65-4450-b52e-1f0bbb791d47>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00206-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Mathematical Methods For Physicists Arfken Pdf
Sponsored High Speed Downloads
Mathematical Methods for Physicists by G. Arfken Chapter 13: Special Functions Reporters:黃才哲、許育豪
Mathematical Methods for Physicists: A concise introduction CAMBRIDGE UNIVERSITY PRESS TAI L. CHOW
Mathematical Methods for Physicists By George B Arfken Mathematical Methods for Physicists Details: Mathematical Methods for Physicists, Seventh Edition: A ...
Mathematical Methods for Physicists (Fifth edition), Arfken and Weber ... Title: Microsoft Word - Homework Set 1.docx Author: laleh Created Date: 10/5/2012 11:25:28 PM
MATHEMATICAL METHODS FOR PHYSICISTS SIXTH EDITION George B. Arfken Miami University Oxford, OH Hans J. Weber University of Virginia Charlottesville, VA
Mathematical Methods For Physicists International Student Edition (Paperback) by George B. Arfken, Hans J. Weber and a great selection of similar Used, New and Collectible
Mathematical Methods for Theoretic Physics II . Physics 452 Course Information (Winter, 2011) Instructor: Prof. Luming Duan, Office: Randall 4219; Phone: 763-3179; Email: [email protected] Text
book:”Mathematical Methods for Physicists” (by Arfken and Weber, Sixth Edition)
Instructor’s Manual MATHEMATICAL METHODS FOR PHYSICISTS A Comprehensive Guide SEVENTH EDITION George B. Arfken Miami University Oxford, OH Hans J. Weber
Title: Mathematical methods for physicists / George Arfken Author: Arfken, George Subject: Mathematical methods for physicists / George Arfken Keywords
The pdf le that I’ve placed online is hyperlinked, ... Mathematical Methods in the Physical Sciences by ... If you know everything in here, you’ll nd all your upper level courses much easier.
Mathematical Methods for Physicists by Arfken and Weber. Academic Press At a more advanced ...
Title: Mathematical Methods for Physicists Solutions Manual, 5th edition, Fifth Edition pdf by H. J. Weber Author: H. J. Weber Created Date: 4/15/2014 9:12:45 PM
http://www.pakebooks.com/mathematical-methods-for-physicists-PDF-1017804444.html. ... Mathematical Methods of Physicists – Arfken and Weber 2. Mathematical Methods of Physists: A concise introduction
– Tai L. (Cambridge University Press - 2000) 3.
Title: Arfken, George "Mathematical methods for physicists / George Arfken" Author: Arfken, George Subject: Mathematical methods for physicists / George Arfken
mathematical ideas than possible with purely analytical methods. Texts G. Arfken and H. Weber, Mathematical Methods for Physicists (Academic Press, 2005), ...
Text: Mathematical Methods for Physicists by George B. Arfken and Hans J. Weber Office Hours: 2-3 pm Monday, 4-5 pm Friday URL: http://www.phy.ohiou.edu/˜phillips/Phys615.html ... T. L. Chow,
Mathematical Methods for Physicists: A Concise Introduction (Cambridge, 2000);
Essential Mathematical Methods for Physicists Description: This new adaptation of Arfken and Weber's bestselling Mathematical Methods for Physicists, Fifth Edition, is the most comprehensive, modern,
and accessible text for using mathematics to solve physics problems.
Mathematical Methods of Physics (Phys. 801) Study Guide, Fall 2002 Text: (1) Mathematical Methods of Physics by J. Mathew and R. L. Walker (2nd Edition). (2) Mathematical Methods for Physicists by G.
Arfken (5th edition). Instructor: Amit Chakrabarti, Professor of Physics
... Essential Mathematical Methods for Physicists, Hans J. Weber and George B. Arfken, Elsevier ACADEMIC PRESS, Boston 2004. ... your project can be on any subject where mathematical methods
introduced in the course are used to solve physical problems.
... (Mathematical Methods for the Physical Sciences) pro-vides an overview of complex variables, matrix theory, ... (as used by physicists and mathematicians for professional ... G. Arfken and H.
Weber, Mathematical Methods for Physicists, 6th edition, Aca-demic Press, 2005
95.606 Mathematical Methods of Physics II Instructor: Viktor A. Podolskiy Office: ... www: http://faculty.uml.edu/vpodolskiy/95.606 Textbook: Mathematical Methods for Physicists, 6-th edition G.B.
Arfken, H.J. Weber ISBN: 0-12-059876-0 Meeting times: scheduled time: MWF 1:00pm … 1:50pm, ...
... “Mathematical Methods in Physics I”, Fall 2013 Course Information Instructor: ... G. B. Arfken, H. J. Weber, F. E. Harris, Mathematical Methods for Physicists: A Comprehensive Guide, Seventh
edition (Elsevier, 2013);
REFERENCES TEXTS FOR MATH 402/403 Arfken1. , George, and Weber, Hans, "Mathematical Methods fo r Physicists" Second Edition, Academic Press, New
Mathematical Methods of Physics Physics 221 Savdeep Sethi September 29, 2008 ... Mathematical physics is a vast and fascinating subject. ... Arfken and Weber, \Mathematical Methods for Physicists."
Mathews and Walker, ...
MATHEMATICAL METHODS FOR PHYSICISTS SIXTH EDITION George B. Arfken Miami University Oxford, OH Hans J. Weber University of Virginia Charlottesville, VA
Mathematical Methods for the Physical Sciences Course Description: ... (as used by physicists and mathemati- ... G. Arfken and H. Weber, Mathematical Methods for Physicists, 6th edition, Aca-demic
Press, 2005
33-791 Group Theory Spring Semester, 2009 Assignment No. 1 REVISED Due Wednesday, January 28 SUGGESTED READING: Introductions to group theory will be found in many places, including Arfken and Weber,
... Mary Boas: Mathematical Methods in the Physical Sciences , ... Wiley. Supplementary text: Mathematical Methods for Physicists , by George B. Arfken and Hans J. Weber, 4 th Ed., Academic ... like
to see more general and more rigorous approach the book of Arfken and Hans is an excellent ...
Homework solutions for Chapter 12 of Arfken's: Mathematical Methods for Physicists. (For those of you who are still in school; STAY THERE! ... Maths Quest 12 Mathematical Methods Cas Pdf downloads at
Ebookily.org - Download free pdf files,ebooks and documents - Learning Brief SM1: ...
Mathematical Methods of Physics I ChaosBook.org/˘predrag/courses/PHYS-6124-11 ... G. Arfken, Mathematical Methods for Physicists; lengthy, but clear. (r) ... Mathematics for Physicists; remarkably
clear and well-
PH 682 Advanced Statistical Methods and Phase Transition . ... Applied Mathematics for Engineers and Physicists, McGraw-Hill (1970). 2. G. B. Arfken and H.J. Weber, Mathematical Methods for
Physicists, 5 th edition, Academic Press (2001). 3.
PHYS 3900: Methods of Mathematical Physics Instructor: W. M. Dennis Text: Mathematics for Physics, ... Reference: Mathematical Methods for Physicists, G. B. Arfken and H. J. Weber (Academic Press).
Office Hours: Any Time Homework: Will assign problems. Selected Problems will be graded.
Mathematical Methods in Physics II ... TEXTBOOK: Mathematical Methods for Physicists by G.B. Arfken & H.J. Weber, Elsevier 2005, Sixth Ed. COURSE CONTENT: PDE’s and Special Functions Integral
Transforms and Integral Equations Calculus of Variations and Its Applications
MATHEMATICAL METHODS FOR PHYSICS ... 1 Mathematical Methods for Physicists – Tai L. Chow 1st Edition, 2000, Cambridge University Press ... 6 Mathematical Methods for Physicists-G.B Arfken, H.
J.Weber, 5th Edition, Harcourt Pvt. Ltd. ...
G. ARFKEN, Mathematical Methods for Physicists. Texts with titles such as this usually have some treatment of matrices and linear transformations, and Arfken's book is representative of the best of
Mathematical Methods for Physicists by Arfken and Weber (less advanced) 2. Mathematical Methods for Scientists and Engineers by McQuarrie (less advanced) 3. Mathematical Methods for Students of
Physics and Related Fields by Hassani
... Mathematical Methods for Physics and Engineering by Riley, Hobson and Bence (Cambridge, ... • Arfken & Weber, Mathematical Methods for Physicists ... http://www.math.sfu.ca/~cbm/aands/dl/
Abramowitz&Stegun.pdf 1.
Mathematical Methods of Physics I Physics 330 Savdeep Sethi September 12, 2005 ... Mathematical physics is a vast and fascinating subject. ... Arfken and Weber, “Mathematical Methods for Physicists.
Suggested Reading in Mathematical Physics: Mathematical Methods for Physicists Arfken. This is the most commonly used undergraduate text, and the one used when I took this course as an
PHY-5115/5116: MATHEMATICAL METHODS I/II (Dr R Fiebig) The primary text for the courses is: G B Arfken and H J Weber, Mathematical Methods for Physicists. 6th Edition, Elsevier Academic
Text 1: Mathematical Methods in the Physical Sciences, 3rd edition, Mary L. Boas (Wiley, ... Reference: Mathematical Methods for Physicists, G. B. Arfken and H. J. Weber (Academic Press). O ce
Hours:Any Time (by appointment; best: after class or Mon. p.m.) Homework:Will assign problems.
(known as \Baby Arfken"); Arfken, Mathematical Methods for Physicists (QA37.2.A74 1970) . Mathews and Walker, Mathematical Methods of Physics. These are listed in order of increasing di culty; Arfken
and Mathews and Walker are graduate level texts. 1. 2
Essential Mathematical Methods for Physicists, by H.J. Weber and G.B. Arfken Mathematical Methods for Physicists, by G.B. Arfken and H.J. Weber
• Mathematical Methods for Physics and Engineering by K. F. Riley, M. P. Hobson, and S. J. Bence, • Essential Mathematical Methods for Physicists by H. J. Weber and G. Arfken. There are also web
sites which give problems and solutions. One of these is
(Mathematical Methods of Physics II) Instructor: Emanuele Berti Office: 205 Lewis Hall Class Schedule: Tue & Thu 1:00pm-2:15pm, Lewis Room 211 ... Mathematical Methods for Physicists by George B.
Arfken and Hans J. Weber Other Useful Books: (3) ...
MATHEMATICAL METHODS OF PHYSICS Physics 330 KPTC 105 ... supplemented at times by G. Arfken and H. Weber, Mathematical Methods for Physicists. Week Dates Chapter in text Subject 1 9/30, 10/2 3;
Appendix Complex variables (Also read 2) (see also Arfken) 2 10/7, 10/9 1 Ordinary differential ...
PHYS 3201: Mathematical Methods in Physics II (winter) ... Mathematical Methods for Physicists by Arfken (ISBN 0-12-059826-4) ... - Much denser than Arfken. Mathematical Physics by Butkov (ISBN
0-201-00724-4) - a viable alternate to Arfken, ...
Text Book: Mathematical Methods for Physicists,6th Edition, Arfken and Weber Grading: 50% Homework; 25% Midterm; ... For some reason, Mathematical Physics books (Arfken,Boas) don't cover numerical
methods (nor software methods).This is a significant
... Mathematical Methods for Physicists by George B. Arfken and Hans J. Weber Other Useful Books: (3) Mathematics for Physicists, by Philippe Dennery and Andre Krzywicki ... We will also use
Mathematical Methods for Physicists by George B. Arfken and Hans
Title: Mathematical Methods of Physics (2nd Edition) pdf Author: Mathews, Jon; Walker, Robert L. Subject: Mathematical Methods of Physics (2nd Edition) pdf download
7.1 G.B. Arfken and H.J. Weber, Mathematical Methods for Physicists (Academic Press, San Diego, 2001, 5th edition) ... 8.1.3 H.W. Wyld, Mathematical Methods for Physicists QC20.W9 Another general
mathematical methods textbook. Covers less material than the two
|
{"url":"http://ebookily.org/pdf/mathematical-methods-for-physicists-arfken-pdf","timestamp":"2014-04-23T17:14:44Z","content_type":null,"content_length":"46477","record_id":"<urn:uuid:459ca1b5-72d1-4216-8b37-64ffd1f33425>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00445-ip-10-147-4-33.ec2.internal.warc.gz"}
|
How to Divide and Multiply by Negative Numbers
Edit Article
Dividing by Negative NumbersMultiplying by Negative Numbers
Edited by ColtonB, Theburn7, Maluniu
Negative numbers represent numbers that have a value of less than zero. These numbers are found when working with the numbers on the left side of the number line. Negative numbers can be added,
subtracted, multiplied and divided just like positive numbers. However, special rules apply when carrying out operations with negative numbers. It is important to pay close attention to the numbers'
signs when you divide and multiply by negative numbers.
Method 1 of 2: Dividing by Negative Numbers
1. 1
Complete division as you would normally, disregarding the negative signs of the numbers.
2. 2
Examine the numbers to see how many numerals are negative.
□ Your answer will be negative if only 1 of the 2 numbers is negative.
□ The answer will be positive if both of the numbers are negative.
3. 3
Write your answer placing a negative sign in front if it is negative and no sign in front if it is positive.
Method 2 of 2: Multiplying by Negative Numbers
1. 1
Conduct multiplication with the numbers as you would with regular (positive) numbers.
2. 2
Check the signs of the two numbers that you multiplied together.
□ The product (answer) will have a positive sign if both of the numbers were negative.
□ The product (answer) will have a negative sign if only 1 of the numbers was negative and the other positive.
3. 3
Write the answer and place a negative sign in front of it if it is negative.
4. 4
Do not put a sign in front of the answer if it is positive.
• Two negative signs creates a positive sign. Therefore, you can count the number of negative signs when you have a list of numbers to multiply. The answer will be positive if you have an even
number of negative signs. The answer will be negative if you have an odd number of negative signs.
• Check your sign after completing the operation. The sign of the answer changes the entire value of the number.
Article Info
Categories: Mathematics
Recent edits by: Theburn7, ColtonB
Thanks to all authors for creating a page that has been read 2,116 times.
Was this article accurate?
|
{"url":"http://www.wikihow.com/Divide-and-Multiply-by-Negative-Numbers","timestamp":"2014-04-19T12:54:56Z","content_type":null,"content_length":"65767","record_id":"<urn:uuid:6ea1343c-133c-44ee-a39d-07affa8dc50a>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00373-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Probability and Statistics Journal Entries
1. Read the “Fathom” paper given to you and write a thorough response. Do not summarize the article! Your response should include the following:
· Your reaction to the content of the paper.
· What would this lesson have looked like had Fathom not been used?
· How might it have been different without the technology?
· How did Fathom help to make this an effective lesson?
2. Suppose you randomly stopped school-aged children as they entered WalMart and asked, “What grade are you in?” What type of variable does this question represent? Explain your reasoning. What
are the observational units? Suppose you gathered the data below. Use this information to create a circle graph.
│ Grade │K │1 │2 │3 │4 │5│
│Number │10│20│45│90│10│5│
3. Recently, a researcher asked students to provide the number of minutes they spent reading the night before. In addition, the students reported their grade level. The data are reported below.
Create a side-by-side stemplot for the data. What statistical tendency is being exhibited? Why might this statistical tendency exist? What is a statistical tendency?
│Grade │5 │5 │7 │7 │7 │5 │5 │7 │7 │7 │
│ Time │28│30│20│20│22│31│33│19│18│16│
│Grade │5 │5 │7 │7 │7 │7 │5 │5 │7 │7 │
│ Time │34│32│24│29│22│18│28│32│17│23│
4. Dr. Rivera believes that teachers who were taught as children using manipulative are more inclined to use manipulatives in their own classrooms. To test this, she surveyed 50 teachers and asked
them the following:
· When you were a child, did your teachers use manipulatives in order to teach mathematical concepts to you?
· Do you use manipulatives to teach mathematics concepts to your students?
Of the 40 teachers replying “yes” to question #1, 25 answered “yes” to question #2. Of the remaining 10 teachers, 6 answered “yes” to question #2.
○ Use the data provided to create a two-way table.
○ Use the information to create and answer a question that represents marginal information.
○ Use the information to create and answer a question that represents conditional information.
5. Explain how to use snap-cubes and the leveling-off procedure to solve the following problem: On the first three quizzes, Mary scored 6, 9, and 7 points. How many points must she score on the
fourth quiz in order to have an average of 8 points?
6. Explain in your own words what is meant by the phrase, “The mean is the balance point of a distribution.” Include with your explanation a sample data set, and an explanation of how the mean
balances that particular data set.
7. Suppose that you are given the following data set:
· 2, 1, 4, 2, 2, 5, 3, 3, 2, 1
Find two different numbers that can be added to the given data set that change the median but not the mean. Explain how you chose these two numbers. (Note: The two numbers do not have to be
8. a) Create a box plot for the data set below. Describe the process you use in identifying the 5 numbers used in creating the box plot. Also, describe how these 5 numbers are identified on the box
· 19, 50, 23, 5, 21, 23, 20, 4, 9, 7
b) Describe how you could use the length of the box in your box plot to estimate visually whether or not 50 represents an outlier.
9. a) In your own words, describe what it means if a person’s test score is within one standard deviation of the mean.
b) Suppose that for a set of test scores the mean is 78 and the standard deviation is 4.3. What would be the score of a person who scored 2.5 standard deviations above the mean? Explain your
10. a) In your own words, explain what it means for two variables to be positively associated. Provide an example of a pair of variables that would be positively associated.
b) In your own words, explain what it means for two variables to be negatively associated. Provide an example of a pair of variable that would be negatively associated. Note: Your examples should
be different from those used in the text or discussed in class.
11. Suppose that your experiment is to flip a coin and then roll a die. When asked to provide an example of a compound event, Janie replied, “Getting heads on the coin and a 4 on the die. This is a
compound even because it contains information on both the coin and the die.” Explain to Janie why this actually represents a simple event. Be sure to include an actual example of a compound event
and why this event is a compound event.
12. What is the difference, if any, between experimental and theoretical probability? Explain your reasoning.
13. Suppose that 100 teachers at a local school were surveyed as to whether or not they used manipulatives in their mathematics classroom. These teachers were also classified according to what grade
level they taught. The results of the hypothetical survey are given below:
│ │Grades K-2│Grades 3-5│
│ Uses manipulatives │ 45 │ 10 │
│Does not use manipulatives │ 15 │ 30 │
If one teacher is selected at random from this group of teachers, find the two probabilities below:
Explain how you arrived at your answer.
a) P(uses manipulatives | Grades K-2 teacher)
b) P(Grades K-2 teacher | uses manipulatives)
14. James has a map of the continental United States on his wall. While wearing a blindfold, he is going to throw a dart at the map. He will then remove his blindfold and record what state his date
hits. (Note: You may assume that James will hit a state and not miss the map.) Sara believes that the probability of James hitting Georgia is 1/48 because Georgia is one state out of a possible 48
states. What is wrong with Sara’s idea? What information is needed in order to actually compute the theoretical probability of James hitting Georgia?
15. a) Explain why “The Sum Game” is not a fair game. Include with your explanation the theoretical probability of Player 1 and Player 2 winning.
b) Write a new set of rules for “The Sum Game” so that it is now a fair game. Explain how you know that the game is now fair.
16. a) Based on the data contained in Journal Entry #13, is it true that the probability that a person uses manipulatives depends on whether or not they teach in Grades K-2 or in Grades 3-5? Justify
your response.
b) In part a, you should have show that the events “uses manipulatives” and “grade-level” were dependent. Reconstruct the two-way table in Journal Entry #13 so that it describes independent events.
You only need to change the numbers—not the column/row headings. Verify that the numbers in your two-way table are representative of independent events.
17. Consider the following problem:
You are going to randomly select two people from a group of 100 people. You know that there are 25 teachers in this group. What is the probability that the two people selected are teachers?
Examine Sammy’s work below:
P(both teachers) = P(1^st teacher AND 2^nd teacher) = (25/100) * (25/100) = (625/10000)
Is Sammy’s work correct? If so, explain how you know this is correct. If not, explain where Sammy went wrong in his thinking and how to correct his work.
18. Experiment: There are three bags and you are going to select a colored chip from each bag. Bag A has 2 red chips and 1 blue chip. Bag B has 3 green chips and 2 red chips. Bag C has 5 red
chips and 4 blue chips. Suppose you want to know the probability of getting “at least one red chip.” In class, we worked this type of problem using the complement of the event “at least one red
chip.” First, what is the complement of this event? Second, why is it beneficial or necessary to use the complement to solve this probability?
|
{"url":"http://www.westga.edu/~srivera/Math%204713/Math%204713-web/Journal%20Entries.htm","timestamp":"2014-04-21T02:28:31Z","content_type":null,"content_length":"51951","record_id":"<urn:uuid:0e8fde20-37d5-4d3c-abbd-aa6e78c15ade>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00617-ip-10-147-4-33.ec2.internal.warc.gz"}
|
188 helpers are online right now
75% of questions are answered within 5 minutes.
is replying to Can someone tell me what button the professor is hitting...
• Teamwork 19 Teammate
• Problem Solving 19 Hero
• Engagement 19 Mad Hatter
• You have blocked this person.
• ✔ You're a fan Checking fan status...
Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy.
This is the testimonial you wrote.
You haven't written a testimonial for Owlfred.
|
{"url":"http://openstudy.com/users/sheldoneinstein/answered/1","timestamp":"2014-04-17T01:35:34Z","content_type":null,"content_length":"118417","record_id":"<urn:uuid:0e9d7fb2-f1d1-497f-9918-fc08a748d034>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00381-ip-10-147-4-33.ec2.internal.warc.gz"}
|
calc find the work done
March 17th 2007, 12:30 PM #1
Junior Member
Mar 2007
calc find the work done
A 40 ft long rope weighs 0.35 lb/ft and hangs over the edge of a building 100 ft high. How much work is done in pulling the rope to the top of the building
There is a short cut for this, as the work is the same as that required to lift a
point mass equal to that of the rope from a position coincident with the
centre of mass of the rope to the top of the building. This is also equal to
the change in potential energy of the system so is:
W = mgh,
here m = 0.35*40 = 14 pounds, g= 32 ft/s^2, and h = 20 ft, so:
W = 8960 ft poundals (or 280 ft pounds)
i guess you may also want to know how to do it the integral way.
since we want to take the integral, we begin by dividing the rope into small pieces of length delta x. since the rope weighs 0.35 lb/ft, the weight of each piece is 0.35*delta x. we are moving
the rope a distance x, we call the distance x and not 40 since it is always changing (see diagram). so now we have the force ( 0.35*delta x) and the distance ( x ).
now work, by the riemann sum is given by:
W = lim{n-->infinity}E 0.35 * deltax*x = lim{n-->infinity}E 0.35 x deltax
please note that E is the summation sign. changing this to an integral we get:
W = int{0.35x}dx ............the delta x becomes the dx
you dont have to go through all of this everytime you do a problem. just remember work = int{force*distance}dx, call the distance x (usually the case) and the force is the weight per unit length.
so W = int{0.35x}dx evaluated between 0 and 40
=> W = [0.35/2 x^2]
=> W = [0.35/2 * 40^2 - 0.35/2 * 0^2]
=> W = 280 - 0 = 280 ft-lb
Last edited by Jhevon; March 18th 2007 at 12:49 AM.
March 17th 2007, 02:17 PM #2
Grand Panjandrum
Nov 2005
March 18th 2007, 12:10 AM #3
|
{"url":"http://mathhelpforum.com/calculus/12672-calc-find-work-done.html","timestamp":"2014-04-18T20:02:38Z","content_type":null,"content_length":"38594","record_id":"<urn:uuid:29dfaaac-5ade-4c41-8d02-72b8fc1169ca>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00620-ip-10-147-4-33.ec2.internal.warc.gz"}
|
The Value of Displaying Data Well
Posted on September 1, 2009 Comments (1)
Anscombe’s quartet: all four sets are identical when examined statistically, but vary considerably when graphed.
Image via Wikipedia
Anscombe’s quartet comprises four datasets that have identical simple statistical properties, yet are revealed to be very different when inspected graphically. Each dataset consists of eleven (x,y)
points. They were constructed in 1973 by the statistician F.J. Anscombe to demonstrate the importance of graphing data before analyzing it, and of the effect of outliers on the statistical properties
of a dataset.
Of course we also have to be careful of drawing incorrect conclusions from visual displays.
For all four datasets:
Property Value
Mean of each x variable 9.0
Variance of each x variable 10.0
Mean of each y variable 7.5
Variance of each y variable 3.75
Correlation between each x and y variable 0.816
Linear regression line y = 3 + 0.5x
Edward Tufte uses the quartet to emphasize the importance of looking at one’s data before analyzing it in the first page of the first chapter of his book, The Visual Display of Quantitative
Related: Edward Tufte’s: Beautiful Evidence – Simpson’s Paradox – Correlation is Not Causation – Seeing Patterns Where None Exists – Great Charts – Playing Dice and Children’s Numeracy – Theory of
One Response to “The Value of Displaying Data Well”
1. Curious Cat Science and Engineering Blog » Florence Nightingale: The passionate statistician
November 21st, 2009 @ 10:20 am
[...] articles on applied statistics – The Value of Displaying Data Well – Statistics for Experimenters – Playing Dice and Children’s Numeracy – [...]
|
{"url":"http://engineering.curiouscatblog.net/2009/09/01/the-value-of-displaying-data-well/","timestamp":"2014-04-16T07:43:01Z","content_type":null,"content_length":"33643","record_id":"<urn:uuid:f5d23ea9-9f8e-496f-8bde-a5a1ed7f7a77>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00653-ip-10-147-4-33.ec2.internal.warc.gz"}
|
volume of ingrediants in a cake
November 17th 2005, 02:14 PM #1
Nov 2005
Help please
I Need to know this or i'll fail!!
A large cake was baked. It was a round cake, made of two layers of vanilla cake; each layer was 18 inches in diameter and 3.5 inches tall. Between the two layers was a quarter inch of chocolate
frosting. And covering the top and side of the cake was 3/8" of buttercream frosting.
If the vanilla cake has 140 calories per cubic inch, and the chocolate frosting has 320 calories per cubic inch, and the buttercream frosting has 470 calories per cubic inch, how many 800-calorie
servings are in the cake? Please round down to the nearest serving.
I really need some help!!
Hint Help
First you need to find the volume of each ingredient in one slice of cake this is made by cutting twice from the center to the circumference of the pie. The two cuts are "theta" degrees apart.
Come up with a function for
calories = f(theta)
then set the left side equal to 800 and solve for theta.
Then do 360/theta (if you work in degree) and get the number of slices
Last edited by MathGuru; November 17th 2005 at 02:26 PM.
Still confused..
I'm a high schooler and I am very happy you replied but I have know idea what that means,But I'm really happy you replied
Are ya still there??
Its an extra credit question,and I need the credit if I'm going to pass my Math class.(Our teacher is giving us three 100's and I really need it)
Try to break the slice into all of its peices and figure out the individual volumes for layer 1 and layer 2 and chocolate and frosting.
Then multiply each volume by its calori density (given)
Then add up all of the peices and set them = to 800.
should be in the form:
then divide 360 by theta and round down.
Is this correct?
The Answer is 480.
I drew it free hand. Sorry it's so messy. I licked my fingers afterwards. mmmmmmm.
First of all show your work!
How many slices of 800 calorie cake fit into this pie? Did you answer this question? 480 slices? I don't think so.
Thanks again!!
November 17th 2005, 02:24 PM #2
November 17th 2005, 02:28 PM #3
Nov 2005
November 17th 2005, 02:31 PM #4
Nov 2005
November 17th 2005, 02:46 PM #5
November 17th 2005, 02:55 PM #6
Nov 2005
November 17th 2005, 02:56 PM #7
November 17th 2005, 02:57 PM #8
November 17th 2005, 02:58 PM #9
Nov 2005
|
{"url":"http://mathhelpforum.com/trigonometry/1314-volume-ingrediants-cake.html","timestamp":"2014-04-18T01:52:43Z","content_type":null,"content_length":"52588","record_id":"<urn:uuid:c7392614-de49-41b3-9bad-c1731eed9ff1>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00596-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Hollywood, CA Geometry Tutor
Find a Hollywood, CA Geometry Tutor
...Some of the things I could help you learn are form, shading, perspective, tone, color mixing, texture, and use of light. Public speaking and performance has always been a large part of my life.
Ever since I was a kid, I was involved in Toastmasters for young adults, TAing classes, being a captain for sports teams, and performing on stage.
69 Subjects: including geometry, chemistry, Spanish, reading
...It is essential that a solid foundation is built with this subject as students prepare for more challenging math concepts ahead. It also makes a world of difference in how you approach everyday
problems from managing money to measuring ingredients in the kitchen. Algebra 2 builds on the topics explored in Algebra 1.
10 Subjects: including geometry, algebra 1, algebra 2, trigonometry
...I am also currently serving as a mentor/adviser for pre-health students at Duke University, so questions about the application process are welcome too! I have studied math and science all my
life and really enjoy helping others. I am calm, friendly, easy to work with and have been tutoring for many years.
13 Subjects: including geometry, chemistry, physics, biology
...Please feel free to contact me with any questions you may have. I am also very happy to provide references upon request. Thanks again for your interest, and I hope to hear from you!I am
currently a master's student studying clarinet performance at the University of Southern California under the tutelage of Yehuda Gilad.
28 Subjects: including geometry, reading, calculus, English
...I came to the United States about 13 years ago. After 1 year of Intensive English Program at Evans CAS, I became a math teacher at the same school, working with students of different age groups
and cultural backgrounds. In addition to regular teaching, I've been tutoring math students, developing tools to make my teaching more productive, and training other math teachers.
14 Subjects: including geometry, calculus, CBEST, ISEE
|
{"url":"http://www.purplemath.com/Hollywood_CA_Geometry_tutors.php","timestamp":"2014-04-17T11:17:48Z","content_type":null,"content_length":"24361","record_id":"<urn:uuid:dbf7c640-c983-4750-812f-dc72eddae26e>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00347-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Re: 12 ways from Sunday
Posted by Chris on December 12, 2000
In Reply to: 12 ways from Sunday posted by bruce on December 12, 2000
: Anyone have data on the origin of the expression, "12 ways from Sunday", meaning LOTS of different ways?
I thought the phrase was "6 ways till Sunday"
|
{"url":"http://www.phrases.org.uk/bulletin_board/6/messages/1154.html","timestamp":"2014-04-17T00:55:43Z","content_type":null,"content_length":"7476","record_id":"<urn:uuid:4cfa240a-a643-40d1-b461-533f7371bef7>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00278-ip-10-147-4-33.ec2.internal.warc.gz"}
|
MIMO communication system and transmission station - Patent # 8149944 - PatentGenius
MIMO communication system and transmission station
8149944 MIMO communication system and transmission station
(3 images)
Inventor: Kimura
Date Issued: April 3, 2012
Application: 12/273,124
Filed: November 18, 2008
Inventors: Kimura; Dai (Kawasaki, JP)
Assignee: Fujitsu Limited (Kawasaki, JP)
Primary Payne; David C.
Assistant Nguyen; Leon-Viet
Attorney Or Murphy & King, P.C.
U.S. Class: 375/267
Field Of
International H04B 7/02
U.S Patent
Patent 2007-110664
Other 3GPP TS 36.211 V8.1.0. Technical Specification 3rd Generation Partnership Project; Technical Specification Group Radio Access Network; EvolvedUniversal Terrestrial Radio Access
References: E-UTRA); Physical Channels and Modulation (Release 8), Nov. 2007. cited by other.
Motorola, "Comparison of PAR and Cubic Metric for Power De-rating" (R1-040642, TSG RAN WG1 #37), May 2004. cited by other.
Abstract: An imbalanced amplitude is produced to a pair of weighing factors (U.sub.1,1 and U.sub.1,2, U.sub.2,1 and U.sub.2,2) with respect to the transmission streams multiplexed to any of the
plurality of transmission antennas. Thus, the increase in PAPR in a preceding MIMO system can be prevented.
Claim: What is claimed is:
1. A multi-input multi-output (MIMO) communication system including a transmission station that transmits a plurality of transmission streams from a plurality of transmissionantennas
and a reception station that receives the transmission streams with a plurality of reception antennas and reproduces the respective transmission streams, the MIMO communication system
comprising: multipliers that multiply weighting factors bythe respective received transmission streams; and a controller that produces an imbalanced amplitude between a pair of the
weighting factors with respect to the respective transmission streams multiplexed to any of the transmission antennas, wherein thecontroller sets, as the weighting factor, a unitary
matrix or a submatrix of the unitary matrix in which a number of rows corresponds to a number of the transmission streams and a number of columns corresponds to a number of the
transmission antennas.
2. A multi-input multi-output (MIMO) communication system including a transmission station that transmits a plurality of transmission streams from a plurality of transmission antennas
and a reception station that receives the transmissionstreams with a plurality of reception antennas and reproduces the respective transmission streams, the MIMO communication system
comprising: multipliers that multiply weighting factors by the respective received transmission streams; and a controllerthat produces an imbalanced amplitude between a pair of the
weighting factors with respect to the respective transmission streams multiplexed to any of the transmission antennas, wherein the controller sets, as the weighing factor, an element
of a row towhich the imbalanced amplitude is produced between the elements of each row in a unitary matrix constituting a precoding matrix corresponding to the number of the
transmission streams.
3. The MIMO communication system according to claim 2, wherein the unitary matrix is selected from a code book, comprising a plurality of sets of the unitary matrix according to the
number of the transmission streams.
4. The MIMO communication system according to claim 1, wherein the imbalanced amplitude is optimized for the system or every cell formed by the reception station.
5. The MIMO communication system according to claim 1, wherein the imbalanced amplitude is optimized based on a characteristic of an amplifier that amplifies the transmission power of
the transmission station.
6. The MIMO communication system according to claim 1, wherein the imbalanced amplitude is controlled in accordance with the transmission power of the transmission station.
7. A multi-input multi-output (MIMO) communication system including a transmission station that transmits a plurality of transmission streams from a plurality of transmission antennas
and a reception station that receives the transmissionstreams with a plurality of reception antennas and reproduces the respective transmission streams, the MIMO communication system
comprising: multipliers that multiply weighting factors by the respective received transmission streams; and a controllerthat produces an imbalanced amplitude between a pair of the
weighting factors with respect to the respective transmission streams multiplexed to any of the transmission antennas, wherein the imbalanced amplitude is controlled in accordance
with thetransmission power of the transmission station, and wherein the imbalanced amplitude is controlled to increase the degree of the imbalance as the transmission power is
8. A multi-input multi-output (MIMO) communication system including a transmission station that transmits a plurality of transmission streams from a plurality of transmission antennas
and a reception station that receives the transmissionstreams with a plurality of reception antennas and reproduces the respective transmission streams, the MIMO communication system
comprising: multipliers that multiply weighting factors by the respective received transmission streams; and a controllerthat produces an imbalanced amplitude between a pair of the
weighting factors with respect to the respective transmission streams multiplexed to any of the transmission antennas, wherein the imbalanced amplitude is optimized based on a
characteristic ofan amplifier that amplifies the transmission power of the transmission station, and wherein the transmission station has a transmitter that reports the imbalanced
amplitude to the reception station, and the reception station has a controller thatproduces the imbalanced amplitude reported from the transmission station between the elements in
each row in an unitary matrix constituting a precoding matrix included in a code book held by the reception station, the code book comprising a plurality ofsets of the unitary matrix
corresponding to the number of the transmission streams.
9. A transmission station in a multi-input multi-output (MIMO) communication system including the transmission station that transmits a plurality of transmission streams with a
plurality of transmission antennas and a reception station thatreceives the transmission streams with a plurality of reception antennas and reproduces the respective received
transmission streams, the transmission station comprising: a weighting factor multiplier that multiplies a weighting factor by the respectivetransmission streams; and a controller
operable to produce an imbalanced amplitude to a pair of the weighting factors with respect to the respective transmission streams multiplexed to any of the transmission antennas,
wherein the controller sets, asthe weighting factor, a unitary matrix or a submatrix of the unitary matrix in which a number of rows corresponds to a number of the transmission
streams and a number of columns corresponds to a number of the transmission antennas.
10. A transmission station in a multi-input multi-output (MIMO) communication system including the transmission station that transmits a plurality of transmission streams with a
plurality of transmission antennas and a reception station thatreceives the transmission streams with a Plurality of reception antennas and reproduces the respective received
transmission streams, the transmission station comprising: a weighting factor multiplier that multiplies a weighting factor by the respectivetransmission streams; and a controller
operable to produce an imbalanced amplitude to a pair of the weighting factors with respect to the respective transmission streams multiplexed to any of the transmission antennas,
wherein the controller sets, asthe weighing factor, an element of a row to which the imbalanced amplitude is produced between the elements of each row in a unitary matrix constituting
a precoding matrix corresponding to the number of the transmission streams.
11. The transmission station according to claim 10, wherein the controller further comprises a memory that holds a code book comprising a plurality of sets of the unitary matrix
according to the number of the transmission streams, and theunitary matrix that is the subject to be set is selected from the code book.
12. The transmission station according to claim 11, wherein the controller performs the selection based on selecting information relating to the code book reported from the reception
13. The transmission station according to claim 9, wherein the imbalanced amplitude is optimized for the system or every cell formed by the reception station.
14. The transmission station according to claim 9, wherein the imbalanced amplitude is optimized based on a characteristic of an amplifier that amplifies the transmission power of the
transmission streams.
15. The transmission station according to claim 9, wherein the controller controls the imbalanced amplitude in accordance with the transmission power of the transmission station.
16. A transmission station in a multi-input multi-output (MIMO) communication system including the transmission station that transmits a plurality of transmission streams with a
plurality of transmission antennas and a reception station thatreceives the transmission streams with a Plurality of reception antennas and reproduces the respective received
transmission streams, the transmission station comprising: a weighting factor multiplier that multiplies a weighting factor by the respectivetransmission streams; and a controller
operable to produce an imbalanced amplitude to a pair of the weighting factors with respect to the respective transmission streams multiplexed to any of the transmission antennas,
wherein the controller controlsthe imbalanced amplitude in accordance with the transmission power of the transmission station, and wherein the imbalanced amplitude is controlled to
increase the degree of the imbalance as the transmission power is smaller.
17. A transmission station in a multi-input multi-output (MIMO) communication system including the transmission station that transmits a plurality of transmission streams with a
plurality of transmission antennas and a reception station thatreceives the transmission streams with a Plurality of reception antennas and reproduces the respective received
transmission streams, the transmission station comprising: a weighting factor multiplier that multiplies a weighting factor by the respectivetransmission streams; and a controller
operable to produce an imbalanced amplitude to a pair of the weighting factors with respect to the respective transmission streams multiplexed to any of the transmission antennas,
wherein the imbalanced amplitudeis optimized based on a characteristic of an amplifier that amplifies the transmission power of the transmission streams, and wherein the controller
reports the imbalanced amplitude to the reception station in order to reflect the imbalanced amplitudebetween the elements of each row in an unitary matrix constituting a precoding
matrix included in a code book held by the reception station, the code book comprising a plurality of sets of the unitary matrix corresponding to the number of thetransmission
18. The transmission station according to claim 10, wherein the precoding matrix in the case where the number of the transmission antenna is 2 and the number of the transmission
stream is 2 is any one of the matrix obtained by the followingequation (6), the matrix obtained by rearranging the rows in the matrix obtained by the equation (6), the matrix obtained
by rearranging the columns in the matrix obtained by the equation (6), and the matrix obtained by rearranging the rows and columnsin the matrix obtained by the equation (6), wherein
.alpha. represents the degree of the imbalanced amplitude.
..times..times..theta..alpha..times..times..function..times..times..theta.- .function..times..times..theta. ##EQU00013##
19. The transmission station according to claim 10, wherein the precoding matrix in the case where the number of the transmission antenna is 2N (N is an integer of 2 or more) is
formed by combining, as the matrix element, any one of the2.times.2 precoding matrix obtained from the following equation (6), the matrix obtained by rearranging the rows in the
matrix obtained by the equation (6), the matrix obtained by rearranging the columns in the matrix obtained by the equation (6), andthe matrix obtained by rearranging the rows and
columns in the matrix obtained by the equation (6), wherein the degree of the imbalanced is indicated as .alpha..
..times..times..theta..alpha..times..times..function..times..times..theta.- .function..times..times..theta. ##EQU00014##
20. The transmission station according to claim 10, wherein the precoding matrix in the case where the number of the transmission antenna is 4 and the number of the transmission
stream is 2 is any one of the matrix obtained by the followingequation (8), the matrix obtained by rearranging the rows in the matrix obtained by the equation (8), the matrix obtained
by rearranging the columns in the matrix obtained by the equation (8), and the matrix obtained by rearranging the rows and columnsin the matrix obtained by the equation (8), the
precoding matrix in the case where the number of the transmission antenna is 4 and the number of the transmission stream is 3 is any one of the matrix obtained by the following
equation (9), the matrixobtained by rearranging the rows in the matrix obtained by the equation (9), the matrix obtained by rearranging the columns in the matrix obtained by the
equation (9), and the matrix obtained by rearranging the rows and columns in the matrix obtained bythe equation (9), and the precoding matrix in the case where the number of the
transmission antenna is 4 and the number of the transmission stream is 4 is any one of the matrix obtained by the following equation (10), the matrix obtained by rearrangingthe rows
in the matrix obtained by the equation (10), the matrix obtained by rearranging the columns in the matrix obtained by the equation (10), and the matrix obtained by rearranging the
rows and columns in the matrix obtained by the equation (10),.function..alpha..theta..theta..function..function..alpha..theta..functio-
tion..alpha..theta..theta..function..function..alpha..theta..DELTA..times.- .DELTA..times..function..alpha..theta..times..cndot..theta..theta..theta..- theta. ##EQU00015##wherein
.alpha. represents the degree of the imbalanced amplitude and C.sub.N represents a normalized coefficient.
Description: CROSS-REFERENCE TO RELATED APPLICATION(S)
This application is based upon and claims the benefit of priority of the prior Japanese Application No. 2008-19641 filed on Jan. 30, 2008 in Japan, the entire contents of which are
hereby incorporated by reference.
(1) Field
The embodiment(s) discussed herein is directed to a MIMO communication system and a transmission station. The embodiments are preferable in the case in which, for example, a preceding
multi-input multi-output (MIMO) system is applied to anuplink communication in a mobile wireless communication system.
(2) Description of Related Art
A preceding MIMO system has been examined in an long term evolution (LTE) specification examined in 3rd generation partnership project (3GPP) as a third generation mobile wireless
communication system. Following Japanese Patent ApplicationLaid-Open No. 2007-110664 described below also describes the preceding MIMO system.
In the precoding MIMO system, predetermined a plurality of sets of preceding matrix (PM) (hereinafter referred to as "code book") are defined. A reception station selects the
preceding matrix, by which the most excellent reception property isachieved, by using a known signal (reference signal), and informs this information (index) of the transmission
station. The transmission station selects the preceding matrix based on the index informed from the reception station, multiplies thetransmission signal by the preceding matrix (the
element of the preceding matrix) (as a weighing factor), and transmits the resultant. Thus, the throughput of the wireless communication from the transmission station to the reception
station can beenhanced.
In the preceding MIMO system, the optimum number of data pieces (number of streams) simultaneously transmitted from the transmission station to the reception station can be selected.
The number of streams may sometimes be referred to as a rank,and the selection of the optimum rank may sometimes be referred to as a rank adaptation.
In the LTE specification (see the following 3GPP TS36.211 V8.1.0 (2007 Dec. 20)), for example, in case where the number of transmission antennas is four, four ranks of 1, 2, 3 and 4
are defined, wherein 16 types (64 types in total) of PM aredefined for the corresponding ranks, in relation to the downlink (DL) communication in the direction from a base station to
an user equipment (UE).
The UE selects the PM, by which it is estimated that the most excellent reception property is obtained, among 64 types of PMs, and reports this number (index) to the base station. The
base station carries out a downlink transmission by usingthe PM having the reported number.
In case where the number of the transmission antennas is two, 9 types of PMs in total are defined in the LTE specification (see 3GPP TS36.211 V8.1.0 (2007 Dec. 20)). Specifically, the
set of PM in the case of the rank 1 is expressed by thefollowing equation (1), and the set of PM in the case of the rank 2 is expressed by the following equation (2), respectively.
Notably, V.sub.L,M,N represents the Nth PM in the combination (code book) of the PM of L.times.M matrix, when the number of thetransmission antennas is defined as L (L is an integer
of 2 or more), and the number of transmission streams (ranks) is defined as M (M is an integer of 1 or more)
.function..times..function..function..function..cndot..cndot..cndot..func- tion..function..function..cndot..cndot..cndot. ##EQU00001##
The 2.times.2 matrixes in the equation (2) are unitary matrixes, and the 2.times.1 matrixes in the equation (1) correspond to the extracted column components in each unitary matrix.
Since the value by which the total transmission power becomes1 is multiplied as a normalized coefficient, the 2.times.1 matrixes are those of constant multiple of the unitary
matrixes, to be correct.
Similarly, when the number of the transmission antennas is four, 4.times.4 matrixes (16 types) in the case of rank 4 are the unitary matrixes, and 4.times.3 matrixes in the case of
rank 3, 4.times.2 matrixes in the case of rank 2, and 4.times.1matrixes in the case of rank 1 respectively correspond to the extracted column components in each unitary matrix.
Specifically, the matrixes of PM other than the rank 1 are mutually orthogonal.
In a cellular system, reducing power consumption of UE is important. The enhancement in the power efficiency of an amplifier for a transmission signal of the UE is effective for
reducing the power consumption. When the power efficiency of theamplifier is considered from the viewpoint of the transmission signal, it is desirable that the peak to average power
ratio (PAPR) of the transmission signal is reduced.
An orthogonal frequency division multiplexing (OFDM) system has been proposed as a wireless access system that is strong for a frequency selective fading in a multi-pass in a
broadband radio communication. However, this system employs amulti-carrier transmission, so that the PAPR of the transmission signal tends to increase. From the viewpoint of the power
efficiency of the UE, it is unsuitable for the transmission system of the uplink (wireless link from the UE to the base station)in the cellular system.
Therefore, as the transmission system of the uplink (UL) in the LTE specification, a system described below has been proposed. Specifically, in this system, a transmitter adds a
cyclic prefix (CP) to an effective symbol in a time-domain tocarry out a single carrier transmission, and a receiver performs a frequency equalization. Examples of this system include
an single carrier frequency division multiple access (SC-FDMA) system.
In the single carrier transmission, transmission data signals or known signals (reference signal or pilot signal) between the transmission and reception are multiplexed in the
time-domain, so that it can suppress the PAPR, compared to the OFDMin which the data signals or known signals are multiplexed in the frequency region.
Since a power added efficiency of an amplifier is enhanced as an output power thereof increases, it is desirable that an operating point is made close to the maximum value of the
output power as much as possible. However, when the output powerexceeds a fixed threshold value (saturation power), a non-linear distortion, which is non-tolerable as a transmission
signal, might occur. Therefore, there is a trade-off relationship between the distortion and the power added efficiency.
As the PAPR of the transmission signal is small, the difference (back-off) between the operating point and the threshold value can be decreased (e.g., see FIG. 3). The PAPR is mostly
used as an index of evaluating the back-off necessary for thedesign of the amplifier, but an evaluation index of raw Cubic Metric (raw CM) represented by the following equation (3) is
also proposed in the following Motorola, "Comparison of PAR and Cubic Metric for Power De-rating" (R1-040642, TSG RAN WG1 #37),2004. 5 described below. The relative back-off can be
defined by the following equation (4) by using the raw CM. rawCM=20*log 10((v_norm.sup.3).sub.rms) .quadrature..quadrature..quadrature. CM=[rawCM-20*log 10
The value obtained from the equation (4) is referred to as Cubic Metric (CM). This value is close to the actual value compared to the back-off calculated by using the PAPR. In the
equations (3) and (4), v_norm represents an amplitude of thenormalized input signal, while v_norm_ref represents an amplitude of a signal that becomes a reference. Further, (
).sub.rms means that the root mean square is assumed, and M is a value determined by the property of the amplifier.
In the existing LTE specification (3GPPTS36.211V8.1.0 (2007 Dec. 20)), the application of the preceding MIMO system is only examined with respect to the downlink (DL) communication
(multi-carrier transmission), in which the increase of the PAPRis not a problem, compared to the UE.
One of the objects of the embodiments are to suppress the increase in the PAPR when the preceding MIMO system is applied to, for example, a single carrier transmission (UL
communication in a mobile communication system).
Providing operation and effects that are derived from the configurations, which are illustrated in the preferred embodiments for carrying out the present invention described later,
and that cannot be obtained from a prior art can be located asanother object of the present invention, in addition to the aforesaid object.
In order to achieve the foregoing object, the present specification describes a following "MIMO communication system and transmission station".
(1) One embodiment of the MIMO communication system disclosed here is a multi-input multi-output (MIMO) communication system including a transmission station that transmits a
plurality of transmission streams from a plurality of transmissionantennas and a reception station that receives the transmission streams with a plurality of reception antennas and
reproduces the respective transmission streams, the MIMO communication system including means for multiplying a weighing factor by therespective received transmission streams, and
means for producing an imbalanced amplitude between a pair of the weighing factors with respect to the respective transmission streams multiplexed to any of the transmission antennas.
(2) One embodiment of the transmission station described here is the transmission station in the multi-input multi-output (MIMO) communication system including the transmission
station that transmits a plurality of transmission streams with aplurality of transmission antennas and a reception station that receives the transmission streams with a plurality of
reception antennas and reproduces the respective received transmission streams, the transmission station including a weighing factormultiplying unit that multiplies a weighing factor
by the respective transmission streams, and a control unit operable to produce an imbalanced amplitude to a pair of the weighing factors with respect to the respective transmission
streams multiplexed toany of the transmission antennas.
(3) The control unit may set, as the weighing factor, an element of a row to which the imbalanced amplitude is produced between the elements of each row in a unitary matrix
constituting a preceding matrix corresponding to the number of thetransmission streams.
(4) The control unit may further include a memory that holds a code book including a plurality of sets of the unitary matrix according to the number of the transmission streams, and
the unitary matrix that is the subject to be set is selectedfrom the code book.
(5) The imbalanced amplitude may be optimized for the system or every cell formed by the reception station.
(6) The imbalanced amplitude may be optimized based on the characteristic of the amplifier that amplifies the transmission power of the transmission streams.
(7) The control unit may control the imbalanced amplitude in accordance with the transmission power of the transmission station.
(8) The preceding matrix in the case where the number of the transmission antenna is 2 and the number of the transmission stream is 2 may be any one of the matrix obtained by the
following equation (6), the matrix obtained by rearranging therows in the matrix obtained by the equation (6), the matrix obtained by rearranging the columns in the matrix obtained by
the equation (6), and the matrix obtained by rearranging the rows and columns in the matrix obtained by the equation (6), wherein.alpha. represents the degree of the imbalanced
.function..alpha..theta..theta..theta..function..times..times..theta..alp- ha..function..function..times..times..theta..alpha..times..times..function-
..times..times..theta..alpha..times..times..function..times..times..theta.-.function..times..times..theta..cndot..cndot..cndot. ##EQU00002##
(9) The preceding matrix in the case where the number of the transmission antenna is 2N (N is an integer of 2 or more) may be formed by combining, as the matrix element, any one of
the 2.times.2 preceding matrix obtained from the followingequation (6), the matrix obtained by rearranging the rows in the matrix obtained by the equation (6), the matrix obtained by
rearranging the columns in the matrix obtained by the equation (6), and the matrix obtained by rearranging the rows and columnsin the matrix obtained by the equation (6), wherein the
degree of the imbalanced is indicated as .alpha..
.function..alpha..theta..theta..theta..function..times..times..theta..alp- ha..function..function..times..times..theta..alpha..times..times..function-
..times..times..theta..alpha..times..times..function..times..times..theta.-.function..times..times..theta..cndot..cndot..cndot. ##EQU00003##
(10) The preceding matrix in the case where the number of the transmission antenna is 4 and the number of the transmission stream is 2 may be any one of the matrix obtained by the
following equation (8), the matrix obtained by rearranging therows in the matrix obtained by the equation (8), the matrix obtained by rearranging the columns in the matrix obtained by
the equation (8), and the matrix obtained by rearranging the rows and columns in the matrix obtained by the equation (8), thepreceding matrix in the case where the number of the
transmission antenna is 4 and the number of the transmission stream is 3 may be any one of the matrix obtained by the following equation (9), the matrix obtained by rearranging the
rows in the matrixobtained by the equation (9), the matrix obtained by rearranging the columns in the matrix obtained by the equation (9), and the matrix obtained by rearranging the
rows and columns in the matrix obtained by the equation (9), and the preceding matrix inthe case where the number of the transmission antenna is 4 and the number of the transmission
stream is 4 may be any one of the matrix obtained by the following equation (10), the matrix obtained by rearranging the rows in the matrix obtained by theequation (10), the matrix
obtained by rearranging the columns in the matrix obtained by the equation (10), and the matrix obtained by rearranging the rows and columns in the matrix obtained by the equation
.function..alpha..theta..theta..function..function..alpha..theta..functio- n..alpha..theta..times..cndot..theta..theta..theta..theta..cndot..cndot..c-
ion..function..alpha..theta..DELTA..times..DELTA..times..function..alpha..-theta..times..cndot..theta..theta..theta..theta..cndot..cndot..cndot..cndo- t. ##EQU00004##
wherein .alpha. represents the degree of the imbalanced amplitude and C.sub.N represents a normalized coefficient.
According to the disclosed technique, the increase in the PAPR can be suppressed when the preceding MIMO system is applied to the communication in the direction (uplink) from the UE,
which is one example of the transmission station, to the basestation, which is one example of the reception station.
Additional objects and advantages of the invention (embodiment) will be set forth in part in the description which follows, and in part will be obvious from the description, or may be
learned by practice of the invention. The object andadvantages of the invention will be realized and attained by means of the elements and combinations particularly pointed out in the
appended claims.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the invention,
as claimed.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 is a block diagram illustrating an example of a configuration of a MIMO wireless communication system according to an embodiment;
FIG. 2 is a block diagram illustrating an example of a configuration of a PM multiplying unit illustrated in FIG. 1; and
FIG. 3 is a graph illustrating an example of input/output power characteristic of an amplifier used for the UE that is one example of the transmission station.
DESCRIPTION OF EMBODIMENTS(S)
The embodiment will be described with reference to the drawings. The following embodiment is only illustrative, and does not intend to exclude various modifications or application of
the technique not described below. Specifically, theembodiments can be embodied as modified in various forms (e.g., by combining the respective embodiments) without departing from the
scope of embodiments.
[A] Outline
When a preceding MIMO system is simply applied to a UL communication in order to enhance the throughput of the UL, the transmission signals are multiplexed for every transmission
antenna in the case other than the case of the rank 1. As aresult, the PAPR (or raw CM) tends to increase. This increase can result in increasing the power consumption of the UE.
In view of this, in order to lessen the increase in the PAPR that is caused by the application of the preceding MIMO system to the UL communication, a unitary matrix U in which the
imbalanced amplitude between the components in the rows (betweenthe elements in the columns) is made variable is used for the preceding matrix (PM) that is multiplied by the plurality
of transmission streams of the UL. Preferably, the amplitude is made variable depending upon the characteristic of the amplifier orthe transmission power of the UE. As illustrated
below, the PAPR can be reduced as the imbalanced amplitude is reduced.
For example, the general equation of the 2.times.2 unitary matrix U2 can be expressed by the following equation (5).
e.times..times..theta..function..cndot..cndot..cndot. ##EQU00005##
When the imbalanced amplitude of (|b|.sup.2/|a|.sup.2).sup.1/2 is represented by .alpha. as a parameter, the PM in this case can be expressed by the following equation (6).
.function..alpha..theta..theta..theta..function..times..times..theta..alp- ha..function..function..times..times..theta..alpha..times..times..function-
..times..times..theta..alpha..times..times..function..times..times..theta.-.function..times..times..theta..cndot..cndot..cndot. ##EQU00006##
Since .theta..sub.0 only produces the same phase rotation to all transmission signals, and does not affect the characteristic, it may be an optional value. Notably, the PM may be the
matrix obtained by rearranging the components in the row inthe matrix expressed by the equation (6), the matrix obtained by rearranging the components in the column in the matrix
expressed by the equation (6), or the matrix obtained by rearranging the components in the column and row in the matrix expressed bythe equation (6).
From the equation (6) and the equation (3), the relationship between .alpha. and raw CM is as indicated in following Table 1 in theory, for example.
TABLE-US-00001 TABLE 1 Relationship between .alpha. and raw CM .alpha. raw CM [dB] 0 0.25 0.5 0.75 1 QPSK 3.39 3.94 4.82 5.32 5.46 16QAM 4.8 5.15 5.74 6.09 6.18
Specifically, as .alpha. is decreased, the raw CM can be decreased. Conceivably, the effect of the preceding MIMO becomes small as .alpha. is decreased. For example, in the case of
.alpha.=0, the throughput property corresponding to the casewhere the preceding MIMO is not carried out is obtained. In the case of .alpha.=1, the raw CM corresponding to the
conventional case (the case in which the preceding MIMO applied to DL is simply applied to UL) is obtained.
In other words, when 2.times.2 PM defined with respect to the DL communication in the LTE specification is applied to the UL communication, the raw CM increases to 5.46 [dB] with
respect to 3.39 [dB] in QPSK in a single stream, while increasesto 6.18 [dB] with respect to 4.8 [dB] in 16 QAM.
It is to be noted that the set of PM may be common to the UE in the system, or may differ for a part of or all UEs. In the later case, the UE can autonomously select the preferable or
optimum .alpha. based on the property of the amplifier ofthe own station or the current transmission power, and can report the selected .alpha. to the base station.
In the cellular system, there may be the case in which the base station controls the transmission power of UE (P.sub.UE) in such a manner that the transmission power of the UE located
at the position close to the cell end (remote from the basestation) becomes greater than the transmission power of the UE located at the position close to the center of the cell
(close to the base station).
In such a case, the back-off of the UE located at the center of the cell is apparently relatively greater than the back-off of the UE located at the cell end, so that the UE located
at the center of the cell can transmit a signal having agreater raw CM (e.g., see FIG. 3).
Specifically, the UE having relatively a great back-off and affording a transmission power can select a greater .alpha.. As a result, the throughput of the UL according to the
preceding MIMO can be enhanced.
On the other hand, with respect to the UE located in the vicinity of the cell end and not so affording a transmission power, the preceding MIMO is not carried out with .alpha.=0.
Alternatively, the equation of .alpha.=0.25 is established,whereby the throughput according to the preceding MIMO can be enhanced while suppressing the increasing amount of the CM.
The transmission power P.sub.UE (the maximum transmission power P.sub.max) that can be transmitted from the UE without causing the distortion at the amplifier can be expressed by, for
example, P.sub.UE(P.sub.max).ltoreq.P.sub.sat-CM[dB], whereinthe saturation power of the amplifier is defined as P.sub.sat, and the value calculated from the equation (4) (back-off)
is defined as CM (see FIG. 3). The value of the CM is determined according to the characteristic of the amplifier and the propertyof the transmission signal. The transmission power
P.sub.UE of the UE is sometimes controlled from the base station as described above.
Accordingly, the optimum value of a can be defined as the maximum .alpha. that satisfies the following equation (7), for example. Notably, the CM(.alpha.) means that the CM is the
function of .alpha.. P.sub.sat.gtoreq.P.sub.UE+CM(.alpha.).quadrature..quadrature..quadrature.
Next, the case in which the number of the transmission antenna is greater than 2 is considered. If the transmission signal is multiplexed more than 2 streams per one transmission
antenna, the CM might remarkably increase. Therefore, it ispreferable to avoid this situation.
When the number of the transmission antenna is 4, and the rank is 2, it is supposed that the PM is formed by the combination of PM in the case where the number of the transmission
antenna is 2, which is expressed by the equation (6) describedabove, as illustrated in the following equation (8). Notably, a CN is a normalized coefficient.
.function..alpha..theta..theta..function..function..alpha..theta..functio- n..alpha..theta..times..cndot..theta..theta..theta..theta..cndot..cndot..c- ndot. ##EQU00007##
Similarly, in the case of rank 3 and rank 4, the PM can be formed by the combination of PM in the case where the number of the transmission antenna is 2, as respectively expressed by
the following equations (9) and (10).
.function..alpha..theta..theta..function..function..alpha..theta..DELTA..- times..DELTA..times..function..alpha..theta..times..cndot..theta..theta..t-
a..times..cndot..theta..theta..theta..theta..cndot..cndot..cndot..cndot. ##EQU00008##
.DELTA..sub.2.times.1 in the equation (9) and .DELTA..sub.2.times.2 in the equation (10) represent the matrix having a coefficient (it may be 0) sufficiently smaller than .alpha., and
mean that they have dimensions of 2.times.1 and 2.times.2respectively.
The PM in the case where the number of the transmission antenna is 4 and the rank is 2 may be the matrix obtained by rearranging the components in the row in the matrix obtained from
the equation (8), may be the matrix obtained by rearrangingthe components in the column in the matrix obtained from the equation (8), and may be the matrix obtained by rearranging the
components in the row and column in the matrix obtained from the equation (8).
Similarly, the PM in the case where the number of the transmission antenna is 4 and the rank is 3 may be the matrix obtained by rearranging the components in the row in the matrix
obtained from the equation (9), may be the matrix obtained byrearranging the components in the column in the matrix obtained from the equation (9), and may be the matrix obtained by
rearranging the components in the row and column in the matrix obtained from the equation (9).
The PM in the case where the number of the transmission antenna is 4 and the rank is 4 may be the matrix obtained by rearranging the components in the row in the matrix obtained from
the equation (10), may be the matrix obtained by rearrangingthe components in the column in the matrix obtained from the equation (10), and may be the matrix obtained by rearranging
the components in the row and column in the matrix obtained from the equation (10).
The PM having the greater dimension can be formed by the combination of U.sub.2,2(.alpha.,.theta.). In this case, the PM may be the matrix obtained by rearranging the components in
the row in the matrix, may be the matrix obtained byrearranging the components in the column in the matrix, and may be the matrix obtained by rearranging the components in the row and
column in the matrix.
[B] One Embodiment
Next, a specific example of the MIMO communication system using the unitary matrix, in which the imbalanced amplitude .alpha. (variable) is produced between the components of columns
of the matrix, to the PM as described above will be describedin detail.
FIG. 1 is a block diagram illustrating an example of a configuration of a MIMO communication system according to an embodiment. The system illustrated in FIG. 1 includes, for example,
at least one transmission station 10 and at least onereception station 30. The transmission station 10 is, for example, an UE, and the reception station 30 is, for example, a base
station (BS or eNodeB). It is supposed below that a single carrier transmission, which can reduce the PAPR compared to amulticarrier transmission, such as SC-FDMA system is applied
for the UL communication, on the assumption of the relationship described above.
(About Transmission Station 10)
As illustrated in FIG. 1, the transmission station 10 includes, for example, an encoding/modulation unit 11, preceding matrix (PM) multiplying unit 12, code book indicator (CBI)
determining unit 13, switch (SW) 14, transmitter 15, dividers 16and 19, a plurality of (2 in this case) antennas 17 and 18, and receiver 20.
The encoding/modulation unit 11 encodes transmitted data with a predetermined encoding system, and modulates the encoded data with a predetermined modulation system. Examples of the
encoding system include an error correction encoding such asturbo encoding or convolutional encoding. Examples of the modulation system include a multi-value quadrature modulation
system such as QPSK, 16 QAM, 64 QAM, etc. The encoding system and/or modulation system may adaptively be changed (adaptive modulationand coding (AMC) control) according to the
environment of propagation path between the reception station 30 and the encoding/modulation unit 11.
The PM multiplying unit 12 has a code book (memory) 121 holding a plurality of sets of preceding matrix (PM). The PM multiplying unit 12 selects the corresponding PM (preceding
matrix) from the code book 121 based on the precoding matrixindicator (PMI) reported from the reception station 30, and multiplies the modulation signal sequence obtained from the
encoding/modulation unit 11 by the PM (the element of the PM) (as a weighing factor). The PMI is received and extracted (detected) bythe receiver 20 as a signal of a control channel,
for example.
It is to be noted that the code book 121 in the present embodiment holds sets of PM expressed by the equations (8), (9) and (10), i.e., sets of unitary matrix U in which the
imbalanced amplitude .alpha. is produced between the elements in eachcolumn in the matrix corresponding to the number of the transmission stream.
Considered here is the case in which the number of the transmission antenna is 2, and the number of the transmission stream is 2. The PM multiplying unit 12 has the code book 121,
weighing factor control unit (PM setting unit) 122, multipliers123-1, 123-2, 123-3, and 123-4, and adders 124-1 and 124-2.
One of the transmission streams is inputted to the multiplier 123-1 and the multiplier 123-2 respectively, wherein the components U.sub.1,1 and U.sub.2,1 in the first column (the
element having the imbalanced amplitude .alpha.) in the precedingmatrix (unitary matrix) U selected at the weighing factor control unit 122 are multiplied at the multipliers 123-1 and
123-2 respectively as a weighing factor. The other transmission stream is inputted to the remaining multiplier 123-3 and themultiplier 123-4 respectively, wherein the components
U.sub.1,2 and U.sub.2,2 in the second column (the element having the imbalanced amplitude .alpha.) in the preceding matrix (unitary matrix) U are multiplied at the multipliers 123-3
and 123-4respectively as a weighing factor.
Subsequently, the result of the multiplication at the multiplier 123-1 and the result of the multiplication at the multiplier 123-2 are added (synthesized) at one adder 124-1, and the
result of the multiplication at the remaining multiplier123-2 and the result of the multiplication of the multiplier 123-4 are added (synthesized) at the other adder 124-2. The
respective added results are outputted to the switch 14. One of the results of the addition is distributed to one antenna 17, whilethe other one is distributed to the other antenna
The CBI determining unit 13 determines (selects) the optimum (or preferable) imbalanced amplitude .alpha.. The selected .alpha. (or the later-described CBI obtained by quantizing the
selected .alpha.) is reported to the weighing factor controlunit 122 of the PM multiplying unit 12, and reflected on the PM in the code book 121 by the control unit 122.
Specifically, the weighing factor control unit 122 receives the report of the imbalanced amplitude .alpha. selected at the CBI determining unit 13, whereby it can produce the
imbalanced amplitude .alpha. to the set of the weighing factorsU.sub.1,1 and U.sub.2,1 or the weighing factors U.sub.1,2 and U.sub.2,2 for the respective transmission streams
multiplexed to any of the antennas 17 and 18.
The .alpha. selected by the CBI determining unit 13 may be the value given for every system or every cell of the reception station 30, which is the base station, or the value given
based on the characteristic of the amplifier in the transmitter15 of each of the transmission stations (UE) For example, the maximum .alpha. satisfying the equation (7) is selected
beforehand, considering the characteristic of the amplifier of the supposed UE 10 or the maximum transmission power, for example.
Alternatively, the UE 10 may autonomously select the maximum .alpha. satisfying the equation (7) based on the characteristic of the amplifier, or may adaptively select the maximum
.alpha. satisfying the equation (7) according to the currenttransmission power of the UE 10, additionally or alternatively. Specifically, as the current transmission power P.sub.UE of
the UE 10 is smaller, the back-off (surplus transmission power) is apparently great, so that a greater .alpha., e.g., the maximum.alpha. satisfying the equation (7) can be selected.
This means that the imbalanced amplitude .alpha. can be controlled in accordance with the transmission power of the UE 10.
The .alpha. autonomously selected at the UE 10 (the CBI determining unit 13) is preferably reported to the reception station 30 as the signal of the control channel, for example, in
order to reflect the .alpha. on the code book 361 of thereception station 30. The quantity of the reported information in this case is desirably as smaller as possible from the
viewpoint of the saving of the wireless resource (control channel resource) of the UL. Quantizing the .alpha. is one example ofthe techniques for reducing the information quantity.
FIG. 1 illustrates that the quantized information (index) is reported to the reception station 30 as the code book indicator (CBI).
The switch 14 selectively outputs the transmission signal, by which the PM is multiplied at the PM multiplying unit 12, and the CBI obtained at the CBI determining unit 13, whereby it
outputs the transmission signal and the CBI to thetransmitter 15 in a time division multiplexing manner.
The transmitter 15 performs the transmission process, such as DA conversion, frequency conversion (up-convert) to radio frequency, power amplification, etc., to the multiplexed
signal, and outputs the resultant to the dividers 16 and 19.
The divider 16 outputs the radio transmission signal inputted from the transmitter 15 to the antenna 17, while outputting the radio reception signal received at the antenna 17 from
the reception station 30 to the receiver 20.
Similarly, the divider 19 outputs the radio transmission signal inputted from the transmitter 15 to the antenna 18, while outputting the radio reception signal received at the antenna
18 from the reception station 30 to the receiver 20.
The antennas 17 and 18 are used for both receiving and transmitting signals. They transmit the radio transmission signal from the corresponding dividers 16 and 19 to the reception
station 30, while receiving the radio signal transmitted fromthe reception station 30 and outputting the received radio signal to the corresponding dividers 16 and 19.
The receiver 20 performs a reception process, such as low noise amplification, frequency conversion (down-convert) to the base band frequency, AD conversion, demodulation, decode,
etc., to the radio reception signal inputted from the dividers 16and 19. During the reception process, the PMI received (reported) from the reception station 30 as, for example, the
signal of the control channel is extracted and given to the PM multiplying unit 12 as the selection information of the PM in the codebook 121.
(About Reception Station 30)
On the other hand, as illustrated in FIG. 1, the reception station 30 includes, for example, a plurality of (2 in this case) antennas 31 and 44, dividers 32 and 43, receiver 33,
switch (SW) 34, channel estimating unit 35, PM selecting unit 36,MIMO receiving unit 37, demodulating/decoding unit 38, CBI receiving unit 39, .alpha. determining unit 40, control
channel transmitting unit 41, and transmitter 42.
The antennas 31 and 44 are used for both receiving and transmitting signals. They transmit the radio transmission signal from the corresponding dividers 32 and 43 to the transmission
station 10, while receiving the radio signal transmitted fromthe transmission station 10 and output the received radio signal to the corresponding dividers 32 and 43.
The divider 32 outputs the radio transmission signal inputted from the transmitter 43 to the antenna 31, while outputting the radio reception signal received by the antenna 31 from
the transmission station 10 to the receiver 33.
Similarly, the divider 43 outputs the radio transmission signal inputted from the transmitter 42 to the antenna 44, while outputting the radio reception signal received by the antenna
44 from the transmission station 10 to the receiver 33.
The receiver 33 performs a reception process, such as low noise amplification, frequency conversion (down-convert) to the base band frequency, AD conversion, etc., to the radio
reception signal inputted from the dividers 32 and 43.
The switch 34 selectively outputs the reception signal sequence, to which the reception process has already been performed, to any one of the channel estimating unit 35, CBI receiving
unit 39, and MIMO receiving unit 37. For example, the knownsignal (pilot signal or reference signal) with the transmission station 10 is outputted to the channel estimating unit 35,
the signal of the control channel that can include the CBI is outputted to the CBI receiving unit 39, and the data signals otherthan the aforesaid signals are outputted to the MIMO
receiving unit 37, respectively.
The channel estimating unit 35 carries out the channel estimation between the transmission station 10 and the channel estimating unit 35 based on the known signal, and gives the
obtained value of the channel estimation to the PM selecting unit36.
The CBI receiving unit 39 detects the CBI from the signal of the control channel, and gives the resultant to the a determining unit 40.
The .alpha. determining unit 40 determines (specifies) the .alpha. selected by the transmission station 10 (CBI determining unit 13) based on the CBI given from the CBI receiving unit
39, and gives the value thereof to the PM selecting unit36.
The PM selecting unit 36 has the code book (memory) 361 that holds the same set of PM as the code book 121 used in the transmission station 10. The PM selecting unit 36 reflects the
.alpha. given from the .alpha. determining unit 40 on the PMin the code book 361. Further, it searches (selects) the PM, which is estimated to provide the optimum or preferable
reception characteristic [e.g., signal to interference and noise ratio (SINR)], from the code book based on the value of the channelestimation obtained at the channel estimating unit
35, and gives the result (rank, index) to the control channel transmitting unit 41 as the PMI.
For example, when the transmission signal vector is defined as x, the preceding matrix (PM) is defined as Vk (notably, k means an index of PM), the channel matrix is defined as H, and
the noise component added to the transmission signal in thepropagation path is defined as n, the reception signal vector y is represented by the following equation (11). It is to be
noted that Nt represents the number of the transmission antenna, and R indicates the rank of the PM. y=HV.sub.Nt,R,kx+n.quadrature..quadrature..quadrature.
If the value of the channel estimation obtained at the channel estimating unit 35, i.e., the channel matrix is defined as H, the reception characteristic (SINR).sub..gamma.R,k,j of
the jth stream (j=0, 1, . . . , R-1) with respect to the kth PMhaving the rank R can be obtained from the following equation (12), in case where the reception system is minimum mean
square error (MMSE).
.gamma..sigma..function..times..times..sigma..times..cndot..cndot..cndot.- .cndot. ##EQU00009##
The PM selecting unit 36 obtains a channel capacity T.sub.R,k from the following equation (13), for example, based on the value of .sub..gamma.R,k,j.
.times..function..gamma..cndot..cndot..cndot..cndot. ##EQU00010##
The control channel transmitting unit 41 transmits the PMI obtained at the PM selecting unit 36 to the transmitter 42 with the PMI included in the signal of the control channel to the
transmission station 10.
The transmitter 42 performs a transmission process, such as DA conversion, frequency conversion (up-convert) to the radio frequency, power amplification, etc., to the signal of the
control channel, and outputs the resultant to the dividers 32and 43.
The data signal received by the transmission station 10 is subject to the known preceding MIMO process at the MIMO receiving unit 37 and demodulation/decoding unit 38 so as to
reproduce reception data (stream).
(Specific Example of Code Book (PM))
Next, one example of the set of 2.times.2 PM, which is used in the present embodiment and in which the number of the transmission antenna is 2 and the rank is 2, will be expressed by
the following equations (14) to (17).
.times..function..alpha..times..alpha..function..alpha..alpha..cndot..cnd- ot..cndot..cndot..times..function..alpha..pi..times..alpha..function..time-
..cndot..cndot..times..function..alpha..pi..pi..pi..times..alpha..function- ..alpha..alpha..cndot..cndot..cndot..cndot. ##EQU00011##
The sets of PM (code book) expressed by the equations (14) to (17) correspond to the results of creating four PMs based on the equation (6), and they are sets of unitary matrix in
which the imbalanced amplitude .alpha. is produced between thecomponents of each column in the matrix as the set of PM corresponding to the a plurality of transmission streams.
When the number of the transmission antenna is 4, the PM expressed by the following equation (18) in the case of the rank 4, the PM expressed by the following equation (19) in the
case of the rank 3, and the PM expressed by the followingequation (20) in the case of the rank 2 may respectively be used as one example.
.times..times..times..alpha..function..times..times..cndot..cndot..cndot.- .cndot..times..times..times..alpha..function..times..times..cndot..cndot..-
cndot..cndot..times..times..times..alpha..function..cndot..cndot..cndot..c- ndot. ##EQU00012## The equation (18) represents the set of PM obtained based on the equation (10), the
equation (19) represents the set of PM obtained based on the equation (9), and the equation (20) represents the set of PM obtained based on the equation (8),respectively.
[C] Others
In the aforesaid embodiment, the transmission station 10 receives the selecting information of PM (PMI) selected by the reception station 30 based on the result of the channel
estimation, whereby the PM used in the PM multiplying unit 12 isselected. However, the transmission station 10 may perform a channel estimation based on the reception signal from the
reception station 30, and the PM used at the PM multiplying unit 12 may be autonomously selected based on the result (e.g., in casewhere a time division duplex (TDD)) system may be
employed for a two-way communication between the transmission station 10 and the reception station 30).
The aforesaid embodiment assumes that the subject by which the PM, having the imbalanced amplitude .alpha. between the elements of the column, is multiplied is the UL transmission
stream from the UE 10 to the base station. However, theembodiment does not exclude the case in which the DL transmission stream, whose direction is reverse to the UL transmission
stream, is defined as the subject.
As this embodiment may be embodied in several forms without departing from the spirit of essential characteristics thereof, the present embodiments are therefore illustrative and not
restrictive, since the scope of the embodiment is defined bythe appended claims rather than by the description preceding them, and all changes that fall within metes and bounds of the
claims, or equivalence of such metes and bounds thereof are therefore intended to be embraced by the claims.
All examples and conditional language recited herein are intended for pedagogical purposes to aid the reader in understanding the invention and the concepts contributed by the
inventor to furthering the art, and are to be construed as beingwithout limitation to such specifically recited examples and conditions, nor does the organization of such examples in
the specification relate to a showing of the superiority and inferiority of the invention. Although the embodiment(s) has (have) beendescribed in detail, it should be understood that
the various changes, substitutions, and alterations could be made hereto without departing from the spirit and scope of the invention.
* * * * *
Randomly Featured Patents
|
{"url":"http://www.patentgenius.com/patent/8149944.html","timestamp":"2014-04-17T06:59:07Z","content_type":null,"content_length":"76141","record_id":"<urn:uuid:0961b4df-977a-402f-a363-7f428df5e062>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00336-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Lessons from a frustrated magnet
The latest news from academia, regulators research labs and other things of interest
Posted: March 12, 2009
Lessons from a frustrated magnet
(Nanowerk News) Scientists from the RIKEN Advanced Science Institute in Wako, the University of Tokyo and Sungkyunkwan University in Korea have developed a quantum theory that explains how
temperature and quantum fluctuations—a direct consequence of the Heisenberg uncertainty principle—affect the properties of materials called multiferroics (Quantum Theory of Multiferroic Helimagnets:
Collinear and Helical Phases).
In a magnet, interactions between the ‘spins’ on neighboring atoms result in the spins lining up collectively; in a ferroelectric material, the crystal lattice distorts so that positive ions move
away from negative ions. In a multiferroic material, electric polarization and magnetism are linked, which is particularly attractive for use in technological applications such as high-density data
Quite a few multiferroic materials have been identified to date. Theory has played an important role in the search for new examples by predicting which types of crystal structures allow both
ferroelectricity and magnetism. In earlier work, Hosho Katsura and Naoto Nagaosa, two of the authors of the new paper, showed that an electric polarization can occur in magnets called ‘frustrated
magnets’ (Spin current and magnetoelectric effect in noncollinear magnets), which now provides a unified view on many of the multiferroic materials.
The source of ‘frustration’ in a magnet. A magnetic ‘spin’ (black arrow) sits on each point of the square lattice. The interaction between spins along the edge of the squares (denoted by J1) tends to
make the spins line up parallel to each other, but the interaction along the diagonal (denoted by J2) tends to reverse the direction (red arrow). To accommodate these competing interactions, the
spins form a spiral.
In frustrated magnets, the spin of an atom lacks a ‘preferred’ direction along which to point because of competing interactions with its neighbors. Frustration can occur on a square lattice, for
example, when the interaction between spins along the square edges tends to make the spins parallel, while an antiparallel interaction occurs along the diagonal. As a compromise between these two
interactions, the spins form a spiral arrangement—that is, the direction of the spin rotates from site to site on the lattice. This spiraling structure gives rise to ferroelectricity.
To accurately describe ferroelectric spiral magnets, which include the materials Ni[3V2O[8, LiCuVO[4 and LiCu[2O[2, Katsura, Nagaosa and their colleagues needed to account for the effects of
temperature and quantum fluctuations in an all-encompassing theory. To do this, they used what is called the ‘Schwinger boson’ approach, which allows them to describe each spin with both a classical
and a quantum part. ]]]]]
Temperature and quantum fluctuations are known to disrupt magnetic order, in frustrated magnets. “Quantum fluctuations introduce new magnetic phases that are not predicted classically without
magnetic anisotropy, such as a ‘collinear’ phase in which the spins are aligned in the same direction but have different lengths,” says Katsura. As the predictions of the theory differ to those of
the purely classical theory, they have important implications for interpreting experiments on particular materials, including LiCu[2O[2. ]]
|
{"url":"http://www.nanowerk.com/news/newsid=9633.php","timestamp":"2014-04-17T21:23:59Z","content_type":null,"content_length":"33130","record_id":"<urn:uuid:1d280cd5-f7dc-4c2f-82ad-69bf57499163>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00003-ip-10-147-4-33.ec2.internal.warc.gz"}
|
A terminating and confluent linear lambda calculus
- In TLCA , 2007
"... Abstract. Inspired by recent work on normalisation by evaluation for sums, we propose a normalising and confluent extensional rewriting theory for the simply-typed λ-calculus extended with sum
types. As a corollary of confluence we obtain decidability for the extensional equational theory of simply- ..."
Cited by 5 (3 self)
Add to MetaCart
Abstract. Inspired by recent work on normalisation by evaluation for sums, we propose a normalising and confluent extensional rewriting theory for the simply-typed λ-calculus extended with sum types.
As a corollary of confluence we obtain decidability for the extensional equational theory of simply-typed λ-calculus extended with sum types. Unlike previous decidability results, which rely on
advanced rewriting techniques or advanced category theory, we only use standard techniques. 1
- INFORM. AND COMPUT , 2007
"... We present a simple term calculus with an explicit control of erasure and duplication of substitutions, enjoying a sound and complete correspondence with the intuitionistic fragment of Linear
Logic’s proof-nets. We show the operational behaviour of the calculus and some of its fundamental properties ..."
Cited by 3 (2 self)
Add to MetaCart
We present a simple term calculus with an explicit control of erasure and duplication of substitutions, enjoying a sound and complete correspondence with the intuitionistic fragment of Linear Logic’s
proof-nets. We show the operational behaviour of the calculus and some of its fundamental properties such as confluence, preservation of strong normalisation, strong normalisation of simply-typed
terms, step by step simulation of β-reduction and full composition.
, 2010
"... short proof that adding some permutation rules ..."
, 2009
"... A short proof that adding some permutation rules to β preserves SN ..."
, 2009
"... A short proof that adding some permutation rules to β preserves SN ..."
|
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=4489162","timestamp":"2014-04-20T06:49:29Z","content_type":null,"content_length":"19143","record_id":"<urn:uuid:306985c6-99cd-4c63-a7e1-e7163a1b71d3>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00237-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Math Help
May 27th 2013, 11:45 AM #1
Dec 2012
suppose G is a connected graph with n vertices and n-1 edges.
prove that G has at least 2 vertices with deg(v)=1 .
Re: graph
Use the handshaking lemma.
Re: graph
Recall that $|\mathcal{V}|$ is the number of vertices and $|\mathcal{E}|$ is the number of edges.
Then we know that $\frac{2|\mathcal{E}|}{|\mathcal{V}|}\ge m$ where $m$ is the minimum vertex degree.
You can use that show that if $|\mathcal{V}|=n-1$ then there must be at one vertex of degree one.
You are working with a connected graph. So all vertices are of degree alt least one.
You should know that $\sum\limits_{v \in\mathcal{V}} {\deg (v)} = 2\left| \mathcal{E} \right|$.
Now you show some effort.
May 27th 2013, 01:24 PM #2
MHF Contributor
Oct 2009
May 27th 2013, 01:35 PM #3
|
{"url":"http://mathhelpforum.com/discrete-math/219377-graph.html","timestamp":"2014-04-17T10:04:10Z","content_type":null,"content_length":"37910","record_id":"<urn:uuid:26c6b087-3787-4841-ae72-0a71b9382bb2>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00455-ip-10-147-4-33.ec2.internal.warc.gz"}
|
The Z-Ratings Center
Current Z-Ratings sets:
2013-14 National Basketball Association all games (2014-04-12) (0)
2013-14 National Hockey League all games (2014-04-12) (0)
2014 NCAA Division I Men's Lacrosse all games (2014-04-12) (.001)
(by conference)
Z-Ratings archives | Summary of previous Z-Brackets
What are the Z-Ratings?
The Z-Ratings are an application of the Bradley-Terry system for paired comparisons. The Bradley-Terry system was first applied to American collegiate ice hockey many years ago by Ken Butler, an
English statistician studying in Canada. This application, popularly known as KRACH, has been refined and enhanced by John Whelan, a math professor at Rochester Institute of Technology and one of the
most venerated members of the Lynah Faithful, the rabid fan base of Cornell University men's hockey. The demonstrated superiority of KRACH over other rating systems (most notably the RPI, as
discussed below) has led me to develop Bradley-Terry based ratings for other NCAA sports and professional leagues.
If it's the Bradley-Terry system, why the name 'Z-Ratings'?
I wanted something more compact. The system was originally discovered in 1929 by a German mathematician named Zermelo, hence the use of the letter Z.
How do the Z-Ratings work?
The Z-Ratings take into account only the strength of a team's schedule (according to the games it has already played) and the team's won-lost-tied record. Only games played against other teams in the
rating set are considered in the win-loss-tie records (i.e. a Division I team's win over a Division II team doesn't count). Ties are considered to be half a win and half a loss, and only the final
result of a game is considered (i.e. a loss in overtime is treated the same as a loss in regulation time). Other factors like margin of victory, home site, injured players, and future games are not
considered. Because of this quality, the Z-Ratings are retrodictive, even though a probability of victory for any conceivable matchup is obtainable. (Differences between retrodictive and predictive
rating systems)
To quote John Whelan's KRACH page (see above link), "a team's rating is meant to indicate its strength on a multiplicative scale, so that the ratio of two teams' ratings gives the expected odds of
each team winning a game between them." For example, if team A has a rating of 300 and team B a rating of 100, then team A is three times as likely as team B to win when they play.
A team's expected winning percentage against any opponent can be easily derived. Continuing with the teams above, team A's expected winning percentage against team B is equal to [300/(300+100)] = 300
/400 = .750. Similarly, team B's expected winning percentage is .250, which makes sense since team A is three times as likely to win as team B. This probability is also known as the Expected Winning
Percentage, or EWP, of team A over B. The average of a team's EWP's against all other rated teams is the Balanced Schedule Winning Percentage, or BSWP, and is listed in the ratings tables.
If you sum a team's EWP's in all the games it has played, the sum will equal the number of wins that team actually has. The correct ratings are those that satisfy this condition for all teams. Once
this condition is satisfied, you could multiply all the ratings by any constant, and they would still be valid. Following the example of KRACH, the Z-Ratings are defined such that a team with a
rating of 100 has an BSWP of .500.
In order to deal with teams that have either won or lost all their games, each team is assumed to have tied a (fictitious) game against a team with rating 100. Ken Butler's KRACH page explains more
fully why this is necessary. The weighting of these 'tied' games is different for each rating set, dependent on the number of such teams in the set, and is given with each set in italics above (and
on the ratings sheets as 'γ'). If all the teams in a set have at least one win or tie and one loss or tie, the fictitious games are not necessary and are not used. Generally, as a season progresses,
this weighting factor can be continually lowered, to the point where it is nearly insignificant by the end of the season (if it's still used at all).
A note on the NHL Z-Ratings: During the first four seasons of the shootout in regular-season NHL games (2005-06 through 2008-09), teams going into a shootout received a win and a loss, respectively.
Since the 2009-10 season, the shootout has been removed from the records; if an NHL game goes to a shootout, both the teams involved are credited with a tie.
Break down the ratings tables for me.
Note: in this table, I use the word "conference" when referring to subgroupings. These are the same as the divisions in the MLB, NFL, NBA, and NHL ratings sets.
│Heading │Explanation │
│Rk │The ordinal rank of the team, ordered by the team's Z-Rating. │
│(leftmost) │ │
│Conf. / │A two-letter abbreviation for the team's conference. The correspondence between conference names and abbreviations can be found in the "Conference ratings" table at the bottom of the │
│Div. │ratings sheet, or by moving the mouse over the abbreviation. │
│Z-Rating │The actual rating of the team; as stated above, the ratio X = Z[a]/Z[b] can be read to mean that the system evaluates team 'a' to be X times as strong as team 'b'. │
│BSWP │Balanced Schedule Winning Percentage: a team's expected winning percentage in an equal number of games against every other team in the group. │
│nCWP/nDWP │non-(Conference/Division) Winning Percentage: a team's expected winning percentage in an equal number of games against every other team in the group, except for the other teams in its │
│ │own conference. │
│Record │The team's win-loss-draw record. │
│Pct. │The team's winning percentage; the ratio of games won (plus half of games tied, where applicable) to total games played. │
│Ratio │The team's 'success ratio'; the ratio of wins to losses (with ties counting as one-half of each), counting the 'fake' tied games with a weighting of γ. │
│SOS │Strength of Schedule. This is the Z-Rating divided by the success ratio. │
│Rk │The team's ordinal rank, ordered by strength of schedule. │
│(rightmost)│ │
How are the conferences' strengths computed?
The BSWP listed for each conference in the "Conference ratings" table is the average of the nCWP of all the conference's teams. In other words, the winning percentage of the teams of the Ivy League
(for example) if each played a game against every non-Ivy team. The independents are calculated together as a conference - that is, their BSWPs against each other are not considered in the
calculation of their BSWP as a group.
How is strength of schedule computed?
The simple SOS definition for team 'a' is Z[a] / [(W[a] + (γ/2)) / (L[a] + (γ/2))], where Z is a team's rating, W is number of games won, L is the number of games lost, and γ is the weighting of the
'tied' games (again, actual ties count as half a win and half a loss). The Z-Ratings are a product of the SOS and winning ratio (defined as wins divided by losses, after factoring in the 'tied' games
necessary to render the ratings finite).
Why are the Z-Ratings better than RPI?
The Rating Percentage Index (RPI) is a linear combination of a team's winning percentage (25%), the winning percentages of its opponents (50%), and the winning percentages of its opponents' opponents
(25%). The latter two factors are included so that the strength of a team's schedule is factored into its rating. Using these criteria, a group of teams (e.g. a conference) can accumulate inflated
ratings by playing a large number of games amongst themselves.
In NCAA hockey, the shortcomings are much more apparent than in NCAA basketball (owing to the fact that as much as 80% of a college hockey team's schedule is within its conference, as compared to
around 50% in college basketball and lacrosse). However, the shortcomings still exist, and can skew the RPI in favor of teams of lesser quality. Because the Z-Ratings use the ratings themselves to
determine the strength of a team's opposition, the system is more robust, particularly when comparing teams from different conferences.
As noted above, only the opponents played and the won-lost records of the teams being compared factor into the Z-Ratings. While some, including ESPN's Jay Bilas, have attacked the RPI for not
including other factors, I think limiting the composition of the ratings to wins and losses is essential. To do otherwise could invite allegations of bias. How much do you value home site advantage?
Should teams be encouraged to run up the score so their rankings improve?
The NCAA has apparently come up with an answer to the first of those two questions for men's basketball - starting in 2004-05, home wins and road losses are valued at 0.6, and road wins and home
losses are valued at 1.4. This adjustment has been seen to benefit top teams in the "mid-major" conferences, at the expense of teams in the middle of the "power" leagues. Applying that sort of
"one-size-fits-all" solution discounts the fact that the advantage a team gains from playing at home varies greatly. Since there's really no way to quantify this effect, I have chosen to keep site
advantage out of the Z-Ratings.
What are some other rating systems out there?
Colley Matrix | Massey Ratings | Ken Pomeroy | Jeff Sagarin | RPI at CBSSports.com
Disclaimer: The Z-Ratings are not affiliated in any way with WHTZ-FM, more commonly known as Z-100.
Matt Carberry's home page
|
{"url":"http://mattcarberry.com/ZRatings/ZRatings.html","timestamp":"2014-04-17T18:24:01Z","content_type":null,"content_length":"13421","record_id":"<urn:uuid:e5bdc218-5340-4630-abf7-07a78fd38397>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00479-ip-10-147-4-33.ec2.internal.warc.gz"}
|
to Modeling
Introduction to Modeling for Biosciences,
David J. Barnes and Dominique Chu,
Springer, August 2010. ISBN: 978-1-84996-325-1
Topics and features:
• Introduces a basic array of techniques to formulate models of biological systems, and to solve them.
• Discusses agent-based models, stochastic modeling techniques, differential equations and Gillespie's stochastic simulation algorithm.
• Intersperses the text with exercises.
• Includes practical introductions to the Maxima computer algebra system, the PRISM model checker, and the Repast Simphony agent modeling environment.
• Contains appendices on Repast batch running, rules of differentiation and integration, Maxima and PRISM notation, and some additional mathematical concepts.
• Supplies source code for many of the example models discussed, at the associated website http://www.cs.kent.ac.uk/imb/.
Computational modeling has become an essential tool for researchers in the biological sciences. Yet in biological modeling, there is no one technique that is suitable for all problems. Instead,
different problems call for different approaches. Furthermore, it can be helpful to analyze the same system using a variety of approaches, to be able to exploit the advantages and drawbacks of each.
In practice, it is often unclear which modeling approaches will be most suitable for a particular biological question - a problem that requires researchers to know a reasonable amount about a number
of techniques, rather than become experts on a single one.
Introduction to Modeling for Biosciences addresses this issue by presenting a broad overview of the most important techniques used to model biological systems. In addition to providing an
introduction into the use of a wide range of software tools and modeling environments, this helpful text/reference describes the constraints and difficulties that each modeling technique presents in
practice. This enables the researcher to quickly determine which software package would be most useful for their particular problem.
This unique and practical work guides the novice modeler through realistic and concrete modeling projects, highlighting and commenting on the process of abstracting the real system into a model.
Students and active researchers in the biosciences will also benefit from the discussions of the high-quality, tried-and-tested modeling tools described in the book, as well as thorough descriptions
and examples.
Chapter Organization
• Chapter 1: Foundations of Modeling
This introductory chapter discusses some basic principles of biological modeling. Specifically, it presents some practical advice on how to approach modeling problems in biology. This chapter
sets the scene for the rest of the book, and is particularly intended for the scientist who has little previous experience in modeling.
• Chapter 2: Agent-based Modeling
This chapter introduces agent-based models (ABMs). These are computational semi-realistic models where every important part of the system is explicitly represented. ABMs can be very valuable in
biological modeling because they can represent very complicated systems that cannot be represented using, for example, purely equation-based modeling approaches. This chapter explains the
underlying ideas of ABMs, and highlights the characteristics that make systems amenable to ABM modeling. A large part comprises walk through illustrations of two models, namely, the spread of
malaria in a spatially structured population and a model of the evolution of fimbriae. This last example also demonstrates how ABMs can be used to simulate evolution in a biologically realistic
• Chapter 3: ABMs using Repast and Java
This chapter provides the reader with a practical introduction to agent-based modeling, via the Repast Simphony programming environment. Using examples of agent-based models from an earlier
chapter, we look in detail at how to build models via either flowcharts or Java code. We illustrate some of the ways in which a toolkit such as Repast considerably simplifies the life of the
modeler by providing extensive support for model visualization, charting of results, and multiple runs, for instance.
• Chapter 4: Differential Equations
This chapter introduces the reader to the basic ideas underlying ordinary differential equations. Knowledge of the basic rules of differentiation and integration is assumed; however, one of the
main objectives of the chapter is to convey to the biologist reader, in an intuitive way, the basic idea of infinitesimal change and differentiation. The second main aim of this chapter is to
provide an introduction to ordinary differential equations. The emphasis of this chapter is on usability. By the end of the chapter the reader will be able to formulate basic, but practically
useful, differential equations; have a grasp of some basic concepts, including stability and steady states); and will also have an understanding of some basic methods to solve them. Furthermore,
the reader will be able to critically evaluate differential equation models she may encounter in the research literature. The more general theoretical introduction into this topic is accompanied
by fully worked case studies, including a differential equation model of the spread of malaria and a stability analysis of Cherry and Adler's bistable switch.
• Chapter 5: Mathematical Tools
This chapter describes the use of the free, open-source computer algebra system Maxima. Maxima is a system similar to Maple and Mathematica. The use of Maxima is illustrated by a number of
examples. Special emphasis is placed on practical advice on how the software can be used, and the reader is made aware of difficulties and pitfalls of Maxima. The chapter also demonstrates how to
use Maxima in practice by walking the reader through a number of examples taken from earlier chapters of the book.
• Chapter 6: Other Stochastic Methods and Prism
This chapter introduces the reader to the concept of stochastic systems. It motivates the importance of noise and stochastic fluctuations in biological modeling and introduces some of the basic
concepts of stochastic systems, including Markov chains and partition functions. The main objective of this theoretical part is to provide the reader with sufficient theoretical background to be
able to understand original research papers in the field. Strong emphasis is placed on conveying a conceptual understanding of the topics, while avoiding burdening the reader with unnecessary
mathematical detail. The second part of this chapter describes PRISM, which is a powerful computational tool for formulating, analyzing and simulating Markov-chain models. Throughout the chapter,
concepts are illustrated using biologically-motivated case studies.
• Chapter 7: Simulating Biochemical Systems
This chapter introduces the reader to the principles underlying Gillespie's widely-used stochastic simulation algorithm (SSA), for the exact stochastic modeling of chemical reactions involving
relatively small numbers of molecules. We also look at Gibson and Bruck's improvements to the SSA, in order to support larger numbers of reactions, as well as the more recent variation by Slepoy,
Thompson and Plimpton. All of these techniques are illustrated with Java implementations and a discussion of their complexity. We also introduce the Dizzy and SGNSim toolkits, which implement
some of these approaches, along with tau-leap approximation and reaction delays.
• Appendix: Reference Material
In the appendix we have provided reference material in a convenient form to support the main chapters. This includes: advice on running Repast Simphony in batch mode; some common rules on
differentiation and integration; a short summary of Maxima and PRISM notation; and some basic mathematical concepts.
Source Code
Source of all the associated material in GZIP/TAR format and ZIP format.
Date of latest version: 27th February 2012.
The source code comprises the following:
All materials © Springer, David J. Barnes and Dominique Chu, 2010-2012.
Please notify us of any errors or inaccuracies at the address below.
Thanks to our readers for notifying us of the following corrections:
• Pages 132-133, Equations 4.1, 4.2 and 4.3: 0.5 should be 0.05
• Page 137, line 12 from the bottom: 5^2 = 25a should be 5^3 = 125a and x(5) = 25 should be x(5) = 125
• Page 138, Equation 4.9: x^3 should be t^3
• Page 142: the i=0 below the summations should be i=1
• Page 148, line 14 from the bottom (not counting the footnote): 'create' should be 'created'
Links to Modeling Software
This section contains links to both free and commercial software and packages that may be of use to modelers. Most of these are mentioned in the book, but not all are covered in detail.
This page (http://www.cs.kent.ac.uk/imb/) is maintained by: David Barnes (the anti-spam email address will need editing by you) to whom any questions, comments and corrections should be addressed.
© David J. Barnes and Dominique Chu
Last Updated: 27th Feb 2012
Created: 29th June 2010
|
{"url":"http://www.cs.kent.ac.uk/projects/imb/","timestamp":"2014-04-20T09:01:39Z","content_type":null,"content_length":"13259","record_id":"<urn:uuid:4f9a1500-3d27-4637-8f28-a6acf8e08529>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00193-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Redondo Beach SAT Math Tutor
Find a Redondo Beach SAT Math Tutor
...I’m very results oriented with tutoring–to me, the way of explaining things that allows you to understand is the right way. All students, no matter what level they are at now, can achieve
academic success. Whatever your starting point is, we’ll get you to your goal in small, manageable steps.
6 Subjects: including SAT math, chemistry, geometry, biology
...I can also tutor college math subjects like Linear Algebra, Abstract Algebra, Differential Equations, and more. My teaching strategy is to emphasize the importance of dedication and hard work,
highlight the fundamental building blocks for each course in order to understand the concepts, and boos...
9 Subjects: including SAT math, calculus, geometry, algebra 1
Hello! I am a Mathematics graduate from University of Riverside. I plan on becoming a teacher.
6 Subjects: including SAT math, geometry, algebra 1, elementary (k-6th)
...I have a passion for learning that I want to spread. I excel at helping people feel they understand the material better: Previous clients have noted that I am able to take seemingly complex
problems and make them very understandable. What this means for a client is that I identify weak points in ways that they think and teach them to reframe hard problems as simple ones.
44 Subjects: including SAT math, Spanish, chemistry, statistics
...I have tutored others in AutoCad before. I took 2 semesters in Revit 2009/ 2010 at El Camino College in Torrance, CA. I used Revit more than Autocad in my freelance architecture business since
11 Subjects: including SAT math, geometry, GRE, Microsoft Excel
|
{"url":"http://www.purplemath.com/redondo_beach_ca_sat_math_tutors.php","timestamp":"2014-04-21T14:53:31Z","content_type":null,"content_length":"24003","record_id":"<urn:uuid:14296424-9f0b-40ce-a002-ebab170a1d87>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00220-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Need help solving basic linear system
August 6th 2010, 07:48 PM #1
Jul 2010
Need help solving basic linear system
Find the shortest distance between the solutions of the system A and the solutions of the system B
I got this one, it's x+7z+12
The answer is supposedly
When I do this however:
-6 2 4 5
3 -1 -2 -2.5
-3 1 2 2.5
1 -2/6 -4/6 -5/6
Now, the solution from here says
v=(-7, -23,1) u=(3,-1,-2) and computes u.v
Where do these values come from?
I also need help with the rest of the steps. I know after you get x, y, you can just use the distance formula and plug it in for 3x-y-2z-2.5, but how do I calculate for x and y?
Last edited by shibble; August 6th 2010 at 08:25 PM.
Bump, really need to get this down. Just need help on how the values of x,y,z were determined for B, and where the values of v=(-7,-32,1) came from.
Thanks a lot
I responded to this on a different forum but-
This makes no sense. What is "x+ 7z+ 12"? Not the answer certainly- the answer must be values for x, y, and z.
If you multiply the second equation by 2 and add to the first you eliminate y: x+ 7z+ 10= -2 or x+ 7z+12= 0. Is that what you meant? You weren't finished. From that , x= -7z- 12. Putting that
back into the second equation -3(-7z- 12)+ y+ 2z+ 5= 0 so y= 3(-7z- 12)- 2z- 5= -23z- 41. Since there are only two equations in three values, you cannot go any further. You could write the
solution in terms of parametric equations, x= -7t- 12, y= -23t- 41, z= t.
Geometrically, the solution is any point on that line.
The answer is supposedly
Are you sure you don't have this confused with the solution to A?
When I do this however:
-6 2 4 5
3 -1 -2 -2.5
-3 1 2 2.5
1 -2/6 -4/6 -5/6
Obviously, the second equation is just the first equation multiplied by -1/2 and the third equation is just the first equation multiplied by 1/2. You really only have one equation and so can only
solve for one value in terms of the other two.
Since -3x+y+2z+2.5=0, y= 3x- 2z- 2.5. You can write that as parametric equations in terms of two parameters. x= u, y= 3u- 2v- 2.5, z= v. Geometrically, that is a plane.
Now, the solution from here says
v=(-7, -23,1) u=(3,-1,-2) and computes u.v
Where do these values come from?
I also need help with the rest of the steps. I know after you get x, y, you can just use the distance formula and plug it in for 3x-y-2z-2.5, but how do I calculate for x and y?
(- 7, -23, 1) is the "direction vector" of the straight line satisfying the first equations. That is, it is a vector pointing along the line. Any time you have a straight line given by parametric
equations $x= At+ x_0$, $y= Bt+ y_0$, $z= Ct+ y_0$, the vector $A\vec{i}+ B\vec{j}+ C\vec{k}$, which can be written as (A, B, C), points in the same direction as the line.
(3, -1, -2) is the "normal vector" to the plane satisfying the second set of equations. If a plane is written in the form Ax+ By+ Cz= d, then $A\vec{i}+ B\vec{j}+ C\vec{k}$, which can be written
as (A, B, C), is a vector perpendicular to the plane.
Once we have the line x= -7t- 12, y= -23t- 41, z= t and the plane -3x+y+2z+2.5=0, the first thing I would do is see if any point satisfies both of those. Look at -3(-7t- 12)+ (-23t- 41)+ 2(t)+
2.5= 0. That becomes 21t+ 36- 23t- 41+ 2t+ 2.5= 0 or (21- 23+ 2)t+ 36- 41+ 2.5= 0t- 2.5= 0. That is never true so the line and plane are parallel. There is a specific distance between the two.
To find the distance between the line and plane, first choose a point on the line. The simplest thing to do is to let t= 0 in the equations of the line: x= -12, y= -41, z= 0 so (-12, -41, 0) is
on the line. The shortest distance from that point to the plane is along the perpendicular and we already know that any perpendicular to the plane is parallel to the vector (3, -1, -2). A line
through the point (-12, -41, 0) in the direction of the vector (3, -1, -2) is, as I said before, given by x= 3t- 12, y= -t- 41, z= -2t. To find where the that line intersects the plane, put the
x, y, z values into the equation ofthe plane:
-3(-3t- 12)+ (-t- 41)+ 2(-2t)+ 2.5= (9- 1- 4)t+ 36- 41+ 2.5= 4t- 2.5= 0. t= 2.5/4= .625.
Put that value of t into x= 3t- 12, y= -t- 41, z= -2t to find the point where the perpendicular line from (-12, -41, 0) passes through the plane. The distance between that point and the point
(-12, -41, 0) is the distance from the line to the plane, the distance from the solutions set for the first set of equations and the solution set for the second set of equations.
August 7th 2010, 11:18 PM #2
Jul 2010
August 8th 2010, 02:29 AM #3
MHF Contributor
Apr 2005
|
{"url":"http://mathhelpforum.com/advanced-algebra/152958-need-help-solving-basic-linear-system.html","timestamp":"2014-04-16T07:26:01Z","content_type":null,"content_length":"41681","record_id":"<urn:uuid:eba24fd7-c1bd-43ad-a9d2-50f098ab1b80>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00345-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Find VC(t) For T > 0 In The Circuit In The Accompanying ... | Chegg.com
Find vC(t) for t > 0 in the circuit in the accompanying Figure if vC = 0.
Give answer as a formula. For example: 17+0.0476*e^(-7480*t)-17*e^(-20.9*t). Round the coefficients in the answer to 3 significant digits.
vC(t) = ______V
Electrical Engineering
|
{"url":"http://www.chegg.com/homework-help/questions-and-answers/find-vc-t-t-0-circuit-accompanying-figure-vc-0-give-answer-formula-example-17-00476-e-7480-q3585327","timestamp":"2014-04-20T17:42:24Z","content_type":null,"content_length":"20983","record_id":"<urn:uuid:284a8e9a-80dd-4f16-af69-3281ff6fe0a0>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00418-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Got Homework?
Connect with other students for help. It's a free community.
• across
MIT Grad Student
Online now
• laura*
Helped 1,000 students
Online now
• Hero
College Math Guru
Online now
Here's the question you clicked on:
• one year ago
• one year ago
Your question is ready. Sign up for free to start getting answers.
is replying to Can someone tell me what button the professor is hitting...
• Teamwork 19 Teammate
• Problem Solving 19 Hero
• Engagement 19 Mad Hatter
• You have blocked this person.
• ✔ You're a fan Checking fan status...
Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy.
This is the testimonial you wrote.
You haven't written a testimonial for Owlfred.
|
{"url":"http://openstudy.com/updates/50777825e4b0ed1dac5098a3","timestamp":"2014-04-16T22:31:36Z","content_type":null,"content_length":"79118","record_id":"<urn:uuid:c545a487-4f11-4808-ac89-8c1cb9ccaf56>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00495-ip-10-147-4-33.ec2.internal.warc.gz"}
|
The moving parts of modal logic
Here's another post on something that Karen Bennett said, which will hopefully be more substantive than the last one. In one of the discussions, Bennett said that Lewis's semantics for modal logic
runs into trouble when talking about the "moving parts of his system". This means, roughly, when it starts talking about matters metalinguistic, like the domain of worlds, truth, domains of
individuals, etc. Her particular example was: necessarily there are many worlds. Apparently Lewis's semantics mess this up. I don't know the details since I don't know what sort of language Lewis was
using and the particular semantics. That's not really the point. It got me wondering how prevalent this sort of thing is. When do semantics go astray for things that are naturally situated in the
metalangauge? The truth predicate can cause some problems. Does the reference predicate? I'm not really sure I've read anything about logics that include an object language reference predicate.
The idea dovetailed nicely with some stuff I was working on for Nuel Belnap's modal logic class. Aldo Bressan's quantified modal logic includes a construction of terms that represent worlds. This
construction works at least as long as the number of worlds is countable. I'm not sure what happens when there are uncountably many worlds since that requires me to get much clearer on a few other
things. This allows the construction of nominal-like things and quantification over them. The language lets you say that for any arbitrary number of worlds, necessarily there are at least that many
worlds. The neat point is that the semantics works out correctly. [The following might need to be changed because I'm worried that I glossed over something important in the handwaving. I think the
basic point is still correct.] For example, (finessing and handwaving some details for the post) the sentence "necessarily there are at least 39 worlds" will be true iff for all worlds, the sentence
"there are at least 39 worlds" is true, which will be just in case there are at least 39 worlds. This is because you can prove that there is a one-one correspondence between worlds and the terms that
represent worlds. Bressan uses S5. The way "worlds" is defined uses a diamond, so the specific world at which we are evaluating does not really matter. So, the semantics gets it right. Of course,
this doesn't say that there are only models with at least 39 worlds. If that is what we want to say, we can't. It does say that within a model, it's true at all worlds that there are at least 39
worlds, which is all the object language necessity says. This gives us a way to talk about a bit of the metalanguage in the object language, albeit in a slightly restricted way.
No comments:
|
{"url":"http://indexical.blogspot.com/2007/05/moving-parts-of-modal-logic.html","timestamp":"2014-04-17T18:50:55Z","content_type":null,"content_length":"50193","record_id":"<urn:uuid:57200d3e-6097-4d16-a21a-caf387a9f0ef>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00061-ip-10-147-4-33.ec2.internal.warc.gz"}
|
The Role of Probability in Discrimination Cases
James J. Higgins
Department of Statistics
Kansas State University
101 Dickens Hall
Manhattan, KS 66506
Statistics Teaching and Resource Library, August 21, 2001
© 2001 by James J. Higgins, all rights reserved. This text may be freely shared among individuals, but it may not be republished in any medium without express written consent from the author and
advance notification of the editor.
An important objective in hiring is to ensure diversity in the workforce. The race or gender of individuals hired by an organization should reflect the race or gender of the applicant pool. If
certain groups are under-represented or over-represented among the employees, then there may be a case for discrimination in hiring. On the other hand, there may be a number of random factors
unrelated to discrimination, such as the timing of the interview or competition from other employers, that might cause one group to be over-represented or under-represented. In this exercise, we
ask students to investigate the role of randomness in hiring, and to consider how this might be used to help substantiate or refute charges of discrimination.
Key words: Probability distribution, binomial distribution, computer simulation, decision rules
The activity allows students to get a feel for random variability and probability through simulation, and then to apply what they learn to an important decision-making problem.
Activity Description
A company decides to hire 14 people. The applicant pool is large, and there are equal numbers of equally qualified women and men in the pool. To avoid claims of discrimination, the company
decides to select the prospective employees at random. Students should consider three questions. (1) How big a difference between the numbers of men and women who are hired might occur just by
chance? (2) How likely is it that equal numbers of men and women are hired? (3) How big a difference might suggest discrimination in hiring?
An attached Excel program simulates the hiring process. Given the large size of the applicant pool, the selection process is modeled as a sequence of 14 Bernoulli trials with p = .5 where p is
the probability that a hire is a female. Instructions are provided with the Excel spreadsheet on how to use the program.
Teaching Notes
Students should discover a rule that would cause them to suspect discrimination. For instance, one such rule might be that if there are 11 or more men hired, or 11 or more women hired, then this
might be indicative of discrimination. Students should also discover that having exactly 7 men and 7 women hired has a relatively small chance of happening (only about a 21%).
By discussing the consequences of different decision rules, the student can be introduced to the notion of errors in hypothesis testing. Suppose, for instance, that a regulatory agency were to
investigate possible discrimination if the disparity between genders of newly hired employees is 11 to 3. What is the chance that a fair employer would be investigated for discrimination?
Compare this to an agency that would investigate if the disparity were 10 to 4 or 9 to 5.
The instructor may adapt this exercise to go along with a discussion of the binomial distribution. If this is done, then a table of the binomial distribution could be used to describe the
probability distribution of the number of women (or men) hired by the company.
Students should be invited to make the connection between this problem and other problems that appear to be different but have the same probability structure. Here is an example. “A standard
treatment for a certain disease has a 50-50 chance of working. A new treatment is proposed and is tried on 14 patients. How many patients would have to benefit from the new treatment before you
would be reasonably sure that it is better than the old treatment?
Prototype Activity
The prototype activity is a set of short-answer problems that students would work through with a partner.
This activity is designed to generate discussion about probability, randomness, and their role in decision-making.
Assessment should come in the form of short answers to questions on an activity such as the prototype activity and class discussion. Don’t expect students to immediately grasp the ideas. It will
take time and lots of examples.
Class discussion should follow along the lines of what is suggested in the Teaching Notes. Students should be able to draw parallels between this problem and similar problems involving
probability and randomness.
With this activity, the student should begin to get an intuitive feel for random variability, and they should begin to appreciate the role that probability can play in decision-making.
Editor's note: Before 11-6-01, the "student's version" of an activity was called the "prototype".
|
{"url":"https://www.causeweb.org/repository/StarLibrary/activities/higgins2001/","timestamp":"2014-04-19T00:31:02Z","content_type":null,"content_length":"16484","record_id":"<urn:uuid:bdd094a4-8fb0-4c62-af8e-aadce866caeb>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00473-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Simplify The Expression
Add New Comment
Simplify The Expression
In this page we are going to discuss about
Simplifying The Expressions
concepts. Simplify means
converting complexity into simple and then solving it. In other words breaking a problem, into simple terms or
terminologies can be defined as Simplification. Students always find difficulty in simplifying problems but, its
actual purpose is to break down problems into steps and then solving it. There are different ways of
simplifying, and different methods followed by different tutors for simplifying expressions.
How to simplify expressions
Simplify the term itself suggests that it means simplifying things. But when it comes to math, it is not that
simple. Simplification is one of the major operation of math and to work out simplify problem is very difficult
without having clear concept. Simplification in math is applied to different concepts and these are: simplifying
radical expressions, square roots, rational
Simplifying The Expressions
fractions, exponents, algebraic
expressions, complex fractions, equations and expressions.
Let's discuss the most frequently discussed topics covered in simplification and how to simplify expressions.
Simplify Radical Expressions
Radical expressions are the combination of both variables and numbers inside a root. Radical expressions
should be broken into pieces in order to get its simplest form.
Simplify [1/sqrt(2)] + [1/8] = [1/sqrt(2)] + [1/(2sqrt(2))]
= [2/(2sqrt(2))] + [1/(2sqrt(2))]
= [3/(2sqrt(2))]
Simplify square roots
The process of obtaining the 'square root' of a number is termed as 'solving' or' simplification'. The number is
broken down to its factors to obtain the number which satisfies the condition of being the square root of the
Example: Find [sqrt(48)] = [sqrt(2*2*2*2*3)] = [2*2sqrt(3)] = [sqrt(3)]
Fing [sqrt(x^(2)36)] = [sqrt(x*x*2*2*3*3)] =x*2*3 =6x
Simplify rational
Simplifying The Expressions
A rational expression is more than a fraction in which the
numerator and/or the denominator are polynomials. Simplifying rational expression involve breaking down
Solve [(x^(2)+3x+2)/(x+2)] = [(x^(2)+x+2x+2)/(x+2)] = [(x(x+1)+2(x+1))/(x+2)]
= [((x+1)(x+2))/(x+2)] =x+1
Simplify fraction
Simplify fraction involves breaking a bigger fraction to smal er one, converting mixed to improper fraction and
then solving the problem.
Solve 2 [2/3] - [1/3] = [8/3] - [1/3] = [7/3]
Simplify exponents
Exponentiation is a mathematical operations, written as an, involving two numbers, the base a and the
exponent n.
Simplifying The Expressions
Solve x2.x3=x2+3=x5
Solve [x^(5)/x] =x5.x-1=x5-1=x4
Simplify algebraic expressions
An Algebraic Expression is a combination of numerals, variables and arithmetic operations such as +, -, *
and /
Simplify 2x+5x-(4+5)x+7x-2
Solution: 2x+5x-(4+5)x+7x-2=2x+5x-9x+7x-2=14x-9x-2
Simplify complex fractions
If a fraction is composed of numerator and denominator as a fraction, it is called complex fractions.
Example:Solve [(3/4)/(6/8)] = [3/4] * [8/6] = [2/2] =1
Simplify equations
Thank You
Document Outline
|
{"url":"http://pdfcast.org/pdf/simplify-the-expression","timestamp":"2014-04-17T09:36:40Z","content_type":null,"content_length":"40208","record_id":"<urn:uuid:70c4afe3-2d45-4bd6-a30f-e905de665360>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00114-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Step Function Integrators
Now that we know how a Riemann-Stieltjes integral behaves where the integrand has a jump, we can put jumps together into more complicated functions. The ones we’re interested in are called “step
functions” because their graphs look like steps: flat stretches between jumps up and down.
More specifically, let’s say we have a sequence of points
$\displaystyle a\leq c_1<c_2<...<c_n\leq b$
and define a function $\alpha$ to be constant in each open interval $\left(c_{i-1},c_i\right)$. We can have any constant values on these intervals, and any values at the jump points. The difference $
\alpha(c_k^+)-\alpha(c_k^-)$ we call the “jump” of $\alpha$ at $c_k$. We have to be careful here about the endpoints, though: if $c_1=a$ then the jump at $c_1$ is $\alpha(c_1^+)-\alpha(c_1)$, and if
$c_n=b$ then the jump at $c_n$ is $\alpha(c_n)-\alpha(c_n^-)$. We’ll designate the jump of $\alpha$ at $c_k$ by $\alpha_k$.
So, as before, the function $\alpha$ may or may not be continuous from one side or the other at a jump point $c_k$. And if we have a function $f$ discontinuous on the same side of the same point,
then the integral can’t exist. So let’s consider any function $f$ so that at each $c_k$, at least one of $\alpha$ or $f$ is continuous from the left, and at least one is continuous from the right. We
can chop up the interval $\left[a,b\right]$ into chunks so that each one contains only one jump, and then the result from last time (along with the “linearity” of the integral in the interval) tells
us that
That is, each jump gives us the function at the jump point times the jump at that point, and we just add them all together. So finite weighted sums of function evaluations are just special cases of
Riemann-Stieltjes integrals.
Here’s a particularly nice family of examples. Let’s start with any interval $\left[a,b\right]$ and some natural number $n$. Define a step function $\alpha_n$ by starting with $\alpha_n(a)=a$ and
jumping up by $\frac{b-a}{n}$ at $a+\frac{1}{2}\frac{b-a}{n}$, $a+\frac{3}{2}\frac{b-a}{n}$, $a+\frac{5}{2}\frac{b-a}{n}$, and so on. Then the integral of any continuous function on $\left[a,b\right]
$ gives
But notice that this is just a Riemann sum for the function $f$. Since $f$ is continuous, we know that it’s Riemann integrable, and so as $n$ gets larger and larger, these Riemann sums must converge
to the Riemann integral. That is
But at the same time we see that $\alpha_n(x)$ converges to ${x}$. Clearly there is some connection between convergence and integration to be explored here.
1 Comment »
1. [...] though, that we’ve seen a way to get finite sums before: using step functions as integrators. So let’s use the step function , which is defined for any real number as the largest [...]
Pingback by Infinite Series « The Unapologetic Mathematician | April 24, 2008 | Reply
• Recent Posts
• Blogroll
• Art
• Astronomy
• Computer Science
• Education
• Mathematics
• Me
• Philosophy
• Physics
• Politics
• Science
• RSS Feeds
• Feedback
Got something to say? Anonymous questions, comments, and suggestions at
• Subjects
• Archives
|
{"url":"http://unapologetic.wordpress.com/2008/03/27/step-function-integrators/?like=1&_wpnonce=84af3d0b4c","timestamp":"2014-04-17T15:39:49Z","content_type":null,"content_length":"78662","record_id":"<urn:uuid:0d803fe8-7b94-485c-97fc-148c9ae31a07>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00521-ip-10-147-4-33.ec2.internal.warc.gz"}
|
integration arcsin rule
July 3rd 2008, 10:18 AM #1
Jun 2008
integration arcsin rule
$\int \frac{7x}{\sqrt{16-x^4}} dx = 7\int \frac{x}{\sqrt{4^2-(x^2)^2}} dx$
The arcsin rule can be used here, right? I'm not sure what to do with the x in the numerator to be able to apply the arcsin rule.
First, you will need to use the substitution of $u = x^2$ to obtain $\frac 72 \int \frac 1{\sqrt{16 - u^2}} \ \mathrm{d}u$.
Then you can apply the arcsin rule which is $\int \frac{1}{\sqrt{a^2 - x^2}} \ \mathrm{d}x = \mathrm{arcsin}\left(\frac{x}{a}\right)$.
Remember to change the $u$ back to $x^2$ in the final answer.
First, you will need to use the substitution of $u = x^2$ to obtain $\frac 72 \int \frac 1{\sqrt{16 - u^2}} \ \mathrm{d}u$.
Then you can apply the arcsin rule which is $\int \frac{1}{\sqrt{a^2 - x^2}} \ \mathrm{d}x = \mathrm{arcsin}\left(\frac{x}{a}\right)$.
Remember to change the $u$ back to $x^2$ in the final answer.
Got it, thanks!
July 3rd 2008, 10:31 AM #2
July 3rd 2008, 10:39 AM #3
Jun 2008
|
{"url":"http://mathhelpforum.com/calculus/42972-integration-arcsin-rule.html","timestamp":"2014-04-18T21:09:20Z","content_type":null,"content_length":"37582","record_id":"<urn:uuid:3bf9c839-eded-4cc2-9ed8-c3e4f0b29c41>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00190-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Black Diamond Math Tutor
Find a Black Diamond Math Tutor
Hi everyone!! My name is Jesse and I am a very enthusiastic and positive tutor eager to help your children learn and master any topic from college sciences to basic mathematics. I believe
education is the key to success. In my journey so far I have earned: - Associate of Arts and Sciences degree ...
25 Subjects: including statistics, geometry, trigonometry, probability
...I look forward to working with you and seeing you succeed. Best regards, StephanieI have four children ages: 5-15 and have tutored similar ages for the past few years. From learning shapes,
colors, letters and numbers to learning simple addition and subtraction, I have some experience with mont...
46 Subjects: including ACT Math, trigonometry, SAT math, algebra 1
...I have a Bachelor of Arts degree in Theatre Arts from Saint Martin's University where I had lead or principle roles in seven major productions. In addition to my college experience, I have had
roles or assisted in the production of over a dozen community and high school productions. When directing or coaching actors I use a method approach with heavy emphasis on scene analysis and
22 Subjects: including geometry, ACT Math, SAT math, algebra 2
...Having tutored the SAT, ACT, and GRE, and having studied Linguistics, Latin, English, and a few other languages in college, I've acquired a sizable vocabulary. I tutor vocabulary for
standardized tests using roots, word groups, and sentence context. For non-native English speakers, I make sure ...
33 Subjects: including ACT Math, GRE, algebra 1, algebra 2
...I also enjoy sports, computers, music, building and fixing things. I have a pretty laid back personality and am easy to work with. I have a degree in mathematics and I am a high school math
8 Subjects: including SAT math, ACT Math, algebra 1, algebra 2
|
{"url":"http://www.purplemath.com/black_diamond_wa_math_tutors.php","timestamp":"2014-04-21T07:39:46Z","content_type":null,"content_length":"23869","record_id":"<urn:uuid:b4a3fdc5-5d9b-4966-b0a9-a5cd757e8356>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00572-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Formative Assessment Lessons (beta)
Read more about the purpose of the MAP Classroom Challenges…
Modeling: Making Matchsticks
Mathematical goals
This lesson unit is intended to help you assess how well students are able to:
• Interpret a situation and represent the variables mathematically.
• Select appropriate mathematical methods.
• Interpret and evaluate the data generated.
• Communicate their reasoning clearly.
This lesson is designed to enable students to develop strategies for solving real-life problems. The problem is characterized by the use of appropriate units and formulas.
• Before the lesson, students tackle the problem individually. You assess their responses and formulate questions that will prompt them to review their work.
• At the start of the lesson, students respond individually to the questions set, and then work in groups to combine their thinking and produce a collaborative solution in the form of a poster.
• In the same small groups, students evaluate and comment on some sample responses. They identify the strengths and mistakes in these responses and compare them with their own work.
• In a whole-class discussion, students explain and compare the solution strategies they have seen and used.
• Finally, students reflect on their work and their learning.
Materials required
Each student will need a calculator, a copy of Making Matchsticks, a copy Formula Sheet, a blank sheet of paper, and a copy of the questionnaire How Did You Work?
Each small group of students will need a large sheet of paper for making a poster, felt tipped pens, and copies of Sample Responses to Discuss.
There are some projector resources to help you with whole-class discussions.
Time needed
15 minutes before the lesson, a 90-minute lesson and 15 minutes for follow-up lesson or for homework. Timings given are only approximate. Exact timings will depend on the needs of the class.
Mathematical Practices
This lesson involves a range of mathematical practices from the standards, with emphasis on:
Mathematical Content
This lesson asks students to select and apply mathematical content from across the grades, including the content standards:
Lesson (complete)
Projector Resources
A draft Brief Guide for teachers and administrators (PDF) is now available, and is recommended for anybody using the MAP Classroom Challenges for the first time.
We have assigned lessons to grades based on the Common Core State Standards for Mathematical Content. During this transition period, you should use your judgement as to where they fit in your current
The Beta versions of the MAP Lesson Units may be distributed, unmodified, under the Creative Commons Attribution, Non-commercial, No Derivatives License 3.0. All other rights reserved. Please send
any enquiries about commercial use or derived works to map.info@mathshell.org.
Can you help us by sending samples of students' work on the Classroom Challenges?
|
{"url":"http://map.mathshell.org.uk/materials/lessons.php?taskid=410&subpage=problem","timestamp":"2014-04-17T00:48:10Z","content_type":null,"content_length":"13469","record_id":"<urn:uuid:a6e99b8e-87e0-4c71-aade-0b2d1b35f2d5>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00100-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Greenway, VA Geometry Tutor
Find a Greenway, VA Geometry Tutor
...As a private tutor, I have accumulated over 750 hours assisting high school, undergraduate, and returning adult students. And as a research scientist, I am a published author and have
conducted research in nonlinear dynamics and ocean acoustics. My teaching focuses on understanding concepts, connecting different concepts into a coherent whole and competency in problem solving.
9 Subjects: including geometry, calculus, physics, algebra 1
...However, I also love working with students of all ages. I am very flexible. I have experience tutoring students with ADHD and dyslexia as well as students with general learning difficulties.
31 Subjects: including geometry, English, writing, biology
...I have several years of experience tutoring elementary level math from kindergarten up to sixth grade. I am very patient with students who are struggling with math, and I can help students
become less anxious about the subject and more confident in their skills. I have several years experience tutoring elementary students.
46 Subjects: including geometry, English, Spanish, algebra 1
...I think my English background helps me tremendously because I can explain things very well. I have tutored math students for over six years, and the students often tell me that I am better at
explaining things than their teachers. I have been tutoring students in writing for over six years; I love helping them progress as writers.
22 Subjects: including geometry, reading, literature, ESL/ESOL
...I am a bit of a grammar enthusiast, I guess you could say. I tend to proofread anything I'm reading, even if it's a published book. I actually enjoy proofreading papers and have done so for
previous supervisors and for peers.
13 Subjects: including geometry, biology, algebra 2, grammar
Related Greenway, VA Tutors
Greenway, VA Accounting Tutors
Greenway, VA ACT Tutors
Greenway, VA Algebra Tutors
Greenway, VA Algebra 2 Tutors
Greenway, VA Calculus Tutors
Greenway, VA Geometry Tutors
Greenway, VA Math Tutors
Greenway, VA Prealgebra Tutors
Greenway, VA Precalculus Tutors
Greenway, VA SAT Tutors
Greenway, VA SAT Math Tutors
Greenway, VA Science Tutors
Greenway, VA Statistics Tutors
Greenway, VA Trigonometry Tutors
Nearby Cities With geometry Tutor
Belleview, VA geometry Tutors
Berwyn, MD geometry Tutors
Brambleton, VA geometry Tutors
Calverton, MD geometry Tutors
Crystal City, VA geometry Tutors
Green Meadow, MD geometry Tutors
Jefferson Manor, VA geometry Tutors
Kingstowne, VA geometry Tutors
Lansdowne, VA geometry Tutors
South Riding, VA geometry Tutors
Sudley Springs, VA geometry Tutors
Sully Station, VA geometry Tutors
W Bethesda, MD geometry Tutors
West Hyattsville, MD geometry Tutors
West Mclean geometry Tutors
|
{"url":"http://www.purplemath.com/Greenway_VA_Geometry_tutors.php","timestamp":"2014-04-19T23:43:43Z","content_type":null,"content_length":"24190","record_id":"<urn:uuid:0188f058-42b6-434a-a5d2-7a643d4a455c>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00059-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Guy, Richard Kenneth (1916
Guy, Richard Kenneth (1916–)
British-born mathematician who is professor emeritus of mathematics at the University of Calgary, Canada, and an expert in combinatorics and in number theory. Guy is the author of more than 250
papers and 10 books, including (as a coauthor) the game theory classic Winning Ways. He has been an editor of the "Problems" section of the American Mathematical Monthly since 1971.
Related category
|
{"url":"http://www.daviddarling.info/encyclopedia/G/Guy.html","timestamp":"2014-04-17T15:28:19Z","content_type":null,"content_length":"5619","record_id":"<urn:uuid:9f507f28-9ed3-4dc9-8520-3bc186000b51>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00651-ip-10-147-4-33.ec2.internal.warc.gz"}
|
physics question?
August 29th 2008, 02:21 PM #1
Oct 2007
physics question?
I'm stumped on this problem...it should be easy...
Suppose that 50 steel wires of diameter 0.5 mm can just support a particular weight without breaking. What would the diameter of a single steel wire have to be in order to just support the same
weight without breaking? Explain how you found your answer.
I'm stumped on this problem...it should be easy...
Suppose that 50 steel wires of diameter 0.5 mm can just support a particular weight without breaking. What would the diameter of a single steel wire have to be in order to just support the same
weight without breaking? Explain how you found your answer.
Hint: Tensile strength is proportional to cross-sectional area.
I know that the cross sectional area should= to the total cross sectional area of the 50 wires...but isn't that oversimplifying things....
For instance...Would design of the 50 wires matter?
Of course it matters.
If the thin wires and the single fat wire have the same tensile strength, then the proportion is by cross-sectional areas only.
50[pi* (0.5/2)^2] = 1[pi *(d/2)^2]
where d is the diameter of the fat wire in cm.
If the thin wires have X tensile strength and the fat wire has Y tensile strength, then,
50[pi* (0.5/2)^2](X) = 1[pi *(d/2)^2](Y).
As ticbol correctly answered, the material the wires are made from obviously matters.
However, my understanding of the question is that all wires being made from the same material is implied (otherwise there's clearly not enough information to answer it). Perhaps you should ask
whoever set the question for clarification ....
August 29th 2008, 02:41 PM #2
August 29th 2008, 02:45 PM #3
Oct 2007
August 29th 2008, 03:57 PM #4
MHF Contributor
Apr 2005
August 29th 2008, 08:05 PM #5
|
{"url":"http://mathhelpforum.com/math-topics/47120-physics-question.html","timestamp":"2014-04-18T18:41:37Z","content_type":null,"content_length":"43931","record_id":"<urn:uuid:a5944deb-71bd-4e21-bb53-558066590447>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00121-ip-10-147-4-33.ec2.internal.warc.gz"}
|
East Brunswick Algebra 2 Tutor
...I enjoy the challenges and applications of Math and Science to everyday situations and incorporating these into lessons. I also have coached many kids successfully in recreation program sports
such as baseball, tee ball, basketball, and soccer. I have a New Jersey Youth Soccer Association "F" license, and I have been a Head Coach of a girls travel soccer team for a soccer club.
41 Subjects: including algebra 2, English, chemistry, physics
...Although my degrees and certifications are in mathematics, I am highly literate and have helped many students make dramatic gains in their reading scores. I prep students for all three parts of
the SAT, including the essay. Writing skills and grammar are not taught as rigorously as they used to be.
23 Subjects: including algebra 2, English, calculus, geometry
I taught high school math from 1966 to 1980 and college math for many years between 1973 and the present. I will be teaching precalculus at a community college this coming term. My background also
includes the titles of Math Assessment Specialist for Educational Testing Service and Math Editor for a major test publishing firm.
21 Subjects: including algebra 2, calculus, statistics, geometry
...This means that my students not only learn procedural skills, but develop a deeper understanding of concepts that can be connected to future math concepts they will encounter. I treat every
student as an individual, figure out what they understand and build knowledge from there. Every student is different and each new teaching experience is exciting.
13 Subjects: including algebra 2, calculus, statistics, physics
...I also taught and tutor MATH at different levels (in high school and the university). I have a PhD in physics so I had to use calculus and precalculus during all my education and also during my
career. Both calculus and precalculus are like our mother language for physicists. I also taught ca...
9 Subjects: including algebra 2, physics, calculus, algebra 1
|
{"url":"http://www.purplemath.com/East_Brunswick_Algebra_2_tutors.php","timestamp":"2014-04-20T04:29:46Z","content_type":null,"content_length":"24447","record_id":"<urn:uuid:77eca1b9-b34c-4cc3-bd37-b0b637c898bf>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00432-ip-10-147-4-33.ec2.internal.warc.gz"}
|
On the number of Archimedean solids
up vote 11 down vote favorite
Does anyone know of any good resources for the proof of the number of Archimedean solids (also known as semiregular polyhedra)?
I have seen a couple of algebraic discussions but no true proof. Also, I am looking more at trying to prove it topologically, but for now, any resource will help.*
*I worked on this project a bit as an undergraduate and am just now getting back into it.
gt.geometric-topology geometry convex-polytopes
1 I don't think 'proof assistant' means what you think it means. – HJRW Nov 18 '10 at 17:32
18 Having just spent an hour lecturing about the $p$-adic numbers, this question instantly makes me wonder about the classification of non-Archimedean solids :-/ – Kevin Buzzard Nov 18 '10 at 18:00
add comment
5 Answers
active oldest votes
A proof of the enumeration theorem for the Archimedean solids (which basically dates back to Kepler) can be found in the beautiful book "Polyhedra" by P.R. Cromwell (Cambridge
up vote 10 down University Press 1997, pp. 162-167).
vote accepted
This is indeed a very good proof resource. I just glanced briefly at it, but will explore it further when I get a chance. I really appreciate your suggestion - it is actually
the closest to the algorithm I am using than any I have found so far. Thanks again. – Tyler Clark Nov 18 '10 at 20:42
I'm glad you found it helpful. – Andrey Rekalo Nov 18 '10 at 21:05
add comment
Incidentally, you may be interested in the article by Joseph Malkevitch, "Milestones in the history of polyhedra," which appeared in Shaping Space: A Polyhedral Approach, Marjorie Senechal
and George Fleck, editors, pages 80-92. Birkhauser, Boston, 1988. There he makes the case (following Grünbaum) that there should be 14 Archimedean solids rather than 13, including the
up vote 9 pseudorhombicuboctahedron(!) as the 14th.
down vote
I will definitely try to find this article and read through it. Thanks for your feedback! – Tyler Clark Nov 18 '10 at 20:42
add comment
Following up on Joseph's comment: Branko Grünbaum and others have pointed out that besides the 13 or 14, there are also two infinite families of polyhedra meeting the definition of
Archimedean, although generally not considered to be Archimedean. Why prisms and antiprisms are excluded from the list has never been clear to me.
In any case, this is not just a historical curiosity --- in any attempt you make to classify them, you should run into these two infinite families.
up vote 9
down vote If you use a modern definition, i.e. vertex-transitive, then you will also get 13 others. And a little group theory can help in the classification. If you use a more classical definition,
i.e. "locally vertex-regular," you will indeed find a 14th.
You are indeed correct. The algorithm I worked on with another classmate did in fact give us the prisms and antiprisms. It has been a while since I have worked on this and I am just now
getting back into it. I cannot remember why we excluded the prisms and antiprisms - I will have to take a closer look at that issue. – Tyler Clark Nov 18 '10 at 20:40
add comment
I use a slightly different approach than Cromwell. Please see the Exercises at the end of Chapter 5 here: http://staff.imsa.edu/~vmatsko/pgsCh1-5.pdf.
up vote 4 This is a draft of a textbook I am writing, and currently using to teach a course on polyhedra. The level of the text is mid-level undergraduate, so strictly speaking, the Exercises are
down vote really an outline of a rigorous enumeration. Symmetry considerations are glossed over.
add comment
My proof can be found here: ywhmaths.webs.com/Geometry/ArchimedeanSolids.pdf
up vote 1 down vote
add comment
Not the answer you're looking for? Browse other questions tagged gt.geometric-topology geometry convex-polytopes or ask your own question.
|
{"url":"http://mathoverflow.net/questions/46502/on-the-number-of-archimedean-solids/61477","timestamp":"2014-04-18T11:00:22Z","content_type":null,"content_length":"74859","record_id":"<urn:uuid:a87468c8-c4c6-41ef-a9eb-4c8d8b5d14af>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00073-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Which math paper maximizes the ratio (importance)/(length)?
up vote 79 down vote favorite
My vote would be Milnor's 7-page paper "On manifolds homeomorphic to the 7-sphere", in Vol. 64 of Annals of Math. For those who have not read it, he explicitly constructs smooth 7-manifolds which are
homeomorphic but not diffeomorphic to the standard 7-sphere.
What do you think?
Note: If you have a contribution, then (by definition) it will be a paper worth reading so please do give a journal reference or hyperlink!
Edit: To echo Richard's comment, the emphasis here is really on short papers. However I don't want to give an arbitrary numerical bound, so just use good judgement...
soft-question math-communication
26 You should probably bound the length, cuz otherwise you could just pick your favorite paper of Ratner, Grothendieck, Thurston, et cetera and the importance blows everything else away. – Richard
Kent Dec 1 '09 at 1:41
21 Or Gromov, "from whose sentences people have written theses" (as I have seen someone write somewhere) – Mariano Suárez-Alvarez♦ Dec 1 '09 at 2:31
8 The award for the corresponding question for paper titles would have to go to "H = W". Meyers and Serrin, Proc. Nat. Acad, Sci. USA 51 (1964), 1055-6. – John D. Cook Jan 6 '10 at 2:49
show 1 more comment
59 Answers
active oldest votes
The little paper by John McKay on Graphs, singularities, and finite groups is a nice example.
up vote 4 down Graphs, singularities, and finite groups. The Santa Cruz Conference on Finite Groups (Univ. California, Santa Cruz, Calif., 1979), pp. 183--186, Proc. Sympos. Pure Math., 37, Amer.
vote Math. Soc., Providence, R.I., 1980.
add comment
There are a very large number of very concise papers written in the USSR, back when it existed.
up vote 16 down A good example would Beilinson's paper "Coherent sheaves on $\mathbb{P}^n$ and problems of linear algebra." It's probably not quite as earth-shaking as Milnor's paper, but it's also
vote only slightly more than 1 page long.
34 Well, omiting all details is not the same thing as being space-efficient! :P – Mariano Suárez-Alvarez♦ Dec 1 '09 at 2:03
3 Can anyone sum up what Beilison's paper is about to me? Many thanks. – darij grinberg Jan 5 '10 at 23:36
add comment
Any of three papers dealing with primality and factoring that are between 7 and 13 pages:
First place: Rivest, R.; A. Shamir; L. Adleman (1978). "A Method for Obtaining Digital Signatures and Public-Key Cryptosystems". Communications of the ACM 21 (2): 120–126.
up vote 11 down Runner-up: P. W. Shor, Algorithms for quantum computation: Discrete logarithms and factoring, Proc. 35nd Annual Symposium on Foundations of Computer Science (Shafi Goldwasser, ed.),
vote IEEE Computer Society Press (1994), 124-134.
Honorable mention: Manindra Agrawal, Neeraj Kayal, Nitin Saxena, "PRIMES is in P", Annals of Mathematics 160 (2004), no. 2, pp. 781–793.
12 Does it count if you publish in a conference proceedings with a 10-page limit (although I did buy an extra page). The full version, SIAM J. Computing 26: 1484-1509 (1997), was 26
pages. – Peter Shor Mar 19 '10 at 13:57
3 I'd say it counts and then some. I never saw a bunch of military types getting all excited and nervous about a paper on abelian categories or natural proofs and trying to
understand the results. – Steve Huntsman Mar 19 '10 at 14:09
show 2 more comments
Beilinson and Bernstein's paper "Localisation de $\mathfrak g$-modules" is probably the most important in geometric representation theory, and is roughly 3 pages long.
up vote 15 down vote
2 It's available on the internet in the sense that I can send you a scan. – Ben Webster♦ May 4 '11 at 23:04
show 2 more comments
Kazhdan's paper "On the connection of the dual space of a group with the structure of its closed subgroups" introduced property (T) and proved many of its standard properties. And it's
up vote 23 only 3 pages long (and it contains a surprisingly large number of details for such a short paper!)
down vote
add comment
My favourite is the following tiny, self-contained article:
"Uniform equivalence between Banach Spaces" by Israel Aharoni & Joram Lindenstrauss, Bulletin of the American Mathematical Society, Volume 84, Number 2, March 1978, pp.281-283.
up vote 4 down vote http://www.ams.org/bull/1978-84-02/S0002-9904-1978-14475-9/S0002-9904-1978-14475-9.pdf
(in which the authors prove that there exist two non-isomorphic Banach spaces that are Lipschitz homeomorphic.)
add comment
I would recommend one very short "paper" by Grothendieck in some IHES publications has defined algebraic de Rham cohomology. (I don't think it maximizes the ratio in question, but it is
an interesting one, anyway.)
BTW, it was actually part of a mail to Atiyah. It begins with 3 dots! (Maybe some private conversation was omitted). Of course, sometimes Grothendieck wrote long letters (e.g. his
up vote 11 700-page letter to Quillen "pursuing stacks" or his 50-page letter to Faltings on dessin d'enfant).
down vote
Also, I think Grothendieck had a (short?) paper with a striking title called "Hodge conjecture is false for trivial reason", in which he pointed out that the integral Hodge conj. is not
true, one has to mod out by torsion, i.e. tensored with Q.
20 "The general Hodge conjecture is false for trivial reasons." I'm nitpicking, but it's easily my favorite title of a math paper ever. – Harrison Brown Dec 1 '09 at 3:28
5 What, you don't like "Zaphod Beeblebrox's brain and the fifty-ninth row of Pascal's triangle"? – Qiaochu Yuan Dec 1 '09 at 19:57
5 Funny as that is, it doesn't have the same oomph to it as "X is false for trivial reasons." While we're on the subject of funny titles, though, I like "Mick gets some (the odds are on
his side)", which makes no sense whatsoever as a paper title until you realize what the paper's about! – Harrison Brown Dec 2 '09 at 2:01
1 Correcting the title is not just nitpicking. The paper is about a generalisation of the Hodge conjecture concerning a characterisation of the filtration on rational cohomology induced
by the Hodge filtration. It deals with rational cohomology and so is not concerned with the failure of the (non-generalised) Hodge conjecture for integral cohomology. The latter
result is due to Atiyah-Bott. – Torsten Ekedahl Mar 19 '10 at 5:24
1 My favorite title is P. Cartier, Comment l'hypothese de Riemann ne fut pas prouvee [How the Riemann hypothesis was not proved], Seminar on Number Theory, Paris 1980-81 (Paris, 1980/
1981), pp. 35-48, Progr Math 22, Birkhauser, Boston, 1982, MR 85f:11035. – Gerry Myerson Sep 23 '10 at 6:42
add comment
Depending on how strict you are, this might not qualify as a paper. Hilbert's 1900 ICM talk in which he posed his 23 problems.
up vote 26 down vote
add comment
Serre's GAGA isn't as short as some of the others, but it's still just over 40 pages (which is quite short by the standards of Serre/Grothendieck-style algebraic geometry at the time --
up vote 5 e.g. FAC is about 80 pages, and of course there are things like EGA...), and it's still GAGA.
down vote
add comment
A natural choice is Riemann's "On the Number of Primes Less Than a Given Magnitude" at only 8 pages long...
up vote 109 down vote http://en.wikipedia.org/wiki/On_the_Number_of_Primes_Less_Than_a_Given_Magnitude
show 1 more comment
In theoretical CS, there's the Razborov-Rudich "natural proofs" paper, which weighs in at 9 pages. After introducing and defining the terminology, and proving a couple of simple lemmas,
up vote 18 the proof of the main theorem takes only a couple of paragraphs, less than half a page if I recall correctly.
down vote
add comment
Noam Elkies, The existence of infinitely many supersingular primes for every elliptic curve over Q, Invent. Math. 89 (1987), 561-568.
up vote 28 down vote
11 Because its length (and simplicity), this was the first paper I ever completely read! – David Zureick-Brown♦ Dec 1 '09 at 4:49
add comment
Riemann's Habilitationsschrift, On the hypotheses which lie at the foundation of geometry, was the start of Riemannian Geometry. An English translation took up 6 pages in Nature.
up vote 31 down vote
9 That is not a paper: it's the Habilitation $\textit{lecture}$ Riemann gave, which was only published posthumously. – Victor Protsak May 21 '10 at 2:19
add comment
Mordell, L.J., On the rational solutions of the indeterminate equations of third and fourth degrees, Proc. Camb. Philos. Soc. 21 (1922), 179–192.
up vote 9 In this paper he proved the Mordell-Weil theorem for elliptic curves over $\mathbb{Q}$ (the group of rational points is finitely generated), and he stated the Mordell conjecture (curves
down vote of genus >1 over $\mathbb{Q}$ have only finitely many points), which was one of the most important open problems in mathematics until Faltings proved it in 1983.
add comment
H. Lebesgue, Sur une généralisation de l’intégrale définie, Ac. Sci. C.R. 132 (1901), 1025– 1028.
up vote 43 down vote The beginning of measure theory as we know it, and a very short paper.
show 1 more comment
My mention goes to V. I Lomonosov's "Invariant subspaces for the family of operators which commute with a completely continuous operator", Funct. Anal. Appl. 7 (1973) 213-214, which in less
up vote than two pages demolished numerous previous results in invariant subspace theory, many of which previously took dozens of pages to prove. It also kick-started the theory of subspaces
18 down simultaneously invariant under several operators, where it continues to be useful today. It's highly self-contained, using only the Schauder-Tychonoff theorem, if I remember correctly.
3 I also like Lomonosov and Rosenthal's "The simplest proof of Burnside's theorem on matrix algebras" that proves that a proper subalgebra of a matrix algebra over an algebraically closed
field must have a non-trivial invariant subspace. 3 pages. – Dima Sustretov May 4 '11 at 23:40
add comment
The so called "Weil conjectures" are in the last pages of André Weil's short paper in 1949, "Numbers of solutions of equations in finite fields", Bulletin of the American Mathematical
up vote 5 down Society 55: 497–508. They probably were around before though.
add comment
I get this nominee from Halmos...
E. Nelson, "A Proof of Liouville's Theorem", Proc. Amer. Math. Soc. 12 (1961) 995
up vote 50 9 lines long. Not the shortest paper ever, but maximizes importance/length ...
down vote
11 William C. Waterhouse, An Empty Inverse Limit, Proceedings of the American Mathematical Society, Vol. 36, No. 2 (Dec., 1972), p. 618. The body of the paper is only 6 lines. – Richard
Kent Dec 1 '09 at 17:29
3 Other shorties... from sci.math in 1994. P.H. Doyle: Plane Separation, Proc. Camb. Phil. Soc. 64 (1968) 291; MR 36#7115. H. Furstenberg: On the Infinitude of Primes, Amer. Math.
Monthly 62 (1955) 353; MR 16-904. D. Lubell: A Short Proof of Sperner's Lemma, J. Comb. Theory, Ser. A, vol.1 no. 2 (1966) 299; MR 33#2558. – Gerald Edgar Dec 2 '09 at 0:21
show 3 more comments
Jürgen Moser (1965), On the Volume Elements on a Manifold , Transactions of the American Mathematical Society, Vol. 120, No. 2 (Nov., 1965), pp. 286-294
up vote 9 down Besides the many powerful applications of the famous "Moser argument" (or "Moser trick"), the local version gives a very nice and elegant proof of the classical Darboux Theorem.
(For a nice summary of this and other papers by Jürgen Moser, I would recommend http://www.math.psu.edu/katok_a/pub/Moserhistory.pdf (a short discussion of the paper mentioned above
can be found at p.17-18))
add comment
One of the shortest papers ever published is probably John Milnor's Eigenvalues of the Laplace Operator on Certain Manifolds, Proceedings of the National Academy of Sciences of
USA, 1964, p. 542
up vote 27 down
vote He shows that a compact Riemannian manifold is not characterized by the eigenvalues of its Laplacian. It takes him little more than half of a page.
1 Also on the same topic: Also Kac's Can one hear the shape of a drum? is pretty short. – Delio Mugnolo Nov 9 '13 at 14:53
add comment
Drinfeld and Simpson's B-Structures on G-Bundles and Local Triviality, Mathematical Research Letters 2, 823-829 (1995) comes in at under seven pages and has been quite important in all
the work done on principal G-bundles (such as the geometric Langlands' program).
up vote 2 down In particular, it proved the double quotient description of G-bundles on curves (for reductive G) which had previously only been proved for $G = SL_n$ by Beauville and Laszlo.
The paper can be found here.
add comment
Endre Szemeredi's paper on the Regularity Lemma is just 3 pages long. I think that is a good candidate as well.
up vote 23 down Szemerédi, Endre (1978), "Regular partitions of graphs", Problèmes combinatoires et théorie des graphes (Colloq. Internat. CNRS, Univ. Orsay, Orsay, 1976), Colloq. Internat. CNRS,
vote 260, Paris: CNRS, pp. 399–401,
3 Although didn't it appear beforehand in the monstrously complicated proof of Szemeredi's theorem? – Harrison Brown Dec 2 '09 at 1:36
show 1 more comment
How about Leonid Levin (1986), Average-Case Complete Problems, SIAM Journal of Computing 15: 285-286? Quite important in complexity theory, and only two pages long, although very,
up vote 12 down very dense.
add comment
John Nash's "Equilibrium Points in n-Person Games" is only about a page and is one of the most important papers in game theory.
up vote 66 down vote
3 Link: courses.engr.illinois.edu/ece586/TB/Nash-NAS-1950.pdf – Adrian Petrescu Oct 30 '11 at 5:25
15 Nowadays this sort of paper would not get published at all, and would likely appear just as an answer on MathOverflow. – Andrej Bauer Nov 7 '11 at 14:26
1 The link given by Adrian is not current. A more stable link is: web.mit.edu/linguistics/events/iap07/Nash-Eqm.pdf – Todd Trimble♦ Dec 14 '13 at 0:21
show 1 more comment
What about Selberg's 1947 paper?
up vote 1 down vote
add comment
Robert Aumann's "Agreeing to Disagree" paper, at 3 pages of length, is one of the most important papers in its field.
up vote 12 down vote
add comment
Jannsen, Uwe (1992), "Motives, numerical equivalence and semi-simplicity", Inventions math. 107: 447–452.
up vote 10 down vote
add comment
It's not a paper, and it's not groundbreaking, but it's short!
up vote 48 A One-Sentence Proof That Every Prime $p\equiv 1(\mod 4)$ Is a Sum of Two Squares D. Zagier The American Mathematical Monthly, Vol. 97, No. 2 (Feb., 1990), p. 144 http://www.jstor.org/pss/
down vote 2323918
14 That's a beautiful one indeed. And it also has lot of potential for the (obscurity of the idea)/(line count) competition :P – Mariano Suárez-Alvarez♦ Jan 6 '10 at 3:39
111 And you can see that sentence for only $12 at JSTOR! – I. J. Kennedy May 17 '10 at 0:37
2 ...or broken up into a few sentences, but for free at Wikipedia (I guess that does decrease the cost/volume ratio). – Victor Protsak May 21 '10 at 2:23
96 Save 12 bucks: The involution on a finite set $S = \{(x,y,z) \in \mathbb{N}^3 : x^2 +4yz = p \} $ defined by: \[ (x,y,z) \mapsto \left\{ \begin{array}{cc} (x+2z,z,y-x-z) & \text{if }
x < y-z \\ ( 2y-x, y, y-x+z ) & \text{if } y-z < x <2y \\ ( x-2y, x-y+z, y ) & \text{if } x > 2y \end{array} \right. \] has exactly one fixed point, so $|S|$ is odd and the involution
defined by $(x,y,z) \mapsto (x,z,y)$ also has a fixed point. – Zavosh May 21 '10 at 11:46
23 One thing I've always wondered - is there any intuition behind the involution? – drvitek Sep 30 '10 at 4:16
show 2 more comments
Barry Mazur "On Embeddings of Spheres", Bull. AMS v 65 (1959) only 5 1/2 pages. It introduced the method of infinite repetition in topology and allowed the proof the generalized
up vote 10 down Schoenflies conjecture.
add comment
The one-page paper
Golay, Marcel J. E.: "Notes on Digital Coding", Proc. IRE 37, p. 657, 1949,
up vote 23
down vote which introduces the Golay code.
1 I first came across Golay codes when studying electrical engineering - I recal looking up this paper, thinking I understood it but I still had to experiment with examples for several
days to really believe that such a simple thing could be such a powerful error correcting code. – WetSavannaAnimal aka Rod Vance May 20 '11 at 13:01
show 1 more comment
Not the answer you're looking for? Browse other questions tagged soft-question math-communication or ask your own question.
|
{"url":"http://mathoverflow.net/questions/7330/which-math-paper-maximizes-the-ratio-importance-length?answertab=oldest","timestamp":"2014-04-20T23:40:59Z","content_type":null,"content_length":"178841","record_id":"<urn:uuid:343e4c09-acb0-471b-963a-bdb9a4ad7335>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00368-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Catch of the Day (153 Fishes)
The Bible tells of the Apostles going fishing and catching exactly 153
fish. It so happens that 153 is a "triangular" number (in the Pythagorean
sense), being the sum of the first 17 integers. It's also the sum of the
first five factorials.
A slightly more obscure property of 153 is that it equals the sum
of the cubes of its decimal digits. In fact, if we take ANY integer
multiple of 3, and add up the cubes of its decimal digits, then take
the result and sum the cubes of its digits, and so on, we invariably
end up with 153. For example, since the number 4713 is a multiple of
3, we can reach 153 by iteratively summing the cubes of the digits, as
starting number = 4713
4^3 + 7^3 + 1^3 + 3^3 = 435
4^3 + 3^3 + 5^3 = 216
2^3 + 1^3 + 6^3 = 225
2^3 + 2^3 + 5^3 = 141
1^3 + 4^3 + 1^3 = 66
6^3 + 6^3 = 432
4^3 + 3^3 + 2^3 = 99
9^3 + 9^3 = 1458
1^3 + 4^3 + 5^3 + 8^3 = 702
7^3 + 2^3 = 351
3^3 + 5^3 + 1^3 = 153 <-----
The fact that this works for any multiple of 3 is easy to prove. First,
recall that any integer n is congruent modulo 3 to the sum of its decimal
digits (because the base 10 is congruent to 1 modulo 3). Then, letting f(n)
denote the sum of the cubes of the decimal digits of n, by Fermat's little
theorem it follows that f(n) is congruent to n modulo 3. Also, we can easily
see that f(n) is less than n for all n greater than 1999. Hence, beginning
with any multiple of 3, and iterating the function f(n), we must arrive at
a multiple of 3 that is less than 1999. We can then show by inspection that
every one of these reduces to 153.
Since numerology has been popular for thousands of years, it's
conveivable that some of the special properties of the number 153
might have been known to the author of the Gospel. Of course, our
modern decimal number system wasn't officially invented until much
later, so it might seem implausible that the number 153 was selected
on the basis of any properties of its decimal digits. On the other
hand, the text (at least in the English translations) does specifically
state the number verbally in explicit decimal form, i.e.,
"Simon Peter went up, and drew the net to land full of great
fishes, an hundred and fifty and three: and for all there was
so many, yet was not the net broken."
John, 21:11
Thus, rather than talking about scores or dozens, it speaks in multiples
of 100, 10, and 1.
Since only multiples of 3 reduce to 153, we might ask what happens to
the other numbers. It can be shown that all the integers congruent to 2
(mod 3) reduce to either 371 or 407. The integers congruent to 1 (mod 3)
reduce to one of the fixed values 1 or 370, or else to one of the cycles
[55, 250, 133], [160, 217, 352], [136, 244], [919, 1459]. Within the
congruence classes modulo 3 there doesn't seem to be any simple way of
characterizing the numbers that reduce to each of the possible fixed
values or limit cycles.
Naturally we could perform similar iterations on the digits of
a number in any base. One of the more interesting cases is the
base 14, in which 2/3 of all number eventually fall into a particular
cycle. Coincidentally, this cycle includes the decimal number 153,
but it also includes 26 other numbers, for a total length of 27,
which is 3 cubed (which the mystically minded should have no trouble
associating with the Trinity). The decimal values of this base-14
cycle are
Again, there doesn't appear to be any way of distinguishing the numbers
that reduce to this cycle from those that don't, other than by performing
the iterations. By considering sums of higher powers (or polynomials) of
the digits in other bases, we can produce a wide variety of arbitrarily
long (but always finite) cycles.
The number 153 is also sometimes said to be related to a symbol called
the "vesica piscis", which consists of the intersection of two equal
circles whose centers are located on each others circumferences. However,
the relevance of the number 153 to this shape is rather dubious. It rests
on the fact that the ratio of the length to the width of this shape equals
the square root of 3, and one of the convergents of the continued fraction
for the square root of 3 happens to be 265/153. It is sometimes claimed
that this was the value used by Archimedes, but this is only partly true.
Archimedes knew that the square root of 3 is irrational, and he determined
that its value lies between 265/153 and 1351/780, the latter being another
convergent of the continued fraction.
Return to MathPages Main Menu
|
{"url":"http://www.mathpages.com/HOME/kmath463.htm","timestamp":"2014-04-18T00:18:14Z","content_type":null,"content_length":"5620","record_id":"<urn:uuid:5de17d0d-64d4-466a-af79-159c5c31befb>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00529-ip-10-147-4-33.ec2.internal.warc.gz"}
|
table of distribution of probabilities
May 17th 2011, 05:58 AM #1
Nov 2010
table of distribution of probabilities
I am Mateo and I have a probability question in my class that I not know formula for! Help please! Here is the question, translated to english:
An factory has 8 departments in its administartive structure and each department has a certain amount of faults as follows:
this is a table, I don't know how to draw one here:
Department: 1 2 3 4 5 6 7 8
#faults: 3 2 1 0 1 2 0 1
Now, I have to prepare a table of ditribution of properties!
I know that I have
x and p(x), but I don't know the formula to calculate p(x). It's not in the papers given! thank you for your help!
I am Mateo and I have a probability question in my class that I not know formula for! Help please! Here is the question, translated to english:
An factory has 8 departments in its administartive structure and each department has a certain amount of faults as follows:
this is a table, I don't know how to draw one here:
Department: 1 2 3 4 5 6 7 8
#faults: 3 2 1 0 1 2 0 1
Now, I have to prepare a table of ditribution of properties!
I know that I have
x and p(x), but I don't know the formula to calculate p(x). It's not in the papers given! thank you for your help!
Let x be a number of faults than p(x) is the proportion of departments with x faults.
So if x=0 p(x)=2/8=1/4 which is the number of departments with 0 faults divided by the total number of departments.
Similarly p(1)=3/8 etc
May 17th 2011, 06:17 AM #2
Grand Panjandrum
Nov 2005
|
{"url":"http://mathhelpforum.com/statistics/180846-table-distribution-probabilities.html","timestamp":"2014-04-17T01:03:36Z","content_type":null,"content_length":"33908","record_id":"<urn:uuid:4e960d18-d13d-469f-8582-412913aa9e49>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00555-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Math Help
September 2nd 2009, 06:03 PM #1
Sep 2009
a) Show that, if f is convex, then -f is concave (and vice versa)
b) Show that the following function, having domain {0,1,...,n-1}, is convex:
f(j) = 1 / (n-j)
I know what convex/concave looks like, but not sure what it means in a formula. Any help would be great thanks!
convex means $\forall x,y,\lambda\in [0,1],f(\lambda x+(1-\lambda)y)\leq\lambda f(x)+(1-\lambda)f(y)$
or $\forall x<c<y,\frac{f(x)-f(c)}{x-c}\leq\frac{f(x)-f(y)}{x-y}\leq\frac{f(y)-f(c)}{y-c}$
or $f'(x)$exists and it is non-decreasing
or $f''(x)\geq 0$
for concave, you just replace all > with <!
September 2nd 2009, 07:51 PM #2
Senior Member
Jul 2009
|
{"url":"http://mathhelpforum.com/advanced-statistics/100322-proofs.html","timestamp":"2014-04-18T22:32:30Z","content_type":null,"content_length":"32387","record_id":"<urn:uuid:df317eea-10e1-4708-a4a7-62e55f83d512>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00421-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Symmetry454 - Wikipedia, the free encyclopedia
The Symmetry454 Calendar (Sym454) is a proposal for calendar reform proposed by Dr. Irv Bromberg of the University of Toronto.
The Symmetry454 arithmetic is fully documented and placed in the public domain for computer implementation.
The Symmetry454 Calendar meets all of the WCC criteria.
en.wikipedia.org /wiki/Symmetry454 (447 words)
CalendarHome.com - Calendar reform - Calendar Encyclopedia
Some calendar reform ideas, such as the Pax Calendar, Symmetry454 calendar and the Common-Civil-Calendar-and-Time Calendar, were created to solve this problem by having years of either 364 days
(52 weeks) or 371 days (53 weeks), thus preserving the 7-day week.
These calendars add a leap week of seven days to the calendar every five or six years to keep the calendar roughly in step with the solar year.
The Symmetry454 calendar has alternating months of 28 and 35 days, and a leap week in December, when needed.
encyclopedia.calendarhome.com /Calendar_reform.htm (738 words)
The Symmetry454 Calendar (Site not responding. Last check: 2007-10-14)
The Symmetry454 Calendar is a perpetual solar calendar that conserves the traditional 7-day week, has symmetrical equal quarters, and starts every month on Monday.
This document, entitled All about the Symmetry454 Calendar Leap Week, introduces various leap rules that could be used with the Symmetry454 Calendar — most could just as easily be adapted to
almost any other calendar.
This document includes the full "soup mix" of new leap rules: the Rotation-Adjusted Year (RAY), Revised Julian variant of the ISO standard leap rule (RJiso), mean and actual astronomical leap
rules, and fixed arithmetic leap cycles for the Northward Equinox or North Solstice.
individual.utoronto.ca /kalendis/symmetry.htm (998 words)
Henry argues that his proposal will succeed where others have failed because it is the only one that keeps the weekly cycle perfectly intact and therefore respects the Fourth Commandment.
However, other calendar proposals that intercalate entire weeks do exist, such as the Symmetry454 calendar.
He advocates transition to the calendar on January 1st, 2006 as that is a year in which his calendar and the Gregorian calendar begin the year in sync.
www.reference.com /browse/wiki/Common-Civil-Calendar-and-Time (371 words)
Spotlight Radio:
The Symmetry454 calendar is similar to the Gregorian calendar in a few ways.
First, the Symmetry454 calendar is the same every year.
Fourth, experts say the Symmetry454 calendar is easier to use, and satisfies everyone's different needs.
www.spotlightradio.net /script.php?id=1106 (990 words)
My Symmetry454 has month lengths of 4-5-4 weeks, and the leap week has no name because it is simply appended to December in leap years.
In the past my "Classic" Symmetry Calendar had month lengths of 30-31-30 days and the leap week was separate at the end of the year, dubbed "Irvember", a name that was invented by a family member
ridiculing my calendar at the Passover sedar last year, to the great delight of all present.
If it has no decimals then it refers to 00:00h UT. If you make it have.5 as a decimal then it refers to 12:00h UT, etc., and as you change it you can see the effect on the solar longitude and
lunar phase that is displayed in that window.
www.geocities.com /Athens/1584/luachmail.html (19619 words)
Try your search on: Qwika (all wikis)
|
{"url":"http://www.factbites.com/topics/Symmetry454","timestamp":"2014-04-18T19:06:14Z","content_type":null,"content_length":"19597","record_id":"<urn:uuid:f7f28220-bde9-49c0-84ea-ba6fc23ab271>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00649-ip-10-147-4-33.ec2.internal.warc.gz"}
|
The Geomblog
I had to return to Salt Lake the evening of day 2, so blogging got left by the wayside. Piotr is to blame for this post :).
As usual, by day 2 (day 3 for me because of the tutorials day), my desire to attend talks far outstripped my ability to attend them, and I especially wanted to hear more about the Kumar/Kannan
clustering with spectral norm paper, and the stability for clustering result of Awasthi, Blum and Sheffet. If I ever have control of setting the program for a conference, I'm going to decree that all
talks start no earlier than 11am (they can go late in the evening, for all I care).
The first talk that I attended was by Alex Andoni on the
new polylog approximation for the edit distance
that he has with Robert Krauthgamer and Krysztof Onak.
Dick Lipton actually spends
a good deal of time on the problem and background in this post, and also highlights their earlier paper that gave a sub-polynomial appproximation in subquadratic time.
This paper goes one better, producing an algorithm that runs in time $O(n^{1+\epsilon})$ while yielding an $\log^{1/\epsilon}$ approximation. A key idea in this result is a very neat trick: rather
than treating the strings x,y symmetrically, the algorithm first compresses x down to a much smaller string (of length n^\epsilon), and then runs a crude exact algorithm on this compressed string and
This compression is randomized: the algorithm subsamples small pieces of x. However, a straightforward sampling doesn't preserve the distance, so the algorithm actually starts by designing a careful
hierarchical partitioning of x which induces a modified distance function (that they relate to the edit distance). Then the sampling takes place in this tree. Obviously I'm skipping many of the
details that I didn't get, but Alex gave a very nice talk that laid out the hig level picture (his talks are always a model of clarity and intuition).
Another talk that I found intriguing was
the work by Saks and Seshadri
on computing the longest increasing subsequence of a string. While there's a straightforward dynamic program for this problem, they showed that the length of the sequence could be estimated in
"sublinear" time (in the size of the program). In fact, their technique is quite general, and might lead to a general method for approximating the output of a dynamic program in time sublinear in the
size of the table. A key structural property that they develop is that either the dynamic program can easily be broken up into two balanced subrectangles, or it can be subsampled O(log n) times,
yielding smaller copies that can be combined to find a solution (it's possible I'm misrepresenting the last bit). Sesh gave an excellent talk, but his taste in talk jokes was so terrible it was
actually funny :) (sorry, Sesh!)
I deeply regret having to leave before the evening plenary session, and especially regret missing Dan Spielman's talk, which by all accounts was magnificient. All the talks were videotaped, so
they'll eventually be online and I can watch it then.
Three newsworthy items from the FOCS business meeting: (for more details follow the focs2010 tag on twitter)
1. Petros Drineas is now an AF program director at the NSF. Congrats, Petros !
2. STOC 2011 might experiment with a poster session with a different call for submissions. That's a great idea - I hope they decide to go ahead with it
3. Not one person felt that FOCS should continue to have printed proceedings. As recently as a few years ago, at least a third of the audience at theory conferences would have asked for printed
proceedings. Times they are a'changin' !!!
Does our lack of attention to journal publications influence the "impracticality" of our algorithms ?
I've been thinking about algorithms, applications and the dreaded "practicality" issue. And the following thought occurred to me.
We have a well-known aversion to journal papers in our community. I won't rehash the arguments for and against. But it is definitely true that in a conference paper we elide many details of the
algorithm construction and proof. It's not uncommon for a 10-page conference paper to blow up to a nearly 100 page journal paper (cough cough MST cough cough).
So suppose you have this snazzy new algorithm for a problem. You're happy with your handwavy conference version, and you don't bother to write the journal version. When you do, you realize to your
horror that your snazzy 1-page procedure has become this 12 page lumbering beast of an algorithm that's so insanely complicated that you'd be ashamed to even call it 'galactic'. Now this might
motivate you to come up with a simpler algorithm, or it might motivate someone else !
But if you don't write the journal version with full details, how is anyone to know ?
While this is in part about practicality (since practical almost always equals simple), it's more than that. Clarity in algorithm design is useful even if there's no practical value, because it aids
in understanding. So this is in my mind a completely different reason for why we should write more journal papers. It might make our algorithms simpler !!
p.s Yes, I am aware that very often the hard part of a paper is PROVING that an algorithm is correct, and not the algorithm itself. Obviously this would not apply to those cases. But even there,
having to contemplate the monstrosity of the proof you end up writing would motivate you to simplify it. After all, proofs are about understanding, not fireworks and fountains at the Bellagio.
A long time ago I realized that if I took blogging duties too seriously, I'd be exhausted trying to report on talks that I attended. So I'm not going to even try. I will talk about a pair of talks on
clustering though.
A core question of clustering is this: learn a mixture of $k$ $d$-dimensional Gaussians with different means and covariance matrices from samples drawn from the mixture.A long line of research,
starting with a paper by Sanjoy Dasgupta back in 1999, produced variations on the following result:
If the Gaussians are "well separated" and have "nice" shapes, then a "small number" of samples suffices to determine the parameters of the system.
Here, "parameters" would be means, covariance matrix elements, and relative weights. As time went on, the separations got smaller, the niceness reduced, and the "small number" got better.
These two papers take a slightly more general tack, in different ways. They both eliminate the conditional nature of prior bounds ("if the Gaussians are well separated"), instead replacing this by an
unconditional bound ("poly in a parameter capturing the separation").
The first, by Moitra and Valiant (both Ph.D students!) measures the separation between the distributions via their statistical distance, and gives a bound for learning in terms of the reciprocal of
this distance. Their main insight is to project the distributions onto a few directions (i.e 1D), learn what they can, and then recurse on the data that doesn't appear to be well-separated enough to
learn properly. This takes a lot of technical skill to do correctly.
The second paper, by Belkin and Sinha, takes a different approach. They pay more attention to the parameters of the distribution. This can be a problem in general because closeness in distribution
does not imply closeness in parameter space. However, they show that the set of all distributions with fixed moments forms a nice algebraic variety, and that the above problem can be quantified by
determining how closely this variety folds in on itself (intuitively, if the variety intersects itself, then two far away parameter sets yield the same distribution). Using some standard (but not
trivial) results in algebraic geometry, they can bound the complexity of the resulting structures (so that they can estimate the number of samples needed to get to a distiinct region of the space).
There's more detail involved in the estimation of parameters and moments and so on.
While both papers are impressive achievements that essentially resolve the theoretical aspects of learning mixtures of distributions, I came away not entirely sure how to compare them. On the one
hand, the Moitra/Valiant paper is more explicit and is likely to be less "galactic" in nature. But it appears to be limited to Gaussians. The Sinha/Belkin paper is more general, but uses more
abstract machinery that is unlikely to be useful in practice.
So what remains is stilll an effective way of learning Gaussians or other families of distributions, and also a better understanding of how these bounds relate to each other.
I'm in Las Vegas for FOCS 2010. It's been a while since I've attended FOCS, and the short distance from Salt Lake was definitely a factor.
Kudos to the organizing committee for the tutorial sessions today. I've long whined about the lack of non-paper-presenting events at theory conferences, and the topics for today were great ones.
Unfortunately, a spectacular debacle on the part of the hotel restaurant meant that I (and 11 other poor souls) missed most of Mihai's tutorial on data structures.
But the first tutorial by Ketan Mulmuley was fantastic. Let's be clear. It's impossible to convey any depth of material on GCT in 1.5 hours, but he hit (to me) just the right level of high level
exposition and low-level technical conjectures. I also learnt something brand new from the presentation, which I'll try to summarize.
In Godel's incompleteness theorem, we all know that the key step is phrasing a statement of the form "I am not true". The self-referentiality of this statement is what drives the paradox within the
logical system that allows you to frame it.
The part that's important to remember is that the logical system has to be powerful enough in the first place, so that it's possible to even do self-referencing. A weak logical system can't do that,
which is why Godel's theorem only kicks in once the system is strong enough (a very Greek tragedy concept this, that it is your own power that sows the seeds to destroy you).
It turns out that a similar mechanism comes into play with P vs NP. Mulmuley starts with a "trivial proof of hardness" for a function f. Write down all circuits of small complexity, and for each
circuit write down some input for which C(x) != f(x). This table is of course of exponential size, but never mind that.
He now introduces the flip operation. This is the idea that instead of trying to manipulate this large table, we actually store a smaller table of certificates, with the property that from one such
certificate, you can reconstruct efficiently a set of circuit and witness pairs from the table, and the set of such certificates cover the entire table. You also need to able to decode these
certificates efficiently and quickly verify that an input produced in this manner actually does demonstrate a counter example.
There's a problem with the flip operation though. We're using this flip structure to show that P != NP (nonuniformly for now). On the other hand, the success of the flip operation requires us to
construct a mapping from the certificates to the table that can also be verified in polynomial time. But since f is an NP-complete problem, what we are essentially saying is that we can efficiently
construct short witnesses of membership or non-membership, which means that we've collapsed P and NP !
You could of course retort by saying, "well I don't want to use the flip operation then !". It turns out that you can't ! He states a result saying that ANY proof separating P and NP will essentially
look like a flip-based proof. So no matter what you do, you have to deal with the self-contradicting structure of the flip. The geometric construction (GCT) is his program for constructing the flip
so that this cycle of referencing can be broken.
But that's not where the analogy to Godel's theorem comes from. When we start digging into separating the permanent from the determinant, we can translate the problem into an embedding problem over
algebraic varieties. This is because both the permanent and determinant have unique characterizations as the only polynomials satisfying certain properties over matrices.
Now the separation problem can be framed as the problem of finding an efficient obstruction that demonstrates the separation between these two varieties. What's even more bizarre is that ANY proof
that separates the permanent and the determinant will end up producing such an algebraic-geometric obstruction, even if the proof itself doesn't use algebraic geometry.
So let's review. Any proof of P vs NP will induce a "flip" construction that generates small certificates that explicitly show hardness of a problem. The permanent is one such hard problem, and lends
itself to an invariant representation that yields a key obstacle that we need to construct in polynomial time in order to make the flip work.
In other words, we need to be able to construct an object in polynomial time (that to date we don't know how to do in less than triply exponential time) in order to show that these very same
poly-time computations CANNOT solve the permanent.
And this is the core paradox at the heart of the program. Our inability to prove that P != NP is because we can't show that P is strong enough to express within itself the certificate that it is NOT
strong enough to capture NP.
Mulmuley makes another very interesting point at the end of the tutorial. Many people have wondered why the GCT framework can't be used to prove "easier" bounds for other separations. While he
mentions some examples of problems (matrix multiplication) where people are attempting to use the GCT framework to show poly lower bounds, he also points out that it is only because of the beautiful
invariant representations of the permanent and determinant that we can bring the machinery of algebraic geometry to bear on the problem. As he says, we got lucky.
There's an argument playing out on cstheory (yes, we need a better name) on whether we should encourage or discourage questions like "how can I model X phenomenon theoretically", or "Are applied
questions off-topic"
While Scott Aaronson posted an eloquent defense of why we should always entertain such questions, the general tone of the responses has been negative. The main fear has been a 'slippery slope' - that
if we allow one modelling question, then the site would be overrun by modelling questions.
I think this sentiment is particularly interesting in the context of David Johnson's post requesting examples where approximation algorithms were deployed usefully in practice (and that he couldn't
come up with many good examples)
The truth is that outreach to more applied communities is not about taking your algorithms hammer and beating their heuristic over the head with it. As I've said before, you have to have a DRAMATIC
improvement in performance before people will change their standard practices.
It's firstly about communicating a style of asking and answering questions, even if the actual answers are "trivial". A collaboration where you persuade someone to use max flows might not be
theoretically very splashy, but it has a huge impact in how people think about algorithms, theory and problem solving.
Doing this first prepares the ground for future collaborations. If you want someone to use your new and snazzy approximation for vertex cover, you have to first convince them that vertex cover makes
sense, and before that you have to convince them that even modelling the problem as a graph makes sense. Believe me when I say that the very notion of defining a problem formally before solving it is
not how work gets done in many applied areas.
Michael Mitzenmacher makes a similar point in a comment to the above post, where he says:
I do think, however, that sometimes the focus of theorists -- to get the best constant-approximation or to shave log or log log factors off -- obscures rather than highlights a good part of what
I see as the real "practical value" of this line of research. Focusing on what algorithmic ideas underlie the approximation, and more implementation and experimentation, would (I think) give more
real-world power to this theoretical work.
(Guest Post by David Johnson)
In my Knuth Prize Lecture on approximation algorithms at the last STOC conference, I celebrated the many great theoretical advances we have seen in the area of approximation algorithms over the last
four decades, but lamented the apparent paucity of practical applications of this work. We motivate our work in this area as a way of coping with NP-hardness, but how many people who actually have to
do such coping in practice are
using our ideas?
In my talk I could only come up with a few examples, these from our work at AT&T Labs - Research, where we have adapted the Goeman-Williamson primal-dual approximation algorithm for the Prize
Collecting Steiner Tree problem for use in tools that help design access networks, and where exponential potential function methods for approximating multicommodity flows are being exploited in a
proposed content-distribution scheme. I also noted that of the 14 Kanellakis "Theory and Practice Awards" given out so far, of which some 6 or 7 have been for algorithms, only one, that to Freund and
Schapire for their statistical "boosting" technique, could be viewed as related to approximation algorithms, and the boosting research is typically classified instead as Learning Theory.
So, I think many viewed my talk as a rather negative exercise. Certainly, practical impact is only one dimension along which to evaluate a field, and, if we relied only on practical motivations, we
would not have developed the deep and informative theory we now have. However, practicality IS an important dimension, and I hope that my inability to identify many examples of practical impact was
more a fault of my own ignorance, than one of the field. Giving a talk about this to a primarily theoretical audience did not develop many new leads, so I'm hoping that a blog posting will be more
In particular, I would like to collect
(A) Stories about algorithms or algorithmic approaches developed by approximation algorithm theorists that were implemented and applied, either directly or in modified form, by people to solve
real-world problems.
(B) Stories about the analysis of algorithms developed by others (for example, the greedy algorithm for SET COVER) that influenced the use of these algorithms in practice, or at least helped
explain their performance and hence was cited in "Applications" papers.
The algorithms need not be restricted to NP-hard problems -- I am also interested in approximation algorithms for problems with polynomial-time optimization algorithms that can be too slow for some
practical applications.
For both types of stories, I'd like as much detail as possible - who did the work, where, etc., and citations if known. You can post your nominations/comments on this blog, or send them directly to
me (dsj@research.att.com). If you post as a comment, please do not do so anonymously, as I may want to contact you for more information.
Assuming I get a good response, I plan to prepare a survey paper (or at least another blog posting) summarizing what I learn.
Thanks! David Johnson (dsj@research.att.com)
SoCG CFP is out:
• Titles/abstracts: Nov 22
• Papers: Dec 1
• Notification: Feb 14
• June 13-15, conference.
As Sariel noted, the CFP makes an explicit request for more diverse submissions and plans to accept more papers:
Anticipating the usual high overall quality of submissions, the program
committee intends to accept more papers than was typical in previous years. Also, we intend to interpret the scope of the conference broadly, and will accept all papers of high quality that are
of significant interest to our research community. The conference venue in Paris is attractive and allows a bigger conference than previous years.
My optimistic side is excited by this, and I really hope we can see more papers on diverse topics. My pessimistic side is grumping "yes, that means any papers that aren't mine". sigh...
In unrelated news, the NSF appears to have a new policy for 'data management' and will require a new supplementary document to this effect. Note that since it's a mandatory document, it will affect
theory proposals as well:
Plans for data management and sharing of the products of research. Proposals must include a supplementary document of no more than two pages labeled “Data Management Plan.” This supplement should
describe how the proposal will conform to NSF policy on the dissemination and sharing of research results (see AAG Chapter VI.D.4), and may include:
* The types of data, samples, physical collections, software, curriculum materials, and other materials to be produced in the course of the project
* The standards to be used for data and metadata format and content (where existing standards are absent or deemed inadequate, this should be documented along with any proposed solutions or
* Policies for access and sharing including provisions for appropriate protection of privacy, confidentiality, security, intellectual property, or other rights or requirements
* Policies and provisions for re-use, re-distribution, and the production of derivatives
* Plans for archiving data, samples, and other research products, and for preservation of access to them
(Via Lance) Partha Niyogi just passed away.
I first met Partha at SoCG 2007 in Gyeongju, where he gave an invited presentation on geometric perspectives in machine learning. We also spent a very pleasant day playing hookey from the conference,
wandering around the nearby tourist spots, and talking about geometry and machine learning. He was a brilliant researcher with an impatience and drive that led him to do some truly excellent work.
Anyone who's ever dabbled in machine learning knows about the strong connections between ML and geometry, dating back to the very notion of VC dimension. But while the connections between the areas
are manifest and plentiful, there haven't been many people who could successfully navigate both realms. Partha was one of those unusual people, who coupled deep mathematical insight with a strong
vision for problems in ML. Most recently, he organized a wonderful summer school in machine learning and geometry, whose syllabus is worth perusing to get a sense of the depth of this interaction.
In many circles, he's most known for his work in manifold learning, where in collaboration with Misha Belkin, Steve Smale and others, he laid out a series of rigorous results on the problem of how to
"learn" a low dimensional manifold given access to samples of points from it. The work on Laplacian Eigenmaps is a key marker in this area, and has had tremendous influence in the realm of
"non-Euclidean" data analysis. Among his other recent contributions in theory-land are a neat result from FOCS 2006 with Belkin and Narayanan on sampling the surface of a convex polytope using heat
flow and the Laplace operator.
It's a great loss.
The School of Computing is looking for three ! count 'em ! THREE ! tenure-track faculty this year, in a broad search with interests in:
algorithms, concurrent software, data bases, data mining, formal verification, large data analysis, machine learning, performance verification, parallel systems, robotics, and theory.
We're looking for interdisciplinary folks with interests in possily more than one of the above, and there are nominally three "areas of interest" that we are looking in: parallel computing, robotics
and big data.
For theory folks, probably the most relevant of the three areas of interest is the big data search, especially if you're interested in data mining/machine learning/statistics and other forms of data
analysis. But if you have strengths in any of the other two areas, then make sure to highlight that.
Email me if you have any questions about this. I'm very excited about growing our large-data program by getting strong algorithms folks. rec
|
{"url":"http://geomblog.blogspot.com/2010_10_01_archive.html","timestamp":"2014-04-20T13:37:08Z","content_type":null,"content_length":"203200","record_id":"<urn:uuid:167c4783-0b21-4dfd-b517-ddc5405a7cd5>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00318-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Pine Lake Science Tutor
Find a Pine Lake Science Tutor
...I look forward to working with you in the future.I am a science teacher who absolutely loves science and math! I have tutored algebra since high school to students on both the high school and
collegiate levels. I definitely tutor students at a rate that they feel comfortable with, and have found great success in tutoring these subjects.
15 Subjects: including organic chemistry, genetics, biology, chemistry
...I had a grade of A+. I have tutored several of my classmates for midterm and final exams. I have also used differential equations in solving a lot of problems during my Masters Degree. I have
taken Linear Algebra class during my undergraduate years with a grade of A.
12 Subjects: including physics, algebra 1, algebra 2, calculus
...While in college, I did multiple internships with my college professors in the areas of Environmental Chemistry, and worked as a Teaching Assistant for Laboratory Biology. I have used
computers my whole life thanks to a family that always believed that technology would become the future of our w...
19 Subjects: including microbiology, biology, ecology, reading
...I currently have a little brother in high school, so I understand all the various stresses there. Moreover, many of the students I have tutored come from high school. Since I'm not too far
away from it, I feel like I can identify with the varying amounts of homework.
34 Subjects: including physiology, precalculus, reading, ACT Science
...I speak fluent Spanish and especially enjoy working with people who speak Spanish as their first language. If you would like to speak better English, improve your SAT score or make better
grades in literature or social studies feel fee to contact me.I am an American History buff and the Civil Wa...
30 Subjects: including anthropology, Spanish, ACT Science, English
|
{"url":"http://www.purplemath.com/pine_lake_ga_science_tutors.php","timestamp":"2014-04-18T08:40:18Z","content_type":null,"content_length":"23873","record_id":"<urn:uuid:90f790c2-ada5-4a2c-89a5-62ffd1615c5d>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00302-ip-10-147-4-33.ec2.internal.warc.gz"}
|
numtaps : int
The desired number of taps in the filter. The number of taps is the number of terms in the filter, or the filter order plus one.
bands : array_like
A monotonic sequence containing the band edges in Hz. All elements must be non-negative and less than half the sampling frequency as given by Hz.
desired : array_like
A sequence half the size of bands containing the desired gain in each of the specified bands.
weight : array_like, optional
A relative weighting to give to each band region. The length of weight has to be half the length of bands.
Hz : scalar, optional
The sampling frequency in Hz. Default is 1.
type : {‘bandpass’, ‘differentiator’, ‘hilbert’}, optional
The type of filter:
‘bandpass’ : flat response in bands. This is the default.
‘differentiator’ : frequency proportional response in bands.
‘hilbert’ : filter with odd symmetry, that is, type III
(for even order) or type IV (for odd order) linear phase filters.
maxiter : int, optional
Maximum number of iterations of the algorithm. Default is 25.
grid_density : int, optional
Grid density. The dense grid used in remez is of size (numtaps + 1) * grid_density. Default is 16.
|
{"url":"http://docs.scipy.org/doc/scipy-0.11.0/reference/generated/scipy.signal.remez.html","timestamp":"2014-04-16T04:24:34Z","content_type":null,"content_length":"16946","record_id":"<urn:uuid:3b8594b3-2a7c-4df4-8848-98f52834d14b>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00068-ip-10-147-4-33.ec2.internal.warc.gz"}
|
On rational monads and free iterative theories
- in Proc. of 2nd ACM SIGPLAN Wksh. on Mechanized Reasoning about Languages with Variable Binding, MERLIN'03 , 2003
"... Neil Ghani Dept. of Math. and Comp. Sci. ..."
, 2003
"... It is well known that, given an endofunctor H on a category C, the initial (A + H−)-algebras (if existing), i.e., the algebras of (wellfounded) H-terms over different variable supplies A, give
rise to a monad with substitution as the extension operation (the free monad induced by the functor H). Mo ..."
Cited by 4 (1 self)
Add to MetaCart
It is well known that, given an endofunctor H on a category C, the initial (A + H−)-algebras (if existing), i.e., the algebras of (wellfounded) H-terms over different variable supplies A, give rise
to a monad with substitution as the extension operation (the free monad induced by the functor H). Moss [17] and Aczel, Adámek, Milius and Velebil [2] have shown that a similar monad, which even
enjoys the additional special property of having iterations for all guarded substitution rules (complete iterativeness), arises from the inverses of the final (A + H−)-coalgebras (if existing), i.e.,
the algebras of non-wellfounded H-terms. We show that, upon an appropriate generalization of the notion of substitution, the same can more generally be said about the initial T ′ (A, −)-algebras
resp. the inverses of the final T ′ (A, −)coalgebras for any endobifunctor T ′ on any category C such that the functors T ′ (−,X) uniformly carry a monad structure.
- Milius (Eds.), Proceedings of the 7th Workshop on Coalgebraic Methods in Computer Science, CMCS’04 (Barcelona, March 2004), Electron. Notes in Theoret. Comput. Sci , 2004
"... This paper shows that the approach of [2,12] for obtaining coinductive solutions of equations on infinite terms is a special case of a more general recent approach of [4] using distributive
laws. ..."
Cited by 3 (2 self)
Add to MetaCart
This paper shows that the approach of [2,12] for obtaining coinductive solutions of equations on infinite terms is a special case of a more general recent approach of [4] using distributive laws.
"... Recently there has been a great deal of interest in higherorder syntax which seeks to extend standard initial algebra semantics to cover languages with variable binding by using functor
categories. The canonical example studied in the literature is that of the untyped λ-calculus which is handled as ..."
Add to MetaCart
Recently there has been a great deal of interest in higherorder syntax which seeks to extend standard initial algebra semantics to cover languages with variable binding by using functor categories.
The canonical example studied in the literature is that of the untyped λ-calculus which is handled as an instance of the general theory of binding algebras, cf. Fiore, Plotkin, Turi [8]. Another
important syntactic construction is that of explicit substitutions. The syntax of a language with explicit substitutions does not form a binding algebra as an explicit substitution may bind an
arbitrary number of variables. Nevertheless we show that the language given by a standard signature Σ and explicit substitutions is naturally modelled as the initial algebra of the endofunctor Id +
FΣ ◦ + ◦ on a functor category. We also comment on the apparent lack of modularity in syntax with variable binding as compared to first-order languages. Categories and Subject Descriptors
"... Replace this file with prentcsmacro.sty for your meeting, ..."
|
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=162224","timestamp":"2014-04-18T00:52:22Z","content_type":null,"content_length":"21138","record_id":"<urn:uuid:2f263f7c-7f45-4070-bbec-d6a5c0e03baa>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00358-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Cyclic Subgroup
January 30th 2013, 06:35 PM #1
Jul 2011
Cyclic Subgroup
I need to prove that H={(1 n 0 1) | n is an element of the integers} is a cyclic subgroup of GL2(reals). H is a matrix with the first row 1 and n and the second row 0 and 1. I appreciate the
Re: Cyclic Subgroup
i suppose you mean:
$H = \left\{\begin{bmatrix}1&n\\0&1 \end{bmatrix} | n \in \mathbb{Z} \right\}$
the way to do this is show that:
$\phi:H \to \mathbb{Z}$ defined by:
$\phi\left(\begin{bmatrix}1&n\\0&1 \end{bmatrix}\right) = n$
is an isomorphism of groups. this means that if:
$A = \begin{bmatrix}1&k\\0&1 \end{bmatrix};\ B = \begin{bmatrix}1&m\\0&1 \end{bmatrix}$
you need to show that:
$\phi(AB) = \phi(A) + \phi(B)$
and that $\phi$ is bijective.
January 30th 2013, 07:41 PM #2
MHF Contributor
Mar 2011
|
{"url":"http://mathhelpforum.com/advanced-algebra/212319-cyclic-subgroup.html","timestamp":"2014-04-20T17:10:40Z","content_type":null,"content_length":"33481","record_id":"<urn:uuid:922c1c0f-7de1-4060-b0f0-9e38704d22b3>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00046-ip-10-147-4-33.ec2.internal.warc.gz"}
|
(n!(j)+1,n!(j)+1)=1 for 1<=i<j<=n
Please i need this bt tomorrow morning, and I have been staring at it for 5 hours.
1. Please state the problem in the text of the thread, not just in the title. 2. Please state the problem correctly. 3. I'm guessing that (n!(j)+1,n!(j)+1)=1 for 1<=i<j<=n means gcd(i×n! + 1, j×n! +
1) = 1. 4. If d divides i×n! + 1 and j×n! + 1, then d also divides their difference, which is (j–i)×n!. But all the prime factors of the product (j–i)×n! must lie between 1 and n. It follows that any
prime factor of d must lie between 1 and n. So d cannot divide i×n! + 1. Therefore d has no prime factors, so d must be 1.
|
{"url":"http://mathhelpforum.com/number-theory/103854-n-j-1-n-j-1-1-1-i-j-n.html","timestamp":"2014-04-16T16:50:08Z","content_type":null,"content_length":"32397","record_id":"<urn:uuid:1873950a-1b5e-48b4-8cba-008fa335cfa4>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00117-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Math Accuplacer Test
Math Accuplacer Test
Math Accuplacer Test- a Brief Overview
Basic knowledge of Mathematics is essential, in order to successfully complete your college career. Hence, study for the math Accuplacer test seriously, so that your admission application receives
its due consideration. Now you must be wondering, 'what is the math Accuplacer test?' Let us answer this question together...
Outline of the Accuplacer Mathematics Examination
The Mathematics segment of the Accuplacer will evaluate your high-school knowledge of the subject. Thus, while preparing for the math Accuplacer test, ensure that you revise all that you have studied
through grades 7 to 12. Other than that, some other features of the test are as follows:
• Question paper format and syllabus: The math Accuplacer test comprises 3 parts, which are as follows:
□ Arithmetic: In this sub-section you will have to solve 17multiple-choice questions. These problems will cover topics like addition, subtraction, multiplication and division of whole numbers,
fractions as well as decimals. This sub-section will also evaluate your knowledge of ratio, proportion, percentages and simple problems of Geometry.
□ Elementary Algebra: In this sub-segment of math Accuplacer you will have to solve 12 problems. These problems will evaluate your knowledge of basic Algebraic formulas and how well you can
apply them. Some of the questions will also cover topics like, equations (linear, simultaneous and quadratic), in-equalities and factorization.
□ College-level-math: This sub-section of the test consists of 20 questions. These questions will deal with topics under advanced Algebra. Hence, you will have to solve questions involving
complex numbers, matrices, permutations, combinations and pre-Calculus. You will also have to solve questions on Co-ordinate Geometry and Trigonometry in this part of the test.
• Type of questions: In all the three sub-sections of the math Accuplacer test, you will have to solve simple Mathematical questions as well as word problems. Although, the topics covered under
math Accuplacer are a part of your high-school curriculum, yet the questions will be quite complicated and detailed. So it is a good idea to practice and revise the more difficult problems of
high-school math.
• Scoring system: There are no pass/fail marks for the entire Accuplacer examination and the same rule applies to the Mathematics section as well. Moreover, because there is no negative marking for
a wrong answer, you should try to answer all the questions in the math Accuplacer test. The CollegeBoard (organizers of the Accuplacer test) awards a score for the Mathematics segment and a
percentile rank corresponding to this score.
• Duration of the test: The Accuplacer is not a timed examination and hence, there is no fixed time limit for the math Accuplacer. However, you should not take more than 2 minutes to solve each
question if you want to complete the examination within, say 90minutes.
The above-mentioned link will help you gain an insight into the nature of the Mathematics segment of the Accuplacer. You must now be wondering about preparing for the test. The following section will
help you with it.
What More do I Need to Know About Accuplacer Math?
Preparing well for the math Accuplacer test is essential, so that you can take the examination competently. The first step towards it is to have a thorough knowledge of exactly which topics will be
covered in the test. If you want to know about the topics then click on the following link: http://accuplacer.collegeboard.org/students/accuplacer-tests
The next thing that you need to do is practice, because that alone will ensure that you are prepared to face the challenges of the examination. Moreover, practice will get you acquainted with a
multiple-choice computer adaptive examination, which is the very nature of the entire Accuplacer test. The following link will help you practice for the test: http://media.collegeboard.com/
The afore-mentioned links will help you gain further insight into the math Accuplacer question paper. Moreover, these are links to the official website and hence, you can be assured of the
authenticity of the question. Solve these sample questions and you will see that you are able to take the test in a confident manner.
Terms and Conditions
Information published in TestPrepPractice.net is provided for informational and educational purpose alone for deserving students, researchers and academicians. Though our volunteers take great amount
of pain and spend significant time in validating the veracity of the information or study material presented here, we cannot be held liable for any incidental mistakes. All rights reserved. No
information or study material in this web site can be reproduced or transmitted in any form, without our prior consent. However the study materials and web pages can be linked from your web site or
web page for
• Research
• Education
• Academic purposes
No permission is required to link any of the web page with educational information available in this web site from your web site or web page
|
{"url":"http://www.testpreppractice.net/ACCUPLACER/math-accuplacer-test.html","timestamp":"2014-04-20T03:10:44Z","content_type":null,"content_length":"19787","record_id":"<urn:uuid:b5b0c883-75af-490d-b069-810c0f92f5fb>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00364-ip-10-147-4-33.ec2.internal.warc.gz"}
|
5. Classical Linear Theory
Program PANEL
In this section we illustrate the results of the procedure outlined above. Program PANEL is an exact implementation of the analysis described above, and is essentially the program given by
Moran.Error: Reference source not found Other panel method programs are available in the textbooks by Cebeci,Error: Reference source not found Houghton and Carpenter,^^13 and Kuethe and Chow.^^14 Two
other similar programs are available. A MATLAB program, Pablo, written at KTH in Sweden is available on the web,^^15 as well as the program by Professor Drela at MIT, XFOIL.^^16 Moran’s program
includes a subroutine to generate the ordinates for the NACA 4-digit and 5-digit airfoils (see Appendix A for a description of these airfoil sections). The main drawback is the requirement for a
trailing edge thickness that is exactly zero. To accommodate this restriction, the ordinates generated internally have been altered slightly from the official ordinates. The extension of the program
to handle arbitrary airfoils is an exercise. The freestream velocity in PANEL is assumed to be unity, since the inviscid solution in coefficient form is independent of scale.
PANEL’s node points are distributed employing the widely used cosine spacing function. The equation for this spacing is given by defining the points on the thickness distribution to be placed at:
These locations are then altered when camber is added (see Eqns. A-1 and A-2 in App. A). This approach is used to provide a smoothly varying distribution of panel node points that concentrate points
around the leading and trailing edges.
An example of the accuracy of program PANEL is illustrated in Fig. 5.10, where the results from PANEL for the NACA 4412 airfoil are compared with results obtained from an exact conformal mapping of
the airfoil (Conformal mapping methods were described in Chapter 4. Conformal transformations can also be used to generate meshes of points for use in CFD methods). The agreement is nearly perfect.
Numerical studies need to be conducted to determine how many panels are required to obtain accurate results. Both forces and moments and pressure distributions should be examined. You can select the
number of panels used to represent the surface. How many should you use? Most computational programs provide the user with freedom to decide how detailed (expensive - in dollars or time) the
calculations should be. One of the first things the user should do is evaluate how detailed the calculation should be to obtain the level of accuracy desired. In the PANEL code your control is
through the number of panels used.
Figure 5.10. Comparison of results from program PANEL with an essentially exact
mapping solution for the NACA 4412 airfoil at 6° angle-of-attack.
We check the sensitivity of the solution to the number of panels by comparing force and moment results and pressure distributions with increasing numbers of panels. This is done using two different
methods. Figures 5.11 and 5.12 present the change of drag and lift, respectively, by varying the number of panels. For PANEL, which uses an inviscid incompressible flowfield model, the drag should be
exactly zero. The drag coefficient found by integrating the pressures over the airfoil is an indication of the error in the numerical scheme. The drag obtained using a surface (or “nearfield”)
pressure integration is a numerically sensitive calculation, and is a strict test of the method. The figures show the drag going to zero, and the lift becoming constant as the number of panels
increase. In this style of presentation it is hard to see exactly how quickly the solution is converging to a fixed value.
The results given in Figs. 5.11 and 5.12 indicate that 60-80 panels (30 upper, 30 lower for example) should be enough panels. Note that the lift coefficient is presented in an extremely expanded
scale, and the drag coefficient presented in Fig. 5.13 also uses an expanded scale. Because drag is typically a small number, it is frequently described in drag counts, where 1 drag count is a C[D]
of 0.0001.
To estimate the limit for an infinitely large number of panels the results can be plotted as a function of the reciprocal of the number of panels. Thus the limit result occurs as 1/n goes to zero.
Figures 5.13, 5.14, and 5.15 present the results in this manner for the case given above, and with the pitching moment included for examination in the analysis.
Figure 5.11. Change of drag with number of panels.
Figure 5.12. Change of lift with number of panels.
Figure 5.13. Change of drag with the inverse of the number of panels.
The results given in Figures 5.13 through 5.15 show that the program PANEL produces results that are relatively insensitive to the number of panels once fifty or sixty panels are used, and by
extrapolating to 1/n = 0 an estimate of the limiting value can be obtained.
Figure 5.14. Change of lift with the inverse of the number of panels.
Figure 5.15. Change of pitching moment with the inverse of the number of panels.
In addition to forces and moments, the sensitivity of the pressure distributions to changes in panel density must also be investigated: pressure distributions are shown in Figs. 5.16 and 5.17. The 20
and 60 panel results are given in Fig. 5.16. In this case it appears that the pressure distribution is well defined with 60 panels. This is confirmed in Figure 5-17, which demonstrates that it is
almost impossible to identify the differences between the 60 and 100 panel cases. This type of study should (in fact must) be conducted when using computational aerodynamics methods.
Figure 5.16. Pressure distribution from progrm PANEL, comparing results using 20 and 60 panels.
Figure 5.17. Pressure distribution from program PANEL, comparing results using 60 and 100 panels.
Having examined the convergence of the mathematical solution, we investigate the agreement with experimental data. Figure 5.18 compares the lift coefficients from the inviscid solutions obtained from
PANEL with experimental data from Abbott and von Doenhof.^^17 Agreement is good at low angles of attack, where the flow is fully attached. The agreement deteriorates as the angle of attack increases,
and viscous effects start to show up as a reduction in lift with increasing angle of attack, until, finally, the airfoil stalls. The inviscid solutions from PANEL cannot capture this part of the flow
physics. The different stall character between the two airfoils arises due to different flow separation locations on the different airfoils. The cambered airfoil separates at the trailing edge first.
Stall occurs gradually as the separation point moves forward on the airfoil with increasing incidence. The uncambered airfoil stalls due to a sudden separation at the leading edge. An examination of
the difference in pressure distributions can be made to see why this might be the case.
Figure 5.18. Comparison of PANEL lift predictions with experimental data (Ref. Error: Reference source not found).
The pitching moment characteristics are also important. Figure 5.19 provides a comparison of the PANEL pitching moment predictions (taken about the quarter chord point) with experimental data. In
this case the calculations indicate that the computed location of the aerodynamic center, C[m0], and a trend with angle of attack due to viscous effects that is exactly opposite the inviscid
prediction. This occurs because the separation is moving forward from the trailing edge of the airfoil and the load over the aft portion of the airfoil does not increase as fast as the forward
loading. This leads to a nose up pitching moment until eventually the separation causes the airfoil to stall, resulting in a nose down pitching moment.
Figure 5.19 Comparison of PANEL moment predictions with experimental data, (Ref. Error: Reference source not found).
We do not compare the drag prediction from PANEL with experimental data. For two-dimensional incompressible inviscid flow the drag is theoretically zero. In the actual case, drag arises from skin
friction effects, further additional form drag due to the small change of pressure on the body due to the boundary layer (which primarily prevents full pressure recovery at the trailing edge), and
drag due to increasing viscous effects with increasing angle of attack. A well designed airfoil will have a drag value very nearly equal to the skin friction and nearly invariant with incidence until
the maximum lift coefficient is approached.
In addition to the force and moment comparisons, we need to compare the pressure distributions predicted with PANEL to experimental data. Figure 5.20 provides one example. The NACA 4412 experimental
pressure distribution is compared with PANEL predictions. In general the agreement is very good. The primary area of disagreement is at the trailing edge. Here viscous effects act to prevent the
recovery of the experimental pressure to the levels predicted by the inviscid solution. The disagreement on the lower surface is a little surprising, and suggests that the angle of attack from the
experiment may not be precise.
Figure 5.20. Comparison of pressure distribution from PANEL with data,^^18
Panel methods often have trouble with accuracy at the trailing edge of airfoils with cusped trailing edges, when the included angle at the trailing edge is zero. Figure 5.21 shows the predictions of
program PANEL compared with an exact mapping solution (a FLO36^^19 run at low Mach number) for two cases. Figure 5.21a is for a case with a small trailing edge angle: the NACA 65[1]-012, while Fig.
5.21b is for the more standard 6A version of the airfoil. The corresponding airfoil shapes are shown Fig. 5.22. The “loop” in the pressure distribution in Fig. 5.21a is an indication of a problem
with the method.
Figure 5.21. PANEL Performance near the airfoil trailing edge
Figure 5.22. Comparison at the trailing edge of 6- and 6A-series airfoil geometries.
This case demonstrates a situation where this particular panel method is not accurate. Is this a practical consideration? Yes and no. The 6-series airfoils were theoretically derived by specifying a
pressure distribution and determining the required shape. The small trailing edge angles (less than half those of the 4-digit series), cusped shape, and the unobtainable zero thickness specified at
the trailing edge resulted in objections from the aircraft industry. These airfoils were very difficult to manufacture and use on operational aircraft. Subsequently, the 6A-series airfoils were
introduced to remedy the problem. These airfoils had larger trailing edge angles (approximately the same as the 4-digit series), and were made up of nearly straight (or flat) surfaces over the last
20% of the airfoil. Most applications of 6-series airfoils today actually use the modified 6A-series thickness distribution. This is an area where the user should check the performance of a
particular panel method.
5.2.3 Geometry and Design
Effects of Shape Changes on Pressure Distributions: So far we have been discussing aerodynamics from an analysis point of view. To develop an understanding of the typical effects of adding local
modifications to the airfoil surface, Exercise 5 provides a framework for the reader to carry out an investigation to help understand what happens when changes are made to the airfoil shape. It is
also worthwhile to investigate the very powerful effects that small deflections of the trailing edge can produce. This reveals the power of the Kutta condition, and alerts the aerodynamicist to the
basis for the importance of viscous effects at the trailing edge.
Making ad hoc changes to an airfoil shape is extremely educational when implemented in an interactive computer program, where the aerodynamicist can easily make shape changes and see the effect on
the pressure distribution immediately. An outstanding code that does this has been created by Ilan Kroo and is known as PANDA.^ ^20 Strictly speaking, PANDA is not a panel method, but it is an
accurate subsonic airfoil prediction method.
Shape for a specified pressure distribution: There is another way that aerodynamicists view the design problem. Although the local modification approach described above is useful to make minor
changes in airfoil pressure distributions, often the aerodynamic designer wants to find the geometric shape corresponding to a prescribed pressure distribution from scratch. This problem is known as
the inverse problem. This problem is more difficult than the analysis problem. It is possible to prescribe a pressure distribution for which no geometry exists. Even if the geometry exists, it may
not be acceptable from a structural standpoint. For two-dimensional incompressible flow it is possible to obtain conditions on the surface velocity distribution that ensure that a closed airfoil
shape exists. Excellent discussions of this problem have been given by Volpe^^21 and Sloof.^^22 A two-dimensional inverse panel method has been developed by Bristow.^^23 XFOIL also has an inverse
design option.Error: Reference source not found Numerical optimization can also be used to find the shape corresponding to a prescribed pressure distribution.^^24
5.2.4 Issues in the Problem formulation for 3D potential flow over aircraft
The extension of panel methods to three dimensions leads to fundamental questions regarding the proper specification of the potential flow problem for flow over an aircraft. The main problem is how
to model the wake coming from the fuselage aft of the wing and wing tips. The issue is how to specify the wake behind surfaces without sharp edges. The Kutta condition applies to distinct edges, and
is not available if there are not well defined trailing edges.
In some methods wakes are handled automatically. In other methods the wakes must be specified by the user. This provides complete control over the simulation, but means that the user must understand
precisely what the problem statement should be. The details of the wake specification often cause users difficulties in making panel models. Figure 5.23, from Erickson,Error: Reference source not
found shows an example of a panel model including the details of the wakes. For high lift cases and for cases where wakes from one surface pass near another, wake deflection must be computed as part
of the solution. Figure 5.24 comes from a one week “short” course that was given to prospective users of an advanced panel method, PAN AIR.^^25 Each surface has to have a wake, and the wakes need to
be connected, as illustrated in Fig.5.24. The modeling can be complicated. Special attention to wake layout must be made by the user. To ensure that the problem is properly specified and to examine
the entire flowfield in detail a complete graphics capability is required.
Hess^^26 provides an excellent discussion of these problems. Many different approaches have been used. Carmichael and Erickson^^27 also provide good insight into the requirements for a proper panel
method formulation. Similarly, references Error: Reference source not found and Error: Reference source not found provide good overviews.
As illustrated above, a practical aspect of using panel methods is the need to pay attention to details (actually true for all engineering work). This includes making sure that the outward surface
normal is oriented in the proper direction and that all surfaces are properly enclosed. Aerodynamics panel methods generally use quadrilateral panels to define the surface. Since three points
determine a plane, the quadrilateral may not necessarily define a consistent flat surface. In practice, the methods actually divide panels into triangular elements to determine an estimate of the
outward normal. It is also important that edges fit so that there is no leakage in the panel model representation of the surface. Nathman has recently extended a panel method to have panels include
Figure 5.23. Illustration of the panel model of an F-16XL,Error: Reference source not found including the wakes usually not shown in figures of panel models, but critical to the model.
Figure 5.24. Details of a panel model showing the wake model details and that the wakes are connected. (from a viewgraph presented at a PAN AIR user’s short course, Ref. Error: Reference source not
There is one other significant difference between two-dimensional and three-dimensional panel methods. Induced drag occurs even in inviscid, irrotational flow, and this component of drag can be
computed by a panel model. However, its calculation by integration of pressures over the surface requires extreme accuracy, as we saw above for the two-dimensional example. The use of a farfield
momentum approach is much more accurate. For drag this is known as a Trefftz plane analysis, see Katz and Plotkin.Error: Reference source not found
5.2.5 Example applications of panel methods
Many examples of panel methods have been presented in the literature. Figure 5.25 shows an example of the use of a panel model to evaluate the effect of the space shuttle on the Boeing 747. This is a
classic example. Other uses include the simulation of wind tunnel walls, support interference, and ground effects. Panel methods are also used in ocean engineering. Recent America’s Cup designs have
been dependent on panel methods for hull and keel design. The effects of the free surface can be treated using panel methods.
Figure 5.25. The space shuttle mounted on a Boeing 747.
(from the cover of an AIAA Short Course with the title Applied Computational Aerodynamics)
One example has been selected to present in some detail. It is an excellent illustration of how a panel method is used in design, and provides a realistic example of the typical agreement that can be
expected between a panel method and experimental data in a demanding real world application. The work was done by Ed Tinoco and co-workers at Boeing.^^29 Figure 5.26 shows the modifications required
to modify a Boeing 737-200 to the 737-300 configuration. The panel method was used to investigate the design of a new high lift system. They used PAN AIR, which is a Boeing developed advanced panel
method.^^30 Figure 5.27 shows the panel method representation of the airplane.
|
{"url":"http://lib.znate.ru/docs/index-221235.html?page=2","timestamp":"2014-04-20T10:48:54Z","content_type":null,"content_length":"33124","record_id":"<urn:uuid:d1d38c9d-d44d-47d8-8b33-86debce38f80>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00154-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Atlanta Ndc, GA ACT Tutor
Find an Atlanta Ndc, GA ACT Tutor
...I continually put in the extra effort required to be proficient in these areas by reading appropriate textbooks and doing some of the exercise problems in those books. More often than not, I
do not need to reference a textbook to tutor. Hence, little time is wasted and more is accomplished in a shorter time frame.
14 Subjects: including ACT Math, calculus, statistics, Java
...I have classroom experience teaching Algebra and regularly held tutorials for my own students. I know what's expected from each student and I will create a plan of action to help you achieve
your personal goals to better understand mathematics. I am Georgia certified in Mathematics (grades 6-12) with my Masters in Mathematics Education from Georgia State University.
7 Subjects: including ACT Math, geometry, algebra 1, algebra 2
...Student scores have improved upon their re-exam. As a former Teacher of the Year (2008-2009), I believe in teaching students in a manner that they learn. Since not everyone is a auditory
learner, I use visual and hands-on learning to help the lesson stick.
21 Subjects: including ACT Math, calculus, geometry, algebra 1
I have a wide array of experience working with and teaching kids grades K-10. I have tutored students in Spanish, Biology, and Mathematics in varying households. I have instructed religious
school for 5 years with different age groups, so I am accustomed to working in multiple settings with a lot of material and different student skill.
16 Subjects: including ACT Math, Spanish, chemistry, calculus
Hi,My name is Alex. I graduated from Georgia Tech in May 2011, and am currently tutoring a variety of math topics. I have experience in the following at the high school and college level:- pre
algebra- algebra- trigonometry- geometry- pre calculus- calculusIn high school, I took and excelled at all of the listed classes and received a 5 on the AB/BC Advanced Placement Calculus exams.
16 Subjects: including ACT Math, calculus, geometry, algebra 2
Related Atlanta Ndc, GA Tutors
Atlanta Ndc, GA Accounting Tutors
Atlanta Ndc, GA ACT Tutors
Atlanta Ndc, GA Algebra Tutors
Atlanta Ndc, GA Algebra 2 Tutors
Atlanta Ndc, GA Calculus Tutors
Atlanta Ndc, GA Geometry Tutors
Atlanta Ndc, GA Math Tutors
Atlanta Ndc, GA Prealgebra Tutors
Atlanta Ndc, GA Precalculus Tutors
Atlanta Ndc, GA SAT Tutors
Atlanta Ndc, GA SAT Math Tutors
Atlanta Ndc, GA Science Tutors
Atlanta Ndc, GA Statistics Tutors
Atlanta Ndc, GA Trigonometry Tutors
|
{"url":"http://www.purplemath.com/Atlanta_Ndc_GA_ACT_tutors.php","timestamp":"2014-04-17T13:44:49Z","content_type":null,"content_length":"24064","record_id":"<urn:uuid:cefb12fc-2d36-45ad-8f38-25ce161852da>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00651-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Crystal Lake, IL Algebra 1 Tutor
Find a Crystal Lake, IL Algebra 1 Tutor
...I have incredible patience when it comes to teaching or tutoring math. I will do my best to explain it in as many ways as possible to help you understand what it is you are trying to learn and
master. I simply adore teaching math to all ages.
10 Subjects: including algebra 1, geometry, algebra 2, statistics
...I teach Elementary math, Pre-Algebra, Algebra 1, Algebra 2, Geometry, Trigonometry, PreCalculus, Calculus. If you are interested in taking home tutoring classes for your kids, and improving
their grades, do not hesitate to contact me. Qualification: Masters in Computer Applications My Approac...
8 Subjects: including algebra 1, geometry, algebra 2, ACT Math
...I hope that I can be of service to anyone who requires aid with these disciplines. They offer life-long skills by offering problem-solving techniques and numerical tools. I have background in
peer-tutoring when I was in school, helping in both Physics and Math.This was my major in college.
16 Subjects: including algebra 1, chemistry, calculus, physics
...I teach the students about consonant blends, digraphs, and dipthongs as a part of my program. As the student develops an understanding of letter sounds, decoding of words is added to the
program. Real words as well as nonsense words are used.
26 Subjects: including algebra 1, English, reading, geometry
...Everyday millions of children feel defeated by math. Math anxiety negatively impacts their sleep, their school experience, their social integration, and their self-esteem. They may consider
themselves "dumb" or "less than" others.
10 Subjects: including algebra 1, geometry, SAT math, algebra 2
|
{"url":"http://www.purplemath.com/crystal_lake_il_algebra_1_tutors.php","timestamp":"2014-04-18T13:52:57Z","content_type":null,"content_length":"24258","record_id":"<urn:uuid:79c92a81-5912-4ef4-8b15-3f6b77d46fe9>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00433-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Solve this problem: Diophantine Solutions
How many ordered pairs of non-negative integers \( (a,b) \) are there such that \( 2a + 3b = 100 \)?
3 tries left ·
What's troubling you about this problem?
Before you can request a clarification or dispute an answer, you must verify that belongs to you by clicking the link in the email we just sent you.
If is not your address, you can set another primary email to your account on the email settings page.
|
{"url":"https://brilliant.org/i/0Tv5F6/?no_js=true","timestamp":"2014-04-21T09:37:21Z","content_type":null,"content_length":"39628","record_id":"<urn:uuid:ab2f6be6-ed10-4b35-aefd-22cc8caea4b1>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00535-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Matt Ringh
Pi is an extremely interesting number, and the day to celebrate it is March 14th. The continuing pop culture references to Pi are evidence of the love people have for Math, and Math’s most widely
known constant. Author of the Life of Pi, Yann Martel, said this of his choice of the name “Pi” … Read more
Number Bonds are a crucial component of early Singapore Math programs. Number Bonds consist of a minimum of 3 circles connected with lines, and they provide a pictorial representation of
part-part-whole relationships.
A child who knows this bond should be able to immediately fill in any one of these three numbers if it was missing, … Read more
If you went to school in the 1980′s, this post may remind you of a forgotten favorite. A staple of Computer Education at the time, the Logo programming language is a simple, and still widely
available, tool for teaching students about code. Known for its turtle graphics, Logo has a limited set of instructions (i.e. primitives), … Read more
Inspired by Brent Yorgey Factorization Diagrams, Stephen Von Worley wrote an algorithm which creates visuals on the fly for all natural numbers. The result is a mesmerizing dance of primes,
composites, and their constituents. Watch primes (represented as circular arrays) as they become rarer in the higher numbers. Even rarer, watch for “Twin Primes” – primes that differ by … Read more
Pi is an extremely interesting number, and the day to celebrate it is 3/14. The continuing pop culture references to Pi are evidence of the love people have for Math, and Math’s most widely known
constant. Author of the Life of Pi, Yann Martel, said this of his choice of the name “Pi” for … Read more
Infuse Learning is an “any device” student response system which is 100% free, and among the best resources we’ve seen for 1:1 classrooms.
If a device has access to the internet, then you can push questions and content out to that device, and receive completed work back. We especially like that this supports “drawing,” on iOS, Android,
and … Read more
The sample items found at Smarter Balanced are designed to prepare students for the next-generation Common Core assessment. These sample items and performance tasks provide an early look into the
depth of understanding of the CCSS that will be measured by the Smarter Balanced assessment system. This online assessment contains many click-and-drag activities which work … Read more
The video below shows how SMART Notebook Math Tools can be paired with FluidMath Software to teach a math lesson. Two programs filled with great features, teaming to solve and teach systems of
1. Use Fluid Math to teach acceleration due to gravity. The video below shows how FluidMath’s Animation feature allows you to write a function (such as a[y]= – 9.8t^2) that describes an object’s
motion, and then run a simulation of the movement along a path.
2. Use Algodoo to simulate the ascent … Read more
Posted in Classroom Tools, Fluidmath, Lesson Plans, Math, Physics, Science, STEM, Video Tagged Algodoo, Bobak Ferdowski, Felix Baumgartner, FluidMath, NASA, PhET, Physics, YouTube 1 Comment |
December 16, 2013: Using Newsela to Differentiate the Use of Informational Text
September 18, 2013: South Huntington School District Receives First NAO Robot on Long Island
August 12, 2013: NAO•U: The Ultimate Learning Experience
Recent Comments
• Adliya Mystafa on Choose Your Own Adventure with the NAO Robot
|
{"url":"http://www.teq.com/blog/author/matt/","timestamp":"2014-04-20T18:22:46Z","content_type":null,"content_length":"105457","record_id":"<urn:uuid:4f381cbe-1546-4b02-8f60-d48257878d63>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00128-ip-10-147-4-33.ec2.internal.warc.gz"}
|
The SuRE programming framework
- In International Logic Programming Symposium , 1995
"... While clause resolution is central to logic programming, practical efforts to optimize resolution have largely concerned efficient clause indexing. One recent exception is Unification Factoring
[5], which optimizes backtracking through clause heads of a predicate. Now we consider the problem of opti ..."
Cited by 8 (5 self)
Add to MetaCart
While clause resolution is central to logic programming, practical efforts to optimize resolution have largely concerned efficient clause indexing. One recent exception is Unification Factoring [5],
which optimizes backtracking through clause heads of a predicate. Now we consider the problem of optimizing clause resolution in a more general setting than Unification Factoring. One fundamental
change is to use mode information to distinguish between elementary matching and unification operations that arise in unifying a term. In addition, we expand the traditional notion of clause
resolution to allow for tabled predicates. The result is that our optimization technique becomes relevant for both top-down and bottom-up evaluations. Our technique uses Clause Resolution Automata
(CRA), to model clause resolution in this expanded setting. The CRAs enable us to succinctly capture the cost of resolution. Furthermore, they can be readily implemented as a source transformation.
After formally deve...
- Journal of Logic Programming , 1994
"... The use of sets in declarative programming has been advocated by several authors in the literature. A representation often chosen for finite sets is that of scons, parallel to the list
constructor cons. The logical theory for such constructors is usually tacitly assumed to be some formal system of c ..."
Cited by 7 (1 self)
Add to MetaCart
The use of sets in declarative programming has been advocated by several authors in the literature. A representation often chosen for finite sets is that of scons, parallel to the list constructor
cons. The logical theory for such constructors is usually tacitly assumed to be some formal system of classical set theory. However, classical set theory is formulated for a general setting, dealing
with both finite and infinite sets, and not making any assumptions about particular set constructors. In giving logical-consequence semantics for programs with finite sets, it is important to know
exactly what connection exists between sets and set constructors. The main contribution of this paper lies in establishing these connections rigorously. We give a formal system, called SetAx,
designed around the scons constructor. We distinguish between two kinds of set constructors, scons(x; y) and dscons(x; y), where both represent fxg [ y, but x 2 y is possible in the former, while x
62 y holds in the latter. Both constructors find natural uses in specifying sets in logic programs. The design of SetAx is guided by our choice of scons as a primitive symbol of our theory rather
than as a defined one, and by the need to deduce nonmembership relations between terms, to enable the use of dscons. After giving the axioms SetAx, we justify it as a suitable theory for finite sets
in logic programming with the aid of the classical theory and unification, and formulate Herbrand structure within it. Together, these provide a rigorous foundation for the set constructors in the
context of logical semantics. KEYWORDS: set constructors, finite sets, Zermelo-Fraenkel set theory, freeness axioms, set unification, Herbrand structure, logical semantics.
, 1994
"... Machine (WAM) to implement the paradigm and that these changes blend well with the overall machinery of the WAM. A central feature in the implementation of subset-logic programs is that of a
monotonic memo-table, i.e., a memo-table whose entries can monotonically grow or shrink in an appropriate ..."
Cited by 6 (0 self)
Add to MetaCart
Machine (WAM) to implement the paradigm and that these changes blend well with the overall machinery of the WAM. A central feature in the implementation of subset-logic programs is that of a
monotonic memo-table, i.e., a memo-table whose entries can monotonically grow or shrink in an appropriate partial order. We present in stages the paradigm of subset-logic progams, showing the effect
of each feature on the implementation. The implementation has been completed, and we present performance figures to show the efficiency and costs of memoization. Our conclusion is that the monotonic
memo-tables are a practical tool for implementing a set-oriented logic programming language. We also compare this system with other closely related systems, especially XSB and CORAL. Keywords: subset
and relational program clauses, sets, set matching and unification, memo tables, monotonic aggregation, Warren Abstract Machine, run-time structures, performance analysis / Address correspondence
"... Subset-logic programs are built up of three kinds of program clauses: subset, equational, and relational clauses. Using these clauses, we can program solutions to a broad range of problems of
interest in logic programming and deductive databases. In an earlier paper [Jay92], we discussed the impleme ..."
Cited by 4 (0 self)
Add to MetaCart
Subset-logic programs are built up of three kinds of program clauses: subset, equational, and relational clauses. Using these clauses, we can program solutions to a broad range of problems of
interest in logic programming and deductive databases. In an earlier paper [Jay92], we discussed the implementation of subset and equational program clauses. This paper substantially extends that
work, and focuses on the more expressive paradigm of subset and relational clauses. This paradigm supports setof operations, transitive closures, monotonic aggregation as well as incremental and lazy
eager enumeration of sets. Although the subset-logic paradigm differs substantially from that of Prolog, we show that few additional changes are needed to the WAM [War83] to implement the paradigm
and that these changes blend well with the overall machinery of the WAM. A central feature in the implementation of subset-logic programs is that of a "monotonic memo-table," i.e., a memo-table who
entries can monot...
- Journal of Programming Languages , 1997
"... Set-based languages have emerged as a powerful means for expressing not only programs but also requirements, test cases and so on. However, a uniform compilation schema for sets has not yet been
completely developed. The present paper tries to overcome this lack using a set-based logic language, SL ..."
Cited by 2 (0 self)
Add to MetaCart
Set-based languages have emerged as a powerful means for expressing not only programs but also requirements, test cases and so on. However, a uniform compilation schema for sets has not yet been
completely developed. The present paper tries to overcome this lack using a set-based logic language, SL (set language), as target. The approach is based on an imperative abstract machine, the SAM
(set abstract machine). The translation from SL to SAL (SAM assembly language) is described and all the possible optimizations, both at source code level and at assembly code level, are detailed. The
potentials for identifying parallel flows of computations are analysed. Several examples of compilations are presented and discussed.
, 1993
"... This paper shows the use of partial-order assertions and lattice domains for logic programming. We illustrate the paradigm using a variety of examples, ranging from program analysis to deductive
databases. These applications are characterized by a need to solve circular constraints and perform aggre ..."
Cited by 1 (1 self)
Add to MetaCart
This paper shows the use of partial-order assertions and lattice domains for logic programming. We illustrate the paradigm using a variety of examples, ranging from program analysis to deductive
databases. These applications are characterized by a need to solve circular constraints and perform aggregate operations. We show in this paper that defining functions with subset assertions and,
more generally, partial-order assertions renders clear formulations to problems involving aggregate operations (first-order and inductive) and recursion. Indeed, as pointed out by Van Gelder [V92],
for many problems in which the use of aggregates has been proposed, the concept of subset is what is really necessary. We provide model-theoretic and operational semantics, and prove the correctness
of the latter. Our proposed operational semantics employs a mixed strategy: top-down with memoization and bottom-up fixed-point iteration. Keywords: Partial Orders, Lattices, Monotonic Operations,
Aggregation, S...
|
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=1723214","timestamp":"2014-04-18T17:58:47Z","content_type":null,"content_length":"30256","record_id":"<urn:uuid:73455d74-202f-4781-b452-ac1ca42d60b9>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00143-ip-10-147-4-33.ec2.internal.warc.gz"}
|
User Jiang
bio website
location Buffalo, NY
visits member for 3 years, 7 months
seen Apr 18 at 2:59
stats profile views 620
15 asked find a special group $G$
3 accepted Restriction on the coefficients for an operator in the free group factor $ L(\mathbb{F}_2) $
Dec Find a lower bound for a pre-invariant $Fol(L(F_m), X_m)$
24 revised Fix typo
Dec Find a lower bound for a pre-invariant $Fol(L(F_m), X_m)$
24 revised Fix typo
20 awarded Disciplined
3 accepted finite index, self-normalizing subgroup of $F_2$
Dec finite index, self-normalizing subgroup of $F_2$
3 comment Thanks, I did not realize this simple observation.
Dec finite index, self-normalizing subgroup of $F_2$
3 revised added 25 characters in body
3 asked finite index, self-normalizing subgroup of $F_2$
27 accepted noncommutative polynomials equality
27 accepted $Aut(\mathbb{Z}G)=?$ for $G=\mathbb{Z}^2\rtimes_n\mathbb{Z}$
16 answered noncommutative polynomials equality
Nov $Aut(\mathbb{Z}G)=?$ for $G=\mathbb{Z}^2\rtimes_n\mathbb{Z}$
13 comment But then, it should be clear the only units of this quotient ring consist of $ \pm 1, \pm x^{\pm 1}, \pm y^{\pm 1}$ and their products.
Nov $Aut(\mathbb{Z}G)=?$ for $G=\mathbb{Z}^2\rtimes_n\mathbb{Z}$
13 comment You are right, the quotient ring is the two variable Laurent polynomial ring, which I misunderstood to be the polynomial ring.
Nov $Aut(\mathbb{Z}G)=?$ for $G=\mathbb{Z}^2\rtimes_n\mathbb{Z}$
13 comment there seems some tricky points concerning lifting automorphism from the quotient algebra to the original one, since $x\to x+y, y\to y$ gives us an automorphism of the quotient algebra $\
mathbb{Z}G_2/I$, but we can check that $\alpha(x)=x+y, \alpha(y)=y, \alpha(z)=z$ is not an automorphism of $\mathbb{Z}G_2$ since it does not preserve the skew relation $yx=xyz^2$. So, in
general, we have to determine two noncommutative polynomial $p, q$, s.t., $\alpha(x)=x+y+(z-1)p, \alpha(y)=y+(z-1)q, \alpha(z)=z$ and it preserves the above skew relation.
13 asked $Aut(\mathbb{Z}G)=?$ for $G=\mathbb{Z}^2\rtimes_n\mathbb{Z}$
Oct Is it known that “hyperfinite length” cannot distinguish free group factors?
31 comment @Jon, roughly speaking, a result in equivalence relation setting is often easier to prove than its counterpart in Von Neumann algebras setting, but the result on the topological
generators of the full group generated by the hyperfinite equivalence relation is proved rather later than its counterpart. This seems strange.
Oct Is it known that “hyperfinite length” cannot distinguish free group factors?
30 comment @Jon, it is interesting that Ge and Shen has already defined the "generating length" for a type II_1 Von Neumann algebra as an analogy of the definition of cost in the equivalence
relation setting, you can find it in page 368 of the book" Third International Congress of Chinese Mathematicians, 1st part". But the relation between hyperfinite length and generating
length has not been mentioned in that paper.
Oct Is it known that “hyperfinite length” cannot distinguish free group factors?
29 comment @Jon, roughly speaking, a similar result in the equivalence relation setting is used in the proof of theorem 4.10 of the paper "Topological Properties of Full Groups" by J.Kittrell and
T.Tsankov, and F. Maitre refined this process in his paper on generator problems. I guess when Ge and Popa introduced this "hyperfinite length", they also expect it to be related to the
generator problem for Von Neumann algebras?
Oct revised noncommutative polynomials equality
29 emphasize what I want
|
{"url":"http://mathoverflow.net/users/9305/jiang?tab=activity","timestamp":"2014-04-21T07:31:50Z","content_type":null,"content_length":"46635","record_id":"<urn:uuid:9c09487a-e897-4309-8b18-7e38a16d3dc8>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00303-ip-10-147-4-33.ec2.internal.warc.gz"}
|
DRUM Collection: Mathematics Research WorksSPECTRAL METHODS FOR HYPERBOLIC PROBLEMSRECOVERY OF EDGES FROM SPECTRAL DATA WITH NOISE—A NEW PERSPECTIVELONG-TIME EXISTENCE OF SMOOTH SOLUTIONS FOR THLong time existence of smooth solutions for the rapidly rotating shallow-water and Euler equationsECENTRAL DISCONTINUOUS GALERKIN METHODS ON OVERLAPPING CELLS WITH A NONOSCILLATORY HIERARCHICAL RECONSTRUCTION
http://hdl.handle.net/1903/1595 2014-04-16T21:52:00Z 2014-04-16T21:52:00Z Tadmor, Eitan http://hdl.handle.net/1903/8687 2008-11-15T03:30:34Z 1994-01-01T00:00:00Z Title: SPECTRAL METHODS FOR
HYPERBOLIC PROBLEMS Authors: Tadmor, Eitan Abstract: We review several topics concerning spectral approximations of time-dependent problems, primarily | the accuracy and stability of Fourier and
Chebyshev methods for the approximate solutions of hyperbolic systems. To make these notes self contained, we begin with a very brief overview of Cauchy problems. Thus, the main focus of the �rst
part is on hyperbolic systems which are dealt with two (related) tools: the energy method and Fourier analysis. The second part deals with spectral approximations. Here we introduce the main
ingredients of spectral accuracy, Fourier and Chebyshev interpolants, aliasing, di�erentiation matrices ... The third part is devoted to Fourier method for the approximate solution of periodic
systems. The questions of stability and convergence are answered by combining ideas from the �rst two sections. In this context we highlight the role of aliasing and smoothing; in particular, we
explain how the lack of resolution might excite small scales weak instability, which is avoided by high modes smoothing. The forth and �nal part deals with non-periodic problems. We study the
stability of the Chebyshev method, paying special attention to the intricate issue of the CFL stability restriction on the permitted time-step. 1994-01-01T00:00:00Z ENGELBERG, SHLOMO TADMOR, EITAN
http://hdl.handle.net/1903/8664 2008-11-07T03:30:57Z 2008-01-01T00:00:00Z Title: RECOVERY OF EDGES FROM SPECTRAL DATA WITH NOISE—A NEW PERSPECTIVE Authors: ENGELBERG, SHLOMO; TADMOR, EITAN Abstract:
We consider the problem of detecting edges—jump discontinuities in piecewise smooth functions from their N-degree spectral content, which is assumed to be corrupted by noise. There are three scales
involved: the “smoothness” scale of order 1/N, the noise scale of order √η, and the O(1) scale of the jump discontinuities. We use concentration factors which are adjusted to the standard deviation
of the noise √η � 1/N in order to detect the underlying O(1)-edges, which are separated from the noise scale √η � 1. 2008-01-01T00:00:00Z CHENG, BIN TADMOR, EITAN http://hdl.handle.net/1903/8663
2008-11-04T03:31:05Z 2008-01-01T00:00:00Z Title: LONG-TIME EXISTENCE OF SMOOTH SOLUTIONS FOR THLong time existence of smooth solutions for the rapidly rotating shallow-water and Euler equationsE
Authors: CHENG, BIN; TADMOR, EITAN Abstract: We study the stabilizing effect of rotational forcing in the nonlinear setting of twodimensional shallow-water and more general models of compressible
Euler equations. In [Phys. D, 188 (2004), pp. 262–276] Liu and Tadmor have shown that the pressureless version of these equations admit a global smooth solution for a large set of subcritical initial
configurations. In the present work we prove that when rotational force dominates the pressure, it prolongs the lifespan of smooth solutions for t <∼ ln(δ−1); here δ � 1 is the ratio of the pressure
gradient measured by the inverse squared Froude number, relative to the dominant rotational forces measured by the inverse Rossby number. Our study reveals a “nearby” periodic-in-time approximate
solution in the small δ regime, upon which hinges the long-time existence of the exact smooth solution. These results are in agreement with the close-to-periodic dynamics observed in the
“near-inertial oscillation” (NIO) regime which follows oceanic storms. Indeed, our results indicate the existence of a smooth, “approximate periodic” solution for a time period of days, which is the
relevant time period found in NIO obesrvations. 2008-01-01T00:00:00Z LIU, YINGJIE SHU, CHI-WANG TADMOR, EITAN ZHANG, MENGPING http://hdl.handle.net/1903/8662 2008-11-04T03:31:04Z 2007-01-01T00:00:00Z
Title: CENTRAL DISCONTINUOUS GALERKIN METHODS ON OVERLAPPING CELLS WITH A NONOSCILLATORY HIERARCHICAL RECONSTRUCTION Authors: LIU, YINGJIE; SHU, CHI-WANG; TADMOR, EITAN; ZHANG, MENGPING Abstract: The
central scheme of Nessyahu and Tadmor [J. Comput. Phys., 87 (1990), pp. 408–463] solves hyperbolic conservation laws on a staggered mesh and avoids solving Riemann problems across cell boundaries. To
overcome the difficulty of excessive numerical dissipation for small time steps, the recent work of Kurganov and Tadmor [J. Comput. Phys., 160 (2000), pp. 241–282] employs a variable control volume,
which in turn yields a semidiscrete nonstaggered central scheme. Another approach, which we advocate here, is to view the staggered meshes as a collection of overlapping cells and to realize the
computed solution by its overlapping cell averages. This leads to a simple technique to avoid the excessive numerical dissipation for small time steps [Y. Liu, J. Comput. Phys., 209 (2005), pp.
82–104]. At the heart of the proposed approach is the evolution of two pieces of information per cell, instead of one cell average which characterizes all central and upwind Godunov-type finite
volume schemes. Overlapping cells lend themselves to the development of a central-type discontinuous Galerkin (DG) method, following the series of works by Cockburn and Shu [J. Comput. Phys., 141
(1998), pp. 199–224] and the references therein. In this paper we develop a central DG technique for hyperbolic conservation laws, where we take advantage of the redundant representation of the
solution on overlapping cells. The use of redundant overlapping cells opens new possibilities beyond those of Godunov-type schemes. In particular, the central DG is coupled with a novel
reconstruction procedure which removes spurious oscillations in the presence of shocks. This reconstruction is motivated by the moments limiter of Biswas, Devine, and Flaherty [Appl. Numer. Math., 14
(1994), pp. 255–283] but is otherwise different in its hierarchical approach. The new hierarchical reconstruction involves a MUSCL or a second order ENO reconstruction in each stage of a multilayer
reconstruction process without characteristic decomposition. It is compact, easy to implement over arbitrary meshes, and retains the overall preprocessed order of accuracy while effectively removing
spurious oscillations around shocks. 2007-01-01T00:00:00Z
|
{"url":"http://drum.lib.umd.edu/feed/atom_1.0/1903/1595","timestamp":"2014-04-16T21:52:00Z","content_type":null,"content_length":"9036","record_id":"<urn:uuid:0ffdf7db-3da9-42af-ad0b-6ac0aefc8e0a>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00240-ip-10-147-4-33.ec2.internal.warc.gz"}
|
[Edu-sig] casino math (unit three)
kirby urner kirby.urner at gmail.com
Tue Sep 1 07:23:55 CEST 2009
This might be from a 3rd hour in our "casino math" segment.
There's no "deep math" here, just getting familiar with dot notation,
methods disguised as attributes (properties):
import random
class Deck:
def __init__(self):
self.deck = [
(suit + str(rank))
for suit in ['H','D','C','S']
for rank in range(1,14)
] + ['joker']*2
def deal_off_top(self):
if len(self.deck) > 0:
return self.deck.pop(0)
def deal_off_bottom(self):
if len(self.deck) > 0:
return self.deck.pop(-1)
def shuffle(self):
def howmany(self):
return len(self.deck)
Interactively... (inserting some blank lines for readability):
>>> from casino_math import *
>>> thedeck = Deck()
>>> thedeck.shuffle()
>>> thedeck.deck
['S10', 'joker', 'S1', 'C11', 'C1', 'C8', 'C3', 'D1', 'S6', 'S3',
'D12', 'D5', 'D10', 'S12', 'C2', 'D6', 'S2', 'D2', 'C9', 'H3',
'C13', 'D8', 'C7', 'D4', 'H6', 'H11', 'H2', 'D13', 'S11',
'H1', 'H8', 'H7', 'S8', 'H12', 'S4', 'D9', 'S9', 'C12', 'H9',
'D7', 'S5', 'S7', 'C5', 'C10', 'H5', 'D11', 'C4', 'joker',
'C6', 'S13', 'H4', 'H13', 'H10', 'D3']
>>> thedeck.deal_off_top()
>>> thedeck.deal_off_top()
>>> thedeck.deal_off_bottom()
>>> thedeck.deal_off_bottom()
>>> thedeck.howmany
http://www.flickr.com/photos/17157315@N00/3876500673/sizes/o/ (akbar font)
Of course this is a rather primitive deck without explicit
aces or royalty, just 1-13 in each suit plus two jokers.
The next step would be for students to write a Hand
class i.e. that which receives from the deck.
Before ya know it, we're playin' poker or twenty-one or
whatever well-known game of skill 'n chance.
On break, we encourage playing with Pysol maybe...
More information about the Edu-sig mailing list
|
{"url":"https://mail.python.org/pipermail/edu-sig/2009-September/009512.html","timestamp":"2014-04-17T06:32:24Z","content_type":null,"content_length":"4755","record_id":"<urn:uuid:05de8520-7b14-4156-b0e3-78f9a5249034>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00631-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Misc ideas
From Gerris
Revision as of 08:27, 14 August 2007; view current revision
←Older revision | Newer revision→
From Daniel Lörstad and Laszlo Fuchs, Journal of Computational Physics Volume 200, Issue 1, 10 October 2004, Pages 153-176
"The presence of discontinuous fields may compromise the convergence significantly. Hence, a few issues regarding multi-phase flows have to be addressed in order to maintain the convergence rate for
large density ratio cases. The density field is computed at cell edges on the finest grid level using the phase distribution (described in the following section) and is transferred to the coarser
grids. The density of one edge of a coarse cell of grid (m−1) is the average of the corresponding four cell edges on the finer grid (m). The viscosity field is obtained at cell centers and the value
of a coarse cell on grid (m−1) is the average of the corresponding eight cells belonging to the finer grid (m). In order to maintain global mass conservation in each iteration, the sum of residuals
of the continuity equation on coarser grids is always corrected to yield zero (i.e., global continuity). In addition, one has to ensure that the mass fluxes through the boundaries of the locally
refined grids are conserved. Without global mass conservation, the MG solver cannot converge to machine accuracy (only to a level of the order of the truncation error)."
|
{"url":"http://gfs.sourceforge.net/wiki/index.php?title=Misc_ideas&oldid=1734","timestamp":"2014-04-18T03:39:33Z","content_type":null,"content_length":"10929","record_id":"<urn:uuid:b8b094df-58fa-4461-9d3a-7c98eb77abfe>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00199-ip-10-147-4-33.ec2.internal.warc.gz"}
|
regarding distributions of two random variables
November 5th 2009, 07:18 AM #1
Oct 2009
There's a homework problem in my text that asks me to select an (odd) integer randomly from the set {1,3,5,7} and call it X. Then select an integer randomly from the set {0,1,2,3,4} add it to X.
and then let Y be this sum.
(a) how can you show the joint p.m.f. of X and Y on the sample space of X and Y? I have drawn a table and labeled the axis accordingly, and matched up each sum and computed the sum at their
(b) How could one compute the marginal p.m.f. s?
(c) and are X and Y independent? Reasons? I wrote yes but I'm not sure why.
Any help is appreciated thank you very much.
There's a homework problem in my text that asks me to select an (odd) integer randomly from the set {1,3,5,7} and call it X. Then select an integer randomly from the set {0,1,2,3,4} add it to X.
and then let Y be this sum.
(a) how can you show the joint p.m.f. of X and Y on the sample space of X and Y? I have drawn a table and labeled the axis accordingly, and matched up each sum and computed the sum at their
(b) How could one compute the marginal p.m.f.’s?
(c) and are X and Y independent? Reasons? I wrote yes but I'm not sure why.
Any help is appreciated thank you very much.
(a) What are the possble values of Y? How do you get them for each value of X? Therefore what probability is in each cell?
(b) Use the definition. Where do you get stuck?
(c) Is there any pair such that Pr(X = x).Pr(Y = y) does not equal Pr(X=x and Y=y)?
November 6th 2009, 12:28 PM #2
|
{"url":"http://mathhelpforum.com/advanced-statistics/112584-regarding-distributions-two-random-variables.html","timestamp":"2014-04-16T11:08:09Z","content_type":null,"content_length":"35416","record_id":"<urn:uuid:afee2e7a-4f27-4636-abd4-7731bce70ba8>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00115-ip-10-147-4-33.ec2.internal.warc.gz"}
|
help with this problem...
December 7th 2009, 10:19 PM #1
Junior Member
Sep 2009
i got this question i know how he got everything else expect the 13
2x^2+10x+3x-15=100 form this he got=====> 2x^2+13x-115=0
i know how he got everything else expect for the 13
i got 2x^2+7x-115=0
how i got this==> i move over the 100 making it -100 and then i combine the -15 with the -100 and i got==> -115
and then i minus -3 from 3x and 10==> the 3 and -3 cancel out
so i got====> 7x by taking away 3 from 10x
did he just add the 10x and 13x????
A very important concept in solving equation is that if you want do something to the equation, you must do it for BOTH sides so that you won't change the question.
You are doing $2x^{2}+10x-3x+3x-3x-115=0$. You minus altogether $6x$ on the left ONLY, and this is not "balanced". You didn't do BOTH sides.
Note that when you said you bring the 100 over so that it becomes -100, you are actually minus 100 on BOTH sides, i.e.
Anyway, you shoud just add the $3x$ and $10x$.
Hope this helps
December 7th 2009, 11:55 PM #2
Oct 2006
|
{"url":"http://mathhelpforum.com/pre-calculus/119249-help-problem.html","timestamp":"2014-04-20T10:23:51Z","content_type":null,"content_length":"29326","record_id":"<urn:uuid:48994fd1-a368-4651-b87c-fd430d7c90d5>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00093-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Elmhurst, IL Trigonometry Tutor
Find an Elmhurst, IL Trigonometry Tutor
...I have trained math teams which have ranked highly at the state level and individuals who have competed nationally. I have a B.S. in Mathematics, a Masters in Math Education, a Masters Level
Certificate in Gifted, and have Illinois Certification for teaching math 6-12. I have four years experie...
24 Subjects: including trigonometry, calculus, geometry, GRE
...I tutor because I love working with children. I am happy to work with anyone who is willing to work and am very patient with students as they try to understand new concepts. I have been in the
Glenview area the past four years and have tutored high schoolers from Notre Dame, New Trier, GBS, GBN, Deerfield High, Loyola Academy and Woodlands Academy of the Sacred Heart.
20 Subjects: including trigonometry, chemistry, calculus, physics
...In my current role I work in a problem solving environment where I pick up issues and deliver solutions by working with different groups and communicating to management level. This type of work
helped me to communicate better and to make people understand at all levels. My goal in teaching is to provide the nurturing environment that allows children to learn and grow intellectually.
16 Subjects: including trigonometry, chemistry, physics, calculus
...Microsoft Outlook is an email client that has grown to become a personal information manager. It can work as a stand alone application or it can be linked with a network. The program allows
users to access emails from multiple sources, track schedules and tasks, enter contact information, and utilize journal and note features.
39 Subjects: including trigonometry, reading, English, writing
...I spent a summer teaching at a high school math camp, I have been a coach for the Georgia state math team for a few years now, and I was a grader for the IU math department. I've tutored many
people I've encountered, including friends, roommates, and people who've sat next to me on trains, aside...
13 Subjects: including trigonometry, calculus, geometry, statistics
|
{"url":"http://www.purplemath.com/Elmhurst_IL_trigonometry_tutors.php","timestamp":"2014-04-18T08:31:40Z","content_type":null,"content_length":"24565","record_id":"<urn:uuid:e96a7332-9093-459e-b005-7ffa03ab9f8b>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00298-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Lifetime of Tires
March 22nd 2009, 08:49 PM #1
Mar 2009
Lifetime of Tires
Here is the question:
For a tire manufacturing company, it is known that only 7% of their tire
products last less than 35 months, whereas 11% of them last more tham 63
months. Assuming the distribution of the lifetime of the manufactured tires
is normal,
(a) what is the expected lifetime for a tire manufactured by this company,
and its standard deviation?
(b) The manufacturer company gives a warranty that if the tire does not last
more than 40 months, they will give a replacement for it. The company
sold 1000 tires during the last year. What is the probability that they
will have to replace more than 50 tires?
So I have
a) P (X <= 35 months) = 0.07
P (X >= 63 months) = 0.11
and for b I've started with...
b) P ( X <= 40 months) = ...?
Where do I go from here, for each part?
Thank you so much in advance : )
$Z={X-\mu\over \sigma}$, so $X=\mu + \sigma Z$
Now match up percentile points.
When X=35 we have the lower 7 percentile of a Z, known as $Z_{.07}\approx$ -1.47 or -1.48.
That's via the tables, but you can do better via a link on line.
And when X=63 we have the upper 11 percentile of a Z, known as $Z_{.11}\approx$ 1.22 or 1.23.
You now have two equations with two unknowns. Find $\mu$ and $\sigma$.
IN part (b) you have a binomial question, where n=1000 and a success is for a tire to fail before 40 months.
So $p=P(X<40)=P\biggl(Z<{40-\mu\over\sigma}\biggr)$.
HOWEVER, the question should ask you to APPROXIMATE this via the Central Limit Theorem.
Otherwise you need to obtain via a summation
$P(BINOMIAL >50)=\sum_{x=51}^{1000}{1000\choose x}p^x(1-p)^{1000-x}$
or you can use the complement.
BUT I bet they want you to approximate via the normal distribution.
I was just surfing and I found...
Plug in mean 0, st dev 1, prob .07 and I got....-1.4749
Plug in mean 0, st dev 1, prob .89 and I got....1.2249
Last edited by matheagle; March 22nd 2009 at 09:54 PM.
Gotcha, thank you!
March 22nd 2009, 09:23 PM #2
March 22nd 2009, 10:17 PM #3
Mar 2009
|
{"url":"http://mathhelpforum.com/advanced-statistics/80059-lifetime-tires.html","timestamp":"2014-04-21T08:05:48Z","content_type":null,"content_length":"37962","record_id":"<urn:uuid:89281f64-cbee-49da-b4c8-1c9b48dbfd5f>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00091-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Is oracle 10g insert with select atomic?
02-07-2006, 07:15 AM
Is oracle 10g insert with select atomic?
We have a table T with columns A and B with a composite unique constraint on the columns A and B.
Now I insert records into the table T using the following single statement in one session -
insert into T values (
(select a, b from dual)
(select c, d from dual)
(select e, f from dual))
and in another session I fire -
insert into T values (
(select e, f from dual)
(select a, b from dual)
My expectation is since there is a composite unique constraint, insert statement in one session will wait
since there is a common subset.
There is clearly a chance of deadlock, in the case, the multiple row inserts using a single insert statement is non atomic.
By atomic, I mean all rows get inserted by this single insert statement at one shot.
What I observe is, with the real dataset, a deadlock "is" occuring -
question is whether that is the expected behavior from Oracle 10G or a bug?
02-07-2006, 05:39 PM
Is it actually a deadlock - ie ORA-00060: deadlock detected while waiting for resource ?
If Transaction A wants to insert a row ['H'], but it already exists in an uncommitted state (from an insert by Transaction B), it will wait. Then Transaction B finds it wants to insert a row
['G'] but it also finds that row exists in an uncommited state from an insert from Transaction A.
We get a deadlock, and Oracle throws an error for one of the sessions and will rollback that statement (not the transaction). The other will then be able to complete and eventually commit (or
It is the expected behaviour since there's nothing else it can do.
The insert is atomic in the ACID definition (the insert succeeds or fails as a whole - all or none of the rows are inserted).
The way around it is to generally to try to get the inserts happening in the same order (ie if the inserts are in ascending order, then they'll both try to insert 'G' first, then 'H' so there's
no risk of a deadlock).
A UNION (as opposed to a UNION ALL) implies a DISTINCT to remove duplicates, which used to imply a full SORT and an ordered result set. 10GR2 (possibly R1) has some smarter algorithms, and
DISTINCTs and UNIONs won't necessarily do the same full sort, so won't necessarilly give an ordered result set.
So if you are suddenly experiencing this problem after a database upgrade, I'm not surprised. Add an explicit ORDER BY to the select, and it will probably go away.
02-08-2006, 06:26 AM
Thanks for your reply and you are correct, I am getting ORA-00060.
I am using 10GR1 and union, as I can see in Toad, is returning an ordered result set. Since the selection is from dual, is there any way, I can verify that union may not return me an ordered
result set.
Can you give me further references where it can be substantiated that "10GR2 (possibly R1) has some smarter algorithms, and DISTINCTs and UNIONs won't necessarily do the same full sort, so won't
necessarilly give an ordered result set."
02-08-2006, 10:25 PM
has some discussion on sorts and order bys. Basically, if you don't have an ORDER BY, you should not assume that the result set is ordered at all.
You basically want both sessions to do their inserts like :
insert into T (col_1, col_2)
select a col_1, b col_2 from dual
select c, d from dual
select e, f from dual
order by col_1,col_2
02-08-2006, 11:54 PM
That explains!
Thank you, Sir.
|
{"url":"http://www.dbasupport.com/forums/printthread.php?t=50438&pp=10&page=1","timestamp":"2014-04-19T01:54:05Z","content_type":null,"content_length":"9072","record_id":"<urn:uuid:1fa42c92-c797-4d05-8442-bf29512aa0d9>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00145-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Daisyworld: a tutorial approach to geophysiological modelling
The intention of geophysiological modelling is the analysis of the feedback mechanisms acting between geo- and biosphere (Lovelock, 1989, Lovelock, 1991). It is one approach to Earth System Analysis
(Schellnhuber and Wenzel, 1998). The planet Earth can be understood as a "superorganism" which itself regulates to a state allowing life on Earth. (Gaia hypothesis, see, e.g., Lovelock and Margulis,
1974, Lenton, 1998). First attempts were done by Vernadsky (1927) on a qualitative level in the form of an idea about the interdependence between vegetation and climate (see, e.g., Vernadsky, 1998).
Later on Kostitzin (1935) realized this idea in the first mathematical model for the coevolution of atmosphere and biota. So-called "minimal models" of the Earth are described in (Svirezhev and von
Bloh, 1996, 1997). According to the "virtual biospheres" concept (Svirezhev and von Bloh, 1999) the evolution of the Earth system can be understood as a sequence of bifurcations.
An example of geophysiological regulation behaviour can be found in the Earth system: Silicate-rock weathering as a process to regulate the surface temperature. Due to silicate rock weathering carbon
dioxide is taken out of the atmosphere. At geologic time scales there is an equilibrium between this sink of carbon and the sources by volcanic emissions. An increase in temperature accelerates the
chemical processes of weathering reducing the amount of carbon in the atmosphere. Due to the lower greenhouse effect the temperature decreases; we have a negative feedback regulating the surface
Based on the work of Lovelock and Whitfield (1982) Caldeira and Kasting (1992) have developed a model to estimate the life span of the biosphere taken into account the above mentioned feedback
mechanism. In Fig. 1 the global surface temperature is plotted under increase of the solar luminosity for the whole history of the Earth. In contrast to the curve for fixed carbon dioxide content of
the atmosphere the temperature is always above the freezing point of water. Such a regulation mechanism can be an explanation for the "faint young Sun" paradox (Sagan and Mullen, 1972), where we have
at least for 3.8 Gyr in the past always liquid water on the surface of Earth. The role of the biosphere is to increase the weathering rate.
The Caldeira/Kasting model assumes constant volcanic activity and continental area during the Earth evolution, Franck et al. (2000) extend this model by adding a geodynamic description of the
geospheric processes. A short description of the model used can be found here. Such a model can be used to determine the habitability of Earth-like planets in the solar system and for extrasolar
planetary systems.
Fig. 1: Evolution of the global surface temperature T for the Caldeira/Kasting model (solid curve) and a model with fixed (present) atmospheric CO[2] (dashed curve). S denotes the corresponding solar
luminosity in solar units.
The basic characteristics of such feedback models can be illustrated using the "Daisyworld" scenario (Watson and Lovelock, 1983): In this model a planet with surface albedo A[0] can be covered by two
types of vegetation: one species ("white daisies") with an albedo α[1]> α[0] and the other species with albedo α[2] < α[0] ("black daisies") covering fraction N[1] and N[2] of the planetary
surface normalized to one. Then the dynamic of the two daisies can be described by the following differential equations:
where β(T) is the growth rate and γ the death rate of daisies. The global temperature is determined by the balance between ingoing solar radiation and outgoing black-body radiation
where α=α[1]N[1]+&alpha[2]N[2]+&alpha[0]x is the global albedo. Due to their different albedos the local temperatures T[1,2] differ from each other.
where the factor q depends on the heat exchange. The growth function β depends only on the local temperature T[1,2]. Because the global albedo of the planet is changed by the growth process, the
global temperature is changed altering the growth rates.
One interesting aspect is the behaviour of the coupled vegetation-climate model on external perturbations, e.g., an increase of the external solar luminosity. In Fig. 2 the solar luminosity was
successively increased. It must be pointed out that the global temperature of the populated planet - in contrast to the unpopulated one - is remarkable constant in a wide range of solar luminosity:
the planet itself regulates the optimum state of life (homeostasis).
The original Daisyworld model has been analyzed by other authors in great detail, informations can be found in Isakari and Somerville (1989) and Saunders (1994).
Fig. 2: Coevolution of climate and vegetation at increasing (red) and decreasing (blue) insolation. T[0] is the global temperature of the corresponding planet without vegetation.
The conceptual simple zero-dimensional Daisyworld model allows in an equilibrium state due to theoretical considerations only the coexistence of at most two different species. This restriction can be
avoided if we introduce spatial dependency into the model. The local temperatures T[1], T[2] and albedos α[1], α[2] are replaced by 2-dimensional fields T(x,y) and α(x,y). The temporal and spatial
evolution of the temperature is determined by a partial differential equation based on an energy-balance model (EBM) (see, e.g., Henderson-Sellers and McGuffie, 1983). The evolution of the albedo
connected directly with the vegetation is modelled by a cellular automaton (CA) approach (for a description of the Cellular automaton concept see, e.g., Wolfram, 1983, Goles, 1994). CA models have
been successfully applied to a variety of problems. An application of such a CA approach to the modelling of calcite formation in microbial mats is presented in (Kropp et al., 1996). Fig. 3 describes
the non-deterministic rules for the update of the lattice.
Fig. 3: Update rules for the cellular automaton for an empty cell(right-hand side) and a covered cell (left-hand side). The state of the cell in the following time step depends only on the state of
the cell itself and their next neighbours.
The model was implemented on a RS/6000 SP parallel computer (IBM). Because of only next-neighbour interactions a high efficiency of the parallelization can be expected. The parallelization was
performed using the GeoPar library based on the message-passing paradigm. Some documentation and links about parallel programming can be found here.
A typical state of the temperature and albedo distribution is plotted in Fig. 4.
Fig. 4: Typical configuration for the albedo distribution (right-hand side) and the temperature distribution (left-hand side).
The 2-dimensional model has a similar self-regulation behaviour as the original model. Due to the cellular automaton approach the model can be simply extended to incorporate several biological
effects like
1. mutations
2. seeding
3. habitat fragmentation
into the model. Only the set of rules defining the CA must be modified. One expects that adding mutations, i.e., slight random changes of the albedo in the growth process, will decrease the
regulation capability. But after doing the simulation one gets a counter-intuitive result: with mutations the solar luminosity can be significantly increased until the vegetation breaks down (see
Fig. 5).
Fig. 5: Global temperature T vs. increasing insolation S with (b) and without (a) mutation of the albedo.
An animation is available as a MPEG video(11 MByte). It illustrates the temperature and albedo evolution under increasing insolation. After the vegetation is wiped out the insolation is decreased:
It can be clearly seen that the CA version of the Daisyworld exhibits hysteresis, i.e. the behaviour of the model depends on its history. The breakdown of the vegetation is a first-order phase
transition. This model has also been used by Ackland et al. (2003) in order to describe the formation of deserts in Daisyiworld.
In the real world the growth area is not simply connected, but is fragmented due to anthropogenic influences by, e.g., human settlements and infrastructure. It is possible to model such
fragmentations with our 2-dimensional CA and to analyze the effects on the regulation capability.
The fragmentation is modelled by a percolation model (Stauffer and Aharony, 1992). In each time step a cell can be marked as "infertile" with a given probability p independent of its state.
Configurations for different concentrations p of infertile cells are shown in Fig. 6:
Fig. 6: Fragmentation of landscape for different concentrations of infertile cells p. Some connected clusters are marked (red color).
For a critical p an infinite cluster of connected cells is formed. At 1-p the growth area splits up into disconnected areas. The critical concentration p[c] can be numerically determined to p[c]=
The simulations of the 2-dimensional Daisyworld model with fragmentation are done for different constant solar luminosities S. The regulation capability breaks down at p=0.407 independent of the
initial solar luminosity S. More details can be found in (von Bloh et al., 1997).
Fig. 7: Global average temperature as a function of the fragmentation p and the temperature T[0] of the uncovered planet.
In order to reflect the fundamental ecosystems dynamics on trophic interactions as well, we extend the 2D model even further and add herbivores to our planet (see also, e.g., Harding and Lovelock,
1996). These vegetarian "lattice animals" move on the grid as random walkers. The state of the herbivore can be changed by four different events:
The herbivore deceases with a certain probability γ[h].
If the herbivore survive and if the cell is covered by a daisy then the animal consumes the plant.
The herbivore in question may give birth to a "lattice child". The probability of such an event depends on the local temperature and the number of daisies "eaten" by the herbivore in a
multiplicative manner.
The herbivore may also walk to a randomly chosen next neighbour cell if the cell is not already occupied by another herbivore.
Fig. 8 shows some results for increasing fragmentation at different mortality rates γ[h] for the herbivores. In von Bloh et al. (1999) the Daisyworld model including herbivores is described in
Fig. 8: (a) Dependence of global mean temperature T on fragmentation parameter p for distinct herbivore mortality rates γ[h]. (b) Dependence of herbivore concentration on p for the same mortality
rates as in (a).
Starting from a very simple geophysiological model the developed 2-D model shows some remarkable effects:
• Local interactions of competing species lead to global effects: Self organized regulation behaviour of the global temperature under external perturbations.
• The system undergoes a first order phase transition: After the regulation capability is exhausted, an irreversible breakdown occurs immediately.
• By splitting the growth area into disconnected regions via, e.g., successive fragmentation the regulation behaviour also breaks down.
This research was part of the POEM core project at the Potsdam Institute for Climate Impact Research.
Ackland, G. J., M. A. Clark, and T. M. Lenton, 2003. Catastrophic desert formation in Daisyworld. J. Theo. Biol. 223(1), 39-44.
Caldeira, K., and J. F. Kasting, 1992. The life span of the biosphere revisited. Nature 360, 721-723.
Esser, G., 1991. Osnabrück Biosphere Modelling and Modern Ecology. In: G. Esser, D. Overdieck (Eds.). Modern Ecology, Basic and Applied Aspects. Elsevier, Amsterdam, 773-804.
Franck, S., A. Block, W. von Bloh, C. Bounama., H.-J. Schellnhuber, and Y. M. Svirezhev, 2000. Reduction of life span as a consequence of geodynamics. Tellus 52B, 94-107 (abstract, full text as a
Goles, E., (Ed.), 1994. Cellular automata, dynamical systems and neural networks . Kluwer Acad. Pub., Dordrecht.
Harding, S. P., and J. E. Lovelock, 1996. Exploiter mediated coexistence and frequency-dependent selection in a geophysiological model. J. Theor. Biol. 182, 109-116.
Lüdeke, M. K. B., S. Dönges, R. D. Otto, et al. 1995. Responses in NPP and carbon stores of the northern biomes to a CO[2]-induced climatic change as evaluated by the Frankfurt biosphere model (FBM).
Tellus 47B, 191-205.
Henderson-Sellers, A., and K. Mc Guffie, 1983. A climate modelling primer. John Wiley & Sons, Chichester.
Isakari, S. M., and R. C. J. Somerville, 1989. Accurate numerical solutions for Daisyworld. Tellus 41B, 478-482.
Kostitzin, V. A., 1935. L' evolution de 'l atmosphere: Circulation organique, epoques glaciaries. Hermann, Paris.
Kropp, J., W. von Bloh, T. Klenke, 1996. Calcite formation in microbial mats: modeling and quantification of inhomogeneous distribution patterns by a cellular automaton model and multifractal
measures. Geol. Rundsch. 85, 857-863 (abstract).
Lenton, T. M., 1998. Gaia and natural selection. Nature 394, 439-447.
Lovelock, J. E., and M. Whitfield, 1982, Life span of biosphere. Nature 296, 561-563.
Lovelock, J. E., 1989. The ages of Gaia. Oxford University Press, Oxford.
Lovelock, J. E., 1991. GAIA - the practical science of planetary medicine. GAIA books, London.
Lovelock, J. E., and L. Margulis, 1974. Atmospheric homeostasis by and for the biosphere: the Gaia hypothesis. Tellus 26, 2-10.
Sagan, C., and G. Mullen, 1972. Earth and Mars: Evolution of atmospheres and surface temperatures. Science 177, 52-56.
Saunders, P. T., 1994. Evolution without natural selection: Further implications of the Daisyworld parable. J. theor. Biol. 166 (4), 365-373.
Schellnhuber, H.-J., and V. Wenzel (eds.), 1998. Earth System Analysis: Integrating Science for Sustainability. Springer, Berlin (abstract).
Stauffer, D., and A. Aharony, 1992. Introduction to Percolation Theory, 2nd ed.. Taylor and Francis, London.
Svirezhev, Y. M., 1994. Simple model of interaction between climate and vegetation: virtual biospheres. IIASA Seminar, Laxenburg.
Svirezhev, Y. M., and W. von Bloh, 1996. A minimal model of interaction between climate and vegetation: qualitative approach. Ecol. Mod. 92, 89-99 (abstract, full text in a HTML-document).
Svirezhev, Y. M., and W. von Bloh, 1997. Climate, vegetation, and global carbon cycle: the simplest zero-dimensional model . Ecol. Mod. 101, 79-95 (abstract, full text in a HTML-document).
Svirezhev, Y. M., and W. von Bloh, 1999. The climate change problem and dynamical systems: virtual biospheres concept. In: G. Leitmann, F. E. Udwadia, A. V. Kryazhimski (Eds.). Dynamics and control.
Gordon and Breach, 161-173 (abstract).
Vernadsky, V., 1998. The Biosphere, Complete Annotated Edition. Springer, New York, 250 p..
von Bloh, W., A. Block, and H.-J. Schellnhuber, 1997. Self stabilization of the biosphere under global change: a tutorial geophysiological approach. Tellus 49B, 249-262 (abstract, full text as a
von Bloh, W., A. Block, M. Parade, and H.-J. Schellnhuber, 1999. Tutorial modelling of geosphere-biosphere interactions: the effect of percolation-type habitat fragmentation. Physica A 226, 186-196 (
abstract, full text as a HTML-document).
Watson, A. J., and J. E. Lovelock, 1983. Biological homeostasis of the global environment: the parable of Daisyworld . Tellus 35B, 286-289.
Wolfram, S., 1986. Theory and Applications of Cellular Automata. World Scientific, Singapore.
|
{"url":"http://www.pik-potsdam.de/~bloh/","timestamp":"2014-04-17T01:17:01Z","content_type":null,"content_length":"30972","record_id":"<urn:uuid:5dcdc78e-f6e6-4862-9ec6-081e20c87f60>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00653-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Car crumple zone deformation physics
The key is its is a zone, i.e extensive in space. The impactor hitting the car is not accelerating the whole of the car at the same rate so it is imparting less force on the driver.
Start with you me and a third person named "Mr. Headlight" in a race.
• You take a running start.
• "Mr. Headlight being very light and quick is able to get up to the same speed as you almost the instant you cross the starting line He experiences an intense bit of acceleration at that point.
It's as if you hit him as you cross the start line and carry him with you.
• I being old and heavy and fragile and weak need a longer time to accelerate to your speed and so to make it fair you give me enough of a head start so that by the time you and Mr. Headlight catch
up with me I'm up to speed and we're all running even.
Can you visualize such a race start?
Now imagine if your car (made of concrete so it won't crumple, but very massive) hits the front my stationary car with a front crumple zone, the headlights and bumper of my car almost instantaneously
accelerate to your speed. I in the driver's seat accelerate up to your speed but over a longer period of time. Eventually you I and my headlights are moving at the same speed. The headlights of my
car are closer to me and the space in between them and me has to have crumpled.
Imagine you're designing a new special forces landing device to replace the parachute. It consists of a very long telescoping pogo stick (pneumatically driven). Some aerodynamic fins keeps the faller
oriented so the stick lands upright. The pogo stick compresses but locks before the rider can bounce and the hot compressed air inside is released via vents.
He steps off and begin his intended combat mayhem.
|
{"url":"http://www.physicsforums.com/showthread.php?p=3772383","timestamp":"2014-04-21T12:15:02Z","content_type":null,"content_length":"33300","record_id":"<urn:uuid:f22bae5e-c108-4344-9417-db90e053f526>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00567-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Lottery Probability 49 Choose 6
October 10th 2010, 06:17 PM #1
Oct 2010
Lottery Probability 49 Choose 6
Any input regarding this question would really be appreciated. I know that all possibilities of 49C6 is 13,983,816. So the odd of matching 6 numbers is 1/49C6, but let's suppose I choose 10
numbers and play all the combinations of the 10 ten numbers which is 10C6=210. Therefore if any of the 6 numbers that's is in my 10 number domain I am gauranteed to win the 1st prize along with
other smaller prizes. My question is what are my probability of winning the lottery now if I play 10 numbers and of the 10 chosen numbers choose all the combinations of 6. I hope this make
sense.. I thought the answer would be 10C6/49C6 or 210/13,983,816, but I know this is not correct. Please explain your answer. Thanks.
Any input regarding this question would really be appreciated. I know that all possibilities of 49C6 is 13,983,816. So the odd of matching 6 numbers is 1/49C6, but let's suppose I choose 10
numbers and play all the combinations of the 10 ten numbers which is 10C6=210. Therefore if any of the 6 numbers that's is in my 10 number domain I am gauranteed to win the 1st prize along with
other smaller prizes. My question is what are my probability of winning the lottery now if I play 10 numbers and of the 10 chosen numbers choose all the combinations of 6. I hope this make
sense.. I thought the answer would be 10C6/49C6 or 210/13,983,816, but I know this is not correct. Please explain your answer. Thanks.
10C6/49C6 is right.
October 10th 2010, 07:11 PM #2
|
{"url":"http://mathhelpforum.com/statistics/159108-lottery-probability-49-choose-6-a.html","timestamp":"2014-04-16T14:02:19Z","content_type":null,"content_length":"34027","record_id":"<urn:uuid:d050a322-83c6-4abd-b5dd-04c0e53e7439>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00171-ip-10-147-4-33.ec2.internal.warc.gz"}
|
from arXiv.org
Series data maintained by arXiv administrators ().
Access Statistics for this working paper series.
Track citations for all items by RSS feed
Is something missing from the series or not right? See the RePEc data check for the archive and series.
Peter Sarlin
Giovanni Conforti, Stefano De Marco and Jean-Dominique Deuschel
Alain Bensoussan, Jens Frehse and Phillip Yam
Guy Katriel
Fabio Caccioli, Imre Kondor, Matteo Marsili and Susanne Still
Mark Higgins
Robert Azencott, Yutheeka Gadhyan and Roland Glowinski
Vadim Borokhov
Adam Aleksander Majewski, Giacomo Bormetti and Fulvio Corsi
Jean-Bernard Chatelain and Kirsten Ralf
Y. Lemp\'eri\`ere,, C. Deremble, P. Seager, M. Potters and J. P. Bouchaud
Sourish Das and Dipak K. Dey
Pablo Olivares and Alexander Alvarez
Hong Pi and Carsten Peterson
Christopher J. K. Knight, Alexandra S. Penn and Rebecca B. Hoyle
Pablo Olivares
Matthew Lorig, Stefano Pagliarani and Andrea Pascucci
Kasper Larsen, H. Mete Soner and Gordan Zitkovic
Didier Sornette and Peter Cauwels
Maciej Kostrzewski
Nicole El Karoui, Mohamed Mrad and Caroline Hillairet
Nicole El Karoui, Caroline Hillairet and Mohamed Mrad
Robert Stelzer and Zavi\v{s}in, Jovana
Paulo Rocha, Frank Raischel, da Cruz, Jo\~ao P. and Pedro G. Lind
Y. Dolinsky and H. M. Soner
Boualem Djehiche, Hamidou Tembine and Raul Tempone
A. Agreda and K. Tucci
Carol Alexander and Johannes Rauch
Calypso Herrera and Louis Paulot
Mikio Ito, Kiyotaka Maeda and Akihiko Noda
Gunther Leobacher and Philip Ngare
Zura Kakushadze and Jim Kyung-Soo Liew
Alfonsi, Aur\'elien and Pierre Blanc
Figueroa-L\'opez, Jos\'e E. and \'Olafsson, Sveinn
Tahir Choulli, Anna Aksamit, Jun Deng and Monique Jeanblanc
Vitor V. Lopes, Teresa Scholz, Frank Raischel and Pedro G. Lind
Areski Cousin and Ibrahima Niang
D. Sornette
Zuo Quan Xu
B. Podobnik, D. Horvatic, M. Bertella, L. Feng, X. Huang and B. Li
Damian Eduardo Taranto, Giacomo Bormetti and Fabrizio Lillo
Mauro Bernardi and L. Petrella
Martha G. Alatriste Contreras and Giorgio Fagiolo
Reza Farrahi Moghaddam, Fereydoun Farrahi Moghaddam and Mohamed Cheriet
Zuo Quan Xu
Mel MacMahon and Diego Garlaschelli
Thomas R. Hurd, Davide Cellai, Huibin Cheng, Sergey Melnik and Quentin Shao
Eric Beutner, Janina Schweizer and Antoon A. J. Pelsser
Erhan Bayraktar, Yuchong Zhang and Zhou Zhou
Davide Fiaschi, Imre Kondor, Matteo Marsili and Valerio Volpati
Gu\'eant, Olivier and Guillaume Royer
Andrey Itkin
Claudio Fontana, Zorana Grbac, Monique Jeanblanc and Qinghua Li
Fatma Haba and Antoine Jacquier
Jacky Mallett
Matyas Barczy, Leif Doering, Zenghu Li and Gyula Pap
Erhan Bayraktar and Zhou Zhou
Erhan Bayraktar and Song Yao
Figueroa-L\'opez, Jos\'e E., Ruoting Gong and Houdr\'e, Christian
Jaehyung Choi
Brahim Brahimi
Mikio Ito, Akihiko Noda and Tatsuma Wada
Mario Filiasi, Giacomo Livan, Matteo Marsili, Maria Peressi, Erik Vesselli and Elia Zarinelli
Xiang Yu
Nikolai Dokuchaev
Edward Hoyle, Lane P. Hughston and Andrea Macrina
Hai-Chuan Xu, Wei Zhang, Xiong Xiong and Wei-Xing Zhou
Jian Zhou, Gao-Feng Gu, Zhi-Qiang Jiang, Xiong Xiong, Wei Zhang and Wei-Xing Zhou
Jaehyung Choi
Pedro Lencastre, Frank Raischel, Pedro G. Lind and Tim Rogers
Vicky Henderson and Gechun Liang
Pierre Degond, Jian-Guo Liu and Christian Ringhofer
Gabriele Sarais and Damiano Brigo
Zhenyu Cui
Thomas Conlon and John Cotter
Menelaos Karanasos, Alexandros Paraskevopoulos, Faek Menla Ali, Michail Karoglou and Stavroula Yfanti
Bradly Alicea
Karol Przanowski
Stefan Bornholdt and Kim Sneppen
Benedikt Fuchs and Stefan Thurner
Oleksii Mostovyi
Dominique Pépin
Jaehyung Choi, Aaron Kim and Ivan Mitov
Romuald N. Kenmoe S and Carine D. Tafou
Salil Mehta
Alexander Alvarez and Sebastian Ferrando
Nguyet Nguyen and \"Okten, Giray
Rafael Mendoza-Arriaga and Vadim Linetsky
Mike Giles and Yuan Xia
Archil Gulisashvili and Josep Vives
Marcos Escobar, Daniela Neykova and Rudi Zagst
Fred Espen Benth and Salvador Ortiz-Latorre
Stephen J. Hardiman and Jean-Philippe Bouchaud
Zeyu Zheng, Zhi Qiao, Joel N. Tenenbaum, H. Eugene Stanley and Baowen Li
Thomas Bury
Tiziano Squartini and Diego Garlaschelli
Alexandra Rodkina and Nikolai Dokuchaev
Yaya Li, Yongli Li, Yulin Zhao and Fang Wang
Philipp Arbenz, Mathieu Cambou and Marius Hofert
Giuseppe arbia
Fred Espen Benth and Kr\"uhner, Paul
Dieter Hendricks, Dariusz Cieslakiewicz, Diane Wilcox and Tim Gebbie
Tung-Lam Dao
D. J. Manuge and P. T. Kim
Giulia Iori, Rosario N. Mantegna, Luca Marotta, Micciche', Salvatore, James Porter and Michele Tumminello
Claudiu Tiberiu Albulescu, Dominique Pépin and Aviral Kumar Tiwari
Rudolf Fiebig and David Musgrove
Iacopo Mastromatteo, Bence Toth and Jean-Philippe Bouchaud
Gao-Feng Gu, Xiong Xiong, Wei Zhang, Yong-Jie Zhang and Wei-Xing Zhou
Tahir Choulli and Jun Deng
Dorje C. Brody and Stala Hadjipetri
Lyudmila A. Glik and Oleg L. Kritski
Trybu{\l}a, Jakub
Trybu{\l}a, Jakub and Dariusz Zawisza
Dong Han Kim and Stefano Marmi
M. Nabil Kazi-Tani, Possama\"i, Dylan and Chao Zhou
Dieter Hendricks and Diane Wilcox
Michael B. Walker
Paweł Fiedor
Thierry Roncalli
Asim Ghosh, Arnab Chatterjee, Anindya S. Chakrabarti and Bikas K Chakrabarti
Andrey Itkin
Damien Challet and Ahmed Bel Hadj Ayed
Charles D. Brummitt, Rajiv Sethi and Duncan J. Watts
V. Gontis and A. Kononovicius
Peter Klimek, Sebastian Poledna, J. Doyne Farmer and Stefan Thurner
Michael B. Walker
Xiaobing Feng, Woo Seong Jo and Beom Jun Kim
David Landriault, Bin Li and Hongzhong Zhang
Johan Gunnesson and Mu\~noz de Morales, Alberto Fern\'andez
Behzad Mehrdad and Lingjiong Zhu
Dominique Pépin
Andreas Joseph, Irena Vodenska, Eugene Stanley and Guanrong Chen
Xiangyu Cui, Duan Li and Xun Li
Jinli Hu and Amos Storkey
Joseph P. Byrne, Dimitris Korobilis and Pinho J. Ribeiro
Matyas Barczy, Gyula Pap and Tamas T. Szabo
Truc Le
Moritz Duembgen and Leonard C G Rogers
Ladislav Krištoufek
Yang-Yu Liu, Jose C. Nacher, Tomoshiro Ochiai, Mauro Martino and Yaniv Altshuler
Matteo Smerlak, Brady Stoll, Agam Gupta and James S. Magdanz
Soumik Pal and Ting-Kam Leonard Wong
Bin Zou and Abel Cadenillas
Petr Jizba and Jan Korbel
E. M. S. Ribeiro and G. A. Prataviera
Scott Lawrence, Qin Liu and Victor M. Yakovenko
Adam D. Bull
Claudio Fontana
Thomas Bury
Hansjoerg Albrecher and Jevgenijs Ivanovs
Paulwin Graewe, Ulrich Horst and Jinniao Qiu
R\'asonyi, Mikl\'os and Andrea Meireles Rodrigues
Kyong-Hui Kim and Myong-Guk Sin
Kondor, D\'aniel, P\'osfai, M\'arton, Csabai, Istv\'an and Vattay, G\'abor
Walter Farkas, Pablo Koch-Medina and Cosimo Munari
Jozef Baruník, Evžen Kočenda and Lukas Vacha
Stanislao Gualdi, Marco Tarzia, Francesco Zamponi and Jean-Philippe Bouchaud
Gordan Zitkovic
Damien Challet and Ahmed Bel Hadj Ayed
Philip Protter
Matthew Lorig, Stefano Pagliarani and Andrea Pascucci
Albert Ferreiro-Castilla and Kees van Schaik
M. H. A Davis and M. R. Pistorius
Lachapelle, Aim\'e, Jean-Michel Lasry, Charles-Albert LEHALLE and Pierre-Louis Lions
Figueroa-L\'opez, Jos\'e E., Ruoting Gong and Houdr\'e, Christian
Gu\'eant, Olivier
Gabriel Frahm
Cody Blaine Hyndman and Polynice Oyono Ngou
Johanna F. Ziegel
Gabriel Frahm
Chuancun Yin, Yuzhen Wen and Yongxia Zhao
Mazen KEBEWAR
Maria B. Chiarolla and Tiziano De Angelis
M. Nabil Kazi-Tani, Possama\"i, Dylan and Chao Zhou
Maria Letizia Bertotti and Giovanni Modanese
Florin Avram, Zbigniew Palmowski and Martijn R. Pistorius
Taisei Kaizoji, M. Leiss, A. Saichev and D. Sornette
Florian Ziel and Rick Steinert
Hanqing Jin and Yimin Yang
Lyudmila A. Glik and Oleg L. Kritski
Christian Bender and Nikolai Dokuchaev
Abdelali Gabih, Hakam Kondakji, Sass, J\"orn and Ralf Wunderlich
F. Bagarello and E. Haven
G. S. Vasilev
Imre Kondor
Anna V. Vilisova and Qiang Fu
Konstantinos Spiliopoulos
Ren Liu, Johannes Muhle-Karbe and Marko Weber
Ludovic Moreau, Johannes Muhle-Karbe and H. Mete Soner
Erhan Bayraktar, David Promislow and Virginia Young
Bhaskar DasGupta and Lakshmi Kaligounder
Samuel Palmer and David Thomas
Peter Tankov
Wenjun Zhang and John Holt
Rossana Mastrandrea, Tiziano Squartini, Giorgio Fagiolo and Diego Garlaschelli
A. O. Glekin, A. Lykov and K. L. Vaninsky
M. Koz{\l}owska,, T. Gubiec, T. R. Werner, M. Denys, A. Sienkiewicz, R. Kutner and Z. Struzik
Paweł Fiedor
Barski, Micha{\l}
Annika Birch and Tomaso Aste
Bin Zou and Abel Cadenillas
Pi\v{s}korec, Matija, Nino Antulov-Fantulin, Petra Kralj Novak, Mozeti\v{c}, Igor, Gr\v{c}ar, Miha, Irena Vodenska and \v{S}muc, Tomislav
Jianjun Gao, Ke Zhou, Duan Li and Xiren Cao
Teycir Abdelghani Goucha
Fernando F. Ferreira, A. Christian Silva and Ju-Yi Yen
Arash Fahim and Yu-Jui Huang
Erhan Bayraktar and Zhou Zhou
Ludvig Bohlin and Martin Rosvall
Alice X. D. Dong, Jennifer S.K. Chan and Gareth W. Peters
Anatoliy Swishchuk, Maksym Tertychnyi and Winsor Hoang
Anton Golub, Gregor Chliamovitch, Alexandre Dupuis and Bastien Chopard
Sandrine Jacob Leal, Mauro Napoletano, Andrea Roventini and Giorgio Fagiolo
Anatoliy Swishchuk, Maksym Tertychnyi and Robert Elliott
Erhan Bayraktar and Yuchong Zhang
Dietmar Janetzko
Kais Hamza, Fima C. Klebaner, Zinoviy Landsman and Ying-Oon Tan
Ashadun Nobi, Sungmin Lee, Doo Hwan Kim and Jae Woo Lee
Enrico Onali and John Goddard
Dror Y. Kenett, Xuqing Huang, Irena Vodenska, Shlomo Havlin and H. Eugene Stanley
Thibault Jaisson
Ovidiu Racorean
Samuel E. Vazquez
Oumar Mbodji, Adrien Nguyen Huu and Traian A. Pirvu
F. Y. Ouyang, B. Zheng and X. F. Jiang
Yavni Bar-Yam, Marcus A. M. de Aguiar and Yaneer Bar-Yam
Fabian Dickmann and Nikolaus Schweizer
Sorin Solomon and Natasa Golo
Arslan Tariq Rana and Mazen KEBEWAR
Sebastian Poledna and Stefan Thurner
Beatrice Acciaio, Claudio Fontana and Constantinos Kardaras
Steven Kou and Xianhua Peng
Pablo Koch-Medina, Santiago Moreno-Bromberg and Cosimo Munari
Hayafumi Watanabe
Brian P. Hanley
M. Wilinski, B. Szewczak, T. Gubiec, R. Kutner and Z. R. Struzik
Anna Aksamit, Tahir Choulli, Jun Deng and Monique Jeanblanc
Aleksejus Kononovicius and Vygintas Gontis
Erhan Bayraktar, Yu-Jui Huang and Zhou Zhou
Adrien Nguyen Huu
O\'swi\c{e}cimka, Pawe{\l}, Dro\.zd\.z, Stanis{\l}aw, Marcin Forczek, Jadach, Stanis{\l}aw and Kwapie\'n, Jaros{\l}aw
Ladislav Krištoufek and Miloslav Vošvrda
Marcus Hildmann, Andreas Ulbig and Andersson, G\"oran
Bruno Bouchard and Marcel Nutz
Chuancun Yin
Yannis G. Yatracos
Frey, R\"udiger, Abdelali Gabih and Ralf Wunderlich
Ban Zheng, Roueff, Fran\c{c}ois and Abergel, Fr\'ed\'eric
Marcel Nutz
V. Sasidevan and Deepak Dhar
Lorenzo Torricelli
Imre Kondor, Csabai, Istv\'an, Papp, G\'abor, Enys Mones, Czimbalmos, G\'abor and S\'andor, M\'at\'e Csaba
Walter Farkas, Pablo Koch-Medina and Cosimo Munari
Ying Shen, Chuancun Yin and Kam Chuen Yuen
Maria Letizia Bertotti and Giovanni Modanese
Adrien Nguyen Huu and Nadia Oudjane
Kie{\ss}ling, Miriam, Sascha Kurz and Rambau, J\"org
Stanislav S. Borysov and Alexander V. Balatsky
Yoshihiro Yura, Hideki Takayasu, Didier Sornette and Misako Takayasu
Lorenz Schneider
Marcelo M. de Oliveira and Alexandre C. L. Almeida
Sorin Solomon and Natasa Golo
B. Podobnik, A. Majdandzic, C. Curme, Z. Qiao, W. -X. Zhou, H. E. Stanley and B. Li
Brian P. Hanley
John Goddard and Enrico Onali
Radoslava Mirkov, Thomas Maul, Ronald Hochreiter and Holger Thomae
Marcelo J. Villena and Axel A. Araneda
Jarno Talponen and Lauri Viitasaari
M. Duembgen and Leonard C G Rogers
G. Papaioannou, P. Papaioannou and N. Parliaris
Laura Morino and Wolfgang J. Ruggaldier
Eduardo Viegas, Stuart P. Cockburn, Henrik Jeldtoft Jensen and Geoffrey B. West
Gani Aldashev, Serik Aldashev and Timoteo Carletti
Beiglb\"ock, Mathias and Marcel Nutz
W. P. Weijland
Lorenzo Pareschi and Giuseppe Toscani
Fausto Bonacina, D'Errico, Marco, Enrico Moretto, Silvana Stefani and Anna Torriero
Pi\v{s}t\v{e}k, Miroslav and Frantisek Slanina
Damiano Brigo and Andrea Pallavicini
Alfred Galichon, P. Henry-Labord\`ere, and N. Touzi
Worapree Maneesoonthorn, Catherine Scipione Forbes and Gael M. Martin
Boualem Djehiche and L\"ofdahl, Bj\"orn
Didier Sornette and Peter Cauwels
Possama\"i, Dylan and Guillaume Royer
Kr\"atschmer, Volker, Alexander Schied and Z\"ahle, Henryk
Stefano Nasini, Jordi Castro and Pau Fonseca Casas
Pablo Koch-Medina and Cosimo Munari
Peiteng Shi, Jiang Zhang, Bo Yang and Jingfei Luo
James J. Angel
Elisa Appolloni and Andrea Ligori
Guy Cirier
Frantisek Slanina
Paweł Fiedor
Weiyin Fei
Nassim Nicholas Taleb
Masaaki Fujii and Akihiko Takahashi
Stefan Gerhold, Johannes F. Morgenbesser and Axel Zrunek
Tao Xiong, Yukun Bao and Zhongyi Hu
Li-Xin Wang
Li-Xin Wang
Li-Xin Wang
Alexander Kushpel
Robert A Jarrow and Martin Larsson
Mark Tucker and J. Mark Bull
Qian Lin and Frank Riedel
Luxi Chen
Anna Zaremba and Tomaso Aste
Chih-Hao Lin, Chia-Seng Chang and Sai-Ping Li
Emmanuel Bacry and Jean-Francois Muzy
Maxim Bichuch and Ronnie Sircar
Chester Curme, Michele Tumminello, Rosario N. Mantegna, H. Eugene Stanley and Dror Y. Kenett
Gu\'eant, Olivier, Jiang Pu and Guillaume Royer
Gu\'eant, Olivier and Jiang Pu
Thomas Bury
Yipeng Yang and Allanus Tsoi
Peter Carr and Sergey Nadtochiy
Sergey Nadtochiy and Michael Tehranchi
Nassim N. Taleb and Constantine Sandis
William L. Gottesman, Andrew James Reagan and Peter Sheridan Dodds
Andreas Joseph, Stephan Joseph and Guanrong Chen
Alexander Schied
Keita Owari
Ob{\l}\'oj, Jan and Peter Spoida
Matthew Ames, Guillaume Bagnarosa and Gareth W. Peters
Carole Bernard, Jit Seng Chen and Steven Vanduffel
Sebastian Poledna, Stefan Thurner, J. Doyne Farmer and John Geanakoplos
Audrone Virbickaite, Aus\'in, M. Concepci\'on and Pedro Galeano
Misako Takayasu, Hayafumi Watanabe and Hideki Takayasu
Anis Matoussi, Lambert Piozin and Possama\"i, Dylan
Thomas Bury
Jean-Pierre Fouque, Matthew Lorig and Ronnie Sircar
Paolo Guasoni and Gu Wang
Bhaskar DasGupta and Lakshmi Kaligounder
Thomas Bury
Christoph Czichowsky, Johannes Muhle-Karbe and Walter Schachermayer
Kl\"ock, Florian, Alexander Schied and Yuemeng Sun
Kr\"atschmer, Volker, Alexander Schied and Z\"ahle, Henryk
Walter Farkas, Pablo Koch-Medina and Cosimo Munari
Boris Ettinger, Steven N. Evans and Alexandru Hening
Peter Bank and Dmitry Kramkov
Sergio Pulido
David Wozabal and Ronald Hochreiter
|
{"url":"http://econpapers.repec.org/paper/arxpapers/","timestamp":"2014-04-20T00:46:30Z","content_type":null,"content_length":"98044","record_id":"<urn:uuid:ce9b1bbe-6706-499b-9e66-b1215f5aa6f8>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00635-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Roth's Theorem: Tom Sanders Reaches the Logarithmic Barrier
Roth’s Theorem: Tom Sanders Reaches the Logarithmic Barrier
Click here for the most recent polymath3 research thread.
I missed Tom by a few minutes at Mittag-Leffler Institute a year and a half ago
Suppose that $R_n$ is a subset of $\{1,2,\dots, n \}$ of maximum cardinality not containing an arithmetic progression of length 3. Let $g(n)=n/|R_n|$.
Roth proved that $g(n) \ge log logn$. Szemeredi and Heath-Brown improved it to $g(n) \ge log^cn$ for some http://l.wordpress.com/latex.php?latex=c%3E0&bg=ffffff&fg=000000&s=0″ alt=”c>0″ />
(Szemeredi’s argument gave $c=1/4$.) Jean Bourgain improved the bound in 1999 to $c=1/2$ and in 2008 to $c=2/3$ (up to lower order terms).
Erdös and Turan who posed the problem in 1936 described a set not containing an arithmetic progression of size $n^c$. Salem and Spencer improved this bound to $g(n) \le e^{logn/ loglogn}$. Behrend’s
upper bound from 1946 is of the form $g(n) \le e^{C\sqrt {\log n}}$. A small improvement was achieved recently by Elkin and is discussed here. (Look also at the remarks following that post.)
In an earlier post we asked: How does $g(n)$ behave? Since we do not really know, will it help talking about it? Can we somehow look beyond the horizon and try to guess what the truth is? (I still
don’t know if softly discussing this or other mathematical problems is a fruitful idea, but it can be enjoyable.)
We even had a poll collecting people’s predictions about $g(n)$. Somewhat surprisingly 18.18% of answerers predicted that $g(n)$ behaves like $(\log n)^c$ for some $c<1$. Be the answer as it may be,
reaching the logarithmic barrier was considered extremely difficult.
A couple of months ago Tom Sanders was able to refine Bourgain’s argument and proved that $g(n) \ge (\log n)^{3/4}$. Very recently Tom have managed to reach the logarithmic barrier and to prove that
$g(n) \ge (\log n)/(\log \log n)^{5}.$
Quoting from his paper: “There are two main new ingredients in the present work: the first is a way of transforming sumsets introduced by Nets Katz and Paul Koester in 2008, and the second is a
result on the $L_p$-invariance of convolutions due to Ernie Croot and Olof Sisask (2010).”
This is a truly remarkable result.
Let me mention that a closely related problem can be asked in $\Gamma=$$\{0,1,2\}^n$. It is called the cap set problem. A subset of $\Gamma$ is called a cap set if it contains no arithmetic
progression of size three or, alternatively, no three vectors that sum up to 0(modulo 3). If $A$ is a cap set of maximum size we can ask how the function $h(n)=3^n/|A|$ behaves. Roy Meshulam proved,
using Roth’s argument, that $h(n) \ge n$. Edell found an example of a cap set of size $2.2^n$. So $h(n) \le (3/2.2)^n$. Again the gap is exponential. What is the truth? Improving Meshulam’s result
may be closely related to crossing the $\log n$ barrier for Roth’s theorem. Two and a half years ago Tom Sanders himself managed to achieve it , not for the cup problem, but for a related problem
over Z/4Z. Passing the logarithmic barrier for Roth’s theorem suddenly looks within reach. Such a development will also be amazing.
But we met a short time later…
9 Responses to Roth’s Theorem: Tom Sanders Reaches the Logarithmic Barrier
1. Where is Tim Gower…
□ Tim Gowers found bounds of the form $g_r(n) \le (\log\log n)^{c_r}$ where $g_r(n)$ is defined as in the post but with “arithmetic progressions(AP) of length 3″ replaced with “AP of length r”.
To keep the post finite, I did not discuss larger AP’s, not the related questions about AP in primes, and not many other related things. Tim was also Tom’s Ph. D advisor and he also expressed
his belief that $g(n)$ is close to Behrend’s upper bound. (I also first heard about Tom’s results from Tim Gowers a couple of weeks ago.)
2. what if the value of r – ” arithmetic progressions(AP) of length 3″ replaced with “AP of length r” -> increases asymptotically?
via olga-and-katrina-kaif
□ Good question. If I remember correctly $c_r$ behaves in Gowers’s proof like $1/2^r$.
3. A very detailed blog posting about the new results and earlier results was written by William Gasarch and is posted in “computational complexity” http://blog.computationalcomplexity.org/2010/12/
4. This is indeed a very nice result! It would be neat if one could somehow extend this to give improvements for kAPs, k >= 4.
I suspect that the CAP set problem in F_3^n won’t last too much longer — I expect it won’t be long before someone like Tom improves upon the Roth-Meshulam density bound of c/n (where c > 0 is
some constant), to show r_3(F_3^n) = o(3^n).
In fact, I can imagine that something like the following might do it: if the usual Roth-Meshulam Fourier proof just barely fails to show that some set S of density just below density c/n has
3APs, then one can show that that set has a very unusual sub-structure. Using a hypothetical generalization of the L^p-invariance result of myself and Olof Sisask, in combination with this
substructure, I feel it ought to be possible to show that there is a quasi-random subset T of some coset a+H of some subgroup H with dim(H) > loglog(n) or something, such that |S intersect T| >>
|T|. Then, using something like a finite field generalization of Green’s ideas from Roth’s Theorem in the Primes (to show that a positive density subset of a highly quasirandom set contains
3APs), one should be able to show that S intersect T has a 3AP, which then means that S has a 3AP.
5. Dear Ernie, this is very interesting. Of course the sweetest prize (The Hebrew saying is “the cherry in the cream” ok I suppose its somewhat like “the Jewel in the crown” in English) for breaking
the log n barrier for 3-AP and r-AP was the results about primes until Green and Tao found the way to use quasi randomness and Szemeredi’s theorem for getting the cherry instead. It did not occur
to me before that the quasi randomness idea can be used (perhaps already been used) to improve bounds on general Roth-type theorems (rather than avoid the need for better bound for specific
cases). Also I wished that an opportunity will arise for an explanation of the Croot-Sisask L^p invariance result. So if you are in the mood to explain about it, this could be great.
This entry was posted in Combinatorics, Open problems and tagged Endre Szemeredi, Jean Bourgain, Klaus Roth, Roger Heath-Brown, Roth's theorem, Tom Sanders. Bookmark the permalink.
|
{"url":"http://gilkalai.wordpress.com/2010/11/24/roths-theorem-sanders-reaches-the-logarithmic-barrier/?like=1&source=post_flair&_wpnonce=104f7cb3e5","timestamp":"2014-04-19T01:51:59Z","content_type":null,"content_length":"123401","record_id":"<urn:uuid:aeb07417-22f9-4ae2-8d15-115fd4be515f>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00184-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Elmhurst, IL Trigonometry Tutor
Find an Elmhurst, IL Trigonometry Tutor
...I have trained math teams which have ranked highly at the state level and individuals who have competed nationally. I have a B.S. in Mathematics, a Masters in Math Education, a Masters Level
Certificate in Gifted, and have Illinois Certification for teaching math 6-12. I have four years experie...
24 Subjects: including trigonometry, calculus, geometry, GRE
...I tutor because I love working with children. I am happy to work with anyone who is willing to work and am very patient with students as they try to understand new concepts. I have been in the
Glenview area the past four years and have tutored high schoolers from Notre Dame, New Trier, GBS, GBN, Deerfield High, Loyola Academy and Woodlands Academy of the Sacred Heart.
20 Subjects: including trigonometry, chemistry, calculus, physics
...In my current role I work in a problem solving environment where I pick up issues and deliver solutions by working with different groups and communicating to management level. This type of work
helped me to communicate better and to make people understand at all levels. My goal in teaching is to provide the nurturing environment that allows children to learn and grow intellectually.
16 Subjects: including trigonometry, chemistry, physics, calculus
...Microsoft Outlook is an email client that has grown to become a personal information manager. It can work as a stand alone application or it can be linked with a network. The program allows
users to access emails from multiple sources, track schedules and tasks, enter contact information, and utilize journal and note features.
39 Subjects: including trigonometry, reading, English, writing
...I spent a summer teaching at a high school math camp, I have been a coach for the Georgia state math team for a few years now, and I was a grader for the IU math department. I've tutored many
people I've encountered, including friends, roommates, and people who've sat next to me on trains, aside...
13 Subjects: including trigonometry, calculus, geometry, statistics
|
{"url":"http://www.purplemath.com/Elmhurst_IL_trigonometry_tutors.php","timestamp":"2014-04-18T08:31:40Z","content_type":null,"content_length":"24565","record_id":"<urn:uuid:e96a7332-9093-459e-b005-7ffa03ab9f8b>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00298-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Place Value: Teaching Strategies? - ProTeacher Community
Place Value: Teaching Strategies?
Hello All,
I'm a tenth-year teacher, but this is my first year doing inclusion support for upper primary grades. My kiddos (3rd grade, autism spectrum and/or learning disabilities) are struggling mightily with
place value (tens, hundreds, thousands, ten thousands--writing out expanded notation and also identifying which digit represents which quantity in, say, a number like 3456) , and I confess I've never
been trained in delivering upper grade math instruction.
Any tips or tools you've found particularly helpful with this skill? Any two-minute arguments for why, exactly, it's worth spending a lot of time on it?
|
{"url":"http://www.proteacher.net/discussions/showthread.php?t=198183","timestamp":"2014-04-21T09:44:00Z","content_type":null,"content_length":"53546","record_id":"<urn:uuid:719a6871-01e6-4a39-9d76-d8d30796f8e1>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00260-ip-10-147-4-33.ec2.internal.warc.gz"}
|