Wolfram Demonstrations Project
Johnson's Theorem
Let three circles of equal diameter intersect at a point H and intersect pairwise at points A, B, and C. Then the circumcircle of the triangle ABC has the same diameter as the other circles.
Drag the purple points or the slider to change the figure.
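As a quick numerical check of the theorem (our own sketch in Python, not part of the demonstration): put the common point H at the origin, so each circle's center lies at distance r from H; two equal-radius circles through the origin then meet again at the vector sum of their centers, which makes the verification a few lines long.

```python
import math

# Numerically verify Johnson's theorem (a sketch; not part of the demo).
# H is at the origin; the three centers sit at distance r from H, so
# every circle passes through H.
r = 1.0
O1, O2, O3 = [(r * math.cos(t), r * math.sin(t)) for t in (0.3, 2.1, 4.0)]

# Two equal-radius circles through the origin intersect again at Oi + Oj.
A = (O2[0] + O3[0], O2[1] + O3[1])
B = (O1[0] + O3[0], O1[1] + O3[1])
C = (O1[0] + O2[0], O1[1] + O2[1])

def circumdiameter(P, Q, R):
    a, b, c = math.dist(Q, R), math.dist(P, R), math.dist(P, Q)
    s = (a + b + c) / 2
    area = math.sqrt(s * (s - a) * (s - b) * (s - c))  # Heron's formula
    return a * b * c / (2 * area)  # twice the circumradius

print(circumdiameter(A, B, C))  # ~2.0, the circles' common diameter 2r
```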
Top 10 “Negative” Inventions
Hubble Key Project Team and High-Z Supernova Search Team, NASA, ESA
Throughout the history of math and science, supposedly impossible negative things have repeatedly turned out to be important both mathematically and physically. Here is my unofficial (no, let’s make
that official) Top 10 list of “negative” inventions.
10. Negative refraction: Victor Veselago, 1967
Refraction refers to how much light slows down (and therefore appears to be bent) when it passes through some medium. Refraction is quantified by an index relative to the refractive index of the
vacuum, which is equal to 1. All natural materials have a positive index of refraction, which means light is always bent in the same direction. Veselago, a Russian physicist, figured out that a
refraction index of less than zero was possible in theory, meaning light would bend in the opposite direction from the usual. Three decades later physicists began to figure out how carefully
constructed artificial “metamaterials” actually could bend light the “wrong” way, leading to current research on cloaking devices.
9. Negative electric charge: Benjamin Franklin, 1747
Franklin figured out that electric charge comes in positive and negative forms; he just guessed wrong about which was which. As a result, electrons carry negative charge even though they are the carriers of electric current.
8. Negative mass (or negative weight): Friedrich Albrecht Carl Gren, 1786
OK, this one is tricky. Around 1700 the German physician Georg Stahl articulated the phlogiston theory (based on an idea of Johann Becher), an elaborate explanation for why things burn. Supposedly
they contained a flammable substance (phlogiston) that disappeared into the air during combustion. It’s often asserted that Stahl’s phlogiston had negative weight, but that idea appeared only much
later, when experiments showed that sometimes the combustion products (ashes) weighed more than the original burned substance. Gren, a German chemist, suggested that negative mass could account for
the discrepancy. Both Stahl and Gren were wrong, by the way.
7. Negative Energy: Hendrik Casimir, 1948
Paul Dirac imagined a sea of negative energy electrons in the late 1920s during his work on quantum mechanics that led to the prediction of the existence of antimatter. But let’s give this prize to
Casimir, who figured out how to create negative energy in a physical apparatus. You just have to put two mirrors, or shiny metal plates, very close to each other. Since the amount of energy in empty
space is set at zero, the plates should just sit wherever you put them. But in fact, they are slightly attracted to each other (the Casimir effect). That’s because empty space actually isn’t empty,
but has a bunch of quantum particles popping into and out of existence. Being quantum particles they behave like waves. When the plates are close enough together, the in-between space isn’t big
enough for some of the waves. So there are fewer particles in the gap than there should be, hence less than zero energy. Really.
6. Negative pressure: Saul Perlmutter et al., Brian Schmidt et al., 1998
We’re not talking about vacuum pumps here, but rather cosmological negative pressure, which requires the universe’s expansion to accelerate. That’s what the two teams led by Perlmutter and Schmidt
discovered when they measured the brightness of distant supernovas — evidence that the universe has, for the last few billion years, been expanding at an ever increasing rate. Because the universe is
expanding faster and faster, some force other than ordinary gravity must be at work, because gravity would slow the expansion rate down. That force must exert negative pressure, because ordinary
pressure would compress space; negative pressure expands it.
5. Negative temperature: Robert Pound, Norman Ramsey, 1951
We’re not talking about Antarctica here, but rather negative temperature on the absolute scale, where absolute zero represents the complete absence of heat, and hence supposedly the coldest
temperature possible. Which it is. But it turns out that mathematically, coldest is not the same as lowest. On the absolute scale, temperature and entropy are related in such a way that in all
ordinary circumstances the temperature is positive. Temperature is related to the average velocity (or energy) possessed by molecules, and most of the molecules won’t be as energetic (fast) as the
very fastest. If they were, the fastest would just go even faster. But if you put an upper limit on how fast the molecules can go, then they all could be as fast as the fastest. In this case, when
the majority of molecules are at the maximum energy, the ordinary formula for temperature is turned upside down, and that makes the temperature negative. Even though the temperature is negative, most
of the atoms are very energetic, so the system is technically hotter than any system with a positive temperature (heat would always flow from a negative temperature system to a positive temperature
system, which by definition makes the positive system colder).
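A gloss the post leaves implicit (our addition, not in the original): statistical mechanics defines absolute temperature through entropy,

1/T = ∂S/∂E.

Ordinarily adding energy increases a system's entropy, so T is positive; once most molecules are pinned at the energy ceiling, adding energy decreases the entropy, the derivative flips sign, and T comes out negative.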
4. Negative probabilities: Paul Dirac, 1920s
In his work leading to the prediction of antimatter, Dirac found that not only negative energies but also negative probabilities entered into the equations. Ordinarily, the chance of something happening (its probability) is regarded as somewhere between 0 (no chance at all, like the Cubs winning the World Series) and 1 (absolutely certain, like A-Rod being guilty of using PEDs). Having
a less-than-zero chance of happening seems meaningless. But Dirac showed that in some situations negative probabilities at intermediate steps in quantum calculations could be useful, a point later
discussed by Richard Feynman. Recently the mathematician John Baez has blogged in detail about the whole negative probability business.
3. Negative curvature: Carl Friedrich Gauss, 1824
Except maybe for Newton, Gauss was the greatest mathematician of his millennium. He figured out that it would be possible to devise a geometry in which the sum of a triangle’s angles was less than
180 degrees, which means the curvature of such a space would be negative. He usually doesn’t get credit for inventing non-Euclidean geometry, though, because he didn’t publish that work. He was a
perfectionist and wouldn’t publish anything until he had everything worked out so well that nobody could find any way to criticize it. (In other words, if Gauss had written this blog it would never
have been posted.)
2. Negative numbers: Brahmagupta, seventh century
There is some evidence that the ancient Chinese possessed the concept of negative numbers, but Brahmagupta, a Hindu astronomer, gets credit for explicitly articulating their status as actual numbers
(and not “absurd impossibilities” as some of the Greeks thought). Brahmagupta called negative numbers “debts” (positive numbers were “fortunes”) and he outlined the arithmetical rules governing them.
For instance: “The product of two debts … is one fortune.” Thus Brahmagupta anticipated by more than 13 centuries the mantra of Edward James Olmos in the movie Stand and Deliver: “A negative times a
negative equals a positive.”
1. Square roots of negative numbers: John Wallis, 1673
Like negative numbers, the idea of the square root of a negative number was initially regarded as an impossibility, as negative numbers are not the square of anything. But Wallis, an English
mathematician, argued otherwise; as Paul Nahin says in his book on the subject, Wallis “made the first rational attempt to attach physical significance” to the square root of –1. Wallis pointed out
that negative numbers are not hard to visualize — they’re just the numbers to the left of zero on a number line. But if you add another axis to the number line (pointing straight up, from zero) you
then have a whole plane to the left of zero. “Now what is admitted in lines must, on the same Reason, be allowed in Plains also” (he meant “planes”), Wallis wrote. And since you can draw a square in
a plane, a side of a square on the negative side of zero would correspond to the square root of the negative number. Far from being physically meaningless, roots of negative numbers turn out to be
necessary ingredients in the equations of quantum mechanics.
True Shooting Percentage is the Truth
DeAndre Jordan led the NBA with a 64.3% field goal percentage. But Jordan stinks at the free throw line (38.6%) and doesn't make three-pointers. Was he really the most accurate shooter? Of course not.
After all, if Kevin Durant goes 5-for-10 (50.0% field goal percentage) on three-pointers, and Jordan goes 7-for-10 (70.0% FG%) on two-pointers, Durant has just outscored him 15-to-14 even though his
field goal percentage is 20 points lower! That’s where True Shooting Percentage comes in.
TS% factors in threes and free throws. Here's the calculation:
TS% = Points ÷ (2 × (All Field Goal Attempts + 0.44 × All Free Throw Attempts))
The calculation is based on the idea that two points per shot attempt is a perfect outcome. That's why the attempt total is multiplied by two. But why are free throw attempts multiplied by 0.44?
If you go to the free throw line in the NBA, you usually get two free throws. So why not free throw attempts times 0.5? A two-shot trip to the line would be 0.5 + 0.5, equaling one field goal
attempt. Right?
Well, not every trip to the line results in two free throw attempts. If you get fouled while making a basket, you go to the line for one shot (an "and-1"). And in the NBA, technicals result in one free throw. The 0.44 multiplier is an assumed league average that accounts for those cases. So no, TS% isn't perfect, but it is much better than FG%.
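As a quick sanity check, here's the formula in code (our own sketch; the numbers are the hypothetical Durant/Jordan example from above):

```python
def true_shooting(points, fga, fta):
    # Points per scoring chance, where each chance is worth at most 2 points.
    return points / (2 * (fga + 0.44 * fta))

print(true_shooting(15, 10, 0))  # Durant's 5-for-10 on threes: 0.75 (75.0%)
print(true_shooting(14, 10, 0))  # Jordan's 7-for-10 on twos:   0.70 (70.0%)
```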
The top five in TS% last season (minimum 40 games and 20 minutes per game):
1. Tyson Chandler, New York Knicks: 67.1%
2. Kevin Durant, Oklahoma City Thunder: 64.7%
3. LeBron James, Miami Heat: 64.0%
4. Kyle Korver, Atlanta Hawks: 63.7%
5. Shane Battier, Miami Heat: 62.3%
MathGroup Archive: November 2009 [00189]
Re: Bug with Sequence
• To: mathgroup at smc.vnet.net
• Subject: [mg104619] Re: [mg104568] Bug with Sequence
• From: Leonid Shifrin <lshifr at gmail.com>
• Date: Thu, 5 Nov 2009 03:48:38 -0500 (EST)
• References: <200911040634.BAA08584@smc.vnet.net>
Hi Daniel,
This does not seem like a bug to me. Let us see how the list is stored after
the first assignment:
In[1]:= t = {1, 2, 3}; j = 0;
t[[1]] = Sequence[];
OwnValues[t]

Out[1]= {HoldPattern[t] :> {Sequence[], 2, 3}}
We see that the first element is still there (you have actually noticed
that!). We don't really change the size of a list (as stored in OwnValues
etc) by assigning Sequence[] to some of its elements - Sequence[] is just as
good an element as anything else. All the Sequence[] magic happens at
evaluation stage - the size of an *evaluated* list becomes *effectively*
smaller as a result of evaluation (during which Sequence-s inside heads
without SequenceHold attribute are spliced).
This seems entirely consistent to me, given that lists are internally
implemented as arrays and then to really change the size of a list as it is
stored, we would generally need O(n) operations where n is the size of the
list. OTOH, operations like Part and assignments should not be concerned with
anything other than just assigning parts of expressions, and such list
rearrangements should be beyond their scope IMO.
Returning to your example, all subsequent assignments simply re-assign
Sequence[] to the first element, and don't touch the rest - thus the result
that puzzled you.
The following two modifications achieve what you presumably wanted:
In[2]:= t = {1, 2, 3}; j = 0;
While[++j < 4, t = Delete[t, 1]; Print[t]]
In[3]:= t = {1, 2, 3}; j = 0;
While[++j < 4, t[[j]] = Sequence[]; Print[t]]
Note again that the final state of the list <t> is different in each case -
it is an empty list in the first method and a list of 3 Sequence[] elements
in the second. However, should you just use the <t>, in most cases you won't
see the difference.
Hope this helps.
On Tue, Nov 3, 2009 at 10:34 PM, dh <dh at metrohm.com> wrote:
> Hello,
> has anybody an explanation for the behavior of "Sequence"? I think it is
> an ugly bug.
> Consider the following that should successively shorten the list t:
> t = {1, 2, 3}; j = 0;
> While[ ++j < 4, t[[1]] = Sequence[]; Print[t]]
> this returns: {2,3} three times. Dropping of the first element only seems
> to work once.
> If you say Information[t] you get:
> t={Sequence[],2,3}
> Daniel
Comparison of two normative paediatric gait databases
The availability of age-matched normative data is an essential component of clinical gait analyses. Comparison of normative gait databases is difficult due to the high-dimensionality and temporal
nature of the various gait waveforms. The purpose of this study was to provide a method of comparing the sagittal joint angle data between two normative databases. We compared a modern gait database
to the historical San Diego database using statistical classifiers developed by Tingley et al. (2002). Gait data were recorded from 60 children aged 1–13 years. A six-camera Vicon 512 motion analysis
system and two force plates were utilized to obtain temporal-spatial, kinematic, and kinetic parameters during walking. Differences between the two normative data sets were explored using the
classifier index scores, and the mean and covariance structure of the joint angle data from each lab. Significant differences in sagittal angle data between the two databases were identified and
attributed to technological advances and data processing techniques (data smoothing, sampling, and joint angle approximations). This work provides a simple method of database comparison using
trainable statistical classifiers.
One of the main objectives of gait analysis is to identify deviations in a patient's gait from 'normal' movement patterns. A critical component of gait analysis therefore, is the availability of
age-matched normative databases. Researchers of paediatric gait typically develop their own normative databases or refer to published data. Substantial data on temporal-spatial [1-6] and kinematic
parameters [7-10] are available in the literature for paediatric gait. Existing trunk kinematic [11] and joint kinetic [12-18] databases tend to consist of small sample sizes and are sparser in the literature.
Until recently, we used a database that was developed at the Children's Hospital, San Diego [2]. This large database contains temporal-spatial and joint angle data for 409 gait cycles for children
aged 1.0 to 7.0 years old. Trunk and kinetic data were not available in this database. Difficulties in comparing patient data to normative data originating from another lab are partially attributable
to differences in marker sets, data processing techniques, and consistency of clinicians. Other disparities arise from advances in computer technology, which have dramatically improved motion
analysis systems and data processing capability over the last decade. These differences hamper construction of algorithms to separate 'abnormal' individual gait patterns measured at modern labs from
normative gait patterns that were established using older technologies and algorithms. Based on this, we began developing a new paediatric database at our lab using modern instrumentation and
numerical algorithms. Of interest, were the differences in normative profiles between the two databases and its affect on gait classification results. Therefore, the objective of this study was to
provide a method of comparing the sagittal joint angle data between two normative databases. We compared the new database to the historical San Diego database using statistical classifiers.
Sixty children aged 1–13 years old were recruited by distributing bulletins on campus and at local daycare centres. One child was non-compliant during data collection, reducing the sample size to 59 children. These children were divided into two groups: 1) an 'immature' group consisting of 14 children aged less than 3 years, and 2) a 'mature' group containing 45 children aged 3.0 years and older.
This age division was based on previous research regarding the onset of mature gait patterns [2]. This assumption was verified using statistical classifiers that could identify mature, normative gait
from immature patterns [19]. Ethical approval for this study was obtained from the University Ethics Committee.
A Vicon 512 motion capture system (Oxford Metrics Ltd.) with six infrared cameras (JAI 60 Hz interlaced) was employed to track the three-dimensional trajectories of reflective markers placed on the
subjects' skin. Markers of 25 mm and 14 mm diameter were used depending on body size to reduce crossover and merging. Each trial was subjectively examined for merges or crossovers of marker
trajectories. The calibrated volume was approximately 6.7 m × 2.4 m × 1.5 m. Two force plates (Kistler 9281B21, Kistler Instruments, Winterthur, Switzerland and AMTI BP5918, Advanced Mechanical
Technology, Incorporated, Newton, MA, USA) collected the three-dimensional ground reaction forces and moments during each gait cycle. Two digital cameras, a weight scale, and calipers were used to
obtain anthropometric measures.
All data collection was conducted at the motion analysis laboratory at the University of New Brunswick (UNB). Twenty reflective markers, representing key anatomical landmarks, were placed directly on
the skin of each participant (Table 1). Children were encouraged to perform at least 20 trials if possible. Immediately following completion of the gait trials, the reflective markers were removed
and a new segment inertia marker set [20] was applied. Participants were then asked to stand in the anatomical position within a calibration frame, while simultaneous front and side digital
photographs were taken. Correct positioning of the body for these images was accomplished through verbal instructions or passive positioning of limbs by parents within the calibration frame.
Anthropometric data such as joint width (using calipers), height and mass were also measured [21].
Table 1. Marker locations for gait trials
Data Analysis
The biomechanical model consisted of the left and right foot, shank, thigh and the pelvis and trunk segments. Embedded coordinate systems were created using the three non-collinear markers on each
segment. The segment-based coordinate systems were transformed to the instantaneous, joint center-based, embedded coordinate systems using alignment data from the static capture trial and joint width
measurements. Joint center determination, marker configuration, marker alignment, and kinematics data reduction protocol were identical to Davis et al. (1991) with the exception of the following: 1)
the heel marker was used during dynamic trials, and 2) an embedded coordinate system was created at the ankle joint using the long axis of the foot (heel – toe), and the transverse axis of the shank
segment. In doing so, the flexion axis of the ankle was aligned with the anatomical frontal plane of the shank.
Cadence, velocity, and percent of cycle spent in single stance were calculated for each successful gait cycle. The single gait cycle, which most closely approximated the individual mean of all gait
cycles on these three measures (based on an unweighted, least-squares calculation), was selected as the final trial for analysis [2]. Joint angles were computed using Euler angles in a yxz sequence,
corresponding to flexion/extension, adduction/abduction, and internal/external rotation. Similar to Sutherland et al. [2], joint angle data were also computed using the projection angle algorithms
and then approximated by finite Fourier series using 6 harmonics. Net joint moments and joint power for the hip, knee, and ankle joints were estimated using an inverse dynamics approach. A
mathematical model of the human body was used to estimate the segment inertial properties of each child [20]. The required absolute linear and angular velocities and accelerations were calculated
from the embedded coordinate systems using a five-point central difference method of derivation [22]. Prior to differentiation, raw coordinate data was filtered using a second order, 6 Hz low-pass
Butterworth filter.
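As a rough illustration of the smoothing and differentiation steps just described, a sketch in Python with SciPy (our own; parameter names and array layout are assumed, not taken from the authors):

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs = 60.0                                        # camera frame rate (Hz)
b, a = butter(2, 6.0 / (fs / 2.0), btype="low")  # 2nd-order, 6 Hz Butterworth

def smooth(raw_coord):
    """Low-pass filter one marker-coordinate time series
    (filtfilt runs the filter forward and backward, so it is zero-lag)."""
    return filtfilt(b, a, raw_coord)

def five_point_derivative(x, dt=1.0 / fs):
    """First derivative by the five-point central difference stencil."""
    v = np.zeros_like(x)
    v[2:-2] = (x[:-4] - 8 * x[1:-3] + 8 * x[3:-1] - x[4:]) / (12 * dt)
    return v

# Example: velocity of a synthetic coordinate trace sampled at 60 Hz.
velocity = five_point_derivative(smooth(np.sin(np.arange(0, 2, 1 / fs))))
```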
Statistical Analysis: Comparisons with San Diego Database
Only the sagittal hip, knee, and ankle kinematics were compared to the San Diego database, for 2 reasons: 1) only kinematic data were readily available for comparison from the San Diego database, and
2) sagittal hip, knee, and ankle joint angles tend to demonstrate greater consistency across labs than do smaller rotations in other planes [23]. The statistical analysis was based on a
one-dimensional index of normal gait developed by Tingley et al. [24]. To calculate the index of normal gait, Tingley et al. [24] calculated eleven interpretable functions from the San Diego mature
normative data (children aged 3–7 years), namely the mean sagittal joint angle patterns for hip, knee and ankle (3 functions), the mean angular velocities of the three joints (3 functions), the
angular acceleration patterns of the three joints (3 functions), and two functions that capture the primary frequencies of knee and ankle angle patterns. To remove bias due to marker misplacement,
the classifier subtracts each child's mean angle from the data prior to analysis. This recentred data is used for statistical analyses (Figures 1, 2, 3, 4). A key finding in Tingley's study [24] was
that each child's pattern of variation from the group mean could be approximated as a linear combination of these interpretable functions. The gait index developed in this work is simply a squared
distance calculated in 11 dimensions (Mahalanobis distance). The gait index classifies children as normal, abnormal, or unusual based on calculations of population percentiles and standard tables of
the F distribution [2]. Using this statistical tool, gait patterns of mature children in the UNB normative database were evaluated based on their deviation from San Diego mean normative values.
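In outline, the index reduces to a squared Mahalanobis distance in the 11-dimensional coefficient space. A minimal sketch with placeholder data (our own; not Tingley et al.'s code):

```python
import numpy as np

# Placeholder: one 11-dimensional coefficient vector per normative cycle.
rng = np.random.default_rng(0)
features = rng.normal(size=(83, 11))

mean_vec = features.mean(axis=0)
cov = np.cov(features, rowvar=False)

def gait_index(x):
    """Squared Mahalanobis distance from the normative mean; large values
    (judged against F-distribution percentiles) flag unusual/abnormal gait."""
    d = x - mean_vec
    return float(d @ np.linalg.solve(cov, d))

score = gait_index(rng.normal(size=11))  # score one new cycle
```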
Figure 1. Mean knee flexion angle versus percent cycle for 45 normal subjects, with a 95% bootstrap prediction band. For statistical purposes, the mean of each individual curve was removed prior to
computation of the overall mean and bootstrap prediction bands.
Figure 2. Mean ± (2 S.E.) hip, knee, and ankle joint angles versus percent cycle for UNB normative data (thin lines) with San Diego mean data superimposed (thick lines). For statistical purposes, the
mean of each individual curve was removed prior to computation of the overall mean.
Figure 3. Mean results for knee flexion angles versus percent cycle using UNB's euler (-) and projected angle data (---) and San Diego projected angle data (- -). For statistical purposes, the mean
of each individual curve was removed prior to computation of the overall mean.
Figure 4. Sagittal hip, knee, and ankle angles (+) versus percent cycle for a hypotonic subject, with 95% bootstrap prediction band. For statistical purposes, the mean of each individual curve was
removed for the individual and mean data.
We expected that the classification of UNB normative data using the San Diego mean normative values would not produce accurate results based on the differences in technology and computational methods
between the two databases. Therefore, after classifying the UNB gait patterns using the San Diego mean normative values, the gait index classifier was 'recalibrated' so that each UNB normative gait
cycle was classified against the UNB mean normative data (instead of San Diego mean data). New values for the interpretable functions and covariance matrix (required for the distance calculation)
were computed. New index scores were calculated for the UNB data and the two sets of classification results were compared. The ability of the recalibrated index to detect abnormal gait patterns was
tested by computing the gait index for children under the age of 3 years.
A further examination of the differences between the San Diego and UNB normative data sets was conducted using a multivariate analogue of the two-sample t-test [25]. Differences in the mean and
covariance structure of the joint angle data from each lab were investigated. These tests compared differences between 1) San Diego projected angle data and UNB Euler angle data, and 2) San Diego
projected angle data and UNB projected angle data. To examine whether the sampling methods and Fourier approximations used by Sutherland et al. [2] were responsible for observed differences in
sagittal joint angle data between UNB and San Diego, independent t-tests were used to test for mean differences between raw and smoothed San Diego and UNB data.
The classification results of the index scores for the UNB data were compared to those of the more commonly used bootstrap prediction band methods for assessing clinical cases. Bootstrap techniques
(B = 500 samples) were applied to the UNB normative data (n = 83 cycles) to establish prediction regions for normal sagittal knee angle data; ninety-five percent prediction regions (m_p = 3.02)
were calculated [26]. A knee flexion curve was considered abnormal if any data point was more than 3.02 standard deviations from the mean curve (Figure 1). The classification results obtained using
these bootstrap techniques were then compared to the UNB index scores for a clinical case (hypotonic gait).
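The band check itself is simple to express; a sketch under an assumed data layout (our own, not the authors' code):

```python
import numpy as np

rng = np.random.default_rng(0)
curves = rng.normal(size=(83, 101))  # placeholder recentred knee-angle curves

# In the study the 95% multiplier m_p = 3.02 came from B = 500 bootstrap
# resamples of the normative set; here we just apply it pointwise.
mean_curve = curves.mean(axis=0)
sd_curve = curves.std(axis=0, ddof=1)
upper = mean_curve + 3.02 * sd_curve
lower = mean_curve - 3.02 * sd_curve

def flag_abnormal(curve):
    """A curve is flagged if any point leaves the prediction band."""
    return bool(np.any((curve > upper) | (curve < lower)))
```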
The classification of sagittal joint angle data (Euler method) for children aged 3–13 years at UNB using the classifier based on San Diego mean normative values resulted in 49% of 83 cycles being classified as unusual or abnormal. When the gait index was recalibrated using the UNB normative data, the new classification results were similar to those of Tingley et al. [24]: the score behaved like an F(11,61) statistic for the training data, classifying 94% of cycles as normal (as expected by the nature of the training). Further tests using the gait patterns of younger children showed
that the recalibrated classifier was also capable of detecting 82% of immature gait patterns (23 out of 28 cycles) at UNB as unusual or abnormal.
The differences between the two gait index results (49% versus 94% classification) were investigated by comparing the mean and covariance structure of the sagittal angles from each database. Both
tests yielded highly significant p-values (p = 0.000). Figure 2 shows the UNB mean hip, knee and ankle joint angle curves (± 2 S.E.) with the San Diego mean normative data superimposed. Although the
databases appear similar, the two are quite distinct. For example, the peak mean knee flexion between the two databases is more than 2 standard errors apart. When UNB sagittal joint angles were
recalculated using a projected angle approach, mean angle patterns were slightly closer to those of San Diego at the beginning of the cycle and midswing (Figure 3), but were still significantly
different (p = 0.000).
Data processing techniques between the two labs were suspected to be partially responsible for observed differences between the databases. Sutherland et al. [2] filmed the gait cycles at
approximately 50 Hz and later reduced each individual's data to 15–35 evenly spaced frames. The approximation of joint angle curves using Fourier series could yield slightly different results for a
curve sampled at 15–35 Hz versus 60 Hz. The results of the independent t-tests of the mean sagittal knee angle (prior to recentering) for both databases showed a significant difference of 8.91° (S.E.
± 1.10°) and 12.04° (S.E. ± 0.89°) for Fourier and raw data, respectively (Table 2). The Fourier approximations actually reduced the difference between the mean knee angles of both datasets.
Table 2. Comparison of raw vs smoothed sagittal knee angle across two labs
A comparison of the index scores and the bootstrap prediction bands revealed differences in classification results for clinical gait data. Sagittal hip, knee, and ankle angle data for a hypotonic
child [19] are shown in Figure 4 with respect to the 95% prediction range. Both knee and ankle data were classified as 'unusual' and 'abnormal' by the UNB index score, but not by the 95% prediction
bands. The main reason for the discrepancy in classification between the two methods is that the predictions bands simply analyzed deviations in magnitude and do not consider differences in the
pattern of motion. Unless data points deviate from the mean curve by more than 3.02 standard deviations, the child will not be detected. Only a few ankle angle data points extend beyond the
boundaries during terminal swing in Figure 4. However, the three graphs show temporal delays in angle data generating a different pattern of motion compared to normative data. The UNB index scores
detected this difference in the shape of the curve.
Differences in normative gait databases are difficult to assess due to the high-dimensionality and temporal nature of the various kinematic waveforms. The purpose of this study was to provide a
method of comparing the sagittal joint angle data between two normative databases. We compared a modern gait database to the historical San Diego database using statistical classifiers developed by
Tingley et al. [24]. Differences between the two normative data sets were explored using the index scores, and the mean and covariance structure of the joint angle data from each lab [24]. In
addition, the boundaries of normality established by the statistical classifier were compared to Bootstrap methods.
The significant differences found between the San Diego and UNB normative databases are likely due to multiple factors. Since the databases were established over 20 years apart, technological
differences are most likely a predominant factor. Sutherland et al. [2] used a pseudo three-dimensional system that consisted of four independent 16 mm motion picture cameras, a Vanguard motion
analyzer and a Graf-pen sonic digitizer, requiring the manual digitization of images and joint centers. In contrast, UNB used a three-dimensional system with 6 high-resolution cameras, semi-automatic
marker digitization and labeling, and automated joint center estimation techniques.
Changes in data processing techniques over the last two decades are also likely a major cause of differences between the two normative datasets. The results of the multivariate analyses showed that
the data sets differ in part due to the algorithms used to calculate the joint angles. UNB's normative data showed more similarity to the San Diego values when joint angles were calculated as
projected angles, similar to Sutherland et al. [2], as opposed to Euler angles. In addition, comparisons of sampling rates and smoothing techniques showed that the Fourier approximations were related
to decreases in the mean amplitude of the knee angle data. San Diego mean sagittal knee angle data was approximately 9° higher than UNB mean knee angle data when Fourier approximations were compared.
In contrast, raw angle comparisons generated a difference of approximately 12°.
Once normative data has been collected, establishing bounds of normality for each gait parameter is necessary. Bootstrap methods may be used to establish prediction bands of normality [26,27]. This
technique captures large point-wise deviations from the mean of the training data set. It does not necessarily capture deviations in patterns of motion, nor does it consider correlations between
curves (i.e. knee and ankle angle). The UNB index scores are capable of classifying gait data based on magnitude of deviation, pattern of motion, and correlations between multiple joint angle curves.
Therefore, the UNB classifier is able to extract more complex features of each gait cycle that may be missed by bootstrap methods. The finding that the calibrated index of normality identified 82% of cycles for children under 3 years old as unusual or abnormal supports the findings of Tingley et al. [24] that the eleven interpretable functions can successfully classify gait patterns.
Given that only robust sagittal angles were compared in this analysis, it is likely that the differences between labs would be greater for the other planes of motion. The clinical significance of the
differences found between the UNB and San Diego databases in this study are uncertain. However, in light of these differences, we will continue efforts to expand the new normative database and
retrain the gait classifier as individuals are added.
The authors wish to gratefully acknowledge the Natural Sciences and Engineering Research Council (NSERC), the New Brunswick Women's Doctoral Scholarship Program, and the Institute of Biomedical
Engineering, UNB.
Posts by Cheryl
Total # Posts: 290
what is the difference between a solution set and a replacement set
what's five times a number squared?
I need a good explanation to understand why a positive number minus a negative number equals a positive, or in other words you add it. ex 10-(-5)= 15
Please explain why a positive number minus a negative number equals positive and you add it together??? Makes no sense at all. ex 10-(-3)=13
Area of a painting. A rectangular painting with a width of x centimeters has an area of x2 + 50x square centimeters. Find a binomial that represents the length.
I need help with this problem. Factoring with Two Variables h^2 9hs + 9s^2
how high must a 2 kg basketball be thrown so that, at the top of its arc, it has a potential energy of 160J?
The perimeter of a rectangular backyard is 6x + 6 yards. If the width is x yards, find a binomial that represents the length.
Multiply 3x^8y^5 * 6y^5 * 2x Simplify your answer as much as possible.
A total of tickets were sold for the school play. The number of student tickets sold was two times the number of adult tickets sold . How many adult tickets were sold?
How do i write a thesis statement about greed for the novel The Lion, The Witch and The Wardrobe?
Is this a correct answer? I'm trying to integrate the marginal revenue function to get the function for total revenue. R'(x) = 30/(2x+1)^2 + 30; R(x) = -15/(2x+1) + 30x
I am stumped on how to break this down so I can use the chain rule to get the integral of this equation. (30/(2x+1)^2) + 30
suppose that the height of an object shot straight up is given by h=544t -16t^2. h= feet and t= seconds Find the maximum height and the time at which the object hits the ground
The reaction of zinc and chlorine requires 0.00624 mol of Zn and 0.0125 mol of Cl to react exactly to form zinc chloride. What is the mole ratio of Zn to Cl? What is the empirical formula of zinc chloride?
Proprietorship, partnership or corporation, Why would an entrepreneur want to choose one over the other?
The answer given by Reiny is not correct. Let the central angle be theta. The area of the sector is proportional to the ratio of the central angle to the angle which goes the whole way around the clock, which is theta/2pi. If we multiply the formula for the area of a circle by this ratio, we get the area of the sector: (theta/2pi)(pi r^2).
1. CheckPoint: Interfaces and Communication Messages Understanding object-oriented methodologies is often difficult. You already understand that object-oriented analysis and design emulates the way
human beings tend to think and conceptualize problems in the everyday world. Wi...
college algebra
Please help in solving this problem using the method of substitution -2/3x + Y= 2 2x -3y =6 thanks for your time.
When combining Ca(OH)2 with Na2CO3 it entraps SO2 out of a gas stream from kiln emissions where ore is processed into nodules. When there is not enough Na2CO3 in this combination, will it not allow
the solids to form out and settle?
The postoperative death rates are calculated by patients who have expired in what period of time?
sorry I thought that it said four sheets
Use the mole ratio of CuO produced per mole of malachite to write a balanced equation for the malachite decomposition reaction. Assume that CO2 is also a product of the decomposition.
If you are trying to write an essay I strongly suggest that you don't copy the answers from a book. I see that is what you did up there, I'm homeschooled and even in regular school they will get you
for that, put it into your own words. It's not hard. Just trying t...
which term means blood in the chest? a. hemothorax b. aerothorax c. pneumothorax d. gastrothorax
Where are keratinized cells found in the body? a. Sweat and sebaceous glands b. hair and nails (my guess) c. capillaries and nerve fibers d. the subcutaneous layer of the skin
Structurally, what is the difference between the dermis and epidermis? A. The epidermis is thicker. B. The dermis contains more types of tissue. C. The dermis is mostly of dead skin cells. D. The
epidermis consists of mostly adipose cells. I'm thinking b or c.
behavior modification
ellen was referred to the clinic shortly after her fake miscarriage who was employed as an admin assist. with mr johnson whom she had an affair with for 2 years and mr johnson was unwilling to
divorce his wife because of his children. what issues do you think are important to ...
When the cell is not dividing dna is uncoiled into what
50.0 G SILVER SPOON AT 20.0 C PLACED IN A CUP OF COFFEE AT 90.0C
Social Studies
Canada's geography is noteworthy because the country has what?
critical thinking
What if a person whom you did not know made a prejudicial rhetorical statement towards you?
Many composers of the Impressionist era wrote harmonies consisting of chords that moved in _______ motion. A. reverse B. indirect C. parallel D. convex 4. Who was the virtuoso violinist that had a
profound influence on Franz Liszt? A. Giuseppe Verdi B. Franz Schubert C. Giaco...
Richard Wagner's major work is a four-opera cycle called A. Carmen. B. The Barber of Seville. C. The Ring of the Nibelung. D. Siegfried.
romantic era of music
Richard Wagner's major work is a four-opera cycle called
4th grade
medical coding and billing
I am also doing this project and I must say, I'm freaking out a bit. I haven't been in school since 1978, so I am hesitant to submit my paper. Would anyone who has already completed it be willing to
critique it before I submit it? I would greatly appreciate it and it w...
social studies
What was the moving of Africans to the New World colonies to be used as slaves?
what is the prime factorization of 54
Science Biology
Explain how the roles different parts of a plant have in converting light energy into chemical energy. I need it broken down into easier terminology.
I need information on plants. "Explain the roles different parts of a plant have in converting light energy into chemical energy." I need to teach this to a special education child in as simple terms
as is possible. Can you tell me some sites I can look at to find in...
Suppose the number of new homes built ,H, in a city over a period of time , t, is graphed on a retangular coordinate system where time is on a horizontal axis. Suppose that the number of new homes
built can be modeled by an exponential funtion, H=p*a^t where p is the number of...
International Business
Does Western Union standardize products or adapt them for different markets
International Business
What resources does an International Internet retailer need other than merely a storefront on the Internet? Does it require fewer physical, financial, and human resources than a traditional retailer,
or just as many? Explain.
I need help writing a googol as a power of 10
HHS 265
Assignment: Community Foundations Paper Resources: Chapter 13 (pp. 174-79) in Financial Management for Human Service Administrators Due Date: Day 7 (post to your Individual Forum) Identify a
community foundation in your area. Write a 700 to 1,050 word paper describing the organization.
Hum 130
Thank you.
Hum 130
Is indigenous religion still practiced in Japan today?
Canceled or cancelled? with one L or two L's?
Is this how you say to spend more time? Passer plus de temps?
Literature (short story)
Omg Thank you just took the test got 100!!!
Hello I need to check my grammer, nouns, verbs, ect. Reflection Paper The book as well as the CD has been enjoyable for me. I have used both within my classes. Students had to listen to the beats and
the rhythm of the music being played. Once they heard it they were to stop it...
College Algebra
Maximize the objective function C = 4x + 5y with the constraints x + y 5, x + 2y 6, x 0, y 0. Your constraints don't make sense. They need to be equations. Are you leaving out "=" signs? If x=0 and y=0, there are no variables and you can't maximize C
social studies
Characterize the United States culture according to the five categories of value orientation theory: man-nature, activity, time, human nature, relational. What is your view on US culture? I am reading a book about the US cultures. I just wanted to hear or read someone else's views.
business and finance
The cost of producing a number of items x is given by C=mx+b, in which b is the fixed cost and m is the variable cost (the cost of producing one more item). (a) If the fixed cost is $40 and the variable cost is $10, write the cost equation. Could someone help me please with this?
business and finance
The revenue for a sandwich shop is directly proportional to its advertising budget. When the owner spent $2000 a month on advertising, the revenue was $120,000. If the revenue is now $180,000, how much is the owner spending on advertising? Please Help!! Set up a proportion: $2,000/$120,000 = x/$180,000; solve for x.
M varies directly with n; m=144 when n=8. What is the constant of variation k? I don't know what this means. Could you help me out in solving this? A statement such as "M varies directly with n" can be changed to the equation M=kn, where k is a constant. Since 144 = k(8), k = 18.
In the role of religion in the U.S., how does it compare to Egypt's religion, and how are they different? Thank you for using the Jiskha Homework Help Forum. Here is some help on religion in Egypt: http://en.wikipedia.org/wiki/Religion_in_Egypt Egypt has a state religion, Islam.
two consecutive integers such that the sum of 3 times the first integer and 6 times the second integer is 24. Explain the variables that you used. If I could get help in solving this ASAP, I would really appreciate it. Let the two consecutive integers be x and x + 1. The verbal statement translates to 3x + 6(x + 1) = 24.
In your opinion, how is the order of operations used to solve equations? exponents, groups, multiplication, division, addition, subtraction. This allows you to get the same result every time..... I hope this helps :) I searched Google under the key words "math operation order&...
The cost of a long-distance telephone call is $.36 for the first minute and $.21 for each additional minute or portion thereof. Write an inequality representing the number of minutes a person could talk without exceeding $3. cost = .36 + .21(m-1); then require the cost to be less than or equal to 3.
Ted must have an average of 70 or more in his summer course to obtain a grade of C. His first three test grades were 75, 63, and 68. Write an inequality representing the score that Ted must get on the last test to get a C grade. Let X be the grade on the next test. The average is (75 + 63 + 68 + X)/4 >= 70; solve for X.
If the total bill at a restaurant, including a 15% tip, is $65.32, what was the cost of the meal alone? 1.15 X = $65.32 I am assuming that you can solve this equation for X (the cost of the meal
alone). If you cannot, respond to this post with more questions. I hope this helps...
400 people found that 66 were left-handed. What percent of the 400 were left-handed? percent= 66/400 * 100
2x(x-2)=x+3 2x-2=x+3 What error has been made in solving this equation, and what could be done to avoid this error? The 2x was NOT multiplied by the contents of the parentheses. Should have been 2x^2 - 4x.
850 douglas fir and ponderosa pine trees in a section of forest bought by a logging company. The company paid an average of $300 for each Douglas fir and $225 for each ponderosa pine. If the company paid $217,500 for the trees, how many of each kind did the company buy? Same c...
A bus leaves a station at 1 p.m., traveling at an average rate of 44 mi/h. One hour later a second bus leaves the same station, traveling east at a rate of 48 mi/h. At what time will the two buses be 274 mi apart? Set up two equations as I did in the two previous problems. Let us...
A bicyclist rode into the country in 5h. In returning, her speed was 5 mi/h faster and the trip took 4h. What was her speed each way? distance = rate*time. The distance is the same. y = rate going, z = rate returning. Going: d = y*t = y*5 = 5y; return: d = z*t = z*4 = 4z. We know the return rate is 5 mi/h faster, so z = y + 5; set 5y = 4z and solve.
The length of a rectangular playing field is 5ft. less than twice its width. If the perimeter of the playing field is 230ft, find the length and width of the field. 2W+2L = perimeter = 230; L = 2W-5. Solve these simultaneous equations. Post your work if you need further help.
how do I solve this problem 2x/2=(6-3y)/2? I solved it for you in my initial answer. The 2 in the numerator cancels with the 2 in the denominator on the left side and x = (6-3y)/2, which is the solution of the initial equation you had for x in terms of y. You CAN...
how do I solve this problem for x? 2x/2=6-3y You need to finish the post. What is the question? I solved it for x for you. answered above, also.
2x+3y=6 (solve for x) 2x+3y=6 Get unknown on one side and everything else on the other side. To do this we need to move 3y. How to do that? Subtract 3y from both sides. 2x+3y-3y = 6-3y combine terms.
2x = 6-3y Now divide both sides by 2 2x/2 = (6-3y)/2 x=(6-3y)/2
The river rose 4 feet above flood stage last night. If a = the river's height at flood stage, b = the river's height now (the morning after), which equations below say the same thing as the statement? Explain your choices by translating the equations into English and c...
The Randolphs used 12 more gal of fuel oil in October than in September and twice as much oil in November as in September. If they used 132 gal for 3 months, how much was used during each month? Let the usage in the three months be S, O and N. O = S + 12; N = 2S; S + O + N = S + (S + 12) + 2S = 132. Solve for S.
Yuri has a board that is 98 in. long. He wishes to cut the board into two pieces so that one piece will be 10 in. longer than the other. What should the length of each be Let the shorter board's
length be x. The other will be x+10. x + x + 10 = 98 2x = 88 Solve that for x ...
-4(2x-3)=-8x+5 Hi Cheryl! The first thing you have to do with this is multiply the -4 into the stuff in the parenthesis. -4*2x = -8x; -4*-3 = +12. So the left side is -8x+12 = -8x+5. Now that I'm this far along, we see that there's no real answer to this. If we add 8x to both sides, we're left with 12 = 5, which is false, so the equation has no solution.
I still don't get it
i really need help!! graphing
what do i put the viewing window as on my calculator if i have an equation of 2x^2+40000/x is x=0 an allowed value in the domain? So have to make allowances for x near zero. Check the problem: is x
cited to have a particular domain? If not, then plot this in two parts x large ...
pre calc. help!!!!!!
i am building a rectangular swimming pool. the pool is to have a surface area of 1000 ft.^2. it also has a uniform walkway of 3 ft. surrounding it. X is the length of one side of the pool, and Y is the length of the other side. A.) express the area of the plot of land needed for ...
A can has a ratio of height to radius of 4 to 1. a. express the volume of the can as a function of the radius, r. b. express the volume of the can as a function of the height, h. c. if you want the can to have a volume of 100 cm cubed, what does the height have to equal?
Please help with English!!!
Which one of the following sentences does not contain a serious verb error? 1. They staying there now. 2. She want a promotion. 3. If I was younger, I'd pursue another line of work. 4. We had sang that hymn last week. Cheryl, if you read those out loud, you will quickly figure out which one is correct.
I'm not sure how to work this problem. Can someone assist me? revenues of $230,000 and expenses, including income taxes, of $190,000. On December 31, 2005, Edgemont had assets of $350,000, liabilities of $80,000, and capital stock of $210,000. Edgemont paid a cash dividend...
Punctuation within Sentences—Parentheses
The police officer checked her rear-view mirror when she heard the screeching tires. (Someone had rear-ended her the night before).
brain teaser help-jenn
50=lashes with a wet noodle
Lesh, R., Landau, M. & Hamilton, E. (1983). Conceptual models in applied mathematical problem solving research. In R. Lesh & M. Landau (Eds.), Acquisition of Mathematics Concepts & Processes (pp.
263-343). NY: Academic Press.
Chapter 9
Conceptual Models and Applied Mathematical
Problem-Solving Research*
Richard Lesh, Marsha Landau,
and Eric Hamilton
This chapter defines and illustrates a theoretical construct, the conceptual model, as an adaptive structure central to research at the interface of two NSF (National Science Foundation)-funded
projects. The Rational Number (RN) project (Behr, Lesh, & Post, Note 1) investigates the nature of children's rational-number ideas in Grades 2-8. The Applied Problem Solving (APS) project (Lesh,
Note 2) investigates successful and unsuccessful problem-solving behaviors of average-ability students, working on problems that involve easy to identify substantive mathematical content, and on
realistic problem-solving situations, in which a variety of outside resources (including calculators, resource books, other students, and teacher-consultants) are available. Lesh (1981) gives a
rationale for these emphases. The kinds of problems we focus on in this chapter are much "smaller" than those that have received our greatest attention in the APS project, more closely resembling the
"word problems" that appear in elementary and middle school textbooks.
The chapter consists of three major sections. The first section particularizes the discussion of conceptual models by referring specifically to rational-number concepts.
The second section discusses results from interviews in which 80 fourth through eighth graders solved problems presented in a number of formats. The interviews revealed that: (a) during the solution
process, subjects frequently change the problem representation from one form to another (e.g., written symbols to spoken words, spoken words to pictures or concrete models, etc.); and (b) at any
given stage, two or more representational systems may be used simultaneously, each illuminating some aspects of the situation while de-emphasizing or distorting others. A major conclusion drawn from
the data is that purportedly realistic word-problems often differ significantly from their real-world counterparts in their difficulty, the processes most often used in solutions, and the types of
errors that occur.
The third section of the chapter presents results from the written testing program of the RN project (see Chapter 4). Item difficulty depended on the structural complexity of the underlying
conceptual models, and on the types of representational translations that the item required. Baseline information was derived comparing fourth through eighth graders' abilities to translate within
and between representational modes (written language, written symbols, pictures) on sets of structurally related tasks.
The chapter concludes with remarks about how the reported research advances our understanding of the growth and development of children's conceptual models and contributes to the theoretical
framework for current work on the RN and APS projects.
Conceptual Models
A conceptual model is defined as an adaptive structure consisting of (a) within-concept networks of relations and operations that the student must coordinate in order to make judgments concerning the
concept; (b) between-concept systems that link and/or combine within-concept networks; (c) systems of representations (e.g., written symbols, pictures, and concrete materials), together with
coordinated systems of translations among and transformations within modes; and (d) systems of modeling processes; that is, dynamic mechanisms that enable the first three components to be used, or to
be modified or adapted to fit real situations.
For a given mathematical concept, the first two components of a student's conceptual model make up what might be called the student's understanding of the idea; within- and between-concept systems
define the underlying structure of the concept. The third component includes a variety of qualitatively different systems for representing these understandings using written symbols, spoken language,
static figural models (e.g., pictures, graphs, diagrams), manipulative models (e.g., concrete materials), or real-world "scripts." The fourth component contains processes for (a) changing the real
situation to fit existing understandings, (b) changing existing understandings to fit the situation, and (c) changing the model to fill gaps, eliminate internal inconsistencies, and resolve conflicts
within the model itself.
Representational systems differ from one another because they emphasize or de-emphasize different aspects of the underlying structure of the concept. They also differ in generative power and in their
ability to manipulate relevant ideas and data simply and economically in various situations. For example, sometimes a picture is worth a thousand words; sometimes language is clearer and more
The distinction between understandings and representations of understandings is quite important in mathematics. Some major advances in mathematics have resulted from the creation of clever or
powerful representations (e.g., Cartesian coordinates and decimal notation) that initially functioned primarily as externalized models of ideas (i.e., structures) that were already known. Later these
representations provided new tools for generating new ideas.
The first three components of a conceptual model contain most of the ''actions" commonly associated with condition-action pairs in computer-simulated information-processing models of cognition. These
actions transform information within the model, but their application does not lead to the development of new, more refined, or higher order conceptual models. Conceptual models are closed (in a
mathematical sense) under the operations in the first three components. The fourth component consists of dynamic mechanisms that enable the first three components to develop and adapt to everyday situations.
Although between-concept systems (part b of the definition) and modeling processes (part d) are the components of conceptual models that are most salient for the types of problems that have been the
major focus of the APS project, an adequate description of those components depends on a firm understanding of within-concept networks (part a) and systems of representations (part c), which are
priority foci of the RN project. Therefore, this chapter focuses on parts a and c. Detailed treatments of parts b and d will appear in future publications from the APS project.
Background for Current Research on Applied Problem-Solving
The APS project is unusual among problem-solving projects in mathematics education because, rather than emanating from instructional development or research on problem solving itself, it grew out of
research on concept formation. The goals of that research included tracing the development of selected mathematical ideas and identifying task characteristics that influenced students' abilities to
use the ideas in particular situations. Many of our theoretical perspectives evolved during investigations of mathematical abilities that are deficient in "learning disabilities'' subjects (Lesh,
1979b), social and affective factors that influence problem-solving behavior (Cardone, 1977; Lesh, 1979a), the development of spatial and geometric concepts in children and adults (Lesh &
Mierkiewicz, 1978), and the role of representational systems in the acquisition and use of rational-number concepts (see Chapter 4 of this volume).
A central question that the APS project is designed to address is, "What is it, beyond having an idea, that enables an average-ability student to use it in realistic everyday situations?" We
believe that many of the most important applied problem-solving processes contribute significantly to both the meaningfulness and the usability of mathematical ideas. It is not necessarily the case
that students first learn an idea, then add some general problem-solving processes, and finally (if ever) use the idea and processes in real situations, that is, those in which some knowledge about
the situation is needed to supplement the underlying mathematical ideas and processes. Rather, there is a dynamic interaction between the content of mathematical ideas and the processes used to solve
problems based on those ideas. This assumption has both practical and theoretical implications. Applications and problem solving are unlikely to be fully accepted in the school mathematics curriculum
unless teachers and other practitioners are convinced that they play an important role in the acquisition of basic mathematical ideas. We believe that applications and problem solving should not be
reserved for consideration only after learning has occurred; they can and should be used as a context within which the learning of mathematical ideas takes place.
The task of selecting or designing problems to use in research on applied mathematical problem-solving can be approached from at least three directions. First, one can start with important elementary
mathematical ideas, sort out the different interpretations that these ideas can have in realistic situations, and identify problem situations in which these interpretations occur. This is the
perspective of a recent project headed by Usiskin (Note 4) and Bell as well as parts of our RN project. One conclusion from these projects is that textbook word-problems typically represent only a
narrow sample of idea interpretations and problem types that should be addressed.
A second perspective, and the one that has characterized the largest portion of our efforts on the APS project, starts with realistic everyday situations in which mathematics is used and attempts to
identify the processes, skills, and understandings that are most important in their solution. Again, one conclusion is that the problem types, ideas, and skills that appear to be most critical are
quite different from those that have been emphasized by mathematics education spokespersons for "basic skills" or ''problem solving," and by research on textbook word-problems (Lesh, 1983).
A third approach, which is the one taken in the second section of this chapter, is to start with typical textbook word-problems and create concrete or real-world situations characterized by the same
mathematical structures. From this perspective, the goal is not to investigate responses to word problems as an end in itself; rather, the goal is to understand problem solving in realistic everyday situations.
Although an abundance of research has been conducted related to problem solving (see, for example, the literature review in Lester, Chapter 8 of this volume), there has been very little applied
problem-solving research that investigates the development of conceptual models or that incorporates any of the three approaches to task selection described above.
Before describing the concrete or realistic word-problem isomorphs used in the interviews reported in the second section of this chapter, we first characterize 18 realistic problems, not based on
word problems, which have provided the research sites for most of the investigations in the APS project.
Most of our problems have been designed to require 10-45 minutes for solution by average-ability seventh-graders, and to resemble problem-solving situations that might reasonably occur in the
everyday lives of the students or their families. Realistic ''outside" resources were available, including other students, calculators, resource books and materials, and teacher consultants. Solution
attempts were therefore not blocked by deficient technical skills (e.g., computation) or memory capabilities (e.g., recall of measurement facts).
All our problems were based on straightforward uses of easy-to-identify concepts from arithmetic, measurement, or intuitive geometry. No tricks were needed; the most direct solution path was a
correct path, although it was not necessarily the most efficient or elegant.
In contrast with simple, one-step word-problems, a variety of solutions and solution paths were possible, varying in complexity and sophistication. The relevant ideas seldom fit into neat
disciplinary categories; most of our problems required the retrieval and integration of ideas and procedures associated with a number of distinct topics, including several arithmetic operations,
various number systems (e.g., rational numbers, negative numbers, decimals), several qualitatively distinct measurement systems (e.g., length, time), or intuitive ideas from geometry, physics, or
other subject-matter areas.
Many of our problems were designed so that the critical solution stages would be "nonanswer-giving" stages. For example, in many realistic problem situations, problem formulation or trial answer
refinement are crucial in the solution process. Thus, for many of our problems, the goal was not to produce a numerical "answer"; instead it was to make nonmathematical decisions, comparisons, or
evaluations using mathematics as a tool.
In typical textbook word-problems, only two or three numbers usually appear, and the most common errors result when youngsters use one of their "number crunching routines" to produce an answer,
sensible or not. When "too much" information is given (for example, three numbers may appear instead of two), the student is expected to ignore the irrelevant information. When "not enough" is
given, the student is expected to conclude that the problem cannot be done. In more realistic situations, and in many of the problems that were the foci of our APS research, there was simultaneously
''too much'' and "not enough'' information. Often, there was an overwhelming amount of information, all relevant, and the main difficulty was to select and organize the information that was "most
useful'' in order to find an answer that was "good enough.'' Furthermore, the given information may have consisted of both qualitative and quantitative information that had to be combined in some
sensible way. In other problems, not enough information may have been provided, but a usable answer still had to be found. It may have been necessary to identify or generate additional information or
data as part of a solution attempt when not all the information was given initially. Thus, in most of our problems, conceptual models serve as "filters" to select information from real situations,
and as "interpreters" to organize or transform data, or to fill in (or compensate for) missing information. In such situations, a model always distorts or de-emphasizes some aspects of the real
situation in order to clarify or emphasize others, and a major goal of our APS research is to model students' modeling behaviors (Lesh, 1982).
In much the same way that our attempts to understand students' problem behaviors are embodied in the creation of models of their cognitive processing, we assume that our subjects' attempts to
understand the problems we pose are embodied in the creation and adaptation of conceptual models. Below, each of the four components of a conceptual model (within-concept networks, between-concept
systems, representational systems, and modeling mechanisms) is reconsidered in greater detail, with brief references to past research.
Within-Concept Networks
To most mathematicians, mathematics is the study of structure, the content of mathematics consists of structures, and to do mathematics is to create and manipulate structures. It is our hypothesis
that these structures (whether they are embedded in pictures, manipulative materials, spoken language, or written symbols), and the processes used in manipulating and creating these structures,
comprise the ''conceptual models'' that mathematicians and mathematics students use to solve problems.
In past research on the development of number and measurement concepts (Lesh, 1976), spatial and geometric concepts (Lesh & Mierkiewicz, 1978), and rational-number concepts (Lesh, Landau, & Hamilton,
1980), the similarities and differences between formal axiomatic structures and children's cognitive structures have been investigated. Formal axiomatic systems were used to generate tasks and
clinical interview questions to help identify the nature of children's primitive conceptualizations of mathematical ideas. For example, the relationships that a child uses to think about and compare
ratios are similar to the relations that define a "complete ordered field" in which the elements are equivalence classes of ordered pairs of whole numbers. On the other hand, when a rational number
is interpreted as a position on a number line, many of the relationships that children notice and use are simplified and restricted versions of the structural properties that define the metric
topology of the rational-number line: such properties as betweenness, density, distance, and (non)completeness. According to the "number line" interpretation, the set of rational numbers is regarded
(intuitively at first) as a subset of the set of real numbers, whereas, using the "ratio" interpretation, rational numbers are thought of as extensions of the whole numbers. A youngster's
rational-number conceptual model has a within-concept network associated with each rational-number subconstruct, for example, ratio, number line, part-whole, operator, rate, and indicated quotient.
Between-Concept Systems
One of the most important properties of mathematical ideas, whether they occur as conceptual models used by children or as formal systems used by mathematicians, is that they are embedded in
well-organized systems of ideas, so that part of the meaning of individual ideas derives from relationships with other ideas in the system or from properties of the system as a whole (Lesh, 1979b).
In rational-number conceptual models, these between-concept systems have three components: (a) within-concept networks associated with different rational-number types (e.g., part-whole fractions,
ratios, rates, decimals, and operator transformations); (b) links between those networks (including understandings of the "sameness'' and/or ''distinctness'' of the rational-number types); and (c)
operations that, among other things, enable the transformation of a given rational number into different forms. Further, between-concept systems link rational-number ideas with other concepts such as
measurement, whole-number division, and intuitive geometry concepts related to areas and number lines.
For most youngsters, between-concept systems associated with rational numbers are poorly organized and unevenly formalized. The between-concept systems derive some of their meaning from
within-concept networks, and, in turn, the within-concept networks derive some of their meaning from the between-concept systems in which they are embedded. For example, the transformation of a
simple fraction to a percent gives the fraction a meaning that includes proportion ideas.
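A standard illustration (ours, not an item from the studies): renaming a fraction as a percent forces a reading of the fraction as a proportion "per 100,"

\[
\frac{3}{4} \;=\; \frac{3 \times 25}{4 \times 25} \;=\; \frac{75}{100} \;=\; 75\%,
\]

so the transformed form carries proportion meaning that the part-whole form 3/4 by itself does not make explicit.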
That a whole structure and its parts each derive some meaning from the other has profound implications for human learning, development, and problem solving. The resulting chicken-and-egg dilemma
concerning the "whole structure" versus "parts within the whole" dictates that cognitive growth must involve more than quantitative additions to knowledge or processing capabilities; qualitative
reorganizations must also occur. That is, as various ideas, relationships, and operations evolve into a whole system with properties of its own, elements within the system achieve a new status by
being treated as parts of the whole.
Representational Systems
An introduction to the role of representational systems in applied problem solving is given in Lesh (1981), which includes a discussion of translation processes that contribute to the meaningfulness
of mathematical ideas. Figure 9.1 shows some of the translations among different representational modes and some of the contributions to understanding ideas that those translations can make.
Though this depiction of modes and translations is neither exclusive nor exhaustive, it has suggested useful areas of investigation for the RN and the APS projects. Just as mathematical ideas are
embedded in larger between-concept systems that contribute to the meaningfulness of those ideas, so are those ideas and between-concept systems embedded in representational systems such as those shown in Figure 9.1.
When we say a student understands a mathematical concept, part of what we mean is that he or she can use the kinds of translation processes depicted in Figure 9.1. For example, when we say a student
understands fractions, we mean, in part, that he or she can express fraction ideas presented with circular regions using rectangular regions, or using written symbols.
Figure 9.1 Translations among modes of representation.
Modeling Mechanisms
In applied problem-solving, important translation and/or modeling processes include (a) simplifying the original problem situation by ignoring "irrelevant" characteristics in a real situation in
order to focus on other characteristics; (b) establishing a mapping between the problem situation and the conceptual model(s) used to solve the problem; (c) investigating the properties of the model
in order to generate information about the original situation; and (d) translating (or mapping) the predictions from the model back into the original situation and checking whether the results fit.
As the results we present in this chapter show, Figure 9.2 represents a useful, though oversimplified, conceptualization of the problem-solving process. For example, different aspects of the problem
may be represented using different representational systems, and the solution process may involve mapping back and forth among several systems, perhaps using pictures as an intermediary between the
real situation and written symbols.
In the next two sections of this chapter, we report data relating to two of the four aspects of conceptual models, namely within-concept networks and representational systems. Our goal is to lay
groundwork for future data presentations and elaborations relating to between-concept systems and modeling mechanisms.
Figure 9.2 Problem Solving.
Rational-Number Task Interviews
This section reports the results of interviews designed to explore the differences in student responses to sets of problems based on the same arithmetic structures but varying in presentation format.
Standard word-problems were included in the interview to permit the direct comparison of students' responses to word problems with their responses to corresponding real or more realistic problems.
Some generalizations emerged from the interviews taken as a whole; other important conclusions were found by examining in detail the results of individual problems. The chief findings were that (a)
word problems differ from their real-world counterparts with respect to difficulty, the predominant representational mode selected for solution, and most frequent error types; (b) varying the tasks
on any of a number of dimensions (e.g., number size, context, type of manipulative material present) is accompanied by variations in the subjects' performance, suggesting that most of our subjects
had unstable conceptual models relating to these tasks; and (c) subjects use a number of modes of representation, either simultaneously or sequentially, as they proceed through problem solutions.
The Subjects
Individual interviews were conducted with 80 subjects, 16 from each grade level, fourth through eighth. There were 46 girls and 34 boys, with approximately equal numbers of boys and girls at each
grade level. At each grade, the students were chosen to represent a range of levels of rational-number understanding, as measured by the battery of written tests (described in the third section of
this chapter) that were developed by the RN project. Among the 16 youngsters interviewed at each grade, 4 were selected from each quartile.
The Interviews
Each 45-60-minute interview involved the 11 basic problems described below. Some of the problems were presented as "word problems," typed on index cards, and handed to the student. Materials such as
paper, pencils, clay "pies," Cuisenaire rods, and counters were available within easy reach on the interview table; their use was neither encouraged nor discouraged. The "concrete problems" or "real
problems" were presented orally using the materials described in each problem.
The problems were always presented in the order shown below. The entire interview is reproduced here so the reader may refer to the wording and order of the problems.
The Problems
1. The Chocolate-Eggs Addition Problems:
E: Present 6 eggs in a 12-pack carton, and say, "This is one-half carton of eggs."
E: Remove the preceding carton, present 4 eggs in a 12-pack carton, and ask, "How much is this?"
E: Repeat the preceding question, using cartons containing: 3 eggs, 8 eggs, 5 eggs, 9 eggs, and 10 eggs.
E: Present 2 eggs in one carton and 3 eggs in a second carton, and ask, "How much is this altogether?"
E: Repeat the preceding question, using cartons containing 4 eggs and 5 eggs, respectively.
2. The Chocolate-Eggs Multiplication Problem (Presented Orally):
E: Present one dozen chocolate eggs in a 12-pack carton. Then, pose (orally) the following question, "Mike took 3/4 of a carton of eggs to a picnic. He and his friends ate 2/3 of the eggs. How many eggs did they eat altogether?"
E: Immediately, say, "I'll repeat the question slowly." (Repeat orally more than once, if requested.)
3. The Addition Word-Problem:
Jim's family ordered two pizzas for supper, one with sausage and one with mushrooms. Jim ate 1/4 of the mushroom pizza, and 1/5 of the sausage pizza. How much pizza did he eat altogether?
4. The Multiplication Word-Problem:
Yesterday, Karen ate 1/4 of a chocolate cake. Today, she ate 1/3 of the remaining cake. How much did she eat altogether?
5. The Cuisenaire Rod Addition-Problem:
Present a set of rods, glued onto a posterboard, where a "unit" rod is created using a wooden stick cut 12 cm long.
Then, present a pile of loose rods in the following lengths:
2 12-cm rods to represent the unit, or 1
2 10-cm rods to represent the fraction 5/6
2 9-cm rods to represent the fraction 3/4
2 8-cm rods to represent the fraction 2/3
2 7-cm rods to represent the fraction 7/12
3 6-cm rods to represent the fraction 1/2
2 5-cm rods to represent the fraction 5/12
6 4-cm rods to represent the fraction 1/3
6 3-cm rods to represent the fraction 1/4
10 2-cm rods to represent the fraction 1/6
16 1-cm rods to represent the fraction 1/12
E: Point to the 12-cm rod that is glued onto the posterboard and say, ''This is one unit long.''
E: Then, point in turn to each of the other rods on the posterboard and (in each case) say, ''How long is this?''
E: Give S one of the loose 9-cm rods and say, "How long is this?" Then, repeat the question using the 7-cm rod, and the 6-cm rod.
E: Next, remove the posterboard from the table, give S the 5-cm rod, and say, "How long is this?" Then, repeat the question using the 8-cm rod.
E: Remove the 8-cm rod, put a 6-cm rod and a 4-cm rod end to end on the table in front of S, and say, "How long is this (pointing to the length of the two rods together) altogether?"
E: Remove the preceding rods, put a 9-cm rod and an 8-cm rod end to end on the table in front of S, and say, ''How long is this (pointing to the length of the two rods together) altogether?"
6. The Concrete (Pizza) Addition-Problem:
E: Present precut parts of 6-inch clay "pizzas" on cardboard "plates," like those in Figure 9.3a, and say, "Pretend that these are pieces of pizza."
Figure 9.3 Concrete (Pizza) Addition Problem.
E: Present the 1/2 piece and say, ''How much is this?''
E: Present the 1/3 piece and say, ''And, how much is this?''
E: Present the 1/2 piece and the 1/3 piece simultaneously, as in Figure 9.3b, and say, "Now, how much is this altogether?"
E: Remove the preceding pieces, present the 3/4 piece, and say, "How much is this?"
E: Remove the 3/4 piece, present the 5/6 piece, and again ask, ''How much is this?"
7. The ''Realistic'' (Cake) Addition-Problem:
E: Present precut parts of 6 x 8-inch clay "cakes" on (scored) cardboard "plates," like those in Figure 9.4, and say, "Pretend that these are pieces of cake."
Figure 9.4 "Realistic" (Cake) Addition Problem.
E: Present the 1/4 piece and say, "Yesterday, I ate this (pointing) much cake. How much did I eat?"
E: Next, put the 1/4 piece into a covered box (because it was eaten) on the table, slightly to the left side in front of S. Then, slightly to the right side in front of S, present the 1/3 piece,
and say, "Today, I will eat this (pointing) much. How much will I have eaten altogether (pointing to both the 'hidden' 1/4 piece and the 1/3 piece)?"
8. The Area Problems:
E: Show S two red shapes, like the ones shown in Figure 9.5, drawn on a "checkerboard" grid.
Figure 9.5 Area Problems - Part 1.
Point to the first shape (Figure 9.5a) and ask, "How many squares are in this shape?" Then, point to the second shape (Figure 9.5b) and repeat the question.
E: Show S a red triangle on a sheet of white paper (Figure 9.6a); then cover the shape with a clear acetate grid (Figure 9.6b), point to the covered shape (Figure 9.6c), and ask, "How many squares
are in this shape?"
Figure 9.6 Area Problems - Part II.
E: Repeat the preceding question, only hand the sheet of acetate to S and use a red trapezoid shape, like the one shown in Figure 9.7.
Figure 9.7 Area Problems - Part III.
9. The Plain Hershey Candy-Bar Multiplication-Problem:
E: Give S a whole plain Hershey candy bar, saying, "The next problem we are going to do has to do with this candy bar. But, we will use this piece of brown paper to stand for the candy bar. Then,
as a reward for answering all of these questions for me, you can keep the candy bar. Is that OK?"
Take away the candy bar, and give S a piece of brown paper, like the appropriate one shown in Figure 9.8. Use the piece of paper that is marked into 10 squares to represent the plain candy bar,
or for Problem 10 below, use the unmarked piece to represent the almond candy bar.
Figure 9.8 Candy-Bar Multiplication Problems.
E: Say, "Yesterday. Bob's mother gave him a plain Hershey candy bar in his lunch. He ate part of it and saved for today. Today, he ate of the remaining candy. How much of the whole candy bar did
he eat altogether?"
E: Immediately, say, "I'll repeat the question slowly." (Repeat orally more than once if requested.)
10. The Almond Hershey Candy-Bar Multiplication-Problem
The almond candy-bar problem was identical to the plain candy-bar problem, except that an unmarked piece of paper was used, and the orally presented statement of the question was:
"Yesterday, Ann's mother gave her an almond Hershey candy bar in her lunch. She ate part of it and saved 3/4 for today. Today, she ate 1/3 of the remaining candy. How much of the whole candy bar did she eat today?" (Note: The order of presentation of Questions 9 and 10 was reversed for 50% of the children in each quartile at each grade level.)
11. The Pencil-and-Paper Multiplication Computation-Problems:
One at a time, present the following two written computation problems. Each problem is written at the top of an otherwise blank sheet of paper.
1/2 X 1/3 =
2/5 X 3/4 =
Results and Discussion
Results of the interviews are presented below. First, the differences in the difficulty of the problems are displayed in a table of success rates for each problem in each grade. Then several of the
problems are discussed with respect to (a) solution processes used, in particular, the predominant mode of representation in which a solution was reached; (b) the types of errors that typically
occurred; and (c) the difficulty of the item compared with that of related items. The Addition Word-Problem (Problem 3) is discussed in greatest detail; discussion of other problems focuses on
results we view as most important, interesting, or unexpected. Many of the more obvious and predictable results are summarized in table form.
TABLE 9.1
Success Rates on Rational Number Interview Problems

                              Percentage correct at each grade level
Problem                       Gr 4    Gr 5    Gr 6    Gr 7    Gr 8    Total
Addition word problem
  1/4 + 1/5                    .19     .25     .50     .44     .50     .38
Concrete addition
  Name 1/2                    1.00    1.00    1.00    1.00    1.00    1.00
  Name 1/3                     .94     .94    1.00    1.00    1.00     .97
  Name 1/2 + 1/3               .13     .06     .25     .19     .25     .17
  Name 3/4                     .63     .75     .88     .88     .88     .80
  Name 5/6                     .13     .19     .38     .19     .38     .25
Chocolate eggs
  Name 1/3                     .63     .63     .75     .81     .88     .74
  Name 1/4                     .69     .63     .75     .81     .88     .75
  Name 2/3                     .50     .56     .63     .63     .69     .60
  Name 5/12                    .69     .81     .81     .88     .94     .83
  Name 3/4                     .56     .56     .56     .63     .63     .59
  Name 5/6                     .38     .44     .56     .56     .63     .51
  Name 1/6 + 1/4               .31     .31     .50     .56     .56     .45
  Name 1/3 + 5/12              .31     .31     .44     .56     .56     .44
Cuisenaire rods
  Name 5/6                     .38     .38     .50     .69     .69     .53
  Name 2/3                     .31     .38     .50     .69     .69     .53
  Name 1/2 + 1/3               .13     .19     .44     .56     .56     .38
  Name 3/4 + 2/3               .06     .19     .38     .44     .50     .31
Area problems
  Triangle                     .63     .63     .75     .81     .81     .73
  Trapezoid                    .13     .19     .25     .38     .38     .26
Word problem
  (1 - 1/4) X 1/3              .06     .25     .38     .31     .44     .29
Plain Hershey bar
  1/2 X 2/3                    .00     .06     .19     .25     .25     .15
Hershey bar with almonds
  3/4 X 1/3                    .25     .31     .38     .44     .44     .36
Chocolate-eggs problem
  3/4 X 2/3                    .31     .25     .44     .63     .69     .46
Written computation
  2/5 X 3/4                    .69     .75     .75     .94     .88     .80
  1/2 X 1/3                    .75     .81     .88    1.00     .94     .88
The Addition Word-Problem
All the solution attempts began by translating Problem 3 into written symbols, for example,
1/4 + 1/5 = ___. Seventy percent of the students wrote the correct symbolic expression horizontally; 16% wrote it vertically. The remaining 14% of the subjects made errors in recording, usually
omitting one or more of the symbols +, /, or =. It is surprising that, although this problem immediately followed two problems involving concrete materials, none of the students initially used
materials or drew a picture to represent the problem.
The number of students at each grade level who gave each response type is shown in Table 9.2. Note that all the students who responded correctly (37% of the total) used an algorithmic
least-common-denominator (LCD) method.
TABLE 9.2
Addition Word-Problem Response Types

                                            Number at each grade
Response                                    Gr 4  Gr 5  Gr 6  Gr 7  Gr 8  Total   %
Correct answer using LCD                      3     4     8     7     8     30    38
Incorrect answer using LCD                    1     2     1     3     2      9    11
Tried to find LCD, then tried other
  methods (e.g., materials)                   0     2     2     2     2      8    10
Added both numerators and denominators       10     6     4     4     4     28    35
No response or gave up                        2     2     1     0     0      5     6
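For reference, the LCD procedure that produced the correct answer, and the numerator-and-denominator-adding error, can be written out in symbols (a standard reconstruction, not a transcript of any subject's written work):

\[
\frac{1}{4} + \frac{1}{5} \;=\; \frac{5}{20} + \frac{4}{20} \;=\; \frac{9}{20},
\qquad \text{versus} \qquad
\frac{1+1}{4+5} \;=\; \frac{2}{9}.
\]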
Some of the most interesting observations about the Addition Word-Problem stem from a comparison of responses to this item with those to a question posed later (following the structured interview)
that asked the subjects to "act out" the problem using clay circular regions to represent the pizzas. Students who had obtained the answer 2/9 for their written calculation of the sum were able to
look at the clay pizzas and recognize that 2/9 was incorrect. However, when confronted with their written work on the problem, about half of these subjects maintained that 2/9 was still correct. The
discrepancy between the results obtained from the two different representations apparently did not trouble these children; several explicitly stated something like, "That's okay! These are pizzas and
those are numbers; they aren't the same." Such comments seem to indicate either a belief that mathematical computations (in symbolic form) need not agree with real-world observations (of clay pizzas)
or that mathematics is simply unpredictable, so sometimes one obtains one answer and sometimes another for the same problem.
The presence of concrete materials in the follow-up problem did not enhance performance. Whereas 30 children had obtained the correct answer to the word problem (all of them using written symbols), a
total of only 20 children arrived at the correct answer when the same problem was posed using concrete materials. Three students attempted to solve the problem using the concrete materials; they all
obtained incorrect results. All 20 children who were correct were among the 27 children who persisted in using written symbols to obtain an answer. This means that 7 subjects who had previously
obtained the correct answer using written symbols were no longer able to solve the problem using written symbols after it was presented in concrete form. They became confused and reverted to adding
both numerators and denominators rather than using the LCD approach as before. This outcome runs counter to the widespread belief that materials make a problem easier to solve because they make it
more meaningful and real.
The interviewer probed to find whether this difference in performance was the result of rote execution of a meaningless algorithm in the first case. Some of the students reported that, when they
wrote 1/4 + 1/5 = ___ after reading the word problem, they were thinking about parts of circles, and that when they wrote their solution they again thought about circular regions. For these subjects, the symbols
were meaningfully related to stored images of concrete objects but the images were abandoned in favor of a more powerful symbolic procedure for actually carrying out the computation. The follow-up
request that the subjects act out the problem using the concrete materials required a demonstration of the symbolic algorithm relating it to the pieces of clay pizzas. Most of the students recognized
their inability to respond to the question using the materials and worked out answers using pencil and paper. We believe it is likely that the LCD procedure, although present, was still an unstable
element of the rational-number conceptual models of the 7 students who were no longer able to reach the correct result using written symbols. Attending to the concrete materials somehow contributed
to the breakdown of an effective symbolic method for solution.
Although concrete materials often provide a useful representation for some stages in solving a problem, it may be extremely difficult to carry out the entire solution in terms of them. Good students
eventually learn to select an appropriate representational system to fit each particular part of a problem situation or each specific stage in the overall solution; this understanding builds slowly
and requires coordinating complex within-concept networks and between-concept systems.
The Concrete (Pizza) Addition-Problem
In the Problem 6 sequence, subjects were asked to identify circular clay pieces cut in the sizes 1/2, 1/3, 3/4, and 5/6. The addition question was to find the sum of 1/2 + 1/3 immediately after
identifying the two (differently colored) component pieces, which were then put together on a single plate. Table 9.3 shows the number of students at each grade level who responded correctly to each question.
TABLE 9.3
Concrete (Pizza) Problems
Number correct at each grade
Size of piece Gr 4 Gr 5 Gr 6 Gr 7 Gr 8 Total^a
1/2 16 16 16 16 16 80 (100)
1/3 15 15 16 16 16 79 (98)
1/3 + 1/2 2 1 4 3 4 14 (18)
3/4 10 12 14 14 14 64 (80)
5/6 2 3 6 3 6 20 (25)
^aNumbers in parentheses are percentages of correct responses.
Identifying 1/2 and 1/3 caused no difficulties for the subjects. Of the other fractions, 3/4 was obviously much easier to recognize than was 5/6. Either the missing piece was identified as 1/4, so the
piece itself had to be 3/4, or the missing piece was used as a measure for cutting the remainder of the pizza. (Note that this procedure would be useful only when the missing piece represents a unit fraction.)
An immediate verbal response (apparently based on perceptual cues, and accompanied by no overt actions) was given by 80% of the children for the 3/4 piece and by 63% of the children for the 5/6
piece. When the interviewer probed ("How did you figure that out?"), nearly all these students did draw lines on the clay or on the plate, but the action typically seemed to be a justification rather
than the source of the original response. This was especially apparent for the 3/4 piece.
Table 9.4 shows the number of students at each grade level using each of the following five types of procedures to name the 5/6 piece: (a) relatively passive perceptual cues were used with no lines
drawn overtly; (b) lines were drawn in a seemingly trial-and-error way, apparently to fit some previous perceptual cues; (c) the missing piece was used as a standard for cutting the remaining pizza,
with the hope that cuts of this size would divide the remaining pizza into a whole number of equal parts; (d) the remaining pizza was cut into equal-size pieces, presumably with the hope that cuts of
this size would also fit the missing piece; and (e) the plate was divided into equal-sized pieces (as though the missing piece had not yet been removed), with the hope that cuts of this size would
simultaneously fit both the remaining pizza and the missing piece.
TABLE 9.4
Procedures of Identifying 5/6
Number at each grade
Procedure Gr 4 Gr 5 Gr 6 Gr 7 Gr 8 Total^a
Perceptual cues only 3 2 1 1 1 8 (10)
Trial-and-error drawing 5 4 2 4 3 18 (23)
Focus on missing piece 4 5 5 3 5 22 (28)
Focus on pizza 4 5 7 8 6 30 (38)
Focus on plate 0 0 1 0 1 2 (03)
16 16 16 16 16 80 (100)
^aNumbers in parentheses are percentages of total subjects.
Overall, only 25% of the students correctly identified the 5/6 piece. Among these 20 students, 15 used a procedure that appeared to be based on the missing piece, 3 seemed to focus on the pizza (i.e.,
their first cuts were not at all the same size as the missing piece), and only 2 successfully used trial and error guided by some sort of perceptual cues.
Table 9.5 shows the number of students at each grade level using various solution procedures for the concrete addition-problem ( 1/2 + 1/3). Twelve of the 14 successful subjects used a written
symbolic procedure; only 2 students obtained a correct answer using a concrete procedure.
TABLE 9.5
Predominant Procedures for Finding 1/2 + 1/3
Number at each grade
Procedure Gr 4 Gr 5 Gr 6 Gr 7 Gr 8 Total^a
Perceptual only 2 1 1 2 1 7 (09)
Trial-and-error 2 1 2 0 0 5 (06)
Missing piece 1 1 2 0 1 5 (06)
Pizza 5 6 2 4 5 22 (28)
Plate 0 0 0 0 0 0 (00)
Written/symbolic 6 7 9 10 9 41 (51)
16 16 16 16 16 80 (100)
^aNumbers in parentheses are percentages of total subjects.
Because the concrete addition-problem is more complex than the identification of single pieces, it was more difficult to sort solution procedures into distinct categories. Many students went back and
forth among (or combined parts of) several of the six basic procedures shown in Table 9.5. For example, some of the students who were classified as using a written symbolic procedure actually began
by trying to find an answer using a concrete procedure. Interestingly, however, no student who began by using a written symbolic procedure later switched to one of the concrete procedures.
Some interesting differences appear in a comparison of Tables 9.4 and 9.5. Even though the addition problem was presented in concrete form, 51% of the subjects used paper-and-pencil procedures to
solve it. The presence of the two differently colored pieces for the addition problem seemed to draw subjects' attention to the pizza that was present, so more solution attempts were based on the
focus-on-pizza procedure rather than the missing-piece procedure, which produced such good results for the single piece.
Among the errors that occurred in relation to concrete procedures for the identification of the 5/6 piece, 63% involved too many cuts, 37% too few cuts. In nearly all cases, errors were related to
the fact that pieces were not all cut the same size (nor the same size as the missing piece). The fraction name given as an answer often did not correspond to the visual representation. Some students
counted n pieces present and gave 1/n as an answer. Others counted m spaces in the missing piece and n pieces present and gave the answer m/n (instead of n/(m + n)), indicating some confusion between
part-whole (fraction) and part-part (ratio) relationships.
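Using the 5/6 piece as a worked instance (assuming the pizza was cut correctly into sixths, so that n = 5 pieces are present and the missing piece spans m = 1 space), the two relationships are

\[
\text{part-part (ratio): } \frac{m}{n} = \frac{1}{5},
\qquad
\text{part-whole (fraction): } \frac{n}{m+n} = \frac{5}{6}.
\]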
For the concrete addition-problem 1/2 + 1/3, the most frequent error committed by subjects using a concrete procedure was to divide each of the two pieces in either halves or thirds and conclude that
the sum was 4/5 or 6/7.
For the concrete addition-problem 1/2 + 1/3, the overall success rate was 17.5% (14 of the 80 students), compared with 25% for the identification of the 5/6 concrete piece and 37.5% for the addition
word-problem (1/4 + 1/5, discussed above). Because the concrete problem was presented after the word problem in the interview, and 1/2 and 1/3 are generally better understood than 1/4 and 1/5, respectively, the
greater difficulty with the concrete problem is somewhat surprising. It is therefore interesting to trace the procedures and the accuracy of the 80 students on the word problem and then on the
concrete problem (see Figure 9.9).
Figure 9.9 Students' procedures and success on two related addition-problems.
Out of the 33 students who used concrete materials to solve the concrete addition problem, only 2 were successful. On the other hand, 12 of the 40 who used written symbols on the concrete problem were
correct. Eleven of these 12 had previously obtained the correct answer using written symbols on the addition word-problem. Notice that switching from a written symbolic representation to a concrete
representation that matched the mode in which the problem was presented proved to be a poor strategy: only 2 of the 12 who switched were correct, compared with 11 of the 18 who persisted in using
written symbols.
Still, pencil-and-paper solution procedures appear to have been somewhat more difficult when they were applied to the ''concrete'' situation than to information given in the ''word" problem. This is
consistent with the results of the follow-up questions to the "word'' problem in which giving the students additional concrete aids actually made the ''word'' problem more difficult for some
students. Perhaps, for some students, even when a problem is immediately converted to a written expression such as 1/2 + 1/3 = ___, what is going on in the student's mind may be slightly different
for a "word" problem than for a "concrete" problem.
The "Realistic" (Cake) Addition-Problem
A problem that involves concrete materials, like the one in the preceding section, is not necessarily real in the sense that it would be likely to occur in an everyday situation. It is unlikely that
someone would actually put two pieces of pizza together and ask, "How much altogether?" Even if the question were asked, reasonable answers would vary from saying, "That much," pointing to the pizza,
to "Almost a whole pizza''; it would be surprising to see someone take out pencil and paper to calculate a response. In real situations, quantitative information usually is given for some purpose;
information is stated or recorded at a level of precision that is reasonable in terms of both the source of the information and the use to which it will be put. Estimation, approximation, and
rounding off reflect important properties of the models we use to describe real situations. Unfortunately, successful performance on textbook word-problems often requires the suspension of everyday
criteria for evaluating the sensibility of realistic questions and responses, with students learning, instead, to give answers that teachers and textbooks expect, sensible or not.
Another characteristic that distinguishes many real problems from their word problem counterparts is that, in real problem-situations, the relevant information is not necessarily all given in the
same representational mode. For example, in real addition-situations that involve fractions, the two (or more) items to be added may not occur as two written symbols, two spoken words, or two cakes;
the addends may be one piece of cake and one written symbol, one fraction word and one written symbol, or (in our "realistic" addition-problem) one fraction word and one piece of cake. This is
because, when symbols (written, spoken, or concrete-pictorial) are used to represent something, it is often because the thing being represented is not present spatially or temporally. Thus, part of
the difficulty for students in these multiple-mode situations is to translate both addends (as well as the answer) into a single representational system. Our ''realistic" problem exemplified this
type of problem, virtually ignored in textbooks, which inherently involves more than one representational system. In the problem the subject first identifies 1/4 of a cake, which is presumably eaten,
thus hidden, then the subject is shown another 1/3 of the cake to be eaten, and is finally asked how much cake will have been eaten altogether.
Table 9.6 shows the number of students at each grade who used each of four basic response types for the ''realistic'' addition problem. None of the concrete solution attempts was successful.
Forty-three students (54%) used pencil-and-paper procedures, which was slightly higher than for the concrete problem. Drawing pictures and using gestures (accounting for 31% of the responses), as
well as many of the responses in the miscellaneous category, were never used on either the concrete addition-problem or the addition word-problem discussed above.
In this problem, because the 1/4 piece was hidden, some representation of the hidden piece was required. The type of representation chosen (i.e., picture, gesture, symbol) directly influenced the
solution procedures and errors that were made.
TABLE 9.6
Procedures for the "Real" Addition Problem
Number at each grade
Procedure Gr 4 Gr 5 Gr 6 Gr 7 Gr 8 Total^a
Drew a picture 5 4 2 2 3 16 (20)
Used a gesture 2 1 1 2 3 9 (11)
Used written symbols 7 8 10 9 9 43 (54)
Miscellaneous 2 3 3 3 1 12 (15)
16 16 16 16 16 80 (100)
^aNumbers in parentheses are percentages of total subjects.
Students who drew a picture of the hidden piece often (in 8 out of 16 cases) also drew a picture of the piece that was visible. Apparently, they were uncomfortable having one of the addends as a
picture while the other was a "real" piece of cake (i.e., having addends in two different representational systems). The most common (incorrect) responses were (a) 2/7: the shaded pieces (see Figure
9.10) were counted, and then all the pieces (again, see Figure 9.10) were counted; and (b) 2/5: the shaded pieces were counted, and then the unshaded pieces were counted. (Note: This latter response
points again to the confusion, quite common in children's primitive rational-number thinking, between the part-whole relationships that are relevant for fraction situations and the part-part
relationships between distinct quantities that are relevant for ratio situations.)
Figure 9.10 Cake Addition Problem.
Students who used a gesture (usually accompanied by verbal descriptions) to represent the hidden piece often used their hand to indicate how much more of the plate would be covered by the hidden
piece. Figure 9.11 illustrates roughly the procedure that was used. The most common (incorrect) response was 1/2; that is, the size of the hidden piece was distorted significantly in order to fit an
answer that the students considered "nice." (Note: The predominance of 1/2 in the rational-number thinking of children is well known. For example, see Kieren & Southwell, 1979.)
Students who used a written symbol to represent the hidden piece made errors and gave answers similar to those discussed in connection with the concrete problem. There was again evidence that some
students shifted from one representation to another, and from internal to external representations, during the solution process.
Figure 9.11 Indicating the hidden piece.
None of the subjects who used concrete procedures obtained a correct result. Of the 43 children who used written symbols, 19 were correct, which represents 24% of the total. There was a higher
success rate among subjects using written symbols for this problem than for either the concrete addition or the addition word-problems discussed above. Perhaps adding
1/4 + 1/3 was easier because it is one of the most popular demonstration problems appearing in textbooks to illustrate the LCD method for adding fractions, and so was familiar to the students.
The Chocolate-Eggs Addition-Problem
Given a carton of 12 eggs as a unit, subjects were presented various numbers of eggs and asked to identify the fraction represented by each (1/3, 1/4, 2/3, 5/12, 3/4, 5/6). They were then asked to
find two sums (1/6 + 1/4 and 1/3 + 5/12) for which the two addends were displayed as sets of eggs in two separate cartons.
Virtually none of the students used paper and pencil for the egg addition problems. Characteristics of the materials themselves apparently facilitated the higher rate of correct responses (see Table
9.1). First, the carton itself was always present as a frame to remind subjects of the whole unit. Focusing on the carton as the whole and the eggs as the parts, with distinct vocabulary to refer to
them, made it easier for subjects to keep both in mind while making judgments about part-whole relationships. The fact that the whole consists of discrete objects rather than a continuous quantity
(such as an area or length) made it easier to subdivide into unit fractions consisting of equal-sized sets of eggs. These characteristics of the materials also made part-part relationships easier to
notice, which caused difficulties for some subjects.
Compared with errors occurring on the other concrete addition problems, part-part errors were more common on the egg problems. For example, for naming 5/12 (the easiest identification problem), the
most frequent incorrect answer was 5/7. When the subjects' attention was focused on the part, that is, the five eggs in the carton, they seemed to lose track of the whole, responding with the ratio
of eggs to empty spaces. This was, therefore, another situation in which part-part ratio ideas were sometimes confused with part-whole fraction ideas.
Another source of confusion was the relationship between the number of objects and the sizes of unit fractions. For example, 1/3 was more difficult to identify than 5/12 because four eggs had to be
recognized as one-third. Similarly, 2/3 was still more difficult, because eight eggs had to be recognized as two-thirds. In each case the whole carton had to be partitioned into unit fractions
consisting of equal sets of objects, then some number of these sets had to fit the part in the part-whole relationship.
For the egg addition problems, the presentation of the two addends in separate cartons was a source of confusion that was not present in other questions. Thus, 9/24 was one popular incorrect answer
for 1/3 + 5/12. Another common incorrect response was 9/15, the ratio of eggs to empty spaces in the two cartons. In both error types the students lost track of the unit.
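Worked out in the egg representation (our reconstruction of the intended reading), the second sum is

\[
\frac{1}{3} + \frac{5}{12} \;\to\; 4 \text{ eggs} + 5 \text{ eggs} \;=\; 9 \text{ of the 12 eggs in one carton} \;=\; \frac{9}{12} \;=\; \frac{3}{4};
\]

treating the 24 cups of both cartons as the unit yields the erroneous 9/24, and comparing the 9 eggs with the 15 empty cups yields the erroneous 9/15.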
Table 9.1 shows that the chocolate-egg problems were considerably easier than problems involving the same fractions but different materials or modes of representation. In spite of the increase in the
salience of part-part notions, the characteristics of these materials, described above, are apparently responsible for the subjects' greater success.
The Cuisenaire Rods Addition-Problems
For the Cuisenaire rods addition-problems, like the egg problems, the parts and wholes were separate and easy to identify, and appropriate unit fractions were fairly easy to recognize; these
characteristics, in addition to the subjects' familiarity with the rods, contributed to relatively good performance (see Table 9.1). On the other hand, rods are continuous rather than composed of
discrete pieces, and there is no ever-present frame to maintain the size of the unit as there is for the egg problems.
The typical solution procedure for the Cuisenaire rod problems consisted of three steps. The first step was to find an appropriate unit-fraction piece that fit a whole number of times into both the
12-cm unit rod and the length being measured. Then the number of pieces that fit into the unit rod (call it n) and the number of pieces that fit the length being measured (call it m) were counted.
Finally, if these steps were completed correctly, the answer was m/n.
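The three steps amount to finding a common measure of the two lengths. A minimal sketch in code (ours, not part of the study; the function name and the use of a greatest common divisor to stand in for the chosen unit-fraction piece are our assumptions):

```python
from math import gcd

def rod_fraction(length_cm: int, unit_cm: int = 12) -> str:
    """Name a length as a fraction of the 12-cm unit rod, mirroring the
    subjects' three steps: find a piece that fits a whole number of times
    into both lengths, count how often it fits into each, report m/n."""
    piece = gcd(length_cm, unit_cm)   # largest piece fitting both lengths evenly
    n = unit_cm // piece              # pieces per unit rod
    m = length_cm // piece            # pieces in the measured length
    return f"{m}/{n}"

print(rod_fraction(8))       # "2/3"   -- the 8-cm rod
print(rod_fraction(6 + 4))   # "5/6"   -- the 6-cm and 4-cm rods end to end (1/2 + 1/3)
print(rod_fraction(9 + 8))   # "17/12" -- the 9-cm and 8-cm rods (3/4 + 2/3)
```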
Thus, unlike the case with the egg problems, for which the answers could almost be read from the materials, the Cuisenaire rod problems required the chaining of several (relatively simple) steps. The
part-whole judgments needed to find an acceptable unit fraction piece depend on within-concept networks in the subject's rational-number conceptual model; the measuring and labeling of lengths of the
part and the whole depend on between-concept systems (relating measurement and fraction ideas) and representational systems (to accommodate parallel processing of the concrete objects as visual
stimuli while attaching spoken symbolic labels during counting).
The Area Problems
For the area problems, the goal was to find how many square units there were in a nonrectangular figure; specific fractions could not be abstracted and plugged into a written algorithmic procedure.
These problems were very much embedded in the graphical pictorial representation that was given to the subjects.
Nearly all the students used the following procedure on the area problems: First, they counted all the whole squares. Then, they looked for parts of other squares that would fit together to make wholes.
Finding pieces to make wholes seemed to be a relatively simple task for most students. When errors were made on the area problems, they usually resulted from memory overload. That is, the student
would forget how many whole pieces had been counted when he or she went on to find pieces to make up more wholes, or would lose track of which pieces had already been used in making previous wholes.
Throughout this counting process, it was striking that virtually none of the students used any recording system either to count or to keep track of the counted pieces. They relied entirely on internal
memory. Still, their errors tended to be within +1 or -1 of the correct answer.
The Multiplication Problems
Six different multiplication problems were given. One problem was a word problem whose solution can be characterized by the expression 1/4 + 1/3 X (1 - 1/4). Two of the problems involved Hershey
candy bars, one using a plain candy bar, the other a candy bar with nuts; this was an important distinction because the plain bar is scored into 10 squares. The fourth problem was also a
concrete problem, involving chocolate eggs in a 12-pack carton. The last two problems were straightforward computation problems, 1/2 X 1/3 and 2/5 X 3/4.
The success rates for the six problems were included in Table 9.1. The computation problems were the easiest, then the problem using the eggs, the candy bar with nuts, the word problem, and the plain
candy bar.
The Multiplication Computation Problems
The multiplication computation was quite easy, although probably for the wrong reasons. Recall that the most common (incorrect) computation procedure for addition involved adding the numerators and
denominators. For multiplication, a comparable procedure yields a correct answer.
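In symbols (a standard illustration): operating on numerators and denominators separately is exactly the correct rule for multiplication, although the analogous move is wrong for addition:

\[
\frac{2}{5} \times \frac{3}{4} \;=\; \frac{2 \times 3}{5 \times 4} \;=\; \frac{6}{20} \;=\; \frac{3}{10},
\qquad \text{but} \qquad
\frac{1}{4} + \frac{1}{5} \;\neq\; \frac{1+1}{4+5} \;=\; \frac{2}{9}.
\]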
The Multiplication Word-Problem
For the multiplication word-problem, which was slightly more difficult than the earlier addition word-problem, the solution procedures were entirely different. For the addition problem, no one drew
pictures, and the only successful procedures involved pencil-and-paper computations. For the multiplication problem, the attempted solution procedures are shown in Table 9.7. Sixty-five percent of
the students used spoken language and/or drew pictures; almost half of these were successful. None of the students using exclusively pencil-and-paper procedures was successful.
TABLE 9.7
Solution Procedures Used on the Multiplication Word-Problem

                                        Number at each grade
Procedure                               Gr 4  Gr 5  Gr 6  Gr 7  Gr 8  Total^a
Used written symbols                      4     5     6     6     5    26 (34)
Used concrete materials                   0     0     0     0     1     1 (01)
Used pictures (with little language)      3     2     1     0     1     7 (09)
Used spoken language
  only                                    2     1     2     4     4    13 (16)
  and pictures                            5     6     4     3     4    22 (28)
  and written symbols                     2     2     3     3    12    22 (28)
^aNumbers in parentheses are percentages of total subjects.
On the word problem, if a picture was used, errors occurred most often when subjects drew two pictures, one showing 1/4 and another showing 1/3. The students did not know what to do with the two
separate pictures, and often resorted to doing "some number thing" with some of the numbers 1, 3, 4, 1/3, or 1/4. Similar errors were committed by students who initially translated the word problem
into written symbols. The fractions most frequently written were 1/4 and 1/3, not (for example) 3/4, the amount remaining after the first piece of cake was removed.
Concrete Multiplication Problems
Looking at success rates in Table 9.1, one of the most interesting results is the radical difference in difficulty among the three concrete problems, two of which were closely related. The
explanation for this fact has been discussed in connection with several of the addition problems. Slight differences in materials often greatly hinder or facilitate a student's ability to use a
system of rational number relations and operations. Having a conceptual model and being able to use it in a given situation are quite different. A student's ability to use a given conceptual model
depends considerably on the stability (i.e., degree of coordination) of the constituent structures. For many of our fourth- through eighth-grade subjects, rational-number concepts were apparently
relatively unstable, particularly with respect to the task of translating from one representational system to another.
Tables 9.8, 9.9, and 9.10 display the number of subjects who arrived at correct and incorrect answers using each solution procedure. The use of the concrete materials that were part of the problem
presentation was least helpful for the plain candy-bar problem in which the division of the rectangle into 10 squares probably interfered with attempts to divide the bar into thirds. The concrete
materials were most facilitative for the chocolate-eggs problem in which the materials lend themselves to a ready representation of the fractions in the problem; that is, 1/4 of 12 eggs is 3 eggs,
3/4 is 9 eggs, and 2/3 of 9 eggs is 6 eggs, or 1/2 of the whole carton. For the candy-bar problem with nuts, the unmarked rectangle neither contributed to nor interfered with the subject's ability to
impose fourths.
Further, the plain candy-bar problem was the one for which a language representation would be least helpful. Finding 2/3 of 1/2 can be quite a different problem from 1/2 of 2/3, the commutative
property notwithstanding. For 1/2 of 2/3, subjects can find an answer by thinking of the problem as analogous to 1/2 of two objects, for which the result is one object, or, in this case, 1/3. This
type of language representation, implemented either overtly or covertly, would facilitate reaching a correct result in the candy-bar (with nuts) problem (1/3 of 3/4) and the chocolate-egg
multiplication problem (2/3 of 3/4). It was not helpful for the plain candy-bar problem (2/3 of 1/2).
Apart from comparisons among the concrete multiplication problems, it is of interest to compare solution procedures on these concrete problems with those on the concrete addition-problems discussed
above. The most striking difference is the drastically reduced rate of written symbolic procedures on the multiplication problems, accompanied by a much greater reliance on concrete procedures.
Language representations, useful for some of the multiplication problems, did not occur on the addition problems.
TABLE 9.8
Plain Hershey Bar Solution Procedures^a
Procedure Correct Incorrect Overall Success (%)
Manipulated materials (using little language) 8 44 52 15
Overtly used spoken language 3 8 11 27
Did it "in their heads" 1 7 8 14
(probably using internal language)
Used written symbols 0 9 9 00
^aFor all grades.
TABLE 9.9
Almond Hershey Bar Solution Procedures^a
Procedure Correct Incorrect Overall Success (%)
Manipulated materials (using little language) 19 34 53 36
Overtly used spoken language 7 7 14 50
Did it "in their heads" 3 5 8 38
(probably using internal language)
Used written symbols 0 5 5 00
^aFor all grades.
TABLE 9.10
Chocolate Eggs Multiplication^a
Procedure Correct Incorrect Overall Success (%)
Manipulated materials (using little language) 25 21 46 54
Overtly used spoken language 8 12 20 40
Did it "in their heads" 4 6 10 40
(probably using internal language)
Used written symbols 0 4 4 00
^aFor all grades.
Summary of Major Conclusions from the Interviews
The data collected in the Rational Number Task Interviews support the following generalizations, some of which are based on the data as a whole, and some of which stem from results on one or more
particular items. Reference is made to the problem or problems that were especially relevant.
1. Realistic word-problems often differ significantly from isomorphic concrete or real-world problems in difficulty, preferred solution processes, and the types of errors most commonly made. In
fact, virtually any small change in the task had a noticeable impact on performance for most of our fourth- through eighth-grade subjects. This variability is regarded as an indication of the
instability of subjects' rational-number conceptual models.
a. Problems presented in terms of concrete materials are not necessarily easier than those presented orally or using written language and symbols. For a number of children, performance declined
when they were encouraged to use concrete aids to help in solving problems. Compare, for example, the results on the addition word-problem with its concrete follow-up question and with the
concrete addition-problem. Some subjects were able to execute the written LCD procedure correctly on the word problem but were unable to do so (even when they again were working in written
symbols) after concrete materials had been brought into the problem situation.
b. The representational system favored by subjects was distinctly problem specific. For example, subjects were much more inclined to translate the addition word-problem into written symbols than
the multiplication word-problem. Even within the class of concrete problems, we found, for example, that the chocolate eggs were used for finding answers whereas the circular regions were
not. The fact that the specific fractions involved in the problems may have influenced the results only strengthens the point.
c. Some successful solution procedures are closely tied to specific problem-representations and are not consistently "called up" for solving an isomorphic problem presented in a different
representational mode. For example, the missing piece strategy used by subjects to identify the 5/6 circular region was apparently unavailable for finding the sum of 1/2 + 1/3 even when
circular pieces were placed next to each other on the same plate.
d. There was a great deal of evidence of confusion between part-whole fraction ideas and part-part ratio ideas; to whatever extent this was present in the subjects irrespective of the stimulus,
it appeared that certain materials (e.g., the chocolate eggs) elicited more inappropriate part-part responses than other materials.
e. The mode of representation was important not only in the initial problem presentation, but also when selected by the subject. For example, on the "realistic" addition-problem, the mode in
which the subject chose to represent the hidden 1/4 piece seemed to affect the course of the solution dramatically. On particular multiplication problems, for which the denominator of the
first factor was a divisor of the numerator of the second factor, a language-based representation apparently facilitated correct responses.
2. Representation of a problem situation is a very active process.
a. During the course of a solution attempt, subjects often changed their external representation of the problem from one mode to another. For example, very few subjects solved the concrete
addition-problem in the concrete mode; most changed to written symbols.
b. Subjects often used two or more systems of representation simultaneously. The most obvious occurrences were in response to the multiplication word-problem for which a large number of subjects
used spoken language and pictures, and the Cuisenaire rod addition-problems, for which many subjects monitored their solution steps by narrating their actions.
c. Internal and external representations of the problem situation interacted with each other and influenced the course of the solution. On several problems (for example, in identifying the concrete circular "pizza" regions) subjects responded based on perceptual cues, then constructed post hoc explanations of their answers for the interviewer. For the "realistic" addition-problem, the strength of the understanding of the fraction 1/2 led many children to adjust their external representation of 1/4 + 1/3 so that it would look like an appropriate response.
3. Some beliefs children have about mathematics and about how to respond to mathematics problems affect performance.
a. Most of our subjects seemed to be working toward precise answers as the goal of solving problems; there were few instances of estimates or approximations. This tendency was most apparent in
response to the more typical textbook word-problems and least apparent on the area problems.
b. Several children were unconcerned about obtaining different results for the addition word-problem when they used concrete materials versus paper-and-pencil procedures.
Rational-Number Written-Test Results
Written language, written symbols, and several types of pictorial representations were used in assessing children's rational-number understandings in the set of written tests developed in the RN
project. These tests, the subject of this section, provided the baseline data about paper-and-pencil modes of representations and fraction and ratio within-concept networks that the interviews
described above were designed to extend and explore.
Three paper-and-pencil tests were developed for the RN project: the Assessment of Rational Number Concepts (CA), the Assessment of Rational Number Relationships (RA), and the Assessment of Rational
Number Operations (OA). The first tests basic fraction and ratio concepts. The second tests understanding of relationships between rational numbers, involving ordering, equivalent rational forms, and
simple proportions. The third tests abilities to perform addition and multiplication operations with fractions. This section of the chapter identifies the rational-number characteristics tested and
summarizes some general results from two of the tests (the OA and the CA). Further information on the Rational Number testing program appears in Lesh and Hamilton (1981).
The written tests were administered to about 1000 students in Grades 2 through 8 in Evanston, Minneapolis, DeKalb, and Pittsburgh between early November and late January of the 1980-1981 school year.
The tests were prepared in two parallel versions, with most of the students taking Version I. The tests were administered in a modularized form so that a core of items was given to all grades,
younger students did fewer items than older students, and the most difficult items were done only by older students.
Two of the tests, the CA and the OA, are referred to extensively in this section. Each item from those tests is reproduced in Appendix 9.B, with data on the fourth through eighth graders who took the
tests. In all, 650 fourth through eighth graders took the CA and 608 took the OA. The items for each test are arranged in order of percentage correct for the students who were given the item. The
purpose of this arrangement is to enable a broad overview of order of difficulty for the items on each instrument. A sample item from the Concepts test is given in Figure 9.12.
Figure 9.12 Item C-8
Beneath the item are two lines of information, which can be interpreted as follows: "Item C-8" refers to the overall rank (by proportion correct) of this item on this test. Thus, out of the 60 items
on the Concepts test, this one had the eighth highest score. Next, "(5)" refers to the original number of the item on the test (and corresponds to the number next to the item's question stem). "Gr.
4-8, n = 650" means that Grades 4-8 were given the item, involving 650 students. Next is a set of letters and numbers corresponding to the answer choices: "a)2 b)596 . . . " This means that 2
students out of the 650 selected choice a, 596 chose b, and so forth, "na/1" means that one student gave no answer.
On the second line, two results for each grade that took the item are given: the proportion correct for that grade, and the item rank for that grade. Thus, "G4-.797 (7/43)" means that 79.7% of the
fourth graders did this item correctly, and this item was the seventh easiest out of 43 items for the fourth graders. Item ranks varied across grades on the items. Notice, for example, that this item
was the sixth easiest (out of 60 items) for the sixth and seventh graders, but the fourth easiest (out of the same 60 items) for the eighth graders.
Two caveats are in order. First, because the ordering of these items collapses across all grade levels, the order of difficulty of an item is somewhat different from its order for all grades that did
that item. Second, all items are included in the list, even though not all grades did all items. Thus, for example, the rank order of 7 on item 8 for the fifth graders is out of 43 items, whereas the
rank of 6 for the sixth graders is out of 60 items (which include all 43 items done by the fifth graders).
Items on each test are identified along several dimensions, including type of representational translation, rational-number size, and rational subconstruct (fraction or ratio). A list of
characteristics for each test is included in Appendix 9.B, along with a generating scheme for the test items.
Technical Information
Cronbach-alpha reliabilities averaged 0.881 across the tests, excluding the short 15-item OA given to fourth graders, which had an alpha reliability of 0.489. Within-subject reliabilities averaged 0.850, excluding the fourth-grade OA, which had a within-subject reliability of 0.318. One external validity measure involved the correlation between eighth-grade performance on
six OA items and six corresponding National Assessment items administered to 13-year-olds in 1979. Although item stems in each pair were similar or identical, answer sets were different in that the
OA limited responses to five given choices or omit, whereas the National Assessment items required students to write in their answers. Students scored higher on all OA items, given answer choices,
than their National Assessment counterparts on comparable items with no answer choices given. The high correlation between the six score pairs (0.918), and the fact that one would expect an
incrementally higher score for items that give answer choices compared with those that do not, are evidence of the comparability of the eighth-grade sample with the National Assessment sample.
Discussion of CA and OA Results
Many observations can readily be made from these results. This section will identify and discuss a few of them, including:
1. A difficulty ordering of translations between representational modes.
2. The easiest and most difficult problems from each test.
3. Notable jumps in performance from one grade to a later grade, including examples of apparent declines from one grade to another.
4. Item pairs that suggest the "nonadditivity" of some rational-number understandings.
Ordering of Translations between Representational Modes
Representational translations on the CA were
1. Symbols to written language (e.g., C-3).
2. Written language to symbols.
3. Picture to pictures (e.g., C-12).
4. Written language to pictures (e.g., C-17).
5. Picture to written language (e.g., C-20).
6. Symbols to pictures (e.g., C-32).
7. Picture to symbols (e.g., C-11).
Because each item on the test can be characterized by one of these seven translations, one can think of the CA as consisting of seven "subtests." Considering the 43 items that were done by all five
grades, the order in which the translations are listed above proved to be the order of their increasing difficulty for each grade, with only minor exceptions. The fourth and fifth graders found the
written-to-symbol subtest more difficult than the picture-to-picture translations and more difficult than the written-language-to-picture translations. Also, the seventh graders did slightly better
on the picture-to-written translations than on written-to-picture translations. Table 9.11 shows the representation translation success rates for each grade.
TABLE 9.11
Translation Success Rate (%) for 43 Items on CA
Translation type 4 5 6 7 8 4-8 6-8
Symbol to Written 93.2 95.0 98.6 98.9 95.4 95.8 97.5
Written to Symbol 64.2 74.6 85.2 85.1 88.8 77.9 86.3
Picture to Picture 69.9 80.2 80.5 85.3 86.5 79.4 83.9
Written to Picture 67.3 78.3 78.3 80.4 84.5 76.8 81.0
Picture to Written 56.9 66.8 72.6 81.3 83.2 70.3 78.7
Symbol to Picture 52.0 65.8 71.3 77.5 81.4 67.7 76.5
Picture to Symbol 50.0 60.1 67.3 72.2 74.5 63.1 71.1
Such an ordering is plausible. The easiest translations are those that involve simply reading a rational number in two different modes, requiring little or no conceptual processing of the meaning of
the rational number (e.g., C-4). The relative lack of familiarity with the symbol notation provides a reasonable explanation for the apparent difficulty the early fourth and fifth graders experienced
with the items involving four symbolically expressed rationals in the answer set (e.g., the written-language-to-symbol subtest). Next are those that require mapping one picture to another that is
isomorphic with respect to the fraction shaded (e.g., C-24). Following that are translations between pictures and written language, followed by translations between pictures and symbols. The written
language representations of rationals appear to be easier to process than the symbolic coding of rationals. As will be discussed later, this may be attributed to the fact that symbolic
representations do not encode rational components (i.e., numerator and denominator) differently, though they have different meanings, whereas written language representations do express each
component in a different form. For both translation types, those involving written language and those involving symbols, the translations to pictures were more difficult than from pictures. The
former translations include four pictures in the item answer set, whereas for the latter, only one picture appears in the item stem. Translations to pictures thus demanded more visual processing than
did translations from a single picture in the item stem. Thus, the data support the implication that a written or symbolic expression is easier to process than is a pictorial representation.
Easiest and Most Difficult Items
The single exception to the assertion that "a written or symbolic expression (involving no conceptual processing) is easier to process than a pictorial representation" is item C-1. This item asked students to identify 1/2 as the shaded fraction of a circle. Earlier research has affirmed the primacy of "halfness" in children's earliest rational-number understandings (e.g., Kieren, 1976). This
was also the case on the OA. Item O-1, involving "giving away half of six puppies," was the easiest item for all grades except sixth, for which it was second easiest. Otherwise, the easiest items on the OA involved the interpretation of concrete situations with simple part-whole ideas, expressed in written language.
The most difficult items on both tests required what could be called second order processing of the part-whole idea. For all grades, C-60 was the most difficult CA item. It required interpreting each
third of the configuration of nine circles as a whole, as per the given information. Understandably, the most popular distracter was 7/9, selected by 36% of the students. Furthermore, this item elicited many more part-part responses (selected by 16% of the students) than other items involving discrete objects (such as C-5, for which less than 7% of students selected either of the two possible part-part distracters in the answer set).
A similar item was the fourth most difficult item on the CA, given only to sixth through eighth graders. C-57 required students to interpret each large rectangle as a third, with three large
rectangles thus comprising a whole. For this item, nearly 40% of the students selected the response interpreting each large rectangle as a whole, rather than as a third partitioned into fourths (or
twelfths) of the whole.
"Fractions involving fractions" also proved to be among the most difficult on the OA. Less than 1 in 6 seventh graders and barely 1 in 5 eighth graders interpreted O-32 as involving a fraction of a fraction requiring multiplication. Even fewer students correctly interpreted a similar item, O-34. Barely 1 in 10 seventh graders and 1 in 6 eighth graders could tell how many thirds equal 1/5; more than half of the seventh and eighth graders simply answered "cannot be done" (O-33). The prevalence of this response is ironic because rational numbers are the first number system children encounter that is closed with respect to all four basic operations. The most difficult item on the OA was O-35, requiring the student to process a fourth of a half and a half of a half and then find their sum. Fewer than 1 in 12 seventh and eighth graders selected the correct response for this item. For this particular item, a visual estimate of the fraction shaded in each half of the picture was required, and this surely contributed to its difficulty. Interestingly, however, immediately preceding this item on the original test was O-14, for which the student also had to compute a fourth of a half, and a half of a half, and then their sum. On that item, however, we showed the whole rectangle partitioned, and explained each of the possible responses. Nearly one-half selected the correct response on that item, though hardly any could do the next problem, O-35, which was very similar and which had the same answer.
Jumps from One Grade to Another
The three largest performance jumps from one grade to another occur between the same two grades, fifth and sixth, for items O-5, O-6, and O-12. The average increase from Grade 5 to Grade 6 on these items was 43.2%, compared with an average increase of only 11.2%. All three of the largest jumps involved addition or subtraction of fractions with like denominators. One can infer a significant instructional effect between early fifth grade and early sixth grade with respect to this skill. These items provide an interesting contrast with O-7, another fraction addition-problem with like denominators. Fifth graders performed significantly better on this item, for which the denominator was expressed in written language rather than in symbols. The most popular distracter for 1/3 + 1/3 was 2/6. Interpreting the numerator of each addend as a cardinal value is familiar to the students. The denominators, however, look like standard cardinal numbers, but their meaning is quite different. Problems such as O-5, with representations that preclude attaching the same meaning to numerator and denominator, do not require counterintuitive processing or algorithms for standard fraction symbols. Attaching different meanings or algorithms to numbers with the same form, but different positions in the fraction, is the principal instructional achievement discerned by the OA, and it occurs between early fifth and early sixth grade.
Jumps "backward" from one grade to the next also merit comment. On the CA, sixth graders outperformed seventh graders on four out of five consecutive items on the original test (items 52-55; C-7, C-56, C-34, and C-37 in the reordered version in Appendix 9.A). Three of these items required converting representations from mixed to improper form, and vice versa.
The greatest proportion of "downward jumps" occurred between sixth and seventh grade. On the CA, seventh graders as a whole averaged 71.3%, versus 67.8% for the sixth grade. On the OA, sixth graders
actually scored slightly better as a group than did seventh graders, by a margin of 50.8 to 49.9% (considering only the 28 items done by both grades). This suggests the possibility that fraction
operation skills from Grade 6 to Grade 7 are unstable and that instruction barely maintains or only slightly improves their level.
Task Variable Additivity
A natural question would be the feasibility of devising a hierarchy of rational number task variables that would enable prediction of performance on rational number tests. Such a hierarchy would
involve variables v1, ..., vn, with additive properties such as

(v1 + v3) - (v1 + v4) = d  --->  (v2 + v3) - (v2 + v4) = d
(v1 + v2) > (v1 + v3)  --->  (v4 + v2) > (v4 + v3),

where the sum (vi + vj) represents the expected performance level for a rational-number task comprising two task variables, vi and vj; where the difference, d, between two tasks is the expected performance difference for a student population; and where the order, >, is defined by order of difficulty among tasks.
Several examples from the CA and the OA suggest that such a hierarchy is not plausible. They indicate that the impact of changing a single variable in a pair of items may either be much greater than
or possibly even opposite to a similar or identical variable change for another pair of items.
EXAMPLE 1
The task variable change is from multiplying two fractions of the form p/q (v1) to multiplying a whole number by a fraction (v2). Sample item pairs are (O-18, O-8) and (O-19, O-34).
Students found whole-number-fraction multiplication significantly more difficult than fraction-fraction multiplication with a symbol-only representation, such as items O-18 and O-8. However, in the context of a brief word-problem, the same variable change produced the opposite result, with students finding the fraction-fraction multiplication more difficult than the whole-number-fraction multiplication. In this case, if v3 is symbol-mode computation, and v4 is word-problem context, we would have:
(v3 + v2) > (v3 + v1)   [O-18 vs. O-8], but
(v4 + v2) < (v4 + v1)   [O-19 vs. O-34],
whereas, under the assumption of additivity, if v2 > v1, and if (v3 + v2) > (v3 + v1), it would follow that (v4 + v2) would be greater than (v4 + v1).
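To make the failed implication concrete, here is a minimal Python sketch of the consistency check that additivity would require. The success rates below are invented purely for illustration (the chapter's item-level percentages appear only in Appendix 9.B); the variable labels follow Example 1.

# v1 = fraction x fraction, v2 = whole number x fraction,
# v3 = symbol-only computation, v4 = brief word-problem context.
# Hypothetical success rates standing in for items O-18, O-8, O-19, O-34.
observed = {
    ("v1", "v3"): 0.60,
    ("v2", "v3"): 0.45,
    ("v1", "v4"): 0.40,
    ("v2", "v4"): 0.50,
}

# Under additivity, swapping v1 for v2 should shift performance in the
# same direction regardless of the accompanying context variable.
shift_symbol = observed[("v1", "v3")] - observed[("v2", "v3")]
shift_word = observed[("v1", "v4")] - observed[("v2", "v4")]
print(f"shift in symbol mode:       {shift_symbol:+.2f}")
print(f"shift in word-problem mode: {shift_word:+.2f}")
if (shift_symbol > 0) != (shift_word > 0):
    print("Opposite signs: no additive hierarchy fits these four items.")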
EXAMPLE 2
The task variable change is from a continuous pictorial representation (v1) to a discrete-object representation (v2). Sample item pairs are (C-39, C-60) and (C-17, C-8). Students found item C-17, with a continuous representation, more difficult than the discrete representation in C-8. C-39 contains a continuous representation of the whole, and proved to be one of the easier items on the CA. In contrast, C-60, representing a whole with three discrete objects, was the most difficult item on the CA. The intrinsic nature of a discrete-object representation of the whole in C-60 elicited a special distracter, the one that interpreted the entire configuration as a whole. For these two item-pairs, if v3 is written-to-picture identification of a simple fraction, and v4 is identifying, given a whole, a fraction greater than 1, we have

(v1 + v3) > (v2 + v3)   [C-17 vs. C-8], but
(v1 + v4) < (v2 + v4)   [C-39 vs. C-60]
Conceptual-Model Processes in the Written Tests
Although applied problems evoke richer transformations in conceptual models than those observed on the CA and OA, it is helpful to consider such processes on those instruments. Such a discussion, pertaining to any particular test item, would involve three components and the interactions among them: the conceptual model brought to bear on the problem, the item stem, and the item answer set.
The "interactions" depicted in Figure 9.13 involve such processes as imposing meaning in the direction of the arrow. For example, a conceptual model imposes a meaning on the representation "1/2".
Figure 9.13 Components affecting written-test-item response.
The item stem or answer set may distract a subject who has an unstable conceptual model. For example, on Item C-51, a rational-number conceptual model that has not fully differentiated cardinality
from part-wholeness, and is, therefore, unstable with respect to that difference, would be easily distracted by choices such as answer (d). In general, the more stable the conceptual model, the less
either the problem stem or the answer set will influence the model by refining or distracting it.
The conceptual model directs two different process types when the student is doing these kinds of paper-and-pencil items. The first is within-stem or within-answer-set processing, for which
rational-number meanings are imposed or attached to each representation in the stem or answer set. The second processing type involves the translations between the stem and the answer set (that
contribute to the meaningfulness of within-stem or within-answer-set processing), and which compares meanings imposed on the stem with meanings imposed on the answer set until a "fit" is achieved.
The conceptual model effectively encodes and processes both the stem and the answer set and imposes a structure on each. Translations back and forth between the stem and the answer set continue until
an isomorphism is established between the stem and one choice in the answer set. Every time the model translates the stem structure to the answer choice set and fails to make an isomorphism, it
reprocesses the representations in each and perhaps modifies the meaning or interpretation attached to each representation. In this sense, the translation contributes to the within-stem and
within-answer-set processing.
The role of translations, vis-à-vis the amount of processing required to read symbol or written-language representations, merits comment. First, the easiest translations were those that required no
meaningful rational number conceptual understandings (e.g., C-3). The data suggest that youngsters can do this without part-whole understandings. It may be that written or symbolic expressions, such
as those in C-3, are intuitive and unstable for some children, as are some of their pictorial understandings of part-whole relationships. It may be, however, that when a symbol, for which a student
has an unstable or intuitive model, appears with four pictures that illustrate part-whole understandings that the student can identify only intuitively, the understandings in the two modes intersect
and the student selects the correct response. The translation stabilizes the understandings associated with the two representations by implicitly saying to the student "whatever is in Mode A means
the same thing that is in Mode B, so the overlap of understandings you have between the modes is the correct understanding within each mode." In this way, the multiple-choice items serve to refine or
stabilize a model.
An example in which this may occur is Item C-5, which was intended to distract students with part-part understandings. It appears to have had the opposite effect. Students who did not come to the
item with models differentiating part-part from part-whole had the opportunity for the item answer-set to facilitate such a differentiation, by giving two examples of part-part (c and d) and only one
example of part-whole (b), with the explicit assumption that only one answer is correct. Students could therefore deduce the correct response and learn something in the process.
Thus, it is possible to view each response as a function of six variables:
1. Within-item-stem processing, directed by the conceptual model.
2. Within-answer-set processing, directed by the conceptual model.
3. Translations and matching, from item stem to answer set.
4. Interaction between conceptual model stability and available distracters (i.e., in the answer set).
5. Modification of the conceptual model by previous items or the current problem.
6. Error term.
Concluding Remarks
Research on the acquisition and use of mathematical concepts and processes is influenced by the researchers' theoretical perspectives, whether or not they are explicitly stated. The investigators'
beliefs about how cognitive structures are organized and interrelated in the minds of their subjects shape the questions that are chosen to be addressed, the methods and tasks that are selected, and
what is regarded as important among the data that result. The chief goal of this chapter has been to define and illustrate our present understanding of a theoretical construct, the conceptual model,
that underlies our program of research on applied mathematical problem-solving. The rational-number data presented here, collected from written tests and structured interviews, are highly relevant to
our applied problem-solving research because the rational-number system is one of the most sophisticated systems familiar to middle-school youngsters and because the variety of subconstructs provides
a rich context for applications appropriate to our subjects.
We have focused on within-concept networks and systems of representations, the two components of conceptual models that are most elementary and crucial to an understanding of the construct. These
components have been described in relation to rational-number conceptual models both because they are more discernible in that context than they would be if we tried to describe them in more complex
applied problem-solving situations, and because it is apparent how our interest in the growth and use of these components provided a framework and direction for our research. The other two components
of conceptual models, between-concept systems and dynamic (modeling) mechanisms, are addressed in our current applied problem-solving research.
We have described three approaches to the construction of tasks for research focusing on the growth and development of conceptual models used by middle school students to solve realistic problems
involving mathematical ideas. Tasks may originate in the mathematical ideas themselves, or they may be created to be isomorphic to typical textbook word-problems, or they may be derived from real
problem-situations in which the use of mathematics arises naturally. The research included in this chapter has taken the second of these approaches. Furthermore, it has emphasized the rational-number
ideas themselves more than the processes for using the ideas. The bulk of our earlier research takes the first approach, whereas the APS project also utilizes the third.
Both the interviews and the written tests focused on fraction and ratio ideas, two of the within-concept networks subsumed by general rational-number understanding, and attended particularly to
various representations of these ideas and translations among these representations. The written tests assessed children's abilities to translate within and among paper-and-pencil modes of
representation: static figures, written symbols, and written language. The interviews made it possible to assess translations involving concrete materials and more realistic representations, in
addition to those mentioned above.
We have investigated other translations in modified testing situations in which the stimulus was not written (either spoken or displayed using concrete materials) but answers were written, and in
interviews, in which the stimulus and response could involve spoken language, written symbols and language, and concrete materials (Landau, Hamilton, & Hoy, 1981). The second phase of the RN project
is examining the role of spoken language as an intermediary between problem situations and written symbolism (Behr et al., Note 3); the use of pictorial representations as an aid in problem solving
is also being investigated (Landau, Note 5). Thus, we have a more general interest in modeling processes, of which paper-and-pencil translations emphasized on the written tests are only one type.
The two components of the conceptual model and the method for building tasks that have been investigated and used in the research reported here provide needed underpinnings for our current work in
the APS project, which addresses the remaining two components of the conceptual model within the context of tasks originating in real situations. The between-concept systems of the rational number
conceptual model must be addressed in the APS research because, in so many of the larger, applied problems appropriate for our seventh-grade subjects, the ideas fraction, ratio, proportion, percent
are inextricably connected to what the subjects know about measurement, area, and number lines, as well as to real-world understandings.
When the goal of research is to investigate the processes, skills, and understandings that enable youngsters to use their current understandings of a mathematical idea, it is appropriate to use the
kind of small problems discussed in this chapter. However, when the goal is to study the mechanisms by which learners modify and adapt their understandings in the course of solving problems, it
becomes more important to focus on the larger kinds of problems that characterize our current applied problem-solving research. The modeling mechanisms students use for creating and refining various
interpretations of problem situations will be emphasized in future reports on findings from the APS project.
Reference Notes
1 Behr, M., Lesh, R., & Post, T. The role of manipulative aids in the learning of rational numbers. RISE Grant #SED 79-20591. Northern Illinois University.
2 Lesh, R. Applied problem-solving in middle-school mathematics. RISE grant #SED 80 - 17771. Northwestern University.
3 Behr, M., Lesh, R., & Post, T. The role of representational systems in the acquisition and use of rational number concepts. RISE grant # SED 79-20591. Northern Illinois University.
4 Usiskin, Z. Arithmetic and its applications. RISE grant #SED 79-19065. The University of Chicago.
5 Landau, M. The effect of spatial abilities and problem presentation formats on problem solving performance in middle school students. Doctoral dissertation, in progress. Northwestern University.
References
Cardone, I. P. Centering/decentering and socio-emotional aspects of small groups: An ecological approach to reciprocal relations. Unpublished doctoral dissertation, Northwestern University, 1977.
Kieren, T. E. On the mathematical, cognitive, and instructional foundations of rational numbers. In R. A. Lesh (Ed.), Number and measurement: Papers from a research workshop. Columbus: ERIC/SMEAC, 1976.
Kieren, T. E., & Southwell, B. The development in children and adolescents of the construct of rational numbers as operators. The Alberta Journal of Educational Research, 1979, 25(4), 234-247.
Landau, M., Hamilton, E., & Hoy, C. Relationships between process use and content understanding. Paper presented at the Annual Meeting of the American Educational Research Association, Los Angeles,
April 1981.
Lesh, R. Directions for research concerning number and measurement concepts. In R. Lesh (Ed.), Number and measurement: Papers from a research workshop. Columbus: ERIC/SMEAC, 1976.
Lesh, R. Social/affective factors influencing problem solving capabilities. Paper presented at the Third International Conference for the Psychology of Mathematics Education, Warwick, England, 1979.
Lesh, R. Mathematical learning disabilities: Considerations for identification, diagnosis, and remediation. In R. Lesh, D. Mierkiewicz, & M. G. Kantowski (Eds.), Applied mathematical problem solving. Columbus: ERIC/SMEAC, 1979. (b)
Lesh, R. Applied Mathematical Problem Solving. Educational Studies in Mathematics, 1981, 12, 235-264.
Lesh, R. Modeling students' modeling behaviors. In S. Wagner (Ed.), Proceedings of the Fourth Annual Meeting of the North American Chapter of the International Group for the Psychology of Mathematics
Education. Athens, Georgia: University of Georgia, 1982.
Lesh R. Metacognition in mathematical problem solving (Tech. Rep.). Evanston, Illinois: Mathematics Learning Research Center, Northwestern University, 1983.
Lesh, R., & Hamilton, E. The rational number testing program. Paper presented at the annual meeting of the American Educational Research Association, Los Angeles, April, 1981.
Lesh, R., Landau, M., & Hamilton, E. Rational number ideas and the role of representational systems. In R. Karplus (Ed.), Proceedings of the Fourth International Conference for the Psychology of
Mathematics Education. Berkeley: Lawrence Hall of Science, 1980.
Lesh, R., & Mierkiewicz, D. Recent research concerning the development of spatial and geometric concepts. Columbus: ERIC/SMEAC, 1978.
*This research was supported in part by the National Science Foundation under grants SED 79-20591 and SED 80-17771. Any opinions, findings, and conclusions expressed in this chapter are those of the
authors and do not necessarily reflect the views of the National Science Foundation.
Appendices 9.A and 9.B are available as separate PDF files.
Re: st: What is an integer for Stata?
From Sergiy Radyakin <serjradyakin@gmail.com>
To statalist@hsphsun2.harvard.edu
Subject Re: st: What is an integer for Stata?
Date Thu, 8 Oct 2009 10:54:20 -0400
Thank you Stas, Scott and Austin!
I guess Stas has pointed out an even more serious problem: that
confirm integer number 1e4
yields that 1e4 is not integer, while
capture assert mod(1e+04,1)==0
yields that it is.
which has implications beyond the -set obs- statement.
So it does matter how you check for being an integer?
Scott's solution is an obvious workaround, but the point is that 1e+04
should not be treated by Stata as an expression, but rather as a
single entity - a number. In fact it is treated this way in the input
.set obs 1
.input z
1. 1e+01
2. end
And note that -input- does not evaluate the values as expressions,
e.g. an attempt to input 2+3 as an element of the input statement will
cause an error message "cannot be read as a number". So the parser
used in the input statement is properly programmed to recognize that
"e+" or "e-" after a number is a continuation of the same number,
while the regular parser does not do the same.
Best regards,
Sergiy Radyakin
On Thu, Oct 8, 2009 at 8:01 AM, Austin Nichols <austinnichols@gmail.com> wrote:
> Sergiy--
> This is such a longstanding property of the -set- command, probably
> most Stata users have internalized the behavior, but I suppose it
> should be documented in the help file for -set-. Note it applies
> equally to set memory, set tracedepth, etc.
> On Thu, Oct 8, 2009 at 5:20 AM, Scott Merryman <scott.merryman@gmail.com> wrote:
>> This works:
>> . set obs `=1e+04'
>> obs was 0, now 10000
>> I suppose the 1e+04 has to be evaluated first.
>> Scott
>> On Wed, Oct 7, 2009 at 6:57 PM, Sergiy Radyakin <serjradyakin@gmail.com> wrote:
>>> Dear All,
>>> how to explain this?
>>> version 10.1
>>> . display 1e+04
>>> 10000
>>> . capture confirm number 1e+04
>>> . display _rc
>>> 0
>>> capture assert mod(1e+04,1)==0
>>> . display _rc
>>> 0
>>> . clear
>>> . set obs 10000
>>> obs was 0, now 10000
>>> . set obs 1e+04
>>> '1e+04' found where integer expected
>>> r(198);
>>> So what is an integer then? (or what is 1e+04 if not an integer?)
> *
> * For searches and help try:
> * http://www.stata.com/help.cgi?search
> * http://www.stata.com/support/statalist/faq
> * http://www.ats.ucla.edu/stat/stata/
* For searches and help try:
* http://www.stata.com/help.cgi?search
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/
Conditional Probability
The probability P(A) of an event A is a measure of the likelihood that the event will occur on any trial. Sometimes partial information determines that an event C has occurred. Given this information, it may be necessary to reassign the likelihood for each event A. This leads to the notion of conditional probability. For a fixed conditioning event C, this assignment to all events constitutes a new probability measure, which has all the properties of the original probability measure. In addition, because of the way it is derived from the original, the conditional probability measure has a number of special properties which are important in applications.
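Concretely, for a conditioning event C with P(C) > 0, the reassignment is the standard definition

$$P(A \mid C) = \frac{P(A \cap C)}{P(C)}.$$

The map A \mapsto P(A \mid C) is nonnegative, assigns total mass one (since P(C \mid C) = 1), and is countably additive — which is precisely the sense in which conditioning on C yields a new probability measure.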
Anaheim Hills, CA Math Tutor
Find an Anaheim Hills, CA Math Tutor
...I was exposed to an educational institution where spelling was a "MUST" in the learning environment. I may not be a "Spelling Bee" material, but I'm pretty competitive. I have elementary math
8 Subjects: including algebra 1, reading, English, grammar
...I was State Champion in high school, and still hold the 10K record at my college (have held it for the past six years!) I’ve participated in, and done well in, many races since then. I would
love to coach cross-country and track, and pass my love of running and fitness on to you! I have also always had a great passion for nutrition and fitness in general.
62 Subjects: including geometry, algebra 2, algebra 1, precalculus
...I learned a lot this summer thanks to you. You are a truly great teacher."-- G.R. Huntington Beach I can offer very effective tutoring in the GED exam: all areas of science and mathematics,
including algebra, geometry and graphs; and all areas of English, including grammar and sentence correction.
24 Subjects: including prealgebra, GRE, linear algebra, algebra 1
...I am committed to teaching meaningful lessons that foster academic curiosity through innovation and creativity. Tutoring is the best opportunity to teach to the strengths and weaknesses of a
student. It offers the chance to form a bond with a student and relate information in the best way.
12 Subjects: including algebra 1, Spanish, reading, English
...Algebra 2 builds on the topics explored in Algebra 1. These topics include: real and imaginary numbers, inequalities, exponents, polynomials, equations, graphs, linear equations, functions and
more. For me, the skills gained in Algebra 2 were the very foundation of my study in Civil Engineering.
10 Subjects: including trigonometry, SAT math, algebra 1, algebra 2
Top 10 “Negative” Inventions
Hubble Key Project Team and High-Z Supernova Search Team, NASA, ESA
Throughout the history of math and science, supposedly impossible negative things have repeatedly turned out to be important both mathematically and physically. Here is my unofficial (no, let’s make
that official) Top 10 list of “negative” inventions.
10. Negative refraction: Victor Veselago, 1967
Refraction refers to how much light slows down (and therefore appears to be bent) when it passes through some medium. Refraction is quantified by an index relative to the refractive index of the
vacuum, which is equal to 1. All natural materials have a positive index of refraction, which means light is always bent in the same direction. Veselago, a Russian physicist, figured out that a
refraction index of less than zero was possible in theory, meaning light would bend in the opposite direction from the usual. Three decades later physicists began to figure out how carefully
constructed artificial “metamaterials” actually could bend light the “wrong” way, leading to current research on cloaking devices.
9. Negative electric charge: Benjamin Franklin, 1747
Franklin figured out that electric charge comes in positive and negative forms; he just guessed wrong about which was which, which is why electrons have negative charge even though they’re carriers
of electric current.
8. Negative mass (or negative weight): Friedrich Albrecht Carl Gren, 1786
OK, this one is tricky. Around 1700 the German physician Georg Stahl articulated the phlogiston theory (based on an idea of Johann Becher), an elaborate explanation for why things burn. Supposedly
they contained a flammable substance (phlogiston) that disappeared into the air during combustion. It’s often asserted that Stahl’s phlogiston had negative weight, but that idea appeared only much
later, when experiments showed that sometimes the combustion products (ashes) weighed more than the original burned substance. Gren, a German chemist, suggested that negative mass could account for
the discrepancy. Both Stahl and Gren were wrong, by the way.
7. Negative Energy: Hendrik Casimir, 1948
Paul Dirac imagined a sea of negative energy electrons in the late 1920s during his work on quantum mechanics that led to the prediction of the existence of antimatter. But let’s give this prize to
Casimir, who figured out how to create negative energy in a physical apparatus. You just have to put two mirrors, or shiny metal plates, very close to each other. Since the amount of energy in empty
space is set at zero, the plates should just sit wherever you put them. But in fact, they are slightly attracted to each other (the Casimir effect). That’s because empty space actually isn’t empty,
but has a bunch of quantum particles popping into and out of existence. Being quantum particles they behave like waves. When the plates are close enough together, the in-between space isn’t big
enough for some of the waves. So there are fewer particles in the gap than there should be, hence less than zero energy. Really.
6. Negative pressure: Saul Perlmutter et al., Brian Schmidt et al., 1998
We’re not talking about vacuum pumps here, but rather cosmological negative pressure, which requires the universe’s expansion to accelerate. That’s what the two teams led by Perlmutter and Schmidt
discovered when they measured the brightness of distant supernovas — evidence that the universe has, for the last few billion years, been expanding at an ever increasing rate. Because the universe is
expanding faster and faster, some force other than ordinary gravity must be at work, because gravity would slow the expansion rate down. That force must exert negative pressure, because ordinary
pressure would compress space; negative pressure expands it.
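The link between acceleration and negative pressure is quantitative. In the standard cosmological model the scale factor a(t) obeys the acceleration equation (a textbook relation, added here for context, not something from the discovery papers themselves):

$$\frac{\ddot a}{a} = -\frac{4\pi G}{3}\left(\rho + \frac{3p}{c^2}\right),$$

so the expansion accelerates (ä > 0) only if p < -ρc²/3 — pressure so negative that it overwhelms the gravitational pull of the energy density ρ.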
5. Negative temperature: Robert Pound, Norman Ramsey, 1951
We’re not talking about Antarctica here, but rather negative temperature on the absolute scale, where absolute zero represents the complete absence of heat, and hence supposedly the coldest
temperature possible. Which it is. But it turns out that mathematically, coldest is not the same as lowest. On the absolute scale, temperature and entropy are related in such a way that in all
ordinary circumstances the temperature is positive. Temperature is related to the average velocity (or energy) possessed by molecules, and most of the molecules won’t be as energetic (fast) as the
very fastest. If they were, the fastest would just go even faster. But if you put an upper limit on how fast the molecules can go, then they all could be as fast as the fastest. In this case, when
the majority of molecules are at the maximum energy, the ordinary formula for temperature is turned upside down, and that makes the temperature negative. Even though the temperature is negative, most
of the atoms are very energetic, so the system is technically hotter than any system with a positive temperature (heat would always flow from a negative temperature system to a positive temperature
system, which by definition makes the positive system colder).
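The sign flip is easy to see in a toy model. The sketch below is a back-of-the-envelope Python calculation (not Pound and Ramsey's actual nuclear-spin setup): it computes the temperature of a two-level system of N atoms from the thermodynamic definition 1/T = dS/dE. Once more than half the atoms occupy the upper level, dS/dE goes negative, and so does T.

import math

k = 1.0   # Boltzmann constant, arbitrary units
e = 1.0   # energy of the excited level
N = 1000  # number of atoms

def entropy(n):
    # S = k ln C(N, n): number of ways to excite n of the N atoms
    return k * (math.lgamma(N + 1) - math.lgamma(n + 1) - math.lgamma(N - n + 1))

def temperature(n):
    # finite-difference 1/T = dS/dE, with total energy E = n * e
    dS = entropy(n + 1) - entropy(n - 1)
    return (2 * e) / dS

for n in (100, 400, 600, 900):
    print(f"n = {n:4d}   T = {temperature(n):+8.2f}")
# n < N/2 gives T > 0; n > N/2 (population inversion) gives T < 0.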
4. Negative probabilities: Paul Dirac, 1920s
In his work leading to the prediction of antimatter, Dirac found not only that negative energies entered into the equations, but also so did negative probabilities. Ordinarily, the chance of
something happening (its probability) is regarded as somewhere between 0 (no chance at all, like the Cubs winning the World Series) and 1 (absolutely certain, like A-Rod guilty of using PEDs). Having
a less-than-zero chance of happening seems meaningless. But Dirac showed that in some situations negative probabilities at intermediate steps in quantum calculations could be useful, a point later
discussed by Richard Feynman. Recently the mathematician John Baez has blogged in detail about the whole negative probability business.
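One concrete toy from that literature (Székely's "half-coin" — not something Dirac himself wrote down): a signed distribution whose generating function is sqrt((1 + x)/2), so that two independent copies of it sum to an ordinary fair coin flip. Some of its "probabilities" are negative, as a few lines of Python show:

from fractions import Fraction

def binom_half(k):
    # generalized binomial coefficient C(1/2, k)
    c = Fraction(1)
    for j in range(k):
        c *= (Fraction(1, 2) - j) / (j + 1)
    return c

# Series coefficients of sqrt(1 + x), up to the overall 1/sqrt(2) factor:
for k in range(6):
    print(k, binom_half(k))
# 0 1, 1 1/2, 2 -1/8, 3 1/16, 4 -5/128, 5 7/256 -- negative entries,
# yet the series squared is exactly 1 + x: two half-coins make a fair coin.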
3. Negative curvature: Carl Friedrich Gauss, 1824
Except maybe for Newton, Gauss was the greatest mathematician of his millennium. He figured out that it would be possible to devise a geometry in which the sum of a triangle’s angles was less than
180 degrees, which means the curvature of such a space would be negative. He usually doesn’t get credit for inventing non-Euclidean geometry, though, because he didn’t publish that work. He was a
perfectionist and wouldn’t publish anything until he had everything worked out so well that nobody could find any way to criticize it. (In other words, if Gauss had written this blog it would never
have been posted.)
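How far short of 180 degrees? For a geodesic triangle on a surface of constant curvature K, the Gauss–Bonnet theorem (standard differential geometry, supplied here for context) gives

$$\alpha + \beta + \gamma = \pi + K \cdot \mathrm{Area},$$

so with K < 0 the angle sum falls below π, and the deficit grows in direct proportion to the triangle's area.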
2. Negative numbers: Brahmagupta, seventh century
There is some evidence that the ancient Chinese possessed the concept of negative numbers, but Brahmagupta, a Hindu astronomer, gets credit for explicitly articulating their status as actual numbers
(and not “absurd impossibilities” as some of the Greeks thought). Brahmagupta called negative numbers “debts” (positive numbers were “fortunes”) and he outlined the arithmetical rules governing them.
For instance: “The product of two debts … is one fortune.” Thus Brahmagupta anticipated by more than 13 centuries the mantra of Edward James Olmos in the movie Stand and Deliver: “A negative times a
negative equals a positive.”
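Brahmagupta's rule is not a convention but a consequence of the ordinary laws of arithmetic. A one-line derivation from distributivity (standard algebra, added for illustration):

$$0 = (-1)\cdot 0 = (-1)\cdot\bigl(1 + (-1)\bigr) = (-1) + (-1)(-1) \;\Longrightarrow\; (-1)(-1) = 1.$$

Multiplying any two "debts" then reduces to this case, since (-a)(-b) = (-1)(-1)ab = ab.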
1. Square roots of negative numbers: John Wallis, 1673
Like negative numbers, the idea of the square root of a negative number was initially regarded as an impossibility, as negative numbers are not the square of anything. But Wallis, an English
mathematician, argued otherwise; as Paul Nahin says in his book on the subject, Wallis “made the first rational attempt to attach physical significance” to the square root of –1. Wallis pointed out
that negative numbers are not hard to visualize — they’re just the numbers to the left of zero on a number line. But if you add another axis to the number line (pointing straight up, from zero) you
then have a whole plane to the left of zero. “Now what is admitted in lines must, on the same Reason, be allowed in Plains also” (he meant “planes”), Wallis wrote. And since you can draw a square in
a plane, a side of a square on the negative side of zero would correspond to the square root of the negative number. Far from being physically meaningless, roots of negative numbers turn out to be
necessary ingredients in the equations of quantum mechanics.
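Wallis's picture survives as the modern complex plane, where multiplying by i = sqrt(-1) is a quarter-turn about the origin. A quick check in stock Python (the "j" notation is Python's, not Wallis's):

z = 3 + 4j
print(1j * 1j)              # (-1+0j): two quarter-turns make a half-turn, so i**2 == -1
print(1j * z)               # (-4+3j): the point (3, 4) rotated 90 degrees
print(abs(z), abs(1j * z))  # 5.0 5.0: rotation preserves distance from the origin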
mutual information. concave/convex
Hi everybody,
While looking at the mutual information of two variables, one finds that it is concave in p(x) for fixed p(y|x) and convex in p(y|x) for fixed p(x).
The first statement is okay, but when it comes to proving the second I get stuck: even in proofs already written out, I don't see how they conclude the convexity of I(X;Y) as a function of p(y|x) from the convexity of the relative entropy D(p||q).
Here is the piece of the proof I didn't understand
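(the standard argument, as in Cover and Thomas's Elements of Information Theory):

Fix p(x). Mutual information can be written as a single relative entropy,

$$I(X;Y) = D\bigl(p(x,y)\,\|\,p(x)\,p(y)\bigr),$$

and with p(x) held fixed, both arguments are affine functions of the conditional p(y|x):

$$p(x,y) = p(x)\,p(y\mid x), \qquad p(x)\,p(y) = p(x)\sum_{x'} p(x')\,p(y\mid x').$$

Since D(p||q) is jointly convex in the pair (p, q), and a jointly convex function composed with affine maps of a common variable stays convex, I(X;Y) is convex in p(y|x) for fixed p(x).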
If you have any idea, I'd very much appreciate it.
Thank you in advance.
Math Forum Discussions
Topic: Treatment of Negative Numbers
Replies: 12 Last Post: Dec 5, 2012 2:13 AM
Re: Treatment of Negative Numbers
Posted: Dec 1, 2012 11:59 PM
The University of Illinois has published the UICSM material on the internet archive!
I will probably spend hours sifting through this stuff.
Here is the Volume 1 of the "First High School Course" dealing with directed numbers...
Check out how formal this stuff gets...
Bob Hansen
transition from global to modular temporal reasoning about programs. volume F-13
Results 1 - 10 of 46
, 1997
"... In a trace-based world, the modular specification, verification, and control of live systems require each module to be receptive; that is, each module must be able to meet its liveness
assumptions no matter how the other modules behave. In a real-time world, liveness is automatically present in ..."
Cited by 69 (19 self)
In a trace-based world, the modular specification, verification, and control of live systems require each module to be receptive; that is, each module must be able to meet its liveness assumptions no
matter how the other modules behave. In a real-time world, liveness is automatically present in the form of diverging time. The receptiveness condition, then, translates to the requirement that a
module must be able to let time diverge no matter how the environment behaves. We study the receptiveness condition for real-time systems by extending the model of reactive modules to timed and
hybrid modules. We define the receptiveness of such a module as the existence of a winning strategy in a game of the module against its environment. By solving the game on region graphs, we present
an (optimal) Exptime algorithm for checking the receptiveness of propositional timed modules. By giving a fixpoint characterization of the game, we present a symbolic procedure for checking the re...
, 1999
"... A methodology for system-level hardware verification based on compositional model checking is described. This methodology relies on a simple set of proof techniques, and a domain specific
strategy for applying them. The goal of this strategy is to reduce the verification of a large system to fini ..."
Cited by 53 (1 self)
A methodology for system-level hardware verification based on compositional model checking is described. This methodology relies on a simple set of proof techniques, and a domain specific strategy
for applying them. The goal of this strategy is to reduce the verification of a large system to finite state subgoals that are tractable in both size and number. These subgoals are then discharged by
model checking. The proof strategy uses proof techniques for design refinement, temporal case splitting, data type reduction and the exploitation of symmetry. Uninterpreted functions can be used to
abstract operations on data. A proof system supporting this approach generates verification subgoals to be discharged by the SMV symbolic model checker. Application of the methodology is illustrated
using an implementation of Tomasulo's algorithm, a packet buffering device and a cache coherence protocol as examples. © 1999 Cadence Berkeley Labs, Cadence Design Systems. 1 Introduction F...
- IN PROC. 2ND INTERNATIONAL CONFERENCE OF COMPUTER-AIDED VERIFICATION , 1991
"... In this paper we develop a compositional method for the construction of the minimal transition system that represents the semantics of a given reactive system. The point of this method is that
it exploits structural properties of the reactive system in order to avoid the consideration of large inter ..."
Cited by 36 (0 self)
In this paper we develop a compositional method for the construction of the minimal transition system that represents the semantics of a given reactive system. The point of this method is that it
exploits structural properties of the reactive system in order to avoid the consideration of large intermediate representations. Central is the use of interface specifications here, which express
constraints on the components' communication behaviour, and thereby serve to control the state explosion caused by the interleavings of actions of communicating parallel components. The effect of the
method, which is developed for bisimulation semantics here, depends on the structure of the reactive system under consideration, in particular on the accuracy of the interface specifications.
However, its correctness does not: every "successful" construction is guaranteed to yield the desired minimal transition system, independently of the correctness of the interface specifications
provided by the designer.
- Advances in Hardware Design and Verification: IFIP WG10.5 International Conference on Correct Hardware Design and Verification Methods (CHARME ’99), volume 1703 of Lecture Notes in Computer Science
, 1999
"... . Compositional proofs about systems of many components often involve apparently circular arguments. That is, correctness of component A must be assumed when verifying component B, and vice
versa. The apparent circularity of such arguments can be resolved by induction over time. However, previous ..."
Cited by 31 (1 self)
. Compositional proofs about systems of many components often involve apparently circular arguments. That is, correctness of component A must be assumed when verifying component B, and vice versa.
The apparent circularity of such arguments can be resolved by induction over time. However, previous methods for such circular compositional reasoning apply only to safety properties. This paper
presents a method of circular compositional reasoning that applies to liveness properties as well. It is based on a new circular compositional rule implemented in the SMV proof assistant. The method
is illustrated using Tomasulo's algorithm for out-of-order instruction execution. An implementation is proved live for arbitrary resources using compositional model checking. © 1999 Cadence
Berkeley Labs, Cadence Design Systems. 1 Introduction Compositional methods are used in conjunction with model checking to reduce the verification of large systems to a number of smaller, localized
, 1996
"... We present a method for the compositional construction of the minimal transition system that represents the semantics of a given distributed system. Our aim is to control the state explosion
caused by the interleavings of actions of communicating parallel components by reduction steps that exploit g ..."
Cited by 30 (6 self)
We present a method for the compositional construction of the minimal transition system that represents the semantics of a given distributed system. Our aim is to control the state explosion caused
by the interleavings of actions of communicating parallel components by reduction steps that exploit global communication constraints given in terms of interface specifications. The effect of the
method, which is developed for bisimulation semantics here, depends on the structure of the distributed system under consideration, and the accuracy of the interface specifications. However, its
correctness is independent of the correctness of the interface specifications provided by the program designer.
- Distributed Computing , 1995
"... ion ? Susanne Graf VERIMAG ?? , Avenue de la Vignate, F-38610 Gi`eres ? ? ? Abstract. The contribution of the paper is two-fold. We give a set of properties expressible as temporal logic
formulas such that any system satisfying them is a sequentially consistent memory, and which is sufficiently ..."
Cited by 26 (4 self)
Susanne Graf, VERIMAG, Avenue de la Vignate, F-38610 Gières. Abstract. The contribution of the paper is two-fold. We give a set of properties expressible as temporal logic formulas
such that any system satisfying them is a sequentially consistent memory, and which is sufficiently precise such that every reasonable concrete system that implements a sequentially consistent memory
satisfies these properties. Then, we verify these properties on a distributed cache memory system by means of a verification method, based on the use of abstract interpretation which has been
presented in previous papers and so far applied to finite state systems. The motivation for this paper was to show that it can also be successfully applied to systems with an infinite state space.
This is a revised and extended version of [Gra94]. 1 Introduction We propose to verify the distributed cache memory presented in [ABM93] and [Ger94] by using the verification method proposed in
[BBLS92,LGS +...
- In Proc. of CAV’05, volume 3576 of LNCS , 2005
"... Abstract. The applicability of assume-guarantee reasoning in practice has been limited since it requires the right assumptions to be constructed manually. In this article, we address the issue
of efficiently automating assume-guarantee reasoning for simulation conformance between finite state system ..."
Cited by 19 (5 self)
Abstract. The applicability of assume-guarantee reasoning in practice has been limited since it requires the right assumptions to be constructed manually. In this article, we address the issue of
efficiently automating assume-guarantee reasoning for simulation conformance between finite state systems and specifications. We focus on a non-circular assume-guarantee proof rule, and show that
there is a weakest assumption that can be represented canonically by a deterministic tree automaton (DTA). We then present an algorithm LT that learns this DTA automatically in an incremental
fashion, in time that is polynomial in the number of states in the equivalent minimal DTA. The algorithm assumes a teacher that can answer membership queries pertaining to the language of the unknown
DTA, and can also test a conjecture and provide a counterexample if the conjecture is false. We show how the teacher and its interaction with LT are implemented in a model checker. We have
implemented this framework in the ComFoRT toolkit and we report encouraging results (up to 41 and 14 times improvement in memory and time consumption respectively) on non-trivial benchmarks.
, 1995
"... In modular verification the specification of a module consists of two parts. One part describes the guaranteed behavior of the module. The other part describes the assumed behavior of the system
in which the module is interacting. This is called the assume-guarantee paradigm. In this paper we consid ..."
Cited by 19 (9 self)
In modular verification the specification of a module consists of two parts. One part describes the guaranteed behavior of the module. The other part describes the assumed behavior of the system in
which the module is interacting. This is called the assume-guarantee paradigm. In this paper we consider assume-guarantee specifications in which the assumptions and the guarantees are specified by
universal branching temporal formulas (i.e., all path quantifiers are universal). Verifying modules with respect to such specifications is called the branching modular model-checking problem. We
consider both ACTL and ACTL*, the universal fragments of CTL and CTL*. We develop two fundamental techniques: building max...
- CAV 95: Computer-aided Verification, Lecture Notes in Computer Science 939 , 1995
"... Abstract. We argue that the standard constraints on liveness conditions in nonblocking trace models|machine closure for closed systems, and receptiveness for open systems|are unnecessarily weak
and complex, and that liveness should, instead, be speci ed by augmenting transition systems with acceptan ..."
Cited by 18 (3 self)
Abstract. We argue that the standard constraints on liveness conditions in nonblocking trace models (machine closure for closed systems, and receptiveness for open systems) are unnecessarily weak and
complex, and that liveness should, instead, be specified by augmenting transition systems with acceptance conditions that satisfy a locality constraint. First, locality implies machine closure and
receptiveness, and thus permits the composition and modular verification of live transition systems. Second, while machine closure and receptiveness are based on infinite games, locality is based on
repeated finite games, and thus easier to check. Third, no expressive power is lost by the restriction to local liveness conditions. We illustrate the appeal of local liveness using the model of Fair
Reactive Systems, a nonblocking trace model of communicating processes.
- IN PROC. 2004 ACM SIGPLAN INT’L CONF. ON FUNCTIONAL PROG , 2004
"... Concurrency, as a useful feature of many modern programming languages and systems, is generally hard to reason about. Although existing work has explored the verification of concurrent programs
using high-level languages and calculi, the verification of concurrent assembly code remains an open probl ..."
Cited by 17 (6 self)
Concurrency, as a useful feature of many modern programming languages and systems, is generally hard to reason about. Although existing work has explored the verification of concurrent programs using
high-level languages and calculi, the verification of concurrent assembly code remains an open problem, largely due to the lack of abstraction at a low-level. Nevertheless, it is sometimes necessary
to reason about assembly code or machine executables so as to achieve higher assurance. In this paper | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=165513","timestamp":"2014-04-18T13:38:09Z","content_type":null,"content_length":"40042","record_id":"<urn:uuid:aadba432-472d-4868-9d4a-4d0c38eeb358>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00286-ip-10-147-4-33.ec2.internal.warc.gz"} |
Little Neck Algebra 2 Tutor
Find a Little Neck Algebra 2 Tutor
...I created verbalization techniques for a college student with cerebral palsy and paraplegia to use during test taking scenarios in history and algebra. As a special needs teacher in Uganda, I
worked to support the literacy, math, and life skills of students with learning delays/disabilities incl...
39 Subjects: including algebra 2, reading, Spanish, ESL/ESOL
I am a graduate of Columbia University, class of 2008, with a degree in Applied Mathematics and a concentration in Computer Science. I do research on machine learning in music & audio processing
applications. In my spare time, I enjoy hiking, traveling, learning languages, producing/recording music, and cooking.
10 Subjects: including algebra 2, calculus, physics, geometry
...I worked for 4 years as a Chinese teaching assistant/tutor for the Academic Advantage, with students of all ages, including extensive experience with individuals who have special needs. As part of
my work, I sometimes tutor students in Math, Language Arts (including Chinese, reading, speaking, vocab...
5 Subjects: including algebra 2, geometry, Chinese, elementary math
...Under the mentorship of Melonie Daniels, Stephanie Fisher, Linny Smith, and Katrice Walker, my abilities and instruction became more polished. I am now standing music director for the youth and
young adult choir at The Shekinah Youth Chapel of The Greater Allen Cathedral. My methodologies are potent, with lasting impressions and great results.
23 Subjects: including algebra 2, reading, English, writing
...In regard to my varied approaches during instruction, I seek to first analyze the child’s understanding and knowledge preceding the initiation of the lesson. This in turn helps reveal
specifically what needs to be focused on during the lessons. Assessments are given frequently, both informally and formally, and seek to expedite the learning process and prevent unnecessary
tutoring sessions.
12 Subjects: including algebra 2, reading, ESL/ESOL, elementary (k-6th) | {"url":"http://www.purplemath.com/Little_Neck_Algebra_2_tutors.php","timestamp":"2014-04-17T07:22:27Z","content_type":null,"content_length":"24421","record_id":"<urn:uuid:84f82e25-c234-4f80-a749-4cb445c9a48f>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00602-ip-10-147-4-33.ec2.internal.warc.gz"} |
causal signals and causal system
Causality in systems makes the most sense. Causality in signals doesn't make that much sense.
Causality in systems determines whether a system's output relies on future values of its input, such as x[n+1].
When talking about "causality" in signals, we mean whether they are zero to the left of t=0 or zero to the right of t=0.
A causal signal is zero for t<0
A non-causal signal has nonzero values for t<0.
Anti-causal signals are zero for t>0.
However, the reason why this doesn't really make sense is that if you have a signal, the time t=0 can be chosen arbitrarily. | {"url":"http://www.physicsforums.com/showthread.php?p=4236943","timestamp":"2014-04-18T15:49:40Z","content_type":null,"content_length":"25294","record_id":"<urn:uuid:29f9df70-324c-40ab-8f2d-1c90252be642>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00191-ip-10-147-4-33.ec2.internal.warc.gz"} |
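A tiny Python sketch (purely illustrative) makes that last point explicit: whether a finite list of samples counts as causal depends entirely on which index you declare to be n = 0:

def is_causal_signal(x, origin):
    # a sampled signal is causal iff every sample before the chosen origin is zero
    return all(v == 0 for v in x[:origin])

x = [0, 0, 1, 2, 3]
print(is_causal_signal(x, origin=2))   # True: zero for n < 0
print(is_causal_signal(x, origin=3))   # False: same data, different choice of t = 0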
Scatterplots and Regressions
Scatterplots and Regressions (page 4 of 4)
Until (and unless) you get into a statistics class, the preceding pages cover pretty much all there is to scatterplots and regressions. You draw the dots (or enter them into your calculator), you
eyeball a line (or find one in the calculator), and you see how well the line fits the dots. About the only other thing you might do is "extrapolate" and "interpolate".
Remember that the point of all this data-collection, dot-drawing, and regression-computing was to try to find a formula that models... whatever it is that they're measuring. You can use these models
to try to find missing data points or to try to project into the future (or, sometimes, into the past).
If you have data, say, for the years 1950, 1960, 1970, and 1980, and you find a model for your data, you might use it to guess at values between these dates. For instance, given Namibian population
data for the listed years, you might try to guess the population of Namibia in 1965. The prefix "inter" means "between", so this guessing-between-the-points would be interpolation. On the other hand,
you might try to work backwards to guess the population in 1940, or try to fill in the missing data up through 2000. The prefix "extra" means "outside", so this guessing-outside-the-points would be
• Find a regression equation for the following population data, using t = 0 to stand for 1950. Then estimate the population of Namibia in the years 1940, 1997, and 2005. Note: Population values are
in thousands.
│ year t │ 0 │ 5 │ 10 │ 15 │ 20 │ 25 │ 30 │ 35 │ 40 │ 45 │ 50 │
│ pop. │ 511 │ 561 │ 625 │ 704 │ 800 │ 921 │ 1 018 │ 1 142 │ 1 409 │ 1 646 │ 1 894 │
Setting my window range as 0 < X < 55, counting by 5's, and 500 < Y < 2000, counting by 250's, my calculator gives me the following scatterplot:
The dots look like they line up in a curve, so I'll try a quadratic regression. The calculator gives me:
As you can see, I've set the calculator to "DiagnosticsOn", so it displays the correlation value whenever I do a regression. This regression looks pretty darned good, especially when it's graphed
with the data values:
...so I'll use this model for my computations.
Now that I have an equation for modelling Namibia's population, I can use it to estimate the population in the given years. For 1940, I'll use t = –10, since this is ten years before 1950. (This
is an extrapolated value, since I'm going outside the data set.)
f(–10) = 0.4958(–10)^2 + 1.9389(–10) + 538.6993 = 568.8903
For 2005, I'll use t = 55; this will be another extrapolated value.
f(55) = 0.4958(55)^2 + 1.9389(55) + 538.6993 = 2145.1338
For 1997, I'll use t = 47. Since this value is between known values, this will be an interpolated answer.
f(47) = 0.4958(47)^2 + 1.9389(47) + 538.6993 = 1725.0498
Remembering that the population values are in thousands, I'll add three zeroes to my numbers and round to get my final answers:
The estimated value for the population in 1940 is about 569 000; for 2005, the estimated value is about 2.15 million; and for 1997, the estimated value is about 1.73 million.
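If you'd rather check this fit in software than on a graphing calculator, the same least-squares quadratic can be reproduced in a few lines of Python with NumPy; this is offered only as a cross-check, and the coefficients should match the calculator's regression up to rounding:

import numpy as np

t = np.array([0, 5, 10, 15, 20, 25, 30, 35, 40, 45, 50])
pop = np.array([511, 561, 625, 704, 800, 921, 1018, 1142, 1409, 1646, 1894])

coeffs = np.polyfit(t, pop, 2)                  # quadratic least-squares fit
print("f(t) = %.4f t^2 + %.4f t + %.4f" % tuple(coeffs))
for year, ti in [(1940, -10), (1997, 47), (2005, 55)]:
    print(year, round(np.polyval(coeffs, ti) * 1000))   # populations are in thousands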
Depending on your calculator, you may need to memorize what the regression values mean. On my old TI-85, the regression screen would list values for a and b for a linear regression. But I had to
memorize that the related regression equation was "a + bx", instead of the "ax + b" that I would otherwise have expected, because the screen didn't say. If you need to memorize this sort of
information, do it now, because the teacher will not bail you out if you forget on the test what your calculator's variables mean.
Penn Ctr, PA Prealgebra Tutor
Find a Penn Ctr, PA Prealgebra Tutor
...So I try to break each concept, mechanism, and problem down to its bare parts and build an understanding, so that each concept, mechanism, and problem can be solved logically on its own
without memorization. I am a graduate student getting my Ph.D. in organic chemistry. I have taught both organic chemistry 1 and 2 recitation and lab.
6 Subjects: including prealgebra, chemistry, algebra 1, algebra 2
...I start off by building a rapport with you, student to student. We will talk about what your learning style is like and what your experience has been in the class that we are discussing. The
reason that I said, "student to student," is because I love to learn about what you are writing about!
37 Subjects: including prealgebra, reading, English, physics
...After all, I won't be there when you take the test! See you at our first session! Kevin Cancellation Policy: I make every effort to schedule my students around conflicts.
14 Subjects: including prealgebra, chemistry, physics, geometry
...I am a recent graduate with a Bachelor of Science in biological sciences (pre-professional), with a strong understanding of microbial concepts and principles. I took a course in microbiology,
have done research in applied microbiology, and have helped explain several concepts to colleagues and friends in...
28 Subjects: including prealgebra, chemistry, geometry, biology
I have been teaching math for over 20 years now, and was named educator of the year four times. I was also mentor of the year twice. I have a variety of experience teaching not only in different
countries, but also here in public school, private school, charter school, and adult continuing education school.
15 Subjects: including prealgebra, geometry, algebra 1, algebra 2
RE: st: comparing survival models: Cox vs AFT
RE: st: comparing survival models: Cox vs AFT
From "Moran, John (NWAHS)" <John.Moran@nwahs.sa.gov.au>
To "'statalist@hsphsun2.harvard.edu'" <statalist@hsphsun2.harvard.edu>
Subject RE: st: comparing survival models: Cox vs AFT
Date Fri, 23 Aug 2002 17:26:48 +0930
Thanks to Ronan Conroy for his extended reply
Although over a "short" period, logistic and Cox regressions perform
comparably (a number of studies have shown this) there is advantage in using
survival analysis on the basis of "lost information" with logistic regression.
Depending upon the data set, the ability, say, of the Cox model with TVC to
track covariate effect over time, seems worthwhile (although with large data
sets with repeated observations per patient, set-up can be a challenge, to
say the least). With the data-set to which I referred, it was also useful to
be able to compute peak hazard: the patients were those with the condition of
Adult Respiratory Distress Syndrome (ARDS); peak hazard appears to occur at
about Day 8 post diagnosis, a novel observation.
Common advice has it that if the Cox model shows non-proportionality, use
stratified or TVC Cox, but, as the latter are too complex for a clinical
audience, go straight to parametric models. However, as I originally noted,
there is somewhat of a bias FOR the Cox model in medical literature.
However, there remains the (or rather, my) original problem: having set up
two quite reasonable models, Cox with TVC and log-normal AFT (also with
TVC), the question that was posed to me (by a referee), was: which was the
better model?
Thus my query of the mechanics of doing just this.
john moran
-----Original Message-----
From: Ronan Conroy [mailto:rconroy@rcsi.ie]
Sent: Thursday, August 22, 2002 7:22 PM
To: statalist@hsphsun2.harvard.edu
Subject: Re: st: comparing survival models: Cox vs AFT
on 22/8/02 7:48 AM, Moran, John (NWAHS) at John.Moran@nwahs.sa.gov.au wrote:
> I am not quite sure as to direction here; any advice would be most
> I have a multi-record per patient survival data set with 28 day (from
> diagnosis) mortality as the outcome. A Cox model with (significant)
> time-varying covariates gives a "good" fit , by conventional means
> analysis etc) .
> A log normal AFT model (parameterized in the time-ratio sense) seems to do
> "good" job as well (again, by conventional diagnostics). The shape of the
> baseline Cox model hazard (using stkerhaz, recently posted) certainly has
> log-normal profile.
With 28-day survival, you will have complete data (or you will have in less
than a month...). For this reason, you might consider logistic regression
or, indeed, -binreg- as the first options. With 28-day survival, the shape
of the survival distribution is generally of little interest (I am guessing
that this is something like acute coronary syndrome, where there is a
significant hazard in the first 28 days). Logistic regression allows you to
estimate the effects of risk factors as odds ratios. -binreg-, on the other
hand, will try to estimate risk ratios, which are easier to interpret, since
a risk ratio is simply the ratio of two probabilities, but you aren't
guaranteed that any model will converge.
Both Cox regression and AFT models can give you hazard ratios, which are
also useful measures of the effect of risk factors, though harder to explain
properly than risk ratios.
The advantage of AFT models, and other parametric approaches such as
fractional polynomials, is that you can characterise the shape of the hazard
function. Cox regression, on the other hand, treats the shape as a high
dimensional nuisance parameter - something that just has to be got out of
the way before we do the interesting work parametrising the risk factors.
In general, if you are interested in factors which predict outcome, I would
go for simple binary models using -logistic- or -binreg- and the hell with
the shape of the survival function.
If the survival function's shape is actually interesting, then parametric
approaches allow you to characterise it, while Cox regression simply takes
it as a given, so I would opt first for simple parametric methods, and then
investigate the gain from using something like fractional polynomials. I
would beware of making a model that is more complex than the underlying data.
Ronan M Conroy (rconroy@rcsi.ie)
Lecturer in Biostatistics
Royal College of Surgeons
Dublin 2, Ireland
+353 1 402 2431 (fax 2329)
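One concrete way to adjudicate John's question is to compare candidate parametric (AFT) families head-to-head on AIC, and to judge the Cox model separately (for example via Cox-Snell residuals), since the Cox partial likelihood is not on the same scale as a full parametric likelihood and the two AICs are not directly comparable. A rough sketch with the Python lifelines package; the file and column names here are hypothetical, and the AIC_ attribute name follows recent lifelines releases, so verify against your installed version:

import pandas as pd
from lifelines import LogNormalAFTFitter, WeibullAFTFitter

df = pd.read_csv("survival_data.csv")   # placeholder: columns 'time', 'died', covariates

for name, model in [("log-normal", LogNormalAFTFitter()),
                    ("Weibull", WeibullAFTFitter())]:
    model.fit(df, duration_col="time", event_col="died")
    print(name, model.AIC_)             # lower AIC suggests the better parametric fit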
Verizon Thinkfinity
Having trouble sorting through all the wonderful resources Illuminations has to offer? Not to worry! We've compiled a fail-proof list of our top ten! Have comments, suggestions, and/or evidence of
impact? We'd love to hear from you!
Top Ten Lesson Plans
In this activity, students choose a picture and use all seven Tangram pieces to fill in the outline. They describe and visualize what figures look like when they are transformed through rotations,
reflections, or translations, and are put together or taken apart in different ways. This activity helps students to develop their spatial memory and spatial visualization skills, to recognize and
apply transformations, and to become more precise in their use of vocabulary about space. Students use an internet-based tool to explore tangram puzzles.
A game encourages students to find the sums of two one-digit numbers. Students explore commutativity and examine addition patterns. Then they record known facts on a personal addition chart.
This lesson provides an introduction to and practice with the concept of time. The activities focus students’ attention on the attributes of time and enables students at varying levels to develop
knowledge and skills in using time.
Students construct sets of numbers up to 10, write the numerals up to 10, and count up to 10 rationally. They use ten frames and also make bean sticks.
In this lesson, students make groups of zero to 5 objects, connect number names to the groups, compose and decompose numbers, and use numerals to record the size of a group. Visual, auditory, and
kinesthetic activities are used to help students begin to acquire a sense of number.
Students learn the names of solid geometric shapes and explore their properties. At various centers, they use physical models of simple solid shapes, including cubes, cones, spheres, rectangular
prisms, and triangular prisms.
Comparing Columns on a Bar Graph
During this lesson, students apply what they know about comparison subtraction by constructing bar graphs and using the graphs to answer questions.
How Many Letters Are in Your Name?
Students review numbers 1 to 10 by counting the number of letters in their names and their classmates' names. They also write and order numbers. The class compiles students' finished product in a
class book.
In this lesson, students generate sums using the number line model. This model highlights the measurement aspect of addition and is a distinctly different representation of the operation from the
model presented in the previous lesson. The order (commutative) property is also introduced. At the end of the lesson, students are encouraged to predict sums and to answer puzzles involving
6 Individual lessons included. Look under Related Resources for the pack.
Top Ten Activities
Oh, no! Okta and his friends need help. Help rescue them by transporting them to a safe ocean. How fast can you transport the Oktas? Use your counting skills to save as many as you can before the
timer runs out.
Bobbie Bear is planning a vacation and wants to know how many outfits can be made using different colored shirts and pants.
Help the alien spaceship move cows into corrals by counting, adding, and subtracting. This activity helps children learn grouping, tally marks, and place value. As they master counting, they can move
on to adding and subtracting two-digit numbers.
Guide a turtle to a pond using computer commands.
After Okta hides some bubbles under a shell, he then either adds more bubbles or takes some away. Students have to determine how many bubbles are left under the shell.
Quilters and other designers sometimes start by producing square patches with a pattern on them. These square patches are then repeated and connected to produce a larger pattern. Create your own
patch using the shapes in the tool below.
Thinking about numbers using frames of 5 can be a helpful way to learn basic number facts. The four games that can be played with this applet help to develop counting and addition skills. (This
applet works well when used in conjunction with the Ten Frame applet.)
This tool allows you to learn about various geometric solids and their properties. You can manipulate and color each shape to explore the number of faces, edges, and vertices, and you can also use
this tool to investigate the following question:
For any polyhedron, what is the relationship between the number of faces, vertices, and edges?
What other questions can this tool help you answer?
Thinking about numbers using frames of 10 can be a helpful way to learn basic number facts. The four games that can be played with this applet help to develop counting and addition skills. (This
applet works well when used in conjunction with the Five Frame applet.)
By yourself or against a friend, match whole numbers, shapes, fractions, or multiplication facts to equivalent representations. Practice with the clear panes or step up the challenge with the windows
closed. How many socks can you win? | {"url":"http://thinkfinity.org/docs/DOC-12646","timestamp":"2014-04-25T05:55:28Z","content_type":null,"content_length":"164605","record_id":"<urn:uuid:c4d9be1b-c279-4bf0-8fce-a9a0e0f39a67>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00179-ip-10-147-4-33.ec2.internal.warc.gz"} |
A Hypothetical Population Of 500 Cats Has Two Alleles, ... | Chegg.com
A hypothetical population of 500 cats has two alleles, T and t, for a gene that codes for tail length. The table below presents the phenotype of cats with each possible genotype, as well as the
number of individuals in the population with each genotype. Assume that this population is in Hardy-Weinberg equilibrium.
1. What is the frequency of cats with long tails in the population? __.84__
2. What is the frequency of cats with short tails in the population? .16
3. What is the frequency of cats that are homozygous dominant in the population? .36
4. What is the frequency of the T allele in the gene pool of this population? .60
5. What is the frequency of the t allele in the gene pool of this population? .40
6. Use the Hardy-Weinberg equation to predict the frequency of heterozygous cats in the next generation. ______
7. Use the Hardy-Weinberg equation to predict the frequency of homozygous recessive cats in the next generation. _______
8. Use the Hardy-Weinberg equation and your answer to question 7 to estimate the frequency of the next generation.
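The genotype table referred to above did not survive extraction, but the stated answers pin it down: with p = 0.6, q = 0.4, and N = 500, the counts must have been TT = 180 and Tt = 240 (long-tailed, T dominant) and tt = 80 (short-tailed). A short Python check of the arithmetic, including the blanks in questions 6 and 7; the counts here are reconstructed, not taken from the original table:

N, TT, Tt, tt = 500, 180, 240, 80       # inferred genotype counts

p = (2 * TT + Tt) / (2 * N)             # frequency of T: 0.60
q = (2 * tt + Tt) / (2 * N)             # frequency of t: 0.40

print("heterozygous next generation (2pq):", 2 * p * q)     # 0.48
print("homozygous recessive next generation (q^2):", q**2)  # 0.16
print("homozygous dominant next generation (p^2):", p**2)   # 0.36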
Thermodynamics Problems
First problem: temperatures should be in Kelvins. (Always) It makes no sense to divide a temperature in degrees Celsius by anything.
Second problem: a liter is defined as a cubic decimeter. An easy way to remember this is that one milliliter is equal to one cubic centimeter.
Third problem: pay attention to signs. One of the cup or water is losing heat while the other is gaining heat, so one of the two will have a negative change in entropy.
I did what you said...
In the first problem... if I converted 28 degrees Celsius to kelvins it would be 301 K,
so then would we have 4.5 = TL/(301 - TL)??
Second problem... I still don't get how to convert 3.00 m^3 into cm^3 or L, or kg for that matter...
Third problem... so I did
S of cup = Q/T = m*c*(change in temperature)/(final temp in kelvins) = 0.12*900*(50-19)/(273+50) = 10.4
S of water = 9.72
10.4-9.72 ... still wrong answer | {"url":"http://www.physicsforums.com/showthread.php?t=312390","timestamp":"2014-04-19T12:35:15Z","content_type":null,"content_length":"39662","record_id":"<urn:uuid:8d24814f-e34c-43d9-959a-1356cd3791c7>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00618-ip-10-147-4-33.ec2.internal.warc.gz"} |
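For the entropy problem, Q/T only works when T is constant; when an object warms or cools you integrate dQ/T, giving ΔS = m·c·ln(T_f/T_i) for each body. A quick Python sketch with the cup's numbers (the water's mass and initial temperature are not quoted in this thread, so they are left as placeholders):

import math

def delta_S(m, c, T_i, T_f):
    # entropy change in J/K for mass m (kg), specific heat c (J/(kg*K)),
    # temperatures in kelvins: dS = integral of m*c*dT/T = m*c*ln(T_f/T_i)
    return m * c * math.log(T_f / T_i)

dS_cup = delta_S(0.12, 900, 19 + 273, 50 + 273)
print("cup: %+.2f J/K" % dS_cup)        # positive: the cup warms up
# dS_water = delta_S(m_water, 4186, T_water_initial, 50 + 273)  # negative: water cools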
Copyright © University of Cambridge. All rights reserved.
'Tree Graphs' printed from http://nrich.maths.org/
The simplest tree graph consists of one line with two vertices, one at each end.
If a new line is added it must connect to one and only one of the existing vertices.
If the new line connected to no vertices, the tree graph would not be connected, as the new line's vertices could not be reached from the existing vertices.
If each end of the new line connects to a vertex the graph will have a circuit and will not be a tree graph.
So every new line added will join on to one existing vertex and create a new vertex at its end. This adds one line and one vertex to the tree graph making no change to the difference between the
numbers of edges and vertices. So the difference remains constant at what it originally was.
Marcos's solution is a subtle variation on the above method:
1. Firstly, it's obvious that every edge is connected to exactly two vertices. (by definition)
2. More importantly, there is at least one vertex which is connected to exactly one edge.
Proof . If there wasn't at least one such vertex we could keep moving around the graph indefinitely and as there is a finite number of edges it would mean that there is a cycle, counter to the
definition of a tree.
Take one such vertex as described in (2) and its respective edge. So far we have 2 vertices and 1 edge. The difference is 1.
(*) Add to this an adjacent edge (this adds one edge and one vertex). The difference is still one.
Generally, carrying out this step, (*), an arbitrary number of times until we add the final edge will still result in a difference of 1 (as the step was in no way linked to the fact that the
previous edge was the starting one).
Hence, the number of vertices is one more than the number of edges. | {"url":"http://nrich.maths.org/453/solution?nomenu=1","timestamp":"2014-04-16T13:50:34Z","content_type":null,"content_length":"4864","record_id":"<urn:uuid:5f9ae4e4-7b22-48b3-b750-40c9745ef3c3>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00378-ip-10-147-4-33.ec2.internal.warc.gz"} |
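The growth step in these arguments is easy to animate in code. The short Python check below (an illustration, not one of the submitted solutions) builds trees by exactly that step, attaching each new edge to one existing vertex, and confirms that the vertices always outnumber the edges by one:

import random

def random_tree_edges(n):
    # attach each new vertex v to one uniformly chosen earlier vertex,
    # which is precisely the edge-adding move described above
    return [(v, random.randrange(v)) for v in range(1, n)]

for n in [2, 5, 10, 100]:
    edges = random_tree_edges(n)
    assert n - len(edges) == 1
print("vertices - edges = 1 for every tree tested")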
MAP inference software package
Given a BN whose graph is a tree and whose nodes are all binary, which is the most efficient algorithm to compute the Most Probable Explanation, i.e. the assignment to all nodes associated with the
maximum probability? Is there any "R" package implementing appropriate algorithms?
asked 17 Mar '12, 04:40 | {"url":"http://www.aiqus.com/questions/38786/map-inference-software-package","timestamp":"2014-04-20T18:22:56Z","content_type":null,"content_length":"25726","record_id":"<urn:uuid:1e5d3264-7fc4-4c8b-8301-e5ee334f76ab>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00131-ip-10-147-4-33.ec2.internal.warc.gz"} |
Differentiability and the chain rule (multivariable calculus)
September 24th 2010, 10:33 AM
Differentiability and the chain rule (multivariable calculus)
I have a problem with the next exercise:
Given de function $f(x,y)=\begin{Bmatrix} \displaystyle\frac{xy^2}{x^2+y^2} & \mbox{ if }& (x,y)eq{(0,0)}\\0 & \mbox{if}& (x,y)=(0,0)\end{matrix}$ with $\vec{g}(t)=\begin{Bmatrix} x=at \\y=bt \
a) Find $h=fog$ y $\displaystyle\frac{dh}{dt}$ for t=0
The thing is that I've found that f isn't differentiable at (0,0). The partial derivatives exists at that point, I've found them by definition.
And then I saw if it was differentiable at that point.
$\displaystyle\lim_{(x,y) \to{(0,0)}}{\displaystyle\frac{xy^2}{(x^2+y^2)^{3/2}}}$
In the polar form it gives that this limit doesn't exists, so it isn't differentiable at that point. So I can't apply the chain rule there, right?
To ensure the differentiability of a composed function, both function must be differentiable. If one isn't, then the composition isn't differentiable at a certain point. Right?
Bye there, and thanks.
September 24th 2010, 12:50 PM
I think this problem is easier than you are making it. In fact, $h(t) = \dfrac{(at)(bt)^2}{(at)^2+(bt)^2} = \dfrac{abt}{a^2+b^2}$ (and this formula holds also when t=0, because $h(0) = f(0,0) =
0$). Therefore $h'(t) = \frac{ab}{a^2+b^2}$ for all t, including t=0, whether or not f is differentiable at (0,0).
September 24th 2010, 02:52 PM
Yes, you're right. Unless for that part of the problem. But I forgot to tell that then it asks me to use the chain rule too. I've made this part this way, and arrived to a similar conclusion, I
think you've made a little mistake (you have forgotten the square for b), lets see:
$h(t)=\begin{Bmatrix} \displaystyle\frac{ab^2t^3}{a^2t^2+b^2t^2} & \mbox{ if }& (t)eq{0}\\0 & \mbox{if}& t=0\end{matrix}$
Then using the limit definition we got $\dysplaystyle\frac{dh(0)}{dt}=\dysplaystyle\frac{a b^2}{a^2+b^2}$
But using the chain rule it gives 0, so the chain rule evidently doesn't work on this case, and thats because the function "f" isn't differentiable at the given point.
September 25th 2010, 12:17 AM
Yes, you're right. Unless for that part of the problem. But I forgot to tell that then it asks me to use the chain rule too. I've made this part this way, and arrived to a similar conclusion, I
think you've made a little mistake (you have forgotten the square for b), lets see:
$h(t)=\begin{Bmatrix} \displaystyle\frac{ab^2t^3}{a^2t^2+b^2t^2} & \mbox{ if }& (t)eq{0}\\0 & \mbox{if}& t=0\end{matrix}$
Then using the limit definition we got $\dysplaystyle\frac{dh(0)}{dt}=\dysplaystyle\frac{a b^2}{a^2+b^2}$
But using the chain rule it gives 0, so the chain rule evidently doesn't work on this case, and thats because the function "f" isn't differentiable at the given point.
You're right, of course, I should have written $\frac{ab^2}{a^2+b^2}$. The partial derivatives of f at (0,0) are both zero, so the chain rule gives the wrong answer for h'(0), and the reason for
that is that f is not differentiable at (0,0) so the chain rule does not apply. | {"url":"http://mathhelpforum.com/calculus/157306-differentiability-chain-rule-multivariable-calculus-print.html","timestamp":"2014-04-21T06:00:21Z","content_type":null,"content_length":"10612","record_id":"<urn:uuid:60fa5764-207b-4eb3-8bc4-fb892ddf9c57>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00028-ip-10-147-4-33.ec2.internal.warc.gz"} |
A Concrete View of Rule 110 Computation
@MISC{Cook09aconcrete, author = {Matthew Cook}, title = {A Concrete View of Rule 110 Computation}, year = {2009}}
Rule 110 is a cellular automaton that performs repeated simultaneous updates of an infinite row of binary values. The values are updated in the following way: 0s are changed to 1s at all positions
where the value to the right is a 1, while 1s are changed to 0s at all positions where the values to the left and right are both 1. Though trivial to define, the behavior exhibited by Rule 110 is
surprisingly intricate, and in [1] we showed that it is capable of emulating the activity of a Turing machine by encoding the Turing machine and its tape into a repeating left pattern, a central
pattern, and a repeating right pattern, which Rule 110 then acts on. In this paper we provide an explicit compiler for converting a Turing machine into a Rule 110 initial state, and we present a
general approach for proving that such constructions will work as intended. The simulation was originally assumed to require exponential time, but surprising results of Neary and Woods [2] have shown
that in fact, only polynomial time is required. We use the methods of Neary and Woods to exhibit a direct simulation of a Turing machine by a tag system in polynomial time. 1 Compiling a Turing
machine into a Rule 110 State In this section we give a concrete algorithm for compiling a Turing machine and its tape into an initial state for Rule 110, following the construction given in [1]. We
will create an initial state that will eventually
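The update rule quoted in this abstract is small enough to run directly. Below is a Python transcription of that one-sentence definition (with wrap-around at the ends, so that a finite row can stand in for the infinite one):

def rule110_step(cells):
    # a 0 becomes 1 when its right neighbor is 1;
    # a 1 becomes 0 when both neighbors are 1; everything else is unchanged
    n = len(cells)
    new = []
    for i, c in enumerate(cells):
        left, right = cells[(i - 1) % n], cells[(i + 1) % n]
        if c == 0:
            new.append(1 if right == 1 else 0)
        else:
            new.append(0 if left == 1 and right == 1 else 1)
    return new

row = [0] * 30 + [1]
for _ in range(12):
    print("".join(".#"[c] for c in row))
    row = rule110_step(row)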
496 2002 A new kind of science - Wolfram
74 Universality in elementary cellular automata - Cook
21 P-completeness of cellular automaton Rule 110 - Turlough, Woods - 2006
16 On the time complexity of 2-tag systems and small universal turing machines - Neary, Woods - 2006
12 Four small universal Turing machines - Woods
7 weakly universal Turing machines - Neary, Woods - 2009
2 M.Minsky: Universality of tag systems with P=2 - Cocke - 1964
1 A tag system for the 3x+1 problem, (personal communication - Chapman - 2003
1 Tag systems and Collatz-like functions - Mol - 2008
1 The 2-symbol Turing machine simulating Rule 110 requires only 7 states, (personal communication - Eppstein - 1998
Mathematics 215 > D'souza > Notes > Regular Homework 5 | StudyBlue
LAPACK Routines
There are three classes of LAPACK routines:
• driver routines solve a complete problem, such as solving a system of linear equations or computing the eigenvalues of a real symmetric matrix. Users are encouraged to use a driver routine if
there is one that meets their requirements (see the sketch after this list). The driver routines are listed in Appendix A and the LAPACK Users' Guide [1].
• computational routines, also called simply LAPACK routines, perform a distinct computational task, such as computing the LU decomposition of an m-by-n matrix or finding the eigenvalues and
eigenvectors of a symmetric tridiagonal matrix using the QR algorithm. The LAPACK routines are listed in Appendix A and the LAPACK Users' Guide [1].
• auxiliary routines are all the other subroutines called by the driver routines and computational routines. The auxiliary routines are listed in Appendix B and the LAPACK Users' Guide [1].
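As a concrete illustration of the driver/computational split, the sketch below calls the LAPACK driver routine dgesv through SciPy's low-level wrappers (this assumes SciPy is available; the wrapper's return convention follows SciPy's documentation and should be checked against your version). Internally, dgesv chains the computational routines dgetrf (LU factorization) and dgetrs (triangular solve):

import numpy as np
from scipy.linalg import lapack

A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([[9.0], [8.0]])

lu, piv, x, info = lapack.dgesv(A, b)   # one driver call solves A x = b
assert info == 0                        # nonzero info would signal a singular matrix
assert np.allclose(A @ x, b)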
Susan Blackford 2001-08-13 | {"url":"http://www.netlib.org/lapack/lawn41/node6.html","timestamp":"2014-04-19T04:27:16Z","content_type":null,"content_length":"3578","record_id":"<urn:uuid:48cabd78-7cb3-4c15-8a4a-e7a420d66ddd>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00435-ip-10-147-4-33.ec2.internal.warc.gz"} |
MathGroup Archive: July 2001 [00244]
Re: Re: about ConstrainedMin
• To: mathgroup at smc.vnet.net
• Subject: [mg29888] Re: [mg29855] Re: [mg29806] about ConstrainedMin
• From: Daniel Lichtblau <danl at wolfram.com>
• Date: Tue, 17 Jul 2001 01:00:33 -0400 (EDT)
• References: <200107140536.BAA18344@smc.vnet.net>
• Sender: owner-wri-mathgroup at wolfram.com
qing.cheng at icos.be wrote:
> Thank you, Mark.
> But I would like to know what ConstrainedMin inside does, not only the
> usage.
> The problem was rised from one of our applications. There we need to
> measure the position of each pins in a leads component(electronic chip).
> Based on these individual points, we need to calculate a plane, which
> should reflect a physical plane where the component can 'sit' stably, so
> call it seating plane. One way we try to achieve this is to convert this
> problem to a LP problem. The objective function is to Minimize the Sum
> distance between the measurement points and the plane.
> We have implemented a Simplex method besed on the algorithm in Numerical
> Recipes to solve this linear problem, and found it worked not very well for
> ">=" type constraints. I also brought the same problem to Mathematica, and
> found the situation that BasicSimplex failed as same as our C
> implementation, while ConstrainedMin found good solution. Now, we have done
> a data transformation before pass them to Simplex algorithm to ensure that
> all the constraints are "<=". It works in that way. But still I would like
> to know how ConstrainedMin improved BasicSimplex. (In Mathematica hand book
> from Stephen Wolfram, page 1061 says that ConstrainedMax and related
> function use an enhanced version of the simplex algorithm).
> Could you give me some more information or suggestions about it?
> Best Regards.
> /Qing
> [...]
I do not know the details of the Mathematica implementation of the
simplex algorithm. I will make a few comments about other aspects of
this particular problem, in case any are of relevance e.g. for obtaining
preferable or faster results.
As worded, the problem above appears to be one of total least squares.
That is, the task is to minimize distances from a plane (ordinary least
squares would minimize squares of vertical discrepancies; we'll address
that later).
First note that a plane may be represented by an equation of the form
a*x+b*y+c*z==d. Provided the x-coefficient is not zero we can normalize
so that it is one. We'll work with an example. We take the plane below,
find some points on it, and perturb them. We then hope to recover a
plane with approximately the same parameters.
plane = 3*x + 2*y + z - 10;
npoints = 10;
xyvals = 2*Table[Random[], {npoints}, {2}];
zvals = Map[10 - 3*#[[1]] - 2*#[[2]] &, xyvals];
points = Map[Flatten, Transpose[{xyvals, zvals}]];
offsets = Table[.4*Random[] - .2, {npoints}, {3}];
fuzzedpoints = points + offsets;
Map[10 - 3*#[[1]] - 2*#[[2]] - #[[3]] &, fuzzedpoints];
Given the plane a*x+b*y+c*z==d and the point {x0,y0,z0} the square of
the distance from point to plane is given by
(a*x+b*y+c*z-d)^2/(a^2+b^2+c^2). If we want to minimize the sum of these
square distances we may discard the common denominators. We form the
resulting sum as below.
newpoints = Map[Append[#, -1] &, fuzzedpoints];
vec = {1, b, c, d};
dotprods = newpoints.vec;
sumsquares = dotprods.dotprods;
Now we can use FindMinimum to get the best values for the parameters (at
least assuming we can give it reasonable starting values).
In[16]:= bestparams = FindMinimum[sumsquares, {b,1}, {c,1}, {d,1}]
Out[16]= {0.117926, {b->0.654815, c->0.357271, d->3.50226}}
We'll check the result.
In[17]:= bestplane = x + b*y + c*z - d /. bestparams[[2]]
Out[17]= -3.50226 + x + 0.654815 y + 0.357271 z
In[18]:= Expand[3*bestplane]
Out[18]= -10.5068 + 3 x + 1.96445 y + 1.07181 z
We could instead minimize the sum of distances rather than squared distances.
sumdistances = Apply[Plus, Map[Abs, dotprods]];
For this we do not have a differentiable function so I used a secant
method by giving two initial values.
In[26]:= bestparamsl1 = FindMinimum[sumdistances, {b,.5,1}, {c,.5,1}, {d,.5,1}]
Out[26]= {0.879003, {b -> 0.636147, c -> 0.34802, d -> 3.42449}}
In[27]:= bestplanel1 = x + b*y + c*z - d /. bestparamsl1[[2]]
Out[27]= -3.42449 + x + 0.636147 y + 0.34802 z
In[28]:= Expand[3*bestplanel1]
Out[28]= -10.2735 + 3 x + 1.90844 y + 1.04406 z
I found that it took some experimentation to make this work; "bad"
initial values gave bad results. Likewise for avoiding Abs and using
sumdistances = Apply[Plus, Map[Sqrt[#^2]&, dotprods]]
This latter does not have differentiability problems, hence again does
not need two initial values. But I did not get good results unless I
started reasonably close to the actual solution.
Another scenario is that you might want to treat x and y values as
"exact" and z values as subject to experimental error, and then
determine a best-fitting plane of the form z = a*x+b*y+d. This is
readily cast as an ordinary linear least-squares optimization problem.
First we separate z values from the rest in the data.
vec2 = {a, b, d};
pointsnoz = newpoints[[All, {1, 2, 4}]];
zvals = newpoints[[All, 3]];
pointsnoz can be regarded as a matrix by which we multiply vec2; were
the data to lie exactly on a plane we would have zvals == pointsnoz.vec2
for appropriate values of the parameters in vec2. In the exactly
determined case one can multiply by an inverse matrix to solve for
{a,b,d}. For a least-squares solution, one instead may multiply by the
(Moore-Penrose) pseudo-inverse.
In[43]:= bestparams2 = Thread[vec2 -> PseudoInverse[pointsnoz] . zvals]
Out[43]= {a -> -2.68791, b -> -1.81179, d -> -9.63997}
In[44]:= bestplane2 = z - (a*x + b*y + d) /. bestparams2
Out[44]= 9.63997 + 2.68791 x + 1.81179 y + z
Note that there are ways to do this that avoid explicit computation of
PseudoInverse. In version 4.1 of Mathematica the "further notes" for
QRDecomposition and SingularValues demonstrate some such methods.
Moreover one can formulate the problem as a single call to Fit.
One reason to prefer this second approach is that it is a lot faster. If
you have 1000 data points the first method will take several seconds and
the second a split second. This is probably not applicable for the
stated problem (can one work with 1000 pins in a single chip? What would
the connection be like?)
I was initially at a loss to see how this might be formulated as a
linear programming problem. Then one possibility came to mind, that you
constrain the plane so that it lies "under" the data. In this way we get
distances without recourse to Abs and we have a legitimate LP problem.
constraints = Map[# >= 0 &, newpoints.vec];
In[46]:= objfunc = Apply[Plus, newpoints.vec]
Out[46]= 12.9605 + 8.96492 b + 45.3205 c - 10 d
In[47]:= cbestparams = ConstrainedMin[objfunc, constraints, {b, c, d}]
Out[47]= {1.10822, {b -> 0.602984, c -> 0.353536, d -> 3.32804}}
In[48]:= cbestplane = x + b*y + c*z - d /. cbestparams[[2]]
Out[48]= -3.32804 + x + 0.602984 y + 0.353536 z
In[49]:= Expand[3*cbestplane]
Out[49]= -9.98412 + 3 x + 1.80895 y + 1.06061 z
I believe there is a way to formulate the total least squares problem so
that one can use linear algebra techniques e.g. SingularValues. But I do
not know how to do that myself. One advantage that may make such an
approach attractive (other than speed, if you have a truly monstrous
chip) is that these methods are immune to the problems FindMinimum can
have with respect to starting values.
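One standard recipe, sketched here as a guess at that formulation: subtract the centroid from the data; the normal of the orthogonal-distance plane is then the right singular vector belonging to the smallest singular value.
center = Apply[Plus, fuzzedpoints]/Length[fuzzedpoints];
{u, w, v} = SingularValues[Map[# - center &, fuzzedpoints], Tolerance -> 0];
normal = Last[v];
The total least-squares plane is then normal.{x,y,z} == normal.center, with no starting values needed. (Tolerance->0 keeps the small singular values that SingularValues would otherwise drop.)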
Daniel Lichtblau
Wolfram Research
Wolfram Demonstrations Project
Triangles on a Sphere
This Demonstration shows a spherical triangle. Three 2D sliders on the left control the vertices in spherical coordinates, sliding horizontally from 0 to 360° and vertically from 0 to 180°.
Geometry on a sphere is a non-Euclidean geometry. Straight lines are represented as great circles and the edges of a spherical triangle are parts of these great circles. The sum of the angles of a spherical triangle is always greater than 180°; by Girard's theorem, the excess over 180° is proportional to the triangle's area.
Snapshot 1: vertices close together form a triangle with the sum of its angles close to 180°
Snapshot 2: a triangle with three right angles, with an angle sum equal to 270°
Snapshot 3: all vertices lying on one great circle give an angle sum of 540°
10 Things to know about Confidence Intervals
About Jeff Sauro
Jeff Sauro is the founding principal of Measuring Usability LLC, a company providing statistics and usability consulting to Fortune 1000 companies.
He is the author of over 20 journal articles and 4 books on statistics and the user-experience.
Posted Comments
There are 2 Comments
March 22, 2012 | Jeff Sauro wrote:
You're exactly right Michael, thanks for articulating that. It is much more likely that the true population mean will fall in the middle of the interval (near the average) than near the ends—it’s a
sideways normal curve.  
March 21, 2012 | Michael Zuschlag wrote:
#11 (building on #4). The further you get from the middle of the confidence interval, the less likely the real average/percentage is there. I find some people tend to think of a confidence interval
as a uniform probability distribution. I try to remind them that the 95% confidence interval is the _reasonably plausible_ range of values for the average/percentage. The average/percentage is
_probably_ (i.e., usually) in a range about half as wide. 
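One way to make that precise (a sketch, assuming a normal sampling distribution): the middle half of a 95% interval spans about ±0.98 standard errors, and
2 CDF[NormalDistribution[0, 1], 0.98] - 1
evaluates to roughly 0.67, so the true value falls in that half-width range about two times out of three.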
Writing propositions using connectives and quantifiers
Hi guys,
I'm struggling to take on these questions where I am asked to write propositions using connectives and quantifiers:
Let P(x) be the statement that says that a real number x has some property P.
For every two real numbers x and y with x<y, there is a real number with the property P between x and y.
I must also construct a negation for this problem. There is also a very similar question to this that I have to do but instead of using P(x) as the statement that says a real number x has some
property P, I must let P(n) be the statement that says that a natural number n has some property P then write the following statement using connectives and quantifiers:
Any sum m + n of natural numbers m and n which have the property P, has the property P.
Re: Writing propositions using connectives and quantifiers
Well, take a shot at these questions, and we'll give you feedback. Surely you suspect that the statement that starts with "For every two real numbers x and y..." is written as ∀x ∀y...
Re: Writing propositions using connectives and quantifiers
Re: Writing propositions using connectives and quantifiers
Good start. You are missing something after the existence symbol. Right now, that reads: "For all real numbers x, for all real numbers y, x<y, there exists an inequality such that x<P(z)<y for
some undefined z."
Try using a colon when you are restricting a quantifier. For example:
$(\forall x \in \mathbb{R})(\forall y \in \mathbb{R}: x<y)(\exists \ldots)$
Last edited by SlipEternal; October 20th 2013 at 12:31 PM.
Re: Writing propositions using connectives and quantifiers
Re: Writing propositions using connectives and quantifiers
Re: Writing propositions using connectives and quantifiers
Re: Writing propositions using connectives and quantifiers
I would encourage the OP to expand the abbreviation $(\forall y \in \mathbb{R}: x<y)$. This is not the basic formula syntax, and it is important to be able to write it in full.
Often people are not sure whether "For all x and y such that x < y, Q(x, y) holds" is rendered ∀x ∀y. x < y ∧ Q(x, y) or ∀x ∀y. x < y → Q(x, y). The first version is wrong because it is not
claimed that x < y for all x and y (and also Q(x, y)). Another way to look at it, if someone chose x and y and it happened that x ≥ y, then nothing is claimed; Q(x, y) is only guaranteed when x <
y. This resembles implication because implication is true when the hypothesis is false. Indeed, the correct formula is ∀x ∀y. x < y → Q(x, y).
Re: Writing propositions using connectives and quantifiers
Do you mean the negation? Or the similar question with natural numbers? For the similar question with natural numbers, here is a start...
$(\forall m \in \mathbb{N})(\forall n \in \mathbb{N})(P(m)\wedge P(n) \Rightarrow ...)$
(I updated this to take emakarov's advice into account. I have seen and used a colon to restrict qualifiers, so I was not aware it was not "basic". Then again, I have never actually checked to
see what is considered basic syntax...)
Last edited by SlipEternal; October 20th 2013 at 12:50 PM.
Re: Writing propositions using connectives and quantifiers
@emakarov where are you defining that x and y are real numbers?
And for the negation, do I just change all "for all" signs to "their exists" signs?
Re: Writing propositions using connectives and quantifiers
You change all "for all" signs to "there exists" signs, all "there exists" signs to "for all" signs, and negate any expressions based on those. So, let's negate emakarov's example: $(\forall x \
in \mathbb{R})(\forall y \in \mathbb{R})(x < y \Rightarrow Q(x,y))$
Its negation would be: $(\exists x \in \mathbb{R})(\exists y \in \mathbb{R})\neg(x<y \Rightarrow Q(x,y))$. So, how do you negate a conditional statement? The only time a conditional $A \Rightarrow B$ is false is when you have $A \wedge \neg B$. So, it would be $(\exists x \in \mathbb{R})(\exists y \in \mathbb{R})(x<y \wedge \neg Q(x,y))$
Re: Writing propositions using connectives and quantifiers
Re: Writing propositions using connectives and quantifiers
You change all "for all" signs to "there exists" signs, all "there exists" signs to "for all" signs, and negate any expressions based on those. So, let's negate emakarov's example: $(\forall x \
in \mathbb{R})(\forall y \in \mathbb{R})(x < y \Rightarrow Q(x,y))$
Its negation would be: $(\exists x \in \mathbb{R})(\exists y \in \mathbb{R})\neg(x<y \Rightarrow Q(x,y))$. So, how do you negate a conditional statement? The only time a conditional $A \Rightarrow B$ is false is when you have $A \wedge \neg B$. So, it would be $(\exists x \in \mathbb{R})(\exists y \in \mathbb{R})(x<y \wedge \neg Q(x,y))$
I understand it now. Thank you guys. The only thing I don't understand is the use of "Q" - sorry if it is a silly question!
Then for the next question, could I write:
$(\forall m \in \mathbb{N})(\forall n \in \mathbb{N})(P(m)\wedge P(n) \Rightarrow P(m+n))$
Re: Writing propositions using connectives and quantifiers
Q is some property. Given x and y, Q(x,y) is true if the property holds for the given x and y and it is false if the property does not hold. In other words, replace Q(x,y) with another expression
that completes the problem. If x<y, then what should be true? Q(x,y) = "there is a real number between x and y with the property P"
Looks good to me
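For reference, assembling the pieces discussed above, the full statement and its negation are:
$(\forall x \in \mathbb{R})(\forall y \in \mathbb{R})(x<y \Rightarrow (\exists z \in \mathbb{R})(x<z<y \wedge P(z)))$
$(\exists x \in \mathbb{R})(\exists y \in \mathbb{R})(x<y \wedge (\forall z \in \mathbb{R})(x<z<y \Rightarrow \neg P(z)))$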
Re: Writing propositions using connectives and quantifiers
Would I need to define Q at all?
Last edited by MichaelH; October 20th 2013 at 02:25 PM.
A Hybrid Approach to Capture Free-Surface and Viscous Effects for a Ship in a Channel
V.Bertram (Institut für Schiffbau, Germany), S.Ishikawa (Mitsubishi Heavy Industries, Japan)
Abstract
The flow about a Series-60 (CB=0.6) in a channel is computed by a new hybrid approach to capture most of the free-surface and viscous effects. First, a fully
nonlinear wave resistance code computes the potential free-surface flow predicting the squat. Then the Reynolds-averaged Navier-Stokes equations are solved using the Baldwin-Lomax turbulence model.
This step uses the squat predicted in the first step and the velocities at the lateral boundary of the RANSE domain, which spans only a small part of the channel width. The free-surface deformation
is still neglected. An integrated propeller model interacts automatically with the RANSE computations. Results for flow details agree well with experiments for deep water and reproduce qualitatively
all influences of the shallow water. Remaining differences are explained mainly by not capturing the free-surface deformation in the second step.
1. Introduction
Consider a ship moving steadily ahead in the center of a channel of constant depth and width at a subcritical depth Froude number Fnh = U0/√(g·h) < 1, where U0 is the ship speed, g=9.81m/s², and h the channel depth. The flow about the ship is steady except
for turbulent fluctuations. It is considerably influenced by the shallow water. The influence of the channel walls is for usual channel geometries of secondary importance. Resistance and sinkage
increase strongly near the critical depth Froude number, trim may change its sign. These global changes reflect changes in local flow details such as the wave pattern or the pressure distribution on
the hull. The 20th ITTC [1] surveys measurements of global and local flow details for a Series-60 (CB=0.6) in a channel. A correct computational prediction of the pressures at channel bottom or the
flow in the aft region of the ship is difficult, because both viscous and free-surface effects play an important role. Classical approaches following e.g. Sretensky [2] or Inui [3] focus on the
prediction of wave resistance and wave pattern. They usually do not capture e.g. the influence of squat on the flow field and neglect all viscous effects. We refer to Lap [4] and Tuck [5] for a more
comprehensive review of classical and semi-empirical approaches. More recently, Rankine singularity methods (RSM) have been applied to compute shallow-water flows about ships, using SHIPFLOW [6],
[7], [8], REVA in Nantes [9], [10], panel codes of the IfS in Hamburg [11] to [17], and Yasukawa's code [18]. Delhommeau [10], Bertram, [15] to [18], and Yasukawa [18] include also side-wall effects
for ships in channels. Bai [19] used a finite-element approach for simplified hull forms. The Duisburg Model Basin VBD [20] investigates various methods including an finite-volume Euler solver around
a river cargo vessel in a channel. All these methods still neglect viscosity. Linear RSM ([13, [18]) and methods based on volume grids ([19], [20]) do not account for squat. Thus they improve hardly
results compared to classical methods in most cases. Cura uses a different approach calculating the flow for a Series-60 in a channel, [21], [22]. His RANSE (Reynolds-averaged Navier-Stokes
equations) solver captures viscous effects but neglects free-surface effects, namely squat and trim. Cura predicts the pressure at the channel bottom quite accurately and discovers an error in
published measurements, [1], [23], which ”most probably explains previous differences between computations and […] measurements”. Remaining discrepancies are attributed to turbulence modelling, large
grid cell distortion due to the wide, but shallow channel, and the neglect of free-surface effects.
We will present a combined numerical approach to capture most of these remaining effects. In a first step, a nonlinear Rankine source method will predict
squat and trim for a ship in a channel. In a second step, a RANSE solver will use a grid for a ship fixed at the predicted squat and trim. The lateral extent of the grid will be considerably smaller
than the actual channel. The velocities at the lateral boundary of the RANSE computational domain will be determined by the Rankine source code. However, the free-surface elevation will still be
neglected assuming a flat undisturbed surface instead.
2. Computational Procedure
The flow is assumed to be symmetrical with respect to the hull center plane coinciding with the center plane of the
channel. The problem is solved in two steps. In the first step, the inviscid free-surface flow in the channel is computed by a Rankine singularity method (RSM). Linear source panels are distributed
above a finite section of the free surface. The panels are numerically evaluated by approximating them by a four-point source cluster, [24]. On the hull and the channel side wall, higher-order panels
(parabolic in shape, linear in strength) are distributed. Mirror images of the sources at the channel bottom enforce that no water flows through the channel bottom. The nonlinear free-surface
boundary condition is met in an iterative scheme that linearizes differences from arbitrary approximations of the potential and the wave elevation, Fig.1, [12]. The radiation and open-boundary
conditions are enforced by shifting sources versus collocation points on the free surface. [25] gives more details on the method. We describe now the automatic grid generation for the free-surface
grid. The base 'wave length' is taken as λ = 2π·Fn²·Lpp. The upstream end of the grid is 1.5 · max(0.4Lpp,λ) before FP for shallow water. (For infinite water, the factor is 1.0 instead of 1.5.) The downstream end of
the grid is max(0.6Lpp,λ) behind AP. The outer boundary in transverse direction BG is 0.35 of the grid length for unlimited flow, but taken at the channel wall (0.8L in our case) for a ship in a
channel. The intended number of panels per wave length is 10. The intended number of panels in transverse direction is (BG–Δx)/(1.5Δx)+1, where Δx is the grid spacing in longitudinal direction.
However, if the intended number of free-surface panels plus the number of hull panels exceeds 2500, the grid spacing in x- and y-direction is increased by the same factor until this condition is met.
The innermost row of panels uses square panels, the rest of the panels is rectangular with a side ratio (Δy/Δx) of approximately 1.5. The panels follow a 'grid waterline'. This is the upper rim of
the discretized ship (1.5m above CWL in our case) which is modified towards the ends to enforce entrance angles of less than 31°. The channel wall grid follows the free-surface grid in longitudinal
direction. In vertical direction the number of panels is the next integer to (h–Δx)/(2Δx)+1, but at least two. The uppermost row uses square panels. The free-surface panels are desingularized by a
distance of Δx.
Fig. 1: Flow chart of iterative solution
In a second step, the viscous flow around the ship is solved. The ship is assumed fixed at the squat calculated in the first step. The
deformation of the water surface is neglected and the water surface substituted by a flat symmetry plane. The computational domain does not extend in lateral direction to the channel walls. Instead,
the inviscid velocities of the first step are taken as boundary condition on the lateral boundary. The RANSE solver is based on Kodama's method, [26]. It solves the continuity equation including a
pseudo-compressibility term and the three momentum equations for incompressible turbulent flows. These equations are discretized in space by cell-centered finite volumes. The inviscid fluxes are evaluated by a third-order upwind scheme
(MUSCL) with the flux difference-splitting method. The viscous fluxes are determined by central differences. The algebraic Baldwin-Lomax model accounts for turbulence, [27]. [29] gives more details
on the method. The propeller effect is considered by applying an equivalent body force in the right-hand side of the RANSE. Our RANSE solver with propeller effect is based on Hinatsu's method, [29].
This considers the body force in both thrust direction and rotative direction. The propeller force distribution is estimated by Yamazaki's [30] infinite-blade propeller theory. The distribution
obtained by this method depends on the propeller inflow and has to be determined by an iterative procedure:
1) Solve the RANSE for the ship without propeller
2) Calculate the wake distribution at the propeller plane
3) Define the required propeller thrust as ship resistance minus corrective towing force
4) Calculate the propeller force distribution using a propeller program with inflow and required thrust (as computed above) as input data
5) Solve the RANSE with equivalent body forces
6) If the resistance is equal to the required thrust, end the calculation. If not, calculate the new propeller inflow by subtracting the propeller induced velocity from the wake distribution at the propeller plane and go back to step 3)
This cycle is actually performed every 10 outer iterations of the RANSE
computation. No problems with convergence were ever observed. An H-O type grid is generated using Kodama's implicit geometrical method, [31]. An initial algebraically generated hull grid is modified so
as to satisfy requirements of orthogonality, smoothness, clustering towards the ends, and minimum spacing. Grid lines are clustered towards bow and stern profiles in streamwise direction, and towards
the hull in radial direction. Bow and stern profiles are followed by vertical grid lines avoiding the step curve approximation of Cura, [21], [22]. The horizontal lines are approximately orthogonal
to the vertical grid lines, and also to both the bow and stern profiles.
3. Test Case: Series-60
The method was applied to a Series-60 ship (CB=0.6, L=6.096m, λ=1:20). Results are compared to
experimental data of the Duisburg Model Basin VBD. The lateral wall of the towing tank lies 0.8L from the center plane. Experiments were performed for water depth-to-draft ratios h/T=3.2, 2.0, 1.5,
1.2. We computed the cases given in Table I, Fig.2. We denote the case h/T=3.2 as 'deep' water, h/T=1.5 as 'shallow' water.
Table I: Computed cases for Series-60
h/T   Fn     Fnh     Rn
3.2   0.15   0.363   7.0 · 10^6
1.5   0.15   0.530   7.0 · 10^6
3.2   0.16   0.387   7.5 · 10^6
1.5   0.16   0.565   7.5 · 10^6
3.2   0.18   0.436   8.4 · 10^6
1.5   0.18   0.636   8.4 · 10^6
Fig. 2: Computed cases for Series-60 at Fn=0.15; dotted line RANSE grid boundary
503 elements discretized the hull up to a height of 0.23 T above the CWL, Fig.3. The free-surface grid extended 0.8L in lateral direction (to the channel wall), 0.6L ahead of FP and
0.5L behind AP. 96 · 19=1824 elements were used to discretize this area. This discretization resolves the wave pattern coarsely, but is deemed sufficient to capture effects relevant for squat and
induced pressures. 96·2=192 elements were used to discretize the channel wall for h/T=1.5, 96 · 3=288 elements for h/T=3.2. Figs.4 show the RANSE grid for h/T=1.5. The grid extended 0.5L ahead of FP
and L behind AP. The lateral extent was 0.2L. 100·24·50=120000 cells were used in total. The computations assumed a kinematic viscosity of ν=1.01 · 10^-6 m²/s and a water density of ρ=1000 kg/m³.
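(Note that the depth Froude numbers in Table I are consistent with Fnh = Fn·√(L/h): assuming the standard Series-60 proportions L/B=7.5 and B/T=2.5, i.e. T ≈ 0.325m at L=6.096m, one gets 0.15·√(6.096/1.04) ≈ 0.363 for h/T=3.2 and 0.15·√(6.096/0.488) ≈ 0.530 for h/T=1.5.)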
Fig. 3: Grid for computations with Rankine Source Method (2519 elements)
Fig. 4a: RANSE grid covering near field (120000 cells)
Fig. 4b: Detail of RANSE grid at aftbody
Fig. 4c: Detail of RANSE grid at bow
4. Resistance test simulations
Local flow details of the first-step RSM computation like wave pattern, Fig.5, pressure on hull, and wave profile showed
no irregularities. Free-surface grid variations gave almost exactly the same results.
Fig. 5: Wave pattern for Fn=0.15 at h/T=3.2 (top) and h/T=1.5 (bottom)
Table II gives the computed and measured
squat for Fn=0.15. The computations capture the squat well. Measurements of various towing tanks, [1], show considerable scatter of measured squat for h/T=1.5, ranging from ΔT/L=0.00236 to 0.00326.
Our computed result of 0.00269 lies well within this bandwidth. The difference of 7% between measured (VBD) and computed squat may be due to instationary flow effects in the experiments (VBD in
personal communication), but neglect of the boundary layer may also contribute.
Table II: Sinkage and trim (positive for bow immersion) for Series 60
Fn=0.15   experiment           RSM
h/T       ΔT/L      θ          ΔT/L      θ
3.2       0.00118   0.00050    0.00120   0.00040
1.5       0.00285   0.00083    0.00269   0.00087
Fn=0.16   experiment           RSM
h/T       ΔT/L      θ          ΔT/L      θ
3.2       0.00139   0.00058    0.00135   0.00051
1.5       0.00348   0.00116    0.00326   0.00107
The pressure
at the channel bottom is dominated by free-surface effects, namely the primary wave system with its long wave trough along the ship lengths. Towards the ends, the pressure at the channel bottom shows
local maxima. The aft maximum is less pronounced due to viscous effects. RANSE and RSM solution are blended with RSM solutions taken between ±0.4L, RANSE solution otherwise. The two solutions
coincide in an intermediate region so that the blended solution is smooth. The pressure on the channel bottom is captured well for 'deep water', Fig.6. For 'shallow water', the tendency is captured
well including a local maximum amidships, Fig.7. The difference between experiments and computations is 7%. For comparison, linear results of the RSM code (first iterative step) are also given. The
hybrid CFD approach improves both the prediction of the pressure minimum (due to the consideration of squat and local wave trough) and the aft pressure maximum (due to capturing viscous effects). We
attribute the remaining differences to the underpredicted squat which in turn might be due to a not fully stationary flow in experiments.
Fig. 6: Pressure coefficient CP along center line on channel bottom at Fn=0.15; h/T=3.206; experiments (·), hybrid CFD (•)
Fig. 7: Pressure coefficient CP along center line on channel bottom at Fn=0.15; h/T=1.5; experiments (·), hybrid CFD (•), linear RSM (○)
Fig.8 shows contour lines of the axial velocity for Fn=0.18, Rn=8.4 · 10^6. Unfortunately only very few data were measured close to the hull. Consequently, the inner contour lines of the experimental
results cannot be very reliable. The original plots, [1], [23], do not reflect the quality of the measured data due to a poor plotting procedure. We therefore plotted our own curves based on the data
supplied by VBD. In this way we wanted to exclude differences due to the employed plotting algorithms. Experiments and computations agree well for h/T=3.2.
Shallow water strongly changes the experimental contour lines. They are inflated at the lower regime and get closer to the hull in the upper regime which
makes their overall circumferential contour smoother. The computations capture this behaviour only qualitatively. The widening of the contour lines in the lower regime is overpredicted. Cura's [21],
[22] RANSE computations show the same effect. Cura suspected as reason for his discrepancies: ”The overprediction of widened contour lines in the lower regime is probably due to the neglected
free-surface effects. Especially the sinkage leads to higher blockage, thus to higher local velocities and contour lines closer to each other.” However, our results indicate that including the
sinkage does not remove the discrepancies. Cura's results agree better with experiments than ours. Maybe the strong wave trough over large part of the ship increases the blockage, but we are
surprised that this still affects the flow so far aft. Turbulence modelling is a popular scapegoat. But for such a slender hull, it should not have such a large impact on the contour lines. Cura uses
a different turbulence model and his results are quite similar to ours. This indicates that differences are rather due to the physical model (neglect of free surface deformation) or differences
between computational methods and experiments as such. Computations including the free-surface deformation and further model tests, preferably from other towing tanks or with larger model scales,
might give more insight into this phenomenon. With the data presently available, we cannot explain the differences between computations and experiments completely. Fig.9a shows the computed pressure
coefficient on 5 cross sections in the aftbody for Fn=0.16, Rn=7.5 · 10^6. (No experimental data were available for Fn=0.15.) Shallow water induces strong low pressure over an extended region of the
ship. The integral value of this effect is the increased squat. The three-dimensionality of the flow for deep water is shown by the curved pressure lines in the bilge region. Shallow water makes the
flow rather two-dimensional. The pressures are nearly constant at each cross section. This agrees qualitatively well with all ITTC experiments, [1]. For deep water, experiments of VBD and our computations show differences in the order of ΔCP=0.02. Various numerical tests produced only little differences in the computed results. We compared results for Fn=0.16, Rn=3.2 · 10^6 with
experiments of the University of Iowa, [32]. The different Reynolds number has virtually no effect on the computational results, Fig.9b. Our results agree well with the Iowa experiments. The
differences between the Iowa and VBD measurements, but also the differences between measurements on port and starboard for the VBD measurements alone, are an indication of the margin of uncertainty
for the experimental results. Measuring pressure on the hull is difficult and no criticism of the experimenters is implied. We just want to point out that in the aftbody apparently the computations
are already within the margin of uncertainty of available experiments. For shallow water, only VBD measurements are available. Agreement is good for the two aftmost sections. For the third section (x
/L=0.15), the VBD measurements differ considerably between port and starboard. Computations agree well with measurements for one side. For the fourth and fifth section, the experimental pressures are
noticeably lower than computed pressures. This is to be expected: Our viscous computations capture a numerically predicted squat, but not the deformation of the water surface. The strong wave trough
for shallow water gives an additional blockage effect over the central part of the ship that leads to higher velocities and lower pressures. To capture this effect, the RANSE grid generator would
have to incorporate the wave elevation predicted by the RSM code. At present, our grid generator does not have this capability. Integrating the pressures and shear stresses gives the resistance
coefficient (without wave resistance) based on S0=L²/5.83. The RANSE computations integrated the pressure over the whole ship hull (including squat, excluding local wave deformation). The increase in
resistance for shallow water is predicted correctly. However, the resistance is overestimated, Table III. The agreement is good for deep water, better than usual for fuller hulls, because the Series-60 is a slender hull with no flow separation in the aftbody. Consequently the frictional resistance dominates and this is predicted relatively accurately by RANSE codes. We attribute the perfect agreement, though, to fortunate error cancellation. For shallow water, the overestimation in resistance is considerable. This is qualitatively expected. We neglect the deformation of the free
surface in the viscous computation. This has two effects: The actual wetted surface is smaller than the wetted surface in the computation. The blockage effect is underpredicted. In reality the flow
is faster, the boundary layer thinner. A crude correction for the first effect is possible: We computed the actual wetted surface with the RSM
Fig. 8: Contour lines of the axial velocity u/U0 at 5%L before AP, Fn=0.18, Rn=8.4 · 10^6; h/T=3.2 (left) and h/T=1.5 (right); - - - - VBD, —— CFD
Fig. 9a: Pressure distribution for resistance test, Fn=0.16, Rn=7.5 · 10^6, • exp. VBD, · CFD
Fig. 9b: Pressure distribution for resistance test, Fn=0.16, Rn=3.2 · 10^6, ○ exp. Iowa, · CFD
Fig. 10: Contour lines of axial velocity u/U0 at 5%L before AP, Fn=0.18, Rn=8.4 · 10^6, with prop.; h/T=3.2 (left) and h/T=1.5 (right), - - - - VBD, —— CFD
code. The ratio of the actual wetted surface to wetted surface assumed by the RANSE computation is 0.9945 for both water depths. So, this effect is not
significant. Better quantitative agreement then requires a RANSE grid that follows the free surface deformation. This could be achieved either by a hybrid approach or by free-surface RANSE
computations.
Fig. 11: Pressure distribution for propulsion test, Fn=0.16, • exp. VBD (Rn=7.5 · 10^6), ○ exp. Iowa (Rn=3.2 · 10^6), · CFD (Rn=7.5 · 10^6)
Table III: Computed (w/o wave resistance) and measured resistance coefficients
       Fn     Rn        h/T=3.2        h/T=1.5
CFD    0.15   7 · 10^6  3.86 · 10^-3   5.79 · 10^-3
Exp    0.15   7 · 10^6  3.86 · 10^-3   4.67 · 10^-3
The employed turbulence model might also contribute
considerably to the error. The Baldwin-Lomax model inherently assumes that there is only one maximum in the flow profile for the product of wall distance and magnitude of vorticity at that point.
This assumption already requires some care for deep water cases, as more than one local maximum may appear, e.g. [33]. For shallow water, a second maximum definitely will appear close to the channel
bottom. We confirm Graf who investigated the two-dimensional flow around barges on shallow water, [34]. Maybe the Baldwin-Lomax turbulence model is generally unsuited for flows between two walls. In
any case, we share the wide consensus that CFD methods are not yet accurate enough to predict resistance with practical accuracy.
5. Propulsion test simulations
The simulations for propulsion test
required some assumptions, but as far as possible the conditions supplied by VBD were used: Table IV gives the propeller data used in the computations. The propulsion tests were performed for the
ship self-propulsion point. A corrective towing force was applied based on ITTC-57 and CA=0.0002. Fig. 10 shows the contour lines of the axial velocity for Fn=0.18, Rn=8.4 · 10^6. The qualitative
effect of the propeller is captured as expected: The contour lines are getting closer to the hull compared to the resistance test, see Fig.8. At the considered station (5% before AP), the port/
starboard asymmetric influence of the propeller is still very small. The quantitative agreement resp. disagreement between computed and measured contour lines is similar to the resistance test.
Fig.11 shows the computed pressure coefficient on 5 cross sections in the aftbody for Fn=0.16, Rn=7.5 · 10^6. As the different Reynolds number was proven to have no significant effect for the resistance test, we plotted Iowa and VBD results this time in one figure. The starboard/port asymmetry of the pressure due to the propeller is at the considered stations negligibly small. So we
plotted only the starboard computational results. The propeller accelerates the flow shifting the pressure generally to lower values. The computation not only reproduces this effect as expected, it
also agrees well with experiments quantitatively. Only for the sections closer to amidships, the same differences as for the resistance test are apparent for the same reasons as discussed above.
Table IV: Propeller data (λ=1:20)
propeller diameter   209.5mm
boss ratio           0.19
P/D                  1.04
Ae/A0                0.565
height over keel     160mm
position before AP   1%L
blades               4
Blade Geometry
r [m]      chord length [m]
0.021348   0.05060
0.032328   0.05714
0.041114   0.06136
0.051346   0.06486
0.062326   0.06649
0.073306   0.06510
0.083538   0.05969
0.095983   0.04388
0.099066   0.03460
0.104750   0.0
Propeller open-water data
J      KT      KQ
0.4    0.301   0.0481
0.5    0.263   0.0429
0.6    0.223   0.0373
1.09   0.0
6. Conclusions
The hybrid approach computing first squat and potential flow field and then the viscous flow improves the quality of results and
saves computational time for shallow water applications. A nonlinear potential flow code may be already sufficient for cases where only the pressure on the channel bottom is of interest.
Discrepancies remain for the pressure on the hull in the middle section of the ship and the computed resistance. These discrepancies could be reduced by taking the free-surface deformation into account.
Acknowledgement
The research was performed during a stay of V.Bertram as a visiting scientist of MHI R&D Center in Nagasaki sponsored by the German
Research Association (DFG). The authors are grateful for the assistance of VBD, namely Dipl.-Ing. A.Gronarz, for updated data on measurements. We thank H.Sato for his assistance for the RANSE
computations.
References
1. ITTC (1993), "Report on cooperative experimental program in shallow water," 20th Int. Towing Tank Conf., Resistance and Flow Committee, San Francisco
2. Sretensky, L.N. (1937), "A theoretical investigation of wave resistance," Joukovsky Central Institute for Aero-Hydrodynamics Rep. 319 (in Russian)
3. Inui, T. (1954), "Wave-making resistance in shallow water sea and in restricted water with special reference to its discontinuities," J. Soc. Nav. Arch. of Japan 76, pp. 1–10.
4. Lap, A. (1972), "Ship resistance in shallow and restricted water," 13th Int. Towing Tank Conf., Appendix 5 to Report of Resistance Committee, Berlin/Hamburg
5. Tuck, E.O. (1978), "Hydrodynamic problems of ships in restricted water," Ann. Rev. Fluid Mech. 10, pp. 33–46.
6. Ni, S.Y. (1987), "Higher order panel method for potential flows with linear or nonlinear free surface boundary conditions," Ph.D. thesis, Chalmers Univ. of Technology, Sweden
7. Kim, K. and Choi, Y. (1993), "A numerical calculation of free surface potential flow field and of ship wave resistance in shallow water by fully nonlinear wave theory," 2nd Japan Korea Workshop, Osaka, pp. 111–120.
8. Kim, K., Choi, Y., Jansson, C., and Larsson, L. (1994), "Linear and nonlinear calculations of the free surface potential flow around ships in shallow water," 20th Symp. Naval Hydrodyn., Santa Barbara
9. Maissonneuve, J. and Delhommeau, G. (1991), "A computer tool for solving the wave resistance problem for conventional and unconventional ships," 3rd Int. Conf. CADMO, Key Biscaine
10. Delhommeau, G. (1993), "Wave resistance code REVA," 19th WEGEMT school on Num. Simulation of Hydrodyn., Nantes
11. Söding, H., Bertram, V., and Jensen, G. (1989), "Numerical computation of squat and trim of ships in shallow water," STG-Yearbook 83, Springer, pp. 42–48. (in German)
12. Jensen, G., Bertram, V., and Söding, H. (1989), "Ship wave-resistance computations," 5th Int. Conf. Num. Ship Hydrodyn., Hiroshima, pp. 593–606.
13. Kux, J. and Müller, E. (1992), "Shallow-water influence on ship flows demonstrated for a Series-60 ship, CB=0.6," STG-Yearbook 86, Springer, pp. 367–389. (in German)
14. Bertram, V. (1994), "Shallow water effects for SWATH ships," 9th Int. Workshop Water Waves and Floating Bodies, Kuju
15. Zibell, H.G. and Bertram, V. (1994), "Influence of the channel effect on resistance and flow field of a river cargo vessel," Binnenschiffahrt 49/17, pp. 34–38. (in German)
16. Bertram, V. and Jensen, G. (1992), "Side wall and shallow water influence on potential flow," 8th Int. Workshop Water Waves and Floating Bodies, Val de Reuil
17. Bertram, V. and Yasukawa, H. (1995), "Inviscid free-surface computations for a Series-60 in a channel," 4th Symp. Nonlinear and Free-Surface Flows, Hiroshima, pp. 13–16.
18. Yasukawa, H. (1989), "Calculation of the free surface flow around a ship in shallow water by Rankine source method," 5th Int. Conf. Num. Ship Hydrodyn., Hiroshima, pp. 643–655.
19. Bai, K.J. (1977), "A localized finite element method for steady three dimensional free surface flow problems," 2nd Int. Conf. Num. Ship Hydrodyn., Berkeley, pp. 78–87.
20. Pagel, W.; Rieck, K.; Grollius, W.; Gronarz, A. (1995), "Experimental and theoretical-numerical flow investigations for river cargo ships," VBD-Report 1366, Duisburg (in German)
21. Cura, A. (1994), "Influence of shallow water on the flow around a slender ship hull," 15th Duisburg Koll. Schiffstechnik/Meerestechnik, Univ. Duisburg, pp. 78–96. (in German)
22. Cura, A. (1995), "Influence of water depth on ship stern flows," Ship Techn. Res. 42/4, pp. 193–197.
23. Binek, H., Ter Jung, G., and Müller, E. (1992), "Resistance and flow characteristics of ships with CB=0.6 on shallow water," VBD-Report 1320, Duisburg Model Basin, Duisburg, Germany (in German)
24. Bertram, V. (1990), "Fulfilling open-boundary and radiation condition in free-surface problems using Rankine sources," Ship Techn. Res. 37/2, pp. 47–52.
25. Hughes, M. and Bertram, V. (1995), "A higher-order panel method for steady 3-d free-surface flows," IfS-Report 558, Univ. Hamburg, Germany
26. Kodama, Y. (1992), "Computation of ship's resistance using a NS solver with global conservation: flat plate and Series 60 (Cb=0.6) hull," J. Soc. Naval Arch. Japan 172, pp. 147–156.
27. Baldwin, B. and Lomax, H. (1978), "Thin layer approximation and algebraic model for separated turbulent flows prediction code," AIAA Paper 78–257
28. Ishikawa, S. (1994), "Application of CFD to estimation of ship's viscous resistance: a series of full hull forms," Trans. West-Japan Soc. of Naval Arch. 87, pp. 81–92.
29. Hinatsu, M.; Kodama, Y.; Fujisawa, J.; Ando, J. (1994), "Numerical simulation of flow around a ship hull including a propeller effect," Trans. West-Japan Soc. Naval Arch. 88, pp. 1–12. (in Japanese)
30. Yamazaki, R. (1966), "On the theory of screw propellers in non-uniform flows," Memoirs of the Faculty of Engineering, Kyushu Univ., Vol 25/2
31. Kodama, Y. (1991), "Grid generation around a practical ship hull form using the implicit geometrical method," J. Soc. of Naval Arch. Japan 169, pp. 27–38. (in Japanese)
32. Toda, Y., Stern, F., and Longo, J. (1991), "Mean-flow measurements in the boundary layer and wake and wave field of a series-60 CB=.6 ship model for Froude numbers .16 and .316," IIHR Rep. 352, Univ. of Iowa
33. Bertram, V. (1994), "Numerical shiphydrodynamics in practice," IfS-Report 545, Univ. Hamburg, Germany (in German)
34. Graf, K. (1992), "Calculation of viscous flow around barge-like ship hulls," Ship Techn. Res. 39/3, pp. 107–117.
DISCUSSION
H.Kajitani, Kumamoto Institute of Technology, Japan
May I congratulate the authors on an impressive paper which describes a hybrid method to simulate the force and flow around
a ship in a shallow channel? The blending of RANSE after RSM seems quite effective by the reason that we can consider how the inviscid and viscid flow characteristics are playing their own important
roles on the flow. From this point of view, we should have expected if possible a more detailed and step-by-step explanation though we admit a new tendency toward quick, short, and symbolic way of
the presentations. The authors compare the wave patterns in the deep and shallow water cases. However, the wave angle spreading out from bow (or even from stern) seems to be unchanged between deep
water and shallow water. This is, of course, due to the low Fn case. I have much interest in widening of the wave pattern at high Fn (or Fnh=0.64) in shallow water case how early or late it comes
out. In Table II, the measured and evaluated (RSM) sinkage and trim are compared which show a so well coincidence. But I'm afraid whether there is room to accept viscous effect. If you evaluate the
sinkage force by RANSE, you might get a pronounced downwards local frictional force on the ship fore part, and less effect around the aft part, you may obtain a more closer evaluation. In this case,
however, trim by bow moment is also increased. With regard to trim moment, shear stress on the ship bottom surface does not have its counter part, so trim by bow is mainly caused by this frictional
force and the towing height has decisive effect both on measurement and evaluation. I would like to know the towing height applied both in experiment and evaluation. Regarding pressure coefficient Cp
along the centerline of the channel bottom (Fig.7), the prediction by CFD seems to be a little less estimate. In advanced RANSE procedures, the trim and sinkage at the latest iteration are included
in the next calculation. Even though the authors' RANSE free surface grid does not seem to follow or express the local depression of mean free surface around the midship region. This means a relaxed
flow continuity around midship, which implies a less prediction of the velocities also a less evaluation of the Cp. These have a decisive effect
on the resistance evaluation (Table III). In this connection, it is hard to accept the overestimated resistance coefficient in shallow water case. The
authors' second reason (2 under prediction of blockage effect…), if corrected by taking the effect of free surface depression into account, may work out a further resistance increment. So we need a
more careful approach. Propulsion test simulation is, I believe, a challenging problem which seems to be yet at the beginning stage. In this case, to check the dipping at AP (usually measured at this
station) seems to be a good index whether the simulation is done well or not. I appreciate very much your comments about this including measured results.
AUTHORS' REPLY
The considered depth Froude
number does not yet lead to a considerable widening of the wave contour angle. However, the method is capable of capturing this phenomenon as demonstrated previously for an inland water vessel at
depth Froude number 0.9 [16]. The sinkage is not very strongly affected by either viscous effects or the towing force. However, the remarks of Prof. Kajitani concerning their influence on trim are
qualitatively correct. The towing force may be incorporated in the RSM but no information for the experimental condition was available. So we did not include this option. Tentative initial
computations for the deep-water case showed only small influence of changing the towing force between still-water line and propeller height. The effect on trim is expected to increase with shallow
water. However, as far as pressures on the channel bottom are concerned, the main influence will be sinkage. As the computations are already within the margin of uncertainty of the experiments for
sinkage and trim, we did not focus our efforts on a further improvement of the method in the ability to predict trim and sinkage. We agree with Prof. Kajitani that we should focus instead on
improving the resistance prediction which at present does not satisfy our expectations. Small differences of trim and sinkage affect the accuracy of the resistance prediction, especially for shallow
water. So, in order to improve the resistance prediction to the required degree, we may need a free-surface RANSE solver with sinkage and trim effect. As the problem of insufficient resistance
prediction is shared by many other colleagues, it appears that we will need considerably more shared research worldwide before we see consistently accurate resistance predictions for real ship
geometries, especially for the more complicated shallow-water hydrodynamics. The propulsion test simulations predicted trim and sinkage using the RSM without propeller action, i.e., the same trim and
sinkage as for the resistance test. Admittedly this is crude, but it is better than the usual practice of taking the zero-speed design floating condition. We do not have measured results in addition
to the ones published with the kind permission of the Duisburg Towing Tank. We share Prof. Kajitani's wish for further details to validate our computational procedures and hope that maybe ITTC may
provide in the future such data.
Binomial distribution (the question might be incomplete!)
could you please answer a question I vaguely remember?
question: given p =0.3 . calculate the binomial distribution P(X>=2) ?
If the question is incomplete, what are the values that are still needed?
or is it possible to calculate the binomial distribution with the above information?
Please give the formula to calculate the binomial distribution.
In such problems we usually need to know the number of independent trials in the binomial event.
P(X=x)={N \choose x}(p)^x[1-p]^{N-x}, p is the probability, x is the number of successes and N is the number of trials.
Hi Plato,
Thanks for your answer, do i need to compute the distribution
with the x as 2 or 1 in this case?.
Assuming that n the number of trials= 4 and x as 2,
I try to arrive at a solution
P(X>=2)= { n!/x!(n-x!)}(p)^x[1-p]^{n-x} = {1*2*3*4/1*2(1*2)}(.3)^2[.7]^2
(my math skills are very bad) Could you please verify if the above calculation is correct?
Yes yours skills are poor.
Note that:
P(X=2)= { n!/x!(n-x!)}(p)^x[1-p]^{n-x} = {1*2*3*4/1*2(1*2)}(.3)^2[.7]^2
={6}(.09)*.049. And that is just P(X=2)!
Now you need to find P(X=3) & P(X=4) and add all three.
Thank you once again, i take that one needs to compute
the value of x from 2 upto the number of trails(here 4)
and add them up when one needs to compute P(X>=2).
consider a hypothetical question - for binomial distribution of p(x<3), one needs to compute the distribution for x=0,1 and 2 and then add them up?.
Trying to find the answer..
P(X=2)= { n!/x!(n-x!)}(p)^x[1-p]^{n-x} = {1*2*3*4/1*2(1*2)}(.3)^2[.7]^2
={6}(.09)*.049 = .02646
p(x=3)= { n!/x!(n-x!)}(p)^x[1-p]^{n-x} ={1*2*3*4/1*2*3(1)}(.3)^3[.7]^1
={4}(.027).7= .0756
p(x=4)= { n!/x!(n-x!)}(p)^x[1-p]^{n-x}}={1*2*3*4/1*2*3*4(1)}(.3)^4[.7]^0
={1}(.0081)1= .0081
please allow me to ask some basic questions ...0!= 1? and anynumber^0=1?
Adding all three values, I get .11016.
Thank you once again, i take that one needs to compute
the value of x from 2 upto the number of trails(here 4)
and add them up when one needs to compute P(X>=2).
consider a hypothetical question - for binomial distribution of p(x<3), one needs to compute the distribution for x=0,1 and 2 and then add them up?.
Trying to find the answer..
P(X=2)= { n!/x!(n-x!)}(p)^x[1-p]^{n-x} = {1*2*3*4/1*2(1*2)}(.3)^2[.7]^2
={6}(.09)*.049 = .02646
4!/(2! 2!) (0.3)^2 (0.7)^2=0.2646
p(x=3)= { n!/x!(n-x!)}(p)^x[1-p]^{n-x} ={1*2*3*4/1*2*3(1)}(.3)^3[.7]^1
={4}(.027).7= .0756
4!/(3! 1!) (0.3)^3 (0.7)=0.0756
p(x=4)= { n!/x!(n-x!)}(p)^x[1-p]^{n-x}}={1*2*3*4/1*2*3*4(1)}(.3)^4[.7]^0
={1}(.0081)1= .0081
4!/(4! 0!) (0.3)^4 =0.0081
please allow me to ask some basic questions ...0!= 1? and anynumber^0=1?
0! is defined as such for a number of reasons, one of which is so the above
works without fussing about an exception when x=1, or x=n.
anynumber^0 = 1, for the same sorts of reasons (except that the meaning of
0^0 is still being argued about).
Last edited by CaptainBlack; February 23rd 2007 at 10:54 AM.
Thanks for your reply. with all your help and support, I finally managed to arrive at the right answer!
February 13th 2007, 01:35 PM #2
February 21st 2007, 05:31 AM #3
Feb 2007
February 21st 2007, 07:08 AM #4
February 22nd 2007, 08:08 AM #5
Feb 2007
February 22nd 2007, 09:36 AM #6
Grand Panjandrum
Nov 2005
February 23rd 2007, 05:16 AM #7
Feb 2007 | {"url":"http://mathhelpforum.com/advanced-statistics/11549-binomial-distribution-question-might-incomplete.html","timestamp":"2014-04-16T19:13:45Z","content_type":null,"content_length":"51678","record_id":"<urn:uuid:7986cc21-0450-4abb-8d4c-23f1f7c8378f>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00402-ip-10-147-4-33.ec2.internal.warc.gz"} |
Very General
Re: Very General
Hi Dave;
DaveRobinsonUK wrote:
How is everybody?
I remember once being asked that type of question and replying."everything is okay."
That's when he hit me with the following super reply."Hmmm, if I thought everything was okay, it would mean I didn't really understand the situation."
After 2 years of thought I could see he was right!
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof. | {"url":"http://www.mathisfunforum.com/viewtopic.php?pid=147953","timestamp":"2014-04-19T09:34:54Z","content_type":null,"content_length":"12472","record_id":"<urn:uuid:0a66b0be-213a-469b-890d-5a9391833b63>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00568-ip-10-147-4-33.ec2.internal.warc.gz"} |
A Physical Interpretation of String Theory?
Authors: Dirk J. Pons
This note identifies similarities between the cordus conjecture and string theory, and suggests opportunity for new research directions. To fully define a cordus particule requires 11 geometric
independent-variables. This is the same number of dimensions predicted by some variants of string theory. There is also a similarity in the structural models, e.g. for the photon. The cordus model is
physically descriptive and built with conceptual-design principles, whereas string theory provides a family of abstract mathematical models. Perhaps they are describing the same thing from different
perspectives? Therefore we invite string theorists to consider whether the orthogonal spatial dimensions in their models could instead be interpreted as geometric independent-variables. Doing so
would create new ways for interpreting string theories. Perhaps string theory might yet be a tool for the development of physically meaningful explanations for fundamental physics?
Comments: 4 Pages.
Download: PDF
Submission history
[v1] 2012-04-12 21:56:04
Summary: Linear Algebra and its Applications 409 (2005) 13–31
On the difference between the maximum
multiplicity and path cover number for tree-like graphs
Francesco Barioli a, Shaun Fallat b,,1, Leslie Hogben c
aSchool of Mathematics and Statistics, Carleton University, Ottawa, ON, Canada K1S 5B6
bDepartment of Mathematics and Statistics, University of Regina, Regina, Sask., Canada S4S 0A2
cDepartment of Mathematics, Iowa State University, Ames, IA 50011, USA
Received 17 May 2004; accepted 21 September 2004
Available online 11 November 2004
Submitted by S. Kirkland
We dedicate this work to Pauline van den Driessche for her life long contributions to linear algebra and
her support of the linear algebra community
For a given undirected graph G, the maximum multiplicity of G is defined to be the largest
multiplicity of an eigenvalue over all real symmetric matrices A whose (i, j)th entry is non-
zero whenever i /= j and {i, j} is an edge in G. The path cover number of G is the minimum
number of vertex-disjoint paths occurring as induced subgraphs of G that cover all the vertices
of G. We derive a formula for the path cover number of a vertex-sum of graphs, and use
Speed versus RPM Calculator
Tire Expansion:
There is a potential for error in these calculations with bias-belted tires due to centrifugal force expansion of the tire at high speeds, but that effect is generally negligible for radial-ply tires
due to the circumferential belts used in their construction. To get accurate results, racers using bias-ply tires should check with their tire supplier to determine how much the tire radius will
change at various speeds.
Calculator Equations:
For those who are curious about the calculations, here are the gory details of the speed versus engine RPM calculations:
Each revolution of the engine is reduced by the transmission gear ratio, each revolution of the output shaft of the transmission is reduced by the rear-end ratio, and each revolution of the tire
makes the car move a distance equal to the circumference of the tire. Pretty simple really.
Let's go thru the calculations to create an equation for the vehicle speed...
First, let's define the meaning of the gear ratios:
Transmission Gear Ratio (R1): denotes how many engine revolutions there are for each driveshaft revolution.
Differential Gear Ratio (R2): denotes how many driveshaft revolutions there are for each axle revolution.
Now, we'll derive the equation:
If the engine speed (for this example) RPM = 6000 revolutions/minute,
then the driveshaft speed is the engine speed divided by tranny gear ratio R1 = ( 6000 / R1) revolutions/minute,
and the rear axle speed is the driveshaft speed divided by rear-end ratio R2 = ( 6000 / (R1*R2) ) revolutions per minute.
(Note: the symbol * indicates multiplication, and / indicates division)
So, the rear tire will be making 6000 / (R1*R2) revolutions each minute, causing the car to move forward (2*pi*r) * ( 6000 / (R1*R2) ) inches/minute (where r is the loaded tire radius in inches, and
2*pi = 6.28).
That is, the car will be moving ( 6000 * 6.28 * r) / (R1*R2) inches per minute.
Since 1 mile = 5280 feet = 63,360 inches, and 1 hour = 60 minutes, then the conversion from inches per minute to miles per hour is ( 60 / 63,360 ).
So, if the engine is turning 6000 rev/min then the car must be going (60 / 63,360) * (6000 * 6.28 * r) / (R1*R2) miles/hour.
Rewriting that all into a tidy form:
(0.00595) * (RPM * r) / (R1 * R2) = vehicle speed in miles/hour
RPM = engine speed, in revolutions/minute
r = loaded tire radius (wheel center to pavement), in inches
R1 = transmission gear ratio
R2 = rear axle ratio
SCCA Ford Spec Racer -
RPM = 6000
transmission gear ratio R1 = 0.73 in high gear
rear end ratio R2 = 3.62
loaded tire radius r = 10.9 inches
The car's speed at 6000 RPM in high gear will be:
(0.00595) * ( 6000 * 10.9) / (0.73 * 3.62) = 147 miles/hour
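The same formula as a small sketch (function and variable names are ours, not part of the original page):

```python
def vehicle_speed_mph(rpm, tire_radius_in, trans_ratio, axle_ratio):
    """Vehicle speed in miles/hour from the equation derived above:
    (0.00595 * RPM * r) / (R1 * R2)."""
    return 0.00595 * rpm * tire_radius_in / (trans_ratio * axle_ratio)

# The Spec Racer example: prints roughly 147 (miles/hour)
print(vehicle_speed_mph(6000, 10.9, 0.73, 3.62))
```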
Last Updated: 3-Apr-2012 | {"url":"http://wahiduddin.net/calc/calc_speed_rpm.htm","timestamp":"2014-04-17T04:02:42Z","content_type":null,"content_length":"11974","record_id":"<urn:uuid:7faec3a5-7202-42aa-a6c8-e31e36be9e1a>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00516-ip-10-147-4-33.ec2.internal.warc.gz"} |
Random variable generation (Pt 3 of 3)
This post is based on chapter 1.4.3 of Advanced Markov Chain Monte Carlo. Previous posts on this book can be found via the AMCMC tag.
The ratio-of-uniforms was initially developed by Kinderman and Monahan (1977) and can be used for generating random numbers from many standard distributions. Essentially we transform the random
variable of interest, then use a rejection method.
The algorithm is as follows:
Repeat until a value is obtained from step 2.
1. Generate $(Y, Z)$ uniformly over $\mathcal D \supseteq \mathcal C_h^{(1)}$.
2. If $(Y, Z) \in \mathcal C_h^{(1)}$, return $X = Z/Y$ as the desired deviate.
The uniform region is
$\mathcal C_h^{(1)} = \left\{ (y,z): 0 \le y \le [h(z/y)]^{1/2}\right\}.$
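As a concrete illustration, here is a minimal sketch of the algorithm for the standard normal (my addition, not from the book), using the unnormalised density $h(x) = e^{-x^2/2}$; the region $\mathcal C_h^{(1)}$ then fits inside the rectangle $[0,1] \times [-\sqrt{2/e}, \sqrt{2/e}]$, and the membership test $0 \le y \le [h(z/y)]^{1/2}$ reduces to $z^2 \le -4y^2\log y$.

```python
import math
import random

def ratio_of_uniforms_normal():
    """One standard normal deviate via ratio-of-uniforms with
    h(x) = exp(-x^2/2) and bounding box [0, 1] x [-b, b]."""
    b = math.sqrt(2.0 / math.e)
    while True:
        y = random.random()           # uniform on (0, 1)
        z = random.uniform(-b, b)     # uniform on (-b, b)
        if z * z <= -4.0 * y * y * math.log(y):  # (y, z) in C_h
            return z / y
```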
In AMCMC they give some R code for generating random numbers from the Gamma distribution.
I was going to include some R code with this post, but I found this set of questions and solutions that cover most things. Another useful page is this online book.
Thoughts on Chapter 1
The first chapter is fairly standard. It briefly describes some results that should be background knowledge. However, I did spot a few typos in this chapter. In particular, when describing the acceptance-rejection method, the authors alternate between $g(x)$ and $h(x)$.
Another downside is that the R code for the ratio of uniforms is presented in an optimised version. For example, the authors use EXP1 = exp(1) as a global constant. I think for illustration purposes
a simplified, more illustrative example would have been better.
This book review has been proceeding at glacial speed. Therefore, in future, rather than going through section by section, I will just give an overview of each chapter.
What are the ways we can generate ellipses?
We’ve been working with ellipses. I have talked about some of these this year. Others I haven’t. But I like this list for future reference.
• The set of points whose distances from two fixed points (called foci) have a fixed sum; and a cool video illustrating this (alongside the reflective property of ellipses):
• Drop a planet in space near a massive object, and give it an initial push (velocity)
[not drawn to scale, obvi.]
13 thoughts on “Ellipses”
1. What’s it matter that student can draw a perfect ellipse by hand? I was trying to find an answer for that a couple months ago…
□ I don’t know if you mean you don’t see the value of drawing perfect ellipses (meaning they need to be drawn accurately/precisely)… or that you don’t see the value of drawing ellipses by hand
at all (meaning: we have calculators and computer programs that can do that).
☆ I meant drawing BY HAND – thanks for clarifying.
2. Great time of yr to go outside with sidewalk chalk and string to draw ellipses. Have you ever tried it? Use one kid's legs as foci, tie string into circle, and another kid pulls string taut with
chalk and marks out a great ellipse. Possible to do hyperbola but trickier. They can play w/width between foci, difft string length.
□ Awww cute! I didn’t do that. I’m stupid and just taped pieces of strings of different length to large whiteboards, and then briefly had kids draw them on the whiteboards with dry erase
But next year… yes.
The hyperbola in the same way? Perhaps, though I have my doubts.
3. No you’re right – not hyperbola, but parabola. It’s from Illuminations. http://illuminations.nctm.org/LessonDetail.aspx?id=L815
We don’t get into circ/ellipse/hyper here at my new school in Alg2, so I haven’t done this activity in a while.
4. Sam
I stole your idea from a recent post when my Precalc Honors class started our conics unit. We used Desmos and a slider and came up with some wacky shapes as we modified. The kids were engaged,
were curious about the results of their guesses and even got a little competitive. Super cool. It was also a nice way to remind them that trig has not gone away even though we were safely back in
a world devoid of trig functions.
Love Reilly’s idea of sidewalk chalk. If spring ever does arrive for real here in PA we might go out and do that. I used to do shoestrings and thumbtacks on my door
5. Or: if A is a nonsingular 2×2 matrix, and v = (cos(t), sin(t)), then the set Av will be an ellipse (for t in [0,2pi]). Probably beyond where you want to go but it could be an excuse to introduce
matrix-vector multiplication.
□ I am going to be talking about ellipses parametrically on Monday! Love that we think alike!
6. Here’s another one….
7. I’ve used Geometer’s Sketchpad to construct ellipses by using the locus of a line perpendicular to a line segment, where one of the endpoints of the line segment is on a circle.
8. This is a really terrific video/animation (and part of a series on conics) that demonstrates ways to construct ellipses:
□ Thank you! Terrific really is the word! | {"url":"http://samjshah.com/2013/04/23/ellipses/","timestamp":"2014-04-17T03:56:51Z","content_type":null,"content_length":"98594","record_id":"<urn:uuid:515c3321-ccba-4c77-beda-ce678c9d9d32>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00028-ip-10-147-4-33.ec2.internal.warc.gz"} |
Previous work on this generalization of continued fractions?
The 2x2 matrix representation of a continued fraction makes it clear that we're multiplying together a bunch of group elements. Inversion is essentially freely adjoining a generator to a Coxeter
group; the usual notion of simple continued fractions comes from adding such a generator to $\tilde{I}_1$.
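For concreteness, here is a small sketch of the 2x2 matrix picture (an added illustration, not part of the original question): the simple continued fraction $[a_0; a_1, a_2, \ldots]$ corresponds to the product of the matrices $\begin{pmatrix} a_i & 1 \\ 1 & 0 \end{pmatrix}$, whose first column carries the numerator and denominator of the current convergent.

```python
from fractions import Fraction

def convergent(coeffs):
    """Evaluate [a0; a1, a2, ...] by multiplying the matrices
    [[a_i, 1], [1, 0]]; the running product's first column is
    (p_n, q_n), the numerator and denominator of the convergent."""
    p, p_prev = 1, 0
    q, q_prev = 0, 1
    for a in coeffs:
        p, p_prev = a * p + p_prev, p
        q, q_prev = a * q + q_prev, q
    return Fraction(p, q)

print(convergent([1, 2, 2, 2, 2, 2]))  # 99/70, a convergent of sqrt(2)
```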
Given a regular tiling of a two-dimensional space (whether spherical, Euclidean, or hyperbolic), we get a triangle group by barycentrically subdividing the regular polygons and considering the ways
of reflecting the triangles into each other. The triangle group acts on the space in the obvious way.
Say we pick the center of one of the original polygons as our origin, pick units so that the hypotenuse of the triangle has length 1, and pick one of the triangles with a vertex at the origin as the
canonical one.
We can act on the origin with an element of the triangle group and reflect it outside the unit circle, then geometrically invert the point and bring it back inside. Those points on the surface that
one can reach in a finite number of such steps could be thought of as "rational".
In the Euclidean and hyperbolic cases, we can also go the other way, since geometric inversion always takes a point inside the unit circle to a point outside of it. We can act on a point in the space
with a group element and reflect it into the canonical triangle, then do geometric inversion to place it outside the unit circle. Those points that reach the origin in a finite number of moves could
be called "rational"; those points with repeating continued fractions could be called "quadratic surds".
Does anyone know of previous work on this idea? Does the generalization lead to any interesting number theory?
1 Answer

The matrix representation of continued fractions appears in Milne-Thomson, "The Calculus of Finite Differences", Chelsea, 1981. As far as I know he was the first to study them systematically.
Whatever the virtues of the cited reference, it is not true that 1981 was the first year "matrix representations of" continued fractions were "studied systematically", given prior ambient
awareness of their obvious properties. E.g., in 1975 Nick Katz remarked in the common room at Princeton that it was "obvious" that the theory of continued fractions is a study of the
action of $SL_2(\mathbb Z)$, especially, tracking its generators sending $z$ to $z+1$ and/or to $-1/z$, on the real line. Various attempts to "generalize" the ideas were extant in those
years... not very interesting, I recall. – paul garrett Nov 26 '12 at 1:06
1 Paul: 1981 is only the year that Chelsea reprinted the book. Milne-Thomson worked in the early 20th century and that book of his is from the 1930s. – KConrad Nov 26 '12 at 6:55
Aha! @KConrad, thanks for the info! (Sorry to be slow to notice...) – paul garrett Dec 7 '12 at 18:32
| {"url":"http://mathoverflow.net/questions/114460/previous-work-on-this-generalization-of-continued-fractions","timestamp":"2014-04-17T07:05:05Z","content_type":null,"content_length":"56624","record_id":"<urn:uuid:c59179f4-1539-4986-b2a3-1a221c98c3fe>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00175-ip-10-147-4-33.ec2.internal.warc.gz"} |
John Murphy's Ten Laws of Technical Trading
1. Map the Trends
Study long-term charts. Begin a chart analysis with monthly and weekly charts spanning several years. A larger scale “map of the market” provides more visibility and a better long-term
perspective on a market. Once the long-term has been established, then consult daily and intra-day charts. A short-term market view alone can often be deceptive. Even if you only trade the very
short term, you will do better if you’re trading in the same direction as the intermediate and longer term trends.
2. Spot the Trend and Go With It
Determine the trend and follow it. Market trends come in many sizes — long term, intermediate-term and short-term. First, determine which one you’re going to trade and use the appropriate chart.
Make sure you trade in the direction of that trend. Buy dips if the trend is up. Sell rallies if the trend is down. If you’re trading the intermediate trend, use daily and weekly charts. If
you’re day trading, use daily and intra-day charts. But in each case, let the longer range chart determine the trend, and then use the shorter term chart for timing.
3. Find the Low and High of It
Find support and resistance levels. The best place to buy a market is near support levels. That support is usually a previous reaction low. The best place to sell a market is near resistance
levels. Resistance is usually a previous peak. After a resistance peak has been broken, it will usually provide support on subsequent pullbacks. In other words, the old “high” becomes the new
“low.” In the same way, when a support level has been broken, it will usually produce selling on subsequent rallies — the old “low” can become the new “high.”
4. Know How Far to Backtrack
Measure percentage retracements. Market corrections up or down usually retrace a significant portion of the previous trend. You can measure the corrections in an existing trend in simple
percentages. A fifty percent retracement of a prior trend is most common. A minimum retracement is usually one-third of the prior trend. The maximum retracement is usually two-thirds. Fibonacci
retracements of 38% and 62% are also worth watching. During a pullback in an uptrend, therefore, initial buy points are in the 33–38% retracement area.
5. Draw the Line
Draw trend lines. Trend lines are one of the simplest and most effective charting tools. All you need is a straight edge and two points on the chart. Up trend lines are drawn along two successive
lows. Down trend lines are drawn along two successive peaks. Prices will often pull back to trend lines before resuming their trend. The breaking of trend lines usually signals a change in trend.
A valid trend line should be touched at least three times. The longer a trend line has been in effect, and the more times it has been tested, the more important it becomes.
6. Follow that Average
Follow moving averages. Moving averages provide objective buy and sell signals. They tell you if existing trend is still in motion and help confirm a trend change. Moving averages do not tell you
in advance, however, that a trend change is imminent. A combination chart of two moving averages is the most popular way of finding trading signals. Some popular futures combinations are 4- and
9-day moving averages, 9- and 18-day, 5- and 20 day. Signals are given when the shorter average line crosses the longer. Price crossings above and below a 40-day moving average also provide good
trading signals. Since moving average chart lines are trend-following indicators, they work best in a trending market.
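As an illustration of the crossover rule just described, here is a minimal sketch (our addition, not Murphy's; the 9- and 18-day pair is one of the combinations mentioned above):

```python
def sma(prices, n):
    """Simple n-period moving average; None until n points exist."""
    return [None if i + 1 < n else sum(prices[i + 1 - n:i + 1]) / n
            for i in range(len(prices))]

def crossover_signals(prices, fast=9, slow=18):
    """Yield (index, 'BUY'/'SELL') where the fast SMA crosses the slow SMA."""
    f, s = sma(prices, fast), sma(prices, slow)
    for i in range(1, len(prices)):
        if None in (f[i - 1], s[i - 1]):
            continue
        if f[i - 1] <= s[i - 1] and f[i] > s[i]:
            yield i, "BUY"
        elif f[i - 1] >= s[i - 1] and f[i] < s[i]:
            yield i, "SELL"
```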
7. Learn the Turns
Track oscillators. Oscillators help identify overbought and oversold markets. While moving averages offer confirmation of a market trend change, oscillators often help warn us in advance that a
market has rallied or fallen too far and will soon turn. Two of the most popular are the Relative Strength Index (RSI) and Stochastics. They both work on a scale of 0 to 100. With the RSI,
readings over 70 are overbought while readings below 30 are oversold. The overbought and oversold values for Stochastics are 80 and 20. Most traders use 14-days or weeks for stochastics and
either 9 or 14 days or weeks for RSI. Oscillator divergences often warn of market turns. These tools work best in a trading market range. Weekly signals can be used as filters on daily signals.
Daily signals can be used as filters for intra-day charts.
8. Know the Warning Signs
Trade MACD. The Moving Average Convergence Divergence (MACD) indicator (developed by Gerald Appel) combines a moving average crossover system with the overbought/oversold elements of an
oscillator. A buy signal occurs when the faster line crosses above the slower and both lines are below zero. A sell signal takes place when the faster line crosses below the slower from above the
zero line. Weekly signals take precedence over daily signals. An MACD histogram plots the difference between the two lines and gives even earlier warnings of trend changes. It’s called a
“histogram” because vertical bars are used to show the difference between the two lines on the chart.
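For reference, a minimal sketch of the MACD computation described above (the 12/26/9 parameters are the common defaults, assumed here rather than specified in the text):

```python
def ema(prices, n):
    """Exponential moving average with smoothing factor 2/(n + 1)."""
    k, out = 2 / (n + 1), [prices[0]]
    for p in prices[1:]:
        out.append(p * k + out[-1] * (1 - k))
    return out

def macd(prices, fast=12, slow=26, signal=9):
    """Return (macd_line, signal_line, histogram)."""
    macd_line = [f - s for f, s in zip(ema(prices, fast), ema(prices, slow))]
    signal_line = ema(macd_line, signal)
    histogram = [m - s for m, s in zip(macd_line, signal_line)]
    return macd_line, signal_line, histogram
```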
9. Trend or Not a Trend
Use ADX. The Average Directional Movement Index (ADX) line helps determine whether a market is in a trending or a trading phase. It measures the degree of trend or direction in the market. A
rising ADX line suggests the presence of a strong trend. A falling ADX line suggests the presence of a trading market and the absence of a trend. A rising ADX line favors moving averages; a
falling ADX favors oscillators. By plotting the direction of the ADX line, the trader is able to determine which trading style and which set of indicators are most suitable for the current market
10. Know the Confirming Signs
Include volume and open interest. Volume and open interest are important confirming indicators in futures markets. Volume precedes price. It’s important to ensure that heavier volume is taking
place in the direction of the prevailing trend. In an uptrend, heavier volume should be seen on up days. Rising open interest confirms that new money is supporting the prevailing trend. Declining
open interest is often a warning that the trend is near completion. A solid price uptrend should be accompanied by rising volume and rising open interest.
Technical analysis is a skill that improves with experience and study. Always be a student and keep learning. | {"url":"http://www.ritholtz.com/blog/2005/05/john-murphys-ten-laws-of-technical-trading/print/","timestamp":"2014-04-19T14:53:31Z","content_type":null,"content_length":"10248","record_id":"<urn:uuid:68d57d9f-d9b9-4b43-a7d2-43bb3d447b59>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00368-ip-10-147-4-33.ec2.internal.warc.gz"} |
A Characterization of Continuity Revisited
Gámez Merino, José Luis and Muñoz Fernández, Gustavo Adolfo and Seoane Sepúlveda, Juan Benigno (2011) A Characterization of Continuity Revisited. American Mathematical Monthly, 118 (2). pp. 167-170.
ISSN 0002-9890
Official URL: http://www.ingentaconnect.com/content/maa/amm/2011/00000118/00000002/art00009
It is well known that a function f : R -> R is continuous if and only if the image of every compact set under f is compact and the image of every connected set is connected. We show that there exist
two 2^c-dimensional linear spaces of nowhere continuous functions that (except for the zero function) transform compact sets into compact sets and connected sets into connected sets respectively.
Item Type: Article
Subjects: Sciences > Mathematics > Topology
ID Code: 16895
References: R. M. Aron, V. I. Gurariy, and J. B. Seoane-Sepúlveda, Lineability and spaceability of sets of functions on R, Proc. Amer. Math. Soc. 133 (2004) 795–803.
J. L. Gámez-Merino, G. A. Muñoz-Fernández, V. M. Sánchez, and J. B. Seoane-Sepúlveda, Sierpiński-Zygmund functions and other problems on lineability, Proc. Amer. Math. Soc. 138 (2010) 3863–3876. doi:10.1090/S0002-9939-2010-10420-3
D. J. Velleman, Characterizing continuity, Amer. Math. Monthly 104 (1997) 318–322. doi:10.2307/2974580
Deposited: 26 Oct 2012 09:01
Last Modified: 24 Sep 2013 13:28
| {"url":"http://eprints.ucm.es/16895/","timestamp":"2014-04-21T05:03:34Z","content_type":null,"content_length":"24606","record_id":"<urn:uuid:bea6ff67-854e-4385-b1cc-618437e054b3>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00110-ip-10-147-4-33.ec2.internal.warc.gz"} |
for newton's laws, do you refer to acceleration as m/s^2 or N/kg?
don't you use N/kg for gravity also?
yes, gravity is just a special type of acceleration.
so in Newton's laws, you go for a = N/kg instead of m/s^2?
yes, because m/s^2 is just units, whereas N/kg is actually a variable representation of acceleration.
is 1 N/kg equal to 1 m/s^2?
wait, actually I'm sorry, for some reason I was thinking F/m. both are units that mean the same thing.
so really you can use either to represent acceleration, but typically we use m/s^2 to represent acceleration.
For Newton's second law the units would be m/s^2.
is this for a homework question or are you just trying to understand units better?
understanding the units, because I'm not sure what units to put after I find the acceleration.
N/kg is odd, but yes, it means the same thing.
think about velocity's units: we use m/s for velocity, so to keep it consistent use m/s^2 for acceleration.
alright! thank you. and do you know what 1 newton is equivalent to?
yes, one newton = (kg·m)/s^2.
okay, thanks a lot!
See, if you look at g as acceleration due to gravity, then you normally use m/s^2, because even if it's acceleration due to gravity, it's nonetheless acceleration, and that has to be in its SI units. However, there is another, TOTALLY DIFFERENT, way to look at g, and that is called gravitational field intensity, meaning how strong the field of gravity due to the earth (or, for that matter, any mass) is. When you mean it THAT way, you usually express g in N/kg, meaning how much force a UNIT mass would experience in the gravitational field. So, bottom line: if I said g = 9.8 m/s^2, it basically says how much acceleration I would get at that point; if I said g = 9.8 N/kg, it means how much force is experienced by a unit mass kept at that point. So even though they have the same physical dimensions, they are two VERY different concepts, and hence we use those different units!
First thing: there is no difference between N/kg and m/s², so both are OK. Acceleration is a kinematic concept in the first place, so it is more logical to use m/s². Even if Newton's 2nd law did not exist, and the unit newton had not been defined, you could still work with accelerations in m/s². When you first encounter g, it is seen as the ratio of a force to a mass, as when you hang a mass on a spring, so it is logical to use N/kg to define it. But then you realise an object in free fall has an acceleration equal to g as well, and then the natural unit becomes m/s². It is the same with other quantities, and it is sometimes a cultural point of view. In France, all electric fields are given in V/m, but I think I have seen a different (but equivalent) unit on this forum, though I cannot remember which one it was, maybe N/C.
| {"url":"http://openstudy.com/updates/50beb829e4b09e7e3b85ed32","timestamp":"2014-04-19T22:30:21Z","content_type":null,"content_length":"72495","record_id":"<urn:uuid:210ff2f4-136d-4b22-a594-46b82d183935>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00479-ip-10-147-4-33.ec2.internal.warc.gz"} |
Symmetry 2 - p2 (Science U)
The tiling is symmetric with respect to 180-degree turns around any of the red points.
As with the first symmetry - p1 - we can use two translations to wallpaper the plane with this larger tile. Moreover, we will obtain the same tiling of the plane as the p2 animation above is constructing. Hence, the p2 tiling is also
symmetric under two translations. (Can you find the two translations?)
Kali denotes this symmetry by "2222". You can go to Kali now to experiment with symmetry p2. | {"url":"http://www.scienceu.com/geometry/articles/tiling/symmetry/p2.html","timestamp":"2014-04-21T00:05:44Z","content_type":null,"content_length":"15269","record_id":"<urn:uuid:f45182f7-ad3b-4351-8aa0-9cb0c15ebfc3>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00040-ip-10-147-4-33.ec2.internal.warc.gz"} |
force direction
June 11th 2011, 02:07 PM #1
i have a capacitor
and we insert a dielectric plate inside
the capacitance rises and the total energy of the capacitor drops
what force will act on the plate and in what direction
do you have some links on it
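One standard way to see it (an added sketch, assuming the capacitor is isolated so the charge $Q$ is fixed, consistent with the energy drop described above): with the plate inserted a depth $x$,

$$U(x) = \frac{Q^2}{2C(x)}, \qquad F_x = -\frac{dU}{dx} = \frac{Q^2}{2C^2(x)}\frac{dC}{dx} > 0,$$

since $C$ increases with $x$; the force is therefore directed so as to pull the dielectric further into the capacitor. (At constant voltage one finds $F_x = \frac{1}{2}V^2\,\frac{dC}{dx}$, again pointing inward.)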
June 16th 2011, 03:26 PM #2
Re: force direction
Did this problem come from a physics text? If so, which one?
| {"url":"http://mathhelpforum.com/advanced-math-topics/182855-force-direction.html","timestamp":"2014-04-23T19:49:29Z","content_type":null,"content_length":"31280","record_id":"<urn:uuid:d6d72f67-63f3-4839-9da1-dd3fb09eee3c>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00440-ip-10-147-4-33.ec2.internal.warc.gz"} |
Intra-channel nonlinearity compensation with scaled translational symmetry
Optics Express, Vol. 12, Issue 18, pp. 4282-4296 (2004)
It is proposed and demonstrated that two fiber spans in a scaled translational symmetry could cancel out their intra-channel nonlinear effects to a large extent without using optical phase
conjugation. Significant reduction of intra-channel nonlinear effects may be achieved in a long-distance transmission line consisting of multiple pairs of translationally symmetric spans. The results
have been derived analytically from the nonlinear Schrödinger equation and verified by numerical simulations using commercial software.
© 2004 Optical Society of America
1. Introduction
Group-velocity dispersion and optical nonlinearity are the major limiting factors in high-speed long-distance fiber-optic transmissions. Dispersion-compensating fibers (DCFs) have been developed to offset the dispersion effects of transmission fibers over a wide frequency band. The most advanced DCFs are even capable of slope-matching compensation, namely, compensating the dispersion and the dispersion slope of the transmission fiber simultaneously. By contrast, it proves more difficult to compensate the nonlinear effects of optical fibers because of the lack of materials with negative nonlinearity and high group-velocity dispersion simultaneously [5]. Optical phase conjugation (OPC) in the middle of a transmission line may compensate the nonlinear effects between fibers on the two sides of the phase conjugator [6], especially when the two sides are configured into a mirror [7,8] or translational [10,11] symmetry in a scaled sense, although the benefit of OPC may still be appreciable in the absence of such scaled symmetry [12]. However, wide-band optical phase conjugation exchanges the channel wavelengths, which complicates the design and operation of wavelength-division multiplexed (WDM) networks. Also, the performance and reliability of prototype conjugators are not yet sufficient for field deployment. Fortunately, it has been found that ordinary fibers could compensate each other for the intra-channel Kerr nonlinear effects without the help of OPC. The intra-channel nonlinear effects, namely, nonlinear interactions among optical pulses within the same wavelength channel, are the dominating nonlinearities in systems with high modulation speeds of 40 Gb/s and above, where the nonlinear interactions among different wavelength channels become less-limiting factors. As a result of the short pulse width and high data rate, optical pulses within one channel are quickly dispersed and overlap significantly, so they interact through the Kerr nonlinearity. In the past few years, intra-channel nonlinearities have been extensively investigated by several research groups [14-22]. A method has been identified for suppressing the intra-channel nonlinearity-induced jitters in pulse amplitude and timing, using Raman-pumped transmission lines manifesting a lossless or mirror-symmetric map of signal power [16,22]. However, there is a problem with such a mirror-symmetric power map: the loss of pump power makes it difficult to maintain a constant gain in a long transmission fiber. Consequently, the significant deviation of the signal power profile from a desired mirror-symmetric map degrades the result of intra-channel nonlinear compensation using mirror symmetry [23]. By contrast, we shall demonstrate here that two fiber spans in a scaled translational symmetry [11] could cancel out their intra-channel nonlinear effects to a large extent without resorting to OPC, and a significant reduction of intra-channel nonlinear effects may be achieved in a multi-span system with scaled translationally symmetric spans suitably arranged. The results shall be derived analytically from the nonlinear Schrödinger equation and verified by numerical simulations using commercial software.
2. Basics of nonlinear wave propagation in fibers
The eigenvalue solution of Maxwell’s equations in a single-mode fiber determines its transverse model function and propagation constant
) as a function of the optical frequency
]. When a fiber transmission line is heterogeneous along its length, the propagation constant could also depend on the longitudinal position
in the line, and may be denoted as
). The slow-varying envelope form,
, is often employed to represent an optical signal, which may be of a single time-division multiplexed channel or a superposition of multiple WDM channels. The evolution of the envelope
) in an optical fiber of length
is governed by the nonlinear Schrödinger equation (NLSE) [
∀ z ∈ [0,L], in the retarded reference frame with the origin z = 0 moving along the fiber at the signal group-velocity. In the above equation, α(z) is the loss/gain coefficient,
are the
-dependent dispersion coefficients of various orders,
) is the Kerr nonlinear coefficient of the fiber,
) is the impulse response of the Raman gain spectrum, and ⊗ denotes the convolution operation [
]. Note that all fiber parameters are allowed to be
-dependent, that is, they may vary along the length of the fiber. Because of the definition in terms of derivatives,
β [2]
may be called the second-order dispersion (often simply dispersion in short), while
β [3]
may be called the third-order dispersion, so on and so forth. The engineering community has used the term dispersion for the parameter
, namely, the derivative of the inverse of group-velocity with respect to the optical wavelength, and dispersion slope for
]. Although
β [2]
are directly proportional to each other, the relationship between
β [3]
is more complicated. To avoid confusion, this paper adopts the convention that dispersion and second-order dispersion are synonyms for the
β [2]
parameter, while dispersion slope and third-order dispersion refer to the same
β [3]
parameter, and similarly the slope of dispersion slope is the same thing as the fourth-order dispersion
β [4]
Had there been no nonlinearity, namely
) =
z, t
) ≡ 0, Eq. (
) would reduce to,
which could be solved analytically using, for example, the method of Fourier transform. Let F denote the linear operator of Fourier transform, a signal A(z, t) in the time domain can be represented
equivalently in the frequency domain by Ã(z,ω)=defF[A(z,t)]. Through a linear fiber, a signal Ã(z [1],ω) at z = z [1] would be transformed into Ã(z [2], ω) = H(z [1],z [2],ω)Ã(z [1],ω) at z [2] ≤ z
[1], where the transfer function H(z [1],z [2], ω) is defined as,
Namely, P(z [1],z [2]) is the concatenation of three linear operations: firstly Fourier transform is applied to convert a temporal signal into a frequency signal, which is then multiplied by the
transfer function H(z [1],z [2],ω), finally the resulted signal is inverse Fourier transformed back into the time domain. In terms of the impulse response,
P(z [1],z [2]) may also be represented as,
That is, the action of P(
z [1]
z [2]
) on a time-dependent function is to convolve the function with the impulse response. All linear operators P(
z [1]
z [2]
) with
z [1]
z [2]
, also known as propagators, form a semigroup [
] for the linear evolution governed by Eq. (
However, the existence of nonlinear terms in Eq. (
) makes the equation much more difficult to solve. Fortunately, when the signal power is not very high so that the nonlinearity is weak and may be treated as perturbation, the output from a nonlinear
fiber line may be represented by a linearly dispersed version of the input, plus nonlinear distortions expanded in power series of the nonlinear coefficients [
27. E. E. Narimanov and P. Mitra, “The channel capacity of a fiber optics communication system: perturbation theory,” J. Lightwave Technol. 20, 530–537 (2002). [CrossRef]
]. In practical transmission lines, although the end-to-end response of a long link may be highly nonlinear due to the accumulation of nonlinearity through many fiber spans, the nonlinear
perturbation terms of higher orders than the first are usually negligibly small within each fiber span. Up to the first-order perturbation, the signal
z [2]
) as a result of nonlinear propagation of a signal
z [1]
) from
z [1]
z [2]
z [1]
, may be approximated using,
where A(z [2],t)≈A [0](z [2],t) amounts to the zeroth-order approximation which neglects the fiber nonlinearity completely, whereas the result of first-order approximation A(z [2],t)≈A [0](z [2],t) +
A [1](z [2], t) accounts in addition for the lowest-order nonlinear products integrated over the fiber length. The term A [1](·, t) is called the first-order perturbation because it is linearly
proportional to the nonlinear coefficients γ(·) and g(·,t).
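To make the propagation model above concrete, the following is a minimal split-step Fourier sketch of Eq. (1) (our illustration, not the paper's simulation code; it keeps only the second-order dispersion and the instantaneous Kerr term, neglecting Raman scattering and higher-order dispersion):

```python
import numpy as np

def split_step(A, dz, nz, beta2, gamma, alpha, dt):
    """Symmetrized split-step propagation of the scalar NLSE.

    A      : complex envelope samples A(0, t)
    dz, nz : step size [km] and number of steps
    beta2  : second-order dispersion [ps^2/km]
    gamma  : Kerr coefficient [1/(W km)]
    alpha  : power loss coefficient [1/km]
    dt     : sample spacing [ps]
    """
    w = 2 * np.pi * np.fft.fftfreq(A.size, d=dt)      # angular frequencies
    half = np.exp((1j * beta2 / 2 * w**2 - alpha / 2) * dz / 2)
    for _ in range(nz):
        A = np.fft.ifft(half * np.fft.fft(A))         # half linear step
        A *= np.exp(1j * gamma * np.abs(A)**2 * dz)   # full nonlinear step
        A = np.fft.ifft(half * np.fft.fft(A))         # half linear step
    return A
```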
3. Theory of intra-channel nonlinearity compensation using scaled translational symmetry
Within one wavelength channel, it is only necessary to consider the Kerr nonlinearity, while the Raman effect may be neglected. The translational symmetry [11] requires that the corresponding fiber segments have the same sign for the loss/gain coefficients but opposite second- and higher-order dispersions, which are naturally satisfied conditions in conventional fiber transmission systems, where, for example, a transmission fiber may be paired with a DCF as symmetric counterparts. The scaled translational symmetry further requires that the fiber parameters should be scaled in proportion and the signal amplitudes should be adjusted to satisfy [11],

$$\alpha'(z) = R\,\alpha(Rz), \quad \beta'^{[2]}(z) = -R\,\beta^{[2]}(Rz), \quad \beta'^{[3]}(z) = -R\,\beta^{[3]}(Rz), \quad \gamma'(z)\,|A'(z,t)|^{2} = R\,\gamma(Rz)\,|A(Rz,t)|^{2}, \qquad (7)$$

∀ z ∈ [0, L/R] and ∀ t ∈ (−∞, +∞), where α(z), β^{[2]}(z), β^{[3]}(z), and γ(z) denote the loss coefficient, second-order dispersion, third-order dispersion, and Kerr nonlinear coefficient respectively for one fiber stretching from z = 0 to z = L > 0, while the primed parameters are for the other fiber stretching from z = 0 to z = L/R, where R > 0 is the scaling ratio, and A(z,t) and A′(z,t) are the envelopes of optical amplitude in the two fiber segments respectively. Even though the effect of dispersion slope may be neglected within a single wavelength channel, the inclusion of the β^{[3]}-parameters in the scaling rules of Eq. (7) ensures that good dispersion and nonlinearity compensation is achieved for each wavelength channel across a wide optical band. When a pair of such fiber segments in scaled translational symmetry are cascaded, and the signal power levels are adjusted in accordance with Eq. (7), it may be analytically proved that both the timing jitter and the amplitude fluctuation due to intra-channel nonlinear interactions among overlapping pulses are compensated up to the first-order perturbation of fiber nonlinearity, namely, up to the linear terms of the nonlinear coefficient. Since the dispersive and nonlinear transmission response is invariant under the scaling of fiber parameters and signal amplitudes as in Eq. (7) [11], it is without loss of generality to consider two spans that are in translational symmetry with the ratio R = 1. The cascade of such two spans would constitute a transmission line stretching from z = 0 to z = 2L, with the fiber parameters satisfying,

$$\alpha(z+L) = \alpha(z), \quad \beta^{[2]}(z+L) = -\beta^{[2]}(z), \quad \beta^{[3]}(z+L) = -\beta^{[3]}(z), \quad \gamma(z+L)\,|A(z+L,t)|^{2} = \gamma(z)\,|A(z,t)|^{2}, \qquad (8)$$

∀ z ∈ [0,L] and ∀ t ∈ (−∞, +∞). The translational symmetry is illustrated in Fig. 1 with plots of signal power and accumulated dispersion along the propagation distance.

The amplitude envelope of a single channel may be represented by a sum of optical pulses, namely, A(z,t) = Σₖ uₖ(z,t), where uₖ(z,t) denotes the pulse in the k-th bit slot and centered at time t = kT, with T > 0 being the bit duration. The following NLSE describes the propagation and nonlinear interactions among the pulses,

$$\frac{\partial u_k}{\partial z} + \frac{\alpha(z)}{2}u_k + \sum_{n=2}^{+\infty}\frac{i^{n-1}\beta^{[n]}(z)}{n!}\frac{\partial^{n}u_k}{\partial t^{n}} = i\gamma(z)\sum_{m}\sum_{n} u_{k+m}\,u_{k+n}\,u^{*}_{k+m+n}, \qquad (9)$$

where the right-hand side keeps only those nonlinear products that satisfy the phase-matching condition. The nonlinear mixing terms with either m = 0 or n = 0 contribute to self-phase modulation and intra-channel cross-phase modulation (XPM), while the rest with both m ≠ 0 and n ≠ 0 are responsible for intra-channel four-wave mixing (FWM). It is assumed that all pulses are initially chirp-free, or they may be made so by a dispersion compensator, and when chirp-free the pulses uₖ(0,t), ∀ k ∈ ℤ, should all be real-valued. This includes the modulation schemes of conventional on-off keying as well as binary phase-shift keying, where the relative phases between adjacent pulses are either 0 or π. It is only slightly more general to allow the pulses being modified by arithmetically progressive phase shifts kφ₀, ∀ k ∈ ℤ, with φ₀ ∈ [0,2π), because Eq. (9) is invariant under the multiplication of phase factors exp(ikφ₀) to uₖ, ∀ k ∈ ℤ. The linear dependence of the phase shift kφ₀ on k is in fact equivalent to a readjustment of the frequency and phase of the optical carrier. The pulses may be return-to-zero (RZ) and nonreturn-to-zero (NRZ) modulated as well, for an NRZ signal train may be viewed the same as a stream of wide RZ pulses with the half-amplitude points (with respect to the peak amplitude) on the rising and falling edges separated by one bit duration.

Were there no nonlinearity in the fibers, the signal propagation would be fully described by the dispersive transfer function,

$$H(z_1,z_2,\omega) = \exp\!\left[-\frac{1}{2}\int_{z_1}^{z_2}\alpha(z)\,dz + \frac{i}{2}\,b_2(z_1,z_2)\,\omega^{2}\right], \qquad (10)$$

with z₁, z₂ ∈ [0, 2L], and,

$$b_2(z_1,z_2) \;\stackrel{\mathrm{def}}{=}\; \int_{z_1}^{z_2}\beta^{[2]}(z)\,dz, \qquad (11)$$

or equivalently the corresponding impulse response,

$$h(z_1,z_2,t) = \frac{1}{\sqrt{2\pi\,b_2(z_1,z_2)}}\exp\!\left[-\frac{1}{2}\int_{z_1}^{z_2}\alpha(z)\,dz - \frac{i\,t^{2}}{2\,b_2(z_1,z_2)}\right], \qquad (12)$$

which is calculated from F⁻¹[H(z₁,z₂,ω)] up to a constant phase factor. The impulse response defines a linear propagator P(z₁,z₂) as in Eq. (5). In reality, the signal evolution is complicated by the Kerr nonlinear effects. Nevertheless, the nonlinearity within each fiber span may be sufficiently weak to justify the application of the first-order perturbation theory: uₖ(z,t) ≈ vₖ(z,t) + vₖ′(z,t), where uₖ(z,t) ≈ vₖ(z,t) = P(0,z)uₖ(0,t) is the zeroth-order approximation which neglects the fiber nonlinearity completely, whereas the result of first-order perturbation uₖ(z,t) ≈ vₖ(z,t) + vₖ′(z,t) accounts in addition for the nonlinear products integrated over the fiber length. For the moment, it may be assumed that both fiber spans are fully dispersion- and loss-compensated to simplify the mathematics. It then follows from the translational symmetry of Eq. (8) that b₂(0,z+L) = −b₂(0,z), ∫₀^{z+L}α(s)ds = ∫₀^{z}α(s)ds, and γ(z+L) = γ(z), ∀ z ∈ [0,L], and vₖ(L,t) = P(0,L)uₖ(0,t) = uₖ(0,t), which is real-valued by assumption, ∀ k ∈ ℤ. It further follows that h(0,z+L,t) = h*(0,z,t) and h(z+L,2L,t) = h*(z,L,t), hence,

$$v_k(z+L,t) = P(0,z+L)\,u_k(0,t) = \big[P(0,z)\,u_k(0,t)\big]^{*} = v_k^{*}(z,t), \qquad (13)$$

∀ z ∈ [0,L]. Consequently, the pulses at z and z+L are complex conjugate, namely, vₖ(z+L,t) = vₖ*(z,t), ∀ k ∈ ℤ, ∀ z ∈ [0,L]. At the end z = 2L, a typical term of nonlinear mixing reads,

$$\int_{0}^{2L}\gamma(z)\,P(z,2L)\big[\,v_{k+m}(z,t)\,v_{k+n}(z,t)\,v^{*}_{k+m+n}(z,t)\,\big]\,dz, \qquad (14)$$

which is therefore real-valued. It follows immediately that the first-order nonlinear perturbation vₖ′(2L,t) is purely imaginary-valued, which is in quadrature phase with respect to the zeroth-order approximation vₖ(2L,t) = uₖ(0,t), ∀ k ∈ ℤ. When the span dispersion is not fully compensated, namely, b₂(0,L) ≠ 0, the input pulses to the first span at z = 0 should be pre-chirped by an amount of dispersion equal to −½b₂(0,L), so that the input pulses to the second span at z = L are pre-chirped by ½b₂(0,L) as a consequence. In other words, the input signals to the two spans should be oppositely chirped. Under such a condition, the equation vₖ(z+L,t) = vₖ*(z,t), ∀ z ∈ [0,L], ∀ k ∈ ℤ, is still valid, so are the above argument and the conclusion that vₖ and vₖ′ are real- and imaginary-valued respectively when brought chirp-free.

Mathematically, that vₖ and vₖ′ are in quadrature phase implies |vₖ + vₖ′|² = |vₖ|² + |vₖ′|², where |vₖ′|² is quadratic, or of the second order, in terms of the Kerr nonlinear coefficient, ∀ k ∈ ℤ. This fact has significant implications to the performance of a transmission line. Firstly, it avoids pulse amplitude fluctuations due to the in-phase beating between signal pulses and nonlinear products of intra-channel FWM, which could seriously degrade the signal quality if not controlled [15,16,21]. The quadrature-phased nonlinear products due to intra-channel FWM lead to the generation of "ghost" pulses in the "ZERO"-slots [14,18,19] and the addition of noise power to the "ONE"-bits. As second-order nonlinear perturbations, these effects are less detrimental. Secondly, it eliminates pulse timing jitter due to intra-channel XPM up to the first-order nonlinear perturbation. Using the moment method [15,16], the time of arrival for the center of the k-th pulse may be calculated as,

$$t_k = \frac{\displaystyle\int t\,\big|v_k(2L,t)+v_k'(2L,t)\big|^{2}\,dt}{\displaystyle\int \big|v_k(2L,t)+v_k'(2L,t)\big|^{2}\,dt} \approx \frac{\displaystyle\int t\,\big|v_k(2L,t)\big|^{2}\,dt}{\displaystyle\int \big|v_k(2L,t)\big|^{2}\,dt} = kT, \qquad (15)$$

which is clearly jitterless up to the first-order perturbation, ∀ k ∈ ℤ. In the above calculation, the |vₖ′|² terms are simply neglected as they represent second-order nonlinear perturbations. It may be noted that our mathematical formulation and derivation are straightforwardly applicable to transmission lines with scaled mirror symmetry for compensating intra-channel nonlinear effects without using OPC, and provide a theoretical framework of intra-channel nonlinearity that is more general than previous discussions [14-22]. No matter which type is the scaled symmetry, the essence of intra-channel nonlinear compensation is to annihilate the in-phase components of the nonlinear mixing terms with respect to the unperturbed signals. How well the nonlinear effects are suppressed in a fiber transmission line depends largely upon how cleanly the in-phase nonlinear components are removed.
4. Optimal setups of fiber-optic transmission lines
A transmission fiber, either standard single-mode fiber (SMF) or non-zero dispersion-shifted fiber (NZDSF), and its corresponding slope-matching DCF [
] are a perfect pair for compensating intra-channel nonlinearities, as their dispersions and slopes of dispersion satisfy the scaling rules in Eq. (
) perfectly, and the signal amplitudes may be easily adjusted to fulfil the corresponding scaling rule. The so-called reverse-dispersion fibers (RDFs) [
], as a special type of DCFs, may be suitably cabled into the transmission line and contribute to the transmission distance, since the absolute dispersion value and loss coefficient of RDFs are both
comparable to those of the conventional transmission fiber. Only the smaller modal area requires a lower level of signal power for an RDF to compensate the nonlinearity of a conventional transmission
fiber. Otherwise a “one-for-many” compensation scheme may be employed, where the signal power may be slightly adjusted for an RDF to compensate the nonlinearity of multiple conventional transmission
fibers [
11. H. Wei and D. V. Plant, “Simultaneous nonlinearity suppression and wide-band dispersion compensation using optical phase conjugation,” Opt. Express 12, no. 9, 1938–1958 (2004), http://
www.opticsexpress.org/abstract.cfm?URI=OPEX-12-9-1938. [CrossRef] [PubMed]
]. There is usually no power repeater between the conventional transmission fiber and the cabled RDF within one span, so that the signal power decreases monotonically in each fiber span, as shown in
Fig. 1
. Note that one fiber span has a conventional transmission fiber followed by an RDF, while the other span has an RDF followed by a conventional transmission fiber, in accordance with the scaling
rules in Eq. (
) for non-linearity compensation. Alternatively, if distributive Raman amplification [
31. M. Vasilyev, B. Szalabofka, S. Tsuda, J. M. Grochocinski, and A. F. Evans, “Reduction of Raman MPI and noise figure in dispersion-managed fiber,” Electron. Lett. 38, no. 6, 271–272 (2002).
34. C. Rasmussen, T. Fjelde, J. Bennike, F. Liu, S. Dey, B. Mikkelsen, P. Mamyshev, P. Serbe, P. van der Wagt, Y. Akasaka, D. Harris, D. Gapontsev, V. Ivshin, and P. Reeves-Hall, “DWDM 40G
transmission over trans-Pacific distance (10,000 km) using CSRZ-DPSK, enhanced FEC and all-Raman amplified 100 km UltraWave fiber spans,” OFC 2003, paper PD18.
], especially backward Raman pumping, is used to repeat the signal power, then one span should have the conventional transmission fiber Raman pumped in accordance with the RDF being Raman pumped in
the other span. The signal power variation in each span may no longer be monotonic, but the power profiles in two compensating spans should still be similar and obey the scaling rules of Eq. (
), especially in portions of fibers that experience high signal power.
For DCFs having absolute dispersion values much higher than the transmission fiber, it is suitable to coil the DCF into a lumped dispersion-compensating module (DCM) and integrate the module with a
multi-stage optical amplifier at each repeater site. Two fiber spans in scaled translational symmetry for intra-channel nonlinearity compensation should have oppositely ordered transmission fibers
and DCFs. As shown in
Fig. 2
, one span has a piece of transmission fiber from A to B, in which the signal power decreases exponentially, and an optical repeater at the end, in which one stage of a multi-stage optical amplifier
boosts the signal power up to a suitable level and feeds the signal into a lumped DCM, where the signal power also decreases exponentially along the length of the DCF from B to C, finally the signal
power is boosted by another stage of the optical amplifier. The other span has the same transmission fiber and the same DCM, with the signal power in the DCF from C to D tracing the same decreasing
curve. However, this span has the DCM placed before the transmission fiber. Ironically, the efforts of improving the so-called figure-of-merit [
] by DCF manufacturers have already rendered the loss coefficients of DCFs too low to comply with the scaling rules of Eq. (
). To benefit from nonlinearity compensation enabled by scaled translational symmetries, DCFs, at least parts of them carrying high signal power, may be intentionally made more lossy during
manufacturing or by means of special packaging to introduce bending losses. As illustrated in
Fig. 2
, the DCFs from B to C and from C to D are arranged in scaled translational symmetry to the transmission fibers from D to E and from A to B respectively, such that the transmission fiber from A to B
is compensated by the DCF from C to D, and the DCF from B to C compensates the transmission fiber from D to E, for the most detrimental effects of jittering in pulse amplitude and timing due to
intra-channel FWM and XPM. In practice, the DCMs from B to D and the multi-stage optical amplifiers may be integrated into one signal repeater, and the same super-span from A to E may be repeated
many times to reach a long-distance, with the resulting transmission line enjoying the effective suppression of intra-channel nonlinear impairments. In case distributive Raman pumping in the
transmission fibers [
31, 34
] is employed to repeat the signal power, the DCFs may also be Raman pumped [
] or erbium-doped for distributed amplification [
] to have similar (scaled) power profiles to those in the transmission fibers for optimal nonlinearity compensation.
It should be noted that in regions of fibers carrying lower optical power, the scaling rules of fiber parameters in Eq. (
) may be relaxed without sacrificing the performance of nonlinearity compensation, both for systems using cabled DCFs into the transmission lines and for systems using lumped DCMs at the repeater
sites. Such relaxation may be done for practical convenience, or to control the accumulated dispersion in a span to a desired value, as well as to reduce the DCF loss so as to reduce the penalty due to
optical noise. As an example and a potentially important method in its own right, a DCM compensating the dispersion and nonlinearity of transmission fibers may be so packaged that the first part of
DCF experiencing a high level of signal power may have a higher loss coefficient satisfying the scaling rule in Eq. (
), whereas the second part of the DCF may ignore the scaling rule and become less lossy, such that the signal power at the end of the DCM does not become so low as to be significantly impaired by the amplifier
noise. In fact, the low-loss part of the DCM may even use optical filters other than DCFs, such as fiber Bragg gratings and photonic integrated circuits. This method of packaging DCMs achieves the
capability of nonlinearity compensation and good signal-to-noise ratio performance simultaneously. For instance, it takes 10 km of DCF with dispersion D = -80 ps/nm/km to compensate 100 km of NZDSF with dispersion D = 8 ps/nm/km and loss α = 0.2 dB/km. The first 4 km of the DCF may be made highly lossy by a special treatment in manufacturing or packaging, with a loss coefficient α = 2 dB/km, to form a scaled translational symmetry with respect to the first 40 km of NZDSF for optimal nonlinearity compensation. However, the remaining 6 km of DCF may ignore the scaling rules and have a much lower nominal loss α = 0.6 dB/km. The total loss is reduced by 8.4 dB as compared to a DCM that complies strictly with the scaling rules throughout the length of the DCF. Another important parameter of DCFs is the
effective modal area, or more directly the nonlinear coefficient. Traditional designs of DCFs have always strived to enlarge the modal area so as to reduce the nonlinear effects of DCFs. However, for
DCFs used in our method of nonlinearity compensation, there exists an optimal range of modal area which should be neither too large nor too small. According to the scaling rules in Eq. (
), a DCF with a large modal area may require too much signal power to generate sufficient nonlinearity to compensate the nonlinear effects of a transmission fiber, when optical amplifiers may have
difficulty producing that much signal power. On the other hand, when the effective modal area is too small, the scaling rules of Eq. (
) dictate a reduced power level for the optical signal in the DCF, which may be more seriously degraded by optical noise, such as the amplified-spontaneous-emission noise from an amplifier at the end
of the DCF.
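To make the loss-budget trade-off above concrete, the following is a minimal sketch in Python (not from the original paper) that checks the dispersion match and compares the total DCM loss of a fully scaled design against the partially scaled packaging described above; the variable names and the helper arithmetic are illustrative assumptions based only on the numbers quoted in the text.

```python
# Sketch of the DCM packaging trade-off discussed above (illustrative only).
# Assumed numbers are taken from the worked example in the text.

L_tx, D_tx, a_tx = 100.0, 8.0, 0.2      # NZDSF: length [km], dispersion [ps/nm/km], loss [dB/km]
L_dcf, D_dcf = 10.0, -80.0              # DCF: length [km], dispersion [ps/nm/km]

# Dispersion match: the DCF must cancel the accumulated dispersion of the NZDSF.
assert abs(L_tx * D_tx + L_dcf * D_dcf) < 1e-9   # 800 - 800 = 0 ps/nm

# Scaling rule (lengths scaled by |D_tx / D_dcf| = 1/10): the loss coefficient
# of a strictly scaled DCF is 10x that of the NZDSF, i.e. 2 dB/km over all 10 km.
scale = abs(D_dcf / D_tx)                         # = 10
loss_strict = a_tx * scale * L_dcf                # 2 dB/km * 10 km = 20 dB

# Partially scaled packaging: only the first 4 km (high signal power) obey the
# scaling rule; the remaining 6 km use a lower nominal loss of 0.6 dB/km.
loss_partial = a_tx * scale * 4.0 + 0.6 * 6.0     # 8 + 3.6 = 11.6 dB

print(f"strict: {loss_strict} dB, partial: {loss_partial} dB, "
      f"saving: {loss_strict - loss_partial:.1f} dB")   # saving: 8.4 dB
```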
5. Simulation results and discussions
Numerical simulations using commercial software are carried out to support our theoretical analysis and verify the effectiveness of our method of suppressing intra-channel nonlinearity using scaled
translational symmetry. In one test system, as depicted in
Fig. 3
, the transmission line consists of 6 pairs of compensating fiber spans totaling a transmission distance of 1072.2 km. The first span in each pair has 50 km of SMF followed by 50 km of RDF and then an erbium-doped fiber amplifier (EDFA) with a gain of 15.74 dB; the second span has 39.35 km of RDF followed by 39.35 km of SMF and then an EDFA with a gain of 20 dB. The other test system consists of the same number of
spans with the same span lengths, which are constructed using the same fibers and EDFAs as the first system except that the second span in each span-pair has the 39.35-km SMF placed before the
39.35-km RDF, as shown in
Fig. 4
. The EDFA noise figure is 4 dB. The SMF has loss α = 0.2 dB/km, dispersion D = (16 + ΔD) ps/nm/km, dispersion slope S = 0.055 ps/nm^2/km, and effective modal area A_eff = 80 μm^2, while the RDF has α = 0.2 dB/km, D = -16 ps/nm/km, S = -0.055 ps/nm^2/km, and A_eff = 30 μm^2. Fiber-based pre- and post-dispersion compensators equalize 11/24 and 13/24, respectively, of the total dispersion accumulated in the transmission line. Both the SMF and the RDF have the same nonlinear index of silica, n_2 = 2.6 × 10^-20 m^2/W. The transmitter has four 40 Gb/s WDM channels. The center frequency is 193.1 THz, and the channel spacing is 200 GHz. All four channels are co-polarized and RZ-modulated with 33% duty cycle and
peak power of 15 mW for the RZ pulses. The MUX/DEMUX filters are 7th-order Bessel with a 3-dB bandwidth of 80 GHz. The electrical filter is a third-order Bessel with a 3-dB bandwidth of 28 GHz. The results of four-channel WDM transmissions have been compared with those of single-channel transmissions, with no clearly visible difference observed, which indicates the dominance of intra-channel nonlinearity and the negligibility of inter-channel nonlinear effects. Several trials with various values of ΔD have been simulated for each test system. The following figures present the eye diagrams of optical pulses after wavelength DEMUX, in order to signify the nonlinear deformation (timing and amplitude
jitters) of optical pulses and the generation of ghost pulses.
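As a cross-check of the span maps above, here is a small Python sketch (not part of the original simulations, which used commercial software) that accumulates dispersion over one span pair of each test system for a given dispersion mismatch ΔD; the layout tuples are assumptions matching the span description above.

```python
# Accumulated dispersion over one span pair of the test systems (illustrative).
# Each span is a list of (length_km, dispersion_ps_per_nm_km) segments.

def span_pair(delta_d: float):
    d_smf = 16.0 + delta_d       # SMF dispersion with mismatch ΔD
    d_rdf = -16.0                # RDF dispersion
    span1 = [(50.0, d_smf), (50.0, d_rdf)]        # SMF followed by RDF
    span2 = [(39.35, d_rdf), (39.35, d_smf)]      # RDF followed by SMF
    return span1, span2

def accumulated(span):
    return sum(length * disp for length, disp in span)   # ps/nm

for delta_d in (0.0, 0.2):
    s1, s2 = span_pair(delta_d)
    print(f"ΔD = {delta_d}: span1 -> {accumulated(s1):.2f} ps/nm, "
          f"span2 -> {accumulated(s2):.2f} ps/nm")
# ΔD = 0:   both spans accumulate 0 ps/nm (perfect compensation)
# ΔD = 0.2: residual dispersions of 10 and 7.87 ps/nm, as quoted in the text
```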
Fig. 5
shows the received optical pulses for ΔD = 0 for the two test systems, with the amplifier noise turned off to signify the nonlinear impairments (right diagram) and the effectiveness of nonlinearity compensation (left diagram). Clearly
shown is the suppression of nonlinear impairments by using scaled translational symmetry, and especially visible is the reduction of pulse timing jitter, as seen from the thickness of the rising and
falling edges as well as the timing of pulse peaks. In both eye diagrams, there are optical pulses with small but discernible amplitudes above the floor of zero signal power, which could be
attributed to ghost-pulse generation [
14, 18, 19
] due to the uncompensated and quadrature-phased components of intra-channel FWM. When the amplifier noise is turned back on, as shown in
Fig. 6
, the received signals become slightly noisier, but the suppression of nonlinear distortions is still remarkable when there is scaled translational symmetry. Then ΔD = 0.2 ps/nm/km was set for the two test systems of Figs. 3 and 4,
respectively, in order to showcase that a mirror-symmetric ordering of pairwise translationally symmetric fiber spans is fairly tolerant to the residual dispersions in individual fiber spans. In this
setting, each fiber span has 10 or 7.87 ps/nm worth of residual dispersion, and the accumulated dispersion totals 107.22 ps/nm for the entire transmission line. Importantly, the pre- and post-dispersion compensators are set to compensate 11/24 and 13/24, respectively, of the total dispersion, ensuring at least approximately the complex conjugation between the input signals to each pair
of spans in scaled translational symmetry. The amplifier noise is also turned on. The transmission results, as shown in
Fig. 7
, are very similar to those with ΔD = 0, which demonstrates the dispersion tolerance nicely. In a better optimized design to tolerate a higher dispersion mismatch |ΔD|, either SMFs or RDFs may be slightly elongated or shortened in accordance with the value of ΔD, such that the same residual dispersion is accumulated in all spans. As an example, ΔD is set to 0.6 ps/nm/km and each 39.35-km SMF is elongated by 0.385 km, so that all spans have the same residual dispersion of 30 ps/nm, and the whole transmission line accumulates 360 ps/nm worth of dispersion. The pre- and post-dispersion compensators equalize 360 × 11/24 = 165 and 360 × 13/24 = 195 ps/nm worth of dispersion, respectively. The gain of each 15.74-dB EDFA is increased to 15.817 dB in correspondence to the elongation of the 39.35-km SMF. The amplifier noise is still on. The transmission results are shown in
Fig. 8.
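The 0.385-km elongation quoted above can be reproduced with the short Python sketch below (an illustrative check, not from the paper), which solves for the extra SMF length that equalizes the residual dispersion of both span types under the stated assumptions (SMF dispersion 16 + ΔD, RDF dispersion -16 ps/nm/km).

```python
# Solve for the SMF elongation that gives every span the same residual dispersion.
delta_d = 0.6                      # dispersion mismatch ΔD [ps/nm/km]
d_smf, d_rdf = 16.0 + delta_d, -16.0

# First span type: 50 km SMF + 50 km RDF -> residual 50 * ΔD = 30 ps/nm.
target = 50.0 * d_smf + 50.0 * d_rdf              # 30 ps/nm

# Second span type: 39.35 km RDF + (39.35 + dL) km SMF; choose dL so that the
# residual matches the first span type: (39.35 + dL) * d_smf + 39.35 * d_rdf = target.
dL = (target - 39.35 * (d_smf + d_rdf)) / d_smf
print(f"residual per span: {target:.1f} ps/nm, elongation dL = {dL:.3f} km")
# -> residual per span: 30.0 ps/nm, elongation dL = 0.385 km
```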
6. Conclusion
In conclusion, we have demonstrated through analytical derivation and numerical simulations that two fiber spans in a scaled translational symmetry can cancel out their intra-channel nonlinear effects to a large extent, and a significant reduction of intra-channel nonlinear effects may be achieved in a long-distance transmission line consisting of multiple pairs of translationally
symmetric spans. We have also discussed a method of packaging dispersion-compensating fibers to optimally compensate the nonlinear effects of transmission fibers and to minimize the signal power loss
at the same time.
This work was supported by the Natural Sciences and Engineering Research Council (NSERC) and industrial partners, through the Agile All-Photonic Networks (AAPN) Research Network.
References and links
1. A. H. Gnauck and R. M. Jopson, “Dispersion compensation for optical fiber systems,” in Optical Fiber Telecommunications III A, I. P. Kaminow and T. L. Koch, eds. (Academic Press, San Diego, 1997).
2. F. Forghieri, R. W. Tkach, and A. R. Chraplyvy, “Fiber nonlinearities and their impact on transmission systems,” in Optical Fiber Telecommunications III A, I. P. Kaminow and T. L. Koch, eds.
(Academic Press, San Diego, 1997).
3. V. Srikant, “Broadband dispersion and dispersion slope compensation in high bit rate and ultra long haul systems,” OFC2001, paper TuH1.
4. M. J. Li, “Recent progress in fiber dispersion compensators,” European Conference on Optical Communication 2001, paper Th.M.1.1.
5. C. Pare, A. Villeneuve, and P.-A. Belanger, “Compensating for dispersion and the nonlinear Kerr effect without phase conjugation,” Opt. Lett. 21, 459–461 (1996). [CrossRef] [PubMed]
6. D. M. Pepper and A. Yariv, “Compensation for phase distortions in nonlinear media by phase conjugation,” Opt. Lett. 5, 59–60 (1980). [CrossRef] [PubMed]
7. S. Watanabe and M. Shirasaki, “Exact compensation for both chromatic dispersion and Kerr effect in a transmission fiber using optical phase conjugation,” J. Lightwave Technol. 14, 243–248 (1996).
8. I. Brener, B. Mikkelsen, K. Rottwitt, W. Burkett, G. Raybon, J. B. Stark, K. Parameswaran, M. H. Chou, M. M. Fejer, E. E. Chaban, R. Harel, D. L. Philen, and S. Kosinski, “Cancellation of all
Kerr nonlinearities in long fiber spans using a LiNbO3 phase conjugator and Raman amplification,” OFC 2000, paper PD33.
9. H. Wei and D. V. Plant, “Fundamental equations of nonlinear fiber optics,” in Optical Modeling and Performance Predictions, M. A. Kahan, ed., Proc. SPIE 5178, 255–266 (2003).
10. M. E. Marhic, N. Kagi, T.-K. Chiang, and L. G. Kazovsky, “Cancellation of third-order nonlinear effects in amplified fiber links by dispersion compensation, phase conjugation, and alternating
dispersion,” Opt. Lett. 20, no. 8, 863–865 (1995). [CrossRef] [PubMed]
11. H. Wei and D. V. Plant, “Simultaneous nonlinearity suppression and wide-band dispersion compensation using optical phase conjugation,” Opt. Express 12, no. 9, 1938–1958 (2004), http://
www.opticsexpress.org/abstract.cfm?URI=OPEX-12-9-1938. [CrossRef] [PubMed]
12. A. Chowdhury and R.-J. Essiambre, “Optical phase conjugation and pseudolinear transmission,” Opt. Lett. 29, no. 10, 1105–1107 (2004). [CrossRef] [PubMed]
13. R.-J. Essiambre, G. Raybon, and B. Mikkelson, “Pseudo-linear transmission of high-speed TDM signals: 40 and 160 Gb/s,” in Optical Fiber Telecommunications IVB: Systems and Impairments, I. P.
Kaminow and T. Li, eds. (Academic Press, San Diego, 2002).
14. P. V. Mamyshev and N. A. Mamysheva, “Pulse-overlapped dispersion-managed data transmission and intrachannel four-wave mixing,” Opt. Lett. 24, 1454–1456 (1999). [CrossRef]
15. A. Mecozzi, C. B. Clausen, and M. Shtaif, “Analysis of intrachannel nonlinear effects in highly dispersed optical pulse transmission,” IEEE Photon. Technol. Lett. 12, 392–394 (2000). [CrossRef]
16. A. Mecozzi, C. B. Clausen, M. Shtaif, S.-G. Park, and A. H. Gnauck, “Cancellation of timing and amplitude jitter in symmetric links using highly dispersed pulses,” IEEE Photon. Technol. Lett. 13,
445–447 (2001). [CrossRef]
17. J. Martensson, A. Berntson, M. Westlund, A. Danielsson, P. Johannisson, D. Anderson, and M. Lisak, “Timing jitter owing to intrachannel pulse interactions in dispersion-managed transmission
systems,” Opt. Lett. 26, 55–57 (2001). [CrossRef]
18. P. Johannisson, D. Anderson, A. Berntson, and J. Martensson, “Generation and dynamics of ghost pulses in strongly dispersion-managed fiber-optic communication systems,” Opt. Lett. 26, 1227–1229
(2001). [CrossRef]
19. M. J. Ablowitz and T. Hirooka, “Resonant nonlinear intrachannel interactions in strongly dispersion-managed transmission systems,” Opt. Lett. 25, 1750–1752 (2000). [CrossRef]
20. M. J. Ablowitz and T. Hirooka, “Intrachannel pulse interactions in dispersion-managed transmission systems: timing shifts,” Opt. Lett. 26, 1846–1848 (2001). [CrossRef]
21. M. J. Ablowitz and T. Hirooka, “Intrachannel pulse interactions in dispersion-managed transmission systems: energy transfer,” Opt. Lett. 27, 203–205 (2002). [CrossRef]
22. T. Hirooka and M. J. Ablowitz, “Suppression of intrachannel dispersion-managed pulse interactions by distributed amplification,” IEEE Photon. Technol. Lett. 14, 316–318 (2002). [CrossRef]
23. R. Hainberger, T. Hoshita, T. Terahara, and H. Onaka, “Comparison of span configurations of Raman-amplified dispersion-managed fibers,” IEEE Photon. Technol. Lett. 14, 471–473 (2002). [CrossRef]
24. J. A. Buck, Fundamentals of Optical Fibers (Wiley, New York, 1995), Chapter 4.
25. G. P. Agrawal, Nonlinear Fiber Optics, 2nd ed. (Academic Press, San Diego, 1995), Chapter 2.
26. K.-J. Engel and R. Nagel, One-Parameter Semigroups for Linear Evolution Equations (Springer-Verlag, New York, 2000).
27. E. E. Narimanov and P. Mitra, “The channel capacity of a fiber optics communication system: perturbation theory,” J. Lightwave Technol. 20, 530–537 (2002). [CrossRef]
28. S. N. Knudsen and T. Veng, “Large effective area dispersion compensating fiber for cabled compensation of standard single mode fiber,” OFC 2000, paper TuG5.
29. K. Mukasa, H. Moridaira, T. Yagi, and K. Kokura, “New type of dispersion management transmission line with MDFSD for long-haul 40 Gb/s transmission,” OFC 2002, paper ThGG2.
30. K. Rottwitt and A. J. Stentz, “Raman amplification in lightwave communication systems,” in Optical Fiber Telecommunications IVA: Components, I. P. Kaminow and T. Li, eds. (Academic Press, San
Diego, 2002).
31. M. Vasilyev, B. Szalabofka, S. Tsuda, J. M. Grochocinski, and A. F. Evans, “Reduction of Raman MPI and noise figure in dispersion-managed fiber,” Electron. Lett. 38, no. 6, 271–272 (2002).
32. J.-C. Bouteiller, K. Brar, and C. Headley, “Quasi-constant signal power transmission,” European Conference on Optical Communication 2002, paper S3.04.
33. M. Vasilyev, “Raman-assisted transmission: toward ideal distributed amplification,” OFC 2003, paper WB1.
34. C. Rasmussen, T. Fjelde, J. Bennike, F. Liu, S. Dey, B. Mikkelsen, P. Mamyshev, P. Serbe, P. van der Wagt, Y. Akasaka, D. Harris, D. Gapontsev, V. Ivshin, and P. Reeves-Hall, “DWDM 40G
transmission over trans-Pacific distance (10,000 km) using CSRZ-DPSK, enhanced FEC and all-Raman amplified 100 km UltraWave fiber spans,” OFC 2003, paper PD18.
35. L. Gruner-Nielsen, Y. Qian, B. Palsdottir, P. B. Gaarde, S. Dyrbol, T. Veng, and Y. Qian, “Module for simultaneous C + L-band dispersion compensation and Raman amplification,” OFC 2002, paper
36. T. Miyamoto, T. Tsuzaki, T. Okuno, M. Kakui, M. Hirano, M. Onishi, and M. Shigematsu, “Raman amplification over 100 nm-bandwidth with dispersion and dispersion slope compensation for conventional
single mode fiber,” OFC 2002, paper TuJ7.
37. E. Desurvire, Erbium-Doped Fiber Amplifiers: Principles and Applications (John Wiley & Sons, New York, 1994).
38. A. Striegler, A. Wietfeld, and B. Schmauss, “Fiber based compensation of IXPM induced timing jitter,” OFC 2004, paper MF72.
OCIS Codes
(060.2360) Fiber optics and optical communications : Fiber optics links and subsystems
(060.4370) Fiber optics and optical communications : Nonlinear optics, fibers
ToC Category:
Research Papers
Original Manuscript: August 16, 2004
Revised Manuscript: August 28, 2004
Published: September 6, 2004
Haiqing Wei and David Plant, "Intra-channel nonlinearity compensation with scaled translational symmetry," Opt. Express 12, 4282-4296 (2004)
Abstract and Applied Analysis
Volume 2013 (2013), Article ID 169519, 16 pages
Research Article
Controller Design for a Second-Order Plant with Uncertain Parameters and Disturbance: Application to a DC Motor
^1Facultad de Ingeniería y Arquitectura, Universidad Católica de Manizales, Cr 23 No 60-63, Manizales 170002, Colombia
^2Departamento de Ingeniería Eléctrica, Electrónica y Computación, Facultad de Ingeniería y Arquitectura, Universidad Nacional de Colombia, Sede Manizales, Percepción y Control Inteligente, Bloque Q,
Campus La Nubia, Manizales 170003, Colombia
Received 1 August 2012; Revised 20 December 2012; Accepted 28 December 2012
Academic Editor: Gani Stamov
Copyright © 2013 Alejandro Rincón et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in
any medium, provided the original work is properly cited.
This paper presents the controller design for a second-order plant with unknown time-varying behavior in the parameters and in the disturbance. The state adaptive backstepping technique is used as the control framework, but important modifications are introduced. The controller design achieves two main benefits: upper or lower bounds of the time-varying parameters of the model are not required, and the formulation of the control and update laws and the stability analysis are simpler than in closely related works that use the Nussbaum gain method. The controller has been developed and tested for DC motor speed control, and it has been implemented in a Rapid Control Prototyping system based on Digital Signal Processing for a dSPACE platform. The motor speed converges to a predefined desired output signal.
1. Introduction
An important challenge in controller design is the unknown time-varying behavior of the plant parameters (cf. [1–3]). The state adaptive backstepping (SAB) of [4] is an important framework for designing this kind of controller (see [5–9]). In adaptive controllers that are based on the SAB and do not use the Nussbaum gain method, the transient behavior of the tracking error is upper bounded by an unknown positive constant, as can be noticed from [5, 10–13]. Such a constant bound is a function of (i) constant upper bounds of the varying bounded plant model parameters, (ii) constant plant model parameters, (iii) user-defined parameters of the update laws, and (iv) the initial values of the plant model states. In addition, it does not involve integral terms that depend on Nussbaum functions. Therefore, the constant upper bound of the tracking error can be made small by choosing large values of the update law gains, which ensures that the tracking error takes on small values. To handle the effect of the unknown varying behavior of plant model parameters, the robustness and Nussbaum gain techniques are usually combined with SAB control schemes.
The robust-SAB control schemes involve a control law with a compensation term and a modification of the update law, for example, the projection-type modification (see [14, 15]) or the σ modification (see [16–19]). The main drawback of this technique is the following: (D1) upper or lower bounds of the plant coefficients are required to be known to achieve the asymptotic convergence of the tracking error to a residual set of user-defined size.
On the other hand, neural networks make it possible to represent part of the nonlinear behavior of real systems and can take into account the time-varying behavior. In the case of completely unknown systems, they represent the whole set of plant model terms. Usually, the use of neural networks leads to an approximation error, which is nonlinear and possibly time varying. The effect of this term has been tackled by means of robust adaptive control schemes based on the Lyapunov or Lyapunov-like function; see [15, 19]. Those adaptive controllers exhibit some drawbacks, as shown in the following. In [19], a nonlinear system in control-affine strict-feedback form is considered, and a neural network SAB control scheme is designed. The unknown state-dependent terms are represented by RBF neural networks, with unknown coefficients and known basis functions. The following assumptions are made: (i) the mentioned coefficients are unknown and upper bounded by known positive constants, and (ii) the identification error is upper bounded by an unknown constant. The update laws make it possible to tackle the effect of the unknown coefficients of the RBF representation. In that paper, the stability analysis indicates that the Lyapunov function converges to a residual set whose size depends on both the upper bound of the identification error and the upper bound of the coefficients of the RBF representation. Therefore, the tracking error converges to a residual set whose size depends on those bounds. Hence, those upper bounds must be known to obtain the convergence of the tracking error to a residual set of user-defined size. In [15], a nonlinear system in control-affine state-space form is considered, and a neural-network-based output adaptive backstepping (OAB) scheme is considered. The unknown nonlinear state-dependent terms are represented by neural networks with unknown coefficients and known basis functions. The following assumptions are made: (i) the coefficients of the representation are unknown and constant, and (ii) the identification error term is upper bounded by a constant. The projection-type update laws make it possible to tackle the effect of the unknown coefficients of the representation. In that paper, the stability analysis indicates that the Lyapunov function converges to a residual set whose size depends on the upper bound of the identification error term. Therefore, the tracking error converges to a residual set whose size also depends on the identification error. Hence, the main drawback of the mentioned adaptive control schemes is the following: (D2) the upper bound of the identification error term must be known to achieve the convergence of the tracking error to a residual set of user-defined size.
As can be noticed from [20–24], Nussbaum-SAB control schemes are usually based on the schemes in [25–27], which are in turn based on the Universal Stabilizer originally presented in [28] and discussed in [29, pages 335–338]. As can be concluded from [20, 21], a proper design of the Nussbaum-SAB control scheme overcomes the main drawback of the aforementioned robust-SAB control schemes, as upper or lower bounds of the plant model parameters are not required to be known, and the convergence of the tracking error to a residual set of user-defined size is guaranteed. Other recent Nussbaum-SAB control schemes indicate that the Nussbaum gain technique exhibits the following drawback: (D3) the upper bound of the transient behavior of the Lyapunov function depends on integral terms that involve Nussbaum functions and have the time as the upper limit of the integral operation (see [24, page 477], [20, page 1791], [6, page 856], and [30, page 4639]). Therefore, the upper bound of the transient behavior of the tracking error depends on such integral terms, so that the tracking error may take on overly large values. This is in agreement with the violent behavior mentioned in [29, page 337]. In addition, some of the control schemes that use this technique have the following drawbacks: (i) some upper or lower bounds of the plant coefficients are required to be known in order to guarantee the asymptotic convergence of the tracking error to a residual set of user-defined size, as in [25, 30], and (ii) the control or update laws involve signum-type signals, as in [24, 31].
SAB control schemes have been developed and applied to motors, and some of them incorporate the Nussbaum and robustness techniques. In [5], an adaptive controller is designed for a linear motor drive. The mathematical model used to describe the motor is in controllable form, and the friction coefficients are assumed constant and unknown. Nevertheless, the upper bound of the disturbance term and the upper and lower bounds of the friction coefficients are required to be known. In [6], an adaptive controller based on the Nussbaum gain technique and a modification of the update law is designed for a class of SISO systems and applied to a DC motor turning a robotic load. Each differential equation of the SISO system involves an additive and unknown disturbance-like term, which is upper bounded by a known nonnegative function with unknown coefficients. Nevertheless, the upper and lower bounds of the plant model parameters must be known to guarantee the convergence of the tracking error to a residual set of user-defined size, and the upper bound of the transient behavior of the tracking error depends on integral terms that involve the Nussbaum functions. The latter drawback is common in Nussbaum adaptive control schemes. In [7], an adaptive controller is designed for the position control of a motion control stage using a linear ultrasonic motor. The friction force includes the static friction, Coulomb friction, and viscous friction. The idea is to control the x-axis, y-axis, and z-axis separately. A lumped uncertainty term results from the unknown parameter variations and external force disturbances. The lumped uncertainty is represented by means of an adaptive fuzzy neural network. The identification error is defined as the difference between the real value of the lumped parameter and the representation based on the Sugeno adaptive fuzzy neural network. Such identification error is handled by means of an updated parameter provided by an additional update law and an input compensator. The Lyapunov function includes a quadratic form depending on the difference between the identification error and its updated value. Nevertheless, the time derivative of the Lyapunov function neglects the time derivative of the identification error; see page 681. This amounts to assuming that the identification error is constant or zero in the Lyapunov function. In [8], a linear induction motor is considered and the goal is to control the mover position. The friction force and the unknown time-varying model parameters are lumped into an unknown term whose upper bound is constant and unknown. The lumped unknown term is represented by a radial basis function network (RBFN), estimated in real time. The reconstructed error is defined as the difference between the lumped term and the representation based on the RBFN. The effect of the reconstructed error is tackled by means of an updated parameter provided by an additional updating law. The Lyapunov function involves a quadratic form for the difference between the reconstructed error and its updated parameter. The drawback is that the time derivative of the reconstructed error is neglected in the time derivative of the Lyapunov function, which is not realistic and could degrade the robustness of the controller. In summary, the main drawbacks of the above control schemes are the following: (D4) upper and lower bounds of plant model parameters and lumped plant model terms are required to be known; (D5) the upper bound of the transient behavior of the tracking error depends on integral terms that involve Nussbaum functions; (D6) the time derivative of the identification error is neglected in the time derivative of the Lyapunov function.
In the present work, an adaptive controller is developed for a permanent magnet DC motor. The state adaptive backstepping (SAB) of [4] is used as the basic framework for the controller design. In order to handle the unknown varying model parameters, significant modifications are introduced in the approach, on the basis of the modifications appearing in [32]. The main modifications are as follows: (i) use a truncated version of the quadratic form that depends on the backstepping states, and (ii) develop a convergence analysis based on the truncated version of the quadratic form. Using the scheme proposed in this paper, the following benefits are obtained: (RC1) the resulting upper bound of the transient behavior of the tracking error is constant and does not depend on integral terms involving Nussbaum functions, so that the transient behavior of the tracking error can be rendered small by properly choosing the controller parameters; (RC2) none of the exact values of the plant model parameters are required to be known; (RC3) none of the upper bounds of the plant model parameters are required to be known; (RC4) the tracking error converges to a residual set whose size is user defined, despite the lack of knowledge of both the exact values and the upper bounds of the plant model parameters; (RC5) discontinuous signals are avoided in the control and update laws; (RC6) the time derivative of the Lyapunov function does not neglect the time derivative of any varying parameter.
The controller was applied to a permanent magnet DC motor whose voltage input is supplied by a buck power converter. With the aim of obtaining good agreement between the simulations and the experimental setup, the numerical simulation includes realistic characteristics such as internal resistances, discretization, and time delay. The controller was implemented on a digital platform. The control design procedure and the stability analysis indicate that the drawbacks (D1), (D2), (D3), (D4), (D5), and (D6) are overcome, as the benefits (RC1) to (RC6) of the control scheme in [32] are achieved in the present work. In addition, the bounded nature of all the closed-loop signals is guaranteed.
This paper is organized as follows. In Section 2 the plant model used to design the controller and the goal of the control are presented. In Section 3 the design of the controller is developed. In
Section 4 the bounded nature of the closed loop signals and the convergence of the tracking error are proven. In Section 5 numerical and experimental results are presented, and finally, Section 6 is
devoted to conclusions.
2. Plant Model and Control Goal
The linear model corresponding to a DC permanent magnet motor is given by

L_a (di_a/dt) = u − R_a i_a − k_e ω_m,
J (dω_m/dt) = k_t i_a − B ω_m − T_f − T_L.   (1)

The state variables are the armature current i_a and the motor speed ω_m. The control input is u (i.e., the capacitor voltage supplied by the buck converter) and the output of the system is y = ω_m. Here k_e [V/rad/s] is the voltage constant, L_a [mH] is the armature inductance, R_a [Ω] is the armature resistance, B [N·m/rad/s] is the viscous friction coefficient, J [kg·m^2] is the inertia moment, k_t [N·m/A] is the motor torque constant, T_f [N·m] is the friction torque, and T_L [N·m] is the load torque.
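As a quick sanity check of the model above, here is a minimal Python sketch (not from the paper) that integrates the two state equations with forward Euler; the numeric parameter values are illustrative assumptions, not the measured values of Table 1.

```python
# Forward-Euler simulation of the permanent magnet DC motor model (1).
# Parameter values below are illustrative assumptions only.

L_a, R_a = 2.2e-3, 1.1        # armature inductance [H], resistance [Ohm]
k_e, k_t = 0.07, 0.07         # voltage and torque constants
J, B = 1.0e-4, 1.0e-5         # inertia [kg*m^2], viscous friction [N*m/rad/s]
T_f, T_L = 0.01, 0.05         # friction and load torques [N*m]

dt, t_end = 1.0 / 4000.0, 1.0 # 4 kHz sampling, as in the experimental setup
i_a, w_m = 0.0, 0.0           # initial armature current and speed
u = 24.0                      # constant input voltage [V] for this open-loop test

t = 0.0
while t < t_end:
    di = (u - R_a * i_a - k_e * w_m) / L_a          # electrical dynamics
    dw = (k_t * i_a - B * w_m - T_f - T_L) / J      # mechanical dynamics
    i_a += dt * di
    w_m += dt * dw
    t += dt

print(f"steady state: i_a = {i_a:.2f} A, speed = {w_m:.1f} rad/s")
```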
Remark 1. The only objective of the buck converter is to supply the voltage value obtained from the control law. For this reason, it is not taken into account in the controller design.
The following assumptions for the model (1) are made: (Ai) the parameters and vary with time but they are upper and lower bounded by unknown constants, (Aii) the parameters , , , , are unknown and
constant, and (Aiii) and are measured. The plant model (1) can be rewritten as where , , , , , and are positive constants. Assumption (Ai) implies that parameters , , and are unknown and time
varying, but they are upper and lower bounded by unknown constants: where , , , and are unknown positive constants, and means the minimum value taken for . Similarly means the maximum value taken for
. Assumption (Aii) indicates that the parameters , , and are unknown constants, and assumption (Aiii) implies that and must be measured. Now, consider where is the tracking error; is the desired
output is the reference value which is user defined; , , and are user-defined positive constants; and is a residual set. The objective of the control design is to formulate a control law for the
plant model (1) subject to assumptions (Ai) to (Aiii), and such that (Cgi) the tracking error asymptotically converges to the residual set , (Cgii) the controller does not involve discontinuous
signals, (Cgiii) the control law provides bounded values, and (Cgiv) the closed loops signals are bounded.
3. Control Design
In this section a controller for the plant defined by (1) and subject to assumptions (Ai) to (Aiii) is developed taking into account the control goals (Cgi) to (Cgiv) defined previously. The
procedure is based on the state adaptive backstepping of [4], but important modifications are introduced in order to handle the unknown time-varying plant model parameters. The controller is
developed such that the tracking error converges to a residual set whose size is user defined. Indeed, the control and update laws are formulated such that the time derivative of the Lyapunov-like
function is upper bounded by a function with the following characteristics: (TDi) the function is nonpositive, (TDii) the function is zero if the quadratic form that depends on and is lower than a
prespecified constant size, and (TDiii) the function is negative if such quadratic form is larger than a prespecified constant. If the time derivative of the Lyapunov-like function is upper bounded
by a function with such properties, then the asymptotic convergence of the tracking error to a residual set of user-defined size is guaranteed.
Discontinuous signals are avoided in the controller design because such signals may imply (see [33, 34]) loss of trajectory unicity, sliding motion of trajectories along the discontinuity surface
that may lead to chattering (see [34, pages 282-283]), and input chattering, which is an undesired component of large commutation rate in the control input (see [34, page 292]). A large commutation rate may lead to high power consumption and wear of mechanical components (cf. [35, 36]). Adaptive control based on the direct Lyapunov method involving discontinuous signals needs a rigorous analysis, which includes ensuring that trajectory unicity is preserved and developing Filippov's construction for the case in which sliding motion occurs, in order to avoid chattering. Therefore, it is advisable to avoid discontinuous signals in the controller design.
To compute the controller, the following steps are developed (a schematic sketch of the resulting loop is given after this list): (i) define the first state as the difference between the output and the desired output and differentiate it with respect to
time, (ii) define a quadratic function that depends on and differentiate it with respect to time; (iii) introduce upper bounds for the time-varying model coefficients, and parameterize them in terms
of parameter and regression vectors; (iv) express the parameter vector in terms of updating error and updated parameters, and define the state ; (v) differentiate with respect to time, define a
quadratic function that depends on , and differentiate it with respect to time; (vi) introduce upper bounds for the time-varying model coefficients, and parameterize them in terms of parameter and
regression vectors; (vii) express the parameter vector in terms of updating error and updated parameter, and formulate the control law; and (viii) formulate the Lyapunov-like function and
differentiate it with respect to time, and formulate the update laws.
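Because the paper's symbols were lost in extraction, the following Python pseudocode is only a hedged structural sketch of a generic two-step adaptive backstepping loop of the kind the eight steps describe; the function name, gains, and regressors are hypothetical placeholders, not the paper's actual control law (46) and update laws (59).

```python
# Hedged structural sketch of a two-step adaptive backstepping iteration.
# All names (k1, k2, phi1, phi2, gamma1, gamma2, ...) are illustrative placeholders.
import numpy as np

def backstepping_step(y, x2, y_d, dy_d, theta1, theta2, k1, k2, gamma1, gamma2, dt):
    z1 = y - y_d                          # step (i): first backstepping state
    phi1 = np.array([abs(z1), 1.0])       # hypothetical regressor for the unknown bounds
    alpha = -k1 * z1 - phi1 @ theta1      # virtual control stabilizing z1
    z2 = x2 - alpha - dy_d                # steps (iv)-(v): second backstepping state
    phi2 = np.array([abs(z2), abs(z1)])   # hypothetical second regressor
    u = -k2 * z2 - z1 - phi2 @ theta2     # step (vii): control law (sketch only)
    # step (viii): update laws; |z|-driven updates keep the bound estimates
    # nonnegative and nondecreasing, consistent with the paper's Remark 7
    theta1 = theta1 + dt * gamma1 * phi1 * abs(z1)
    theta2 = theta2 + dt * gamma2 * phi2 * abs(z2)
    return u, theta1, theta2
```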
Step 1. The state variable is defined as Differentiating (9) with respect to time and using (2) the following is obtained:
Step 2. A quadratic form given by is defined. Differentiating (11) with respect to time, using (10), and adding and subtracting yields The term has been added to obtain asymptotic convergence of the
tracking error later. The unknown time-varying behavior of , , and is a significant obstacle for the controller design; for this reason, the bounds defined in (5), (6), and (7) will be introduced in Step 3 using Young's inequality and parameterizations.
Step 3. Because and are time varying and unknown, they should be expressed in terms of their upper bounds, and these bounds should be expressed in terms of updated parameters and updating errors.
Properties defined in (5) and (7) yield Substituting (13) into (12) and arranging terms the following is obtained: Equation (14) implies that the possible definition of would include the
discontinuous term sgn. To remedy this, Young's inequality can be applied to (13) such that the term leads to . Applying Young's inequality (cf. [37, page 123]) and arranging yields The lower bound and the constant have been introduced in order to complete the proof of stability and boundedness of the closed-loop signals later. For compactness, the terms involving the unknown constants
and can be arranged in an unknown constant vector . Substituting (15) into (12) and parameterizing, the following is obtained: where where is an unknown constant parameter vector and is the known
regressor vector.
Step 4. Since is unknown, it should be expressed in terms of updated parameter vector and updating error. The parameter vector can be rewritten as where and is the updated parameter vector provided
by the updating law which will be defined later, and is the updating error. Substituting (19) into (16) and using (6) yield The updated parameter vector is nonnegative as will be shown later. Using
this fact and (6) it follows that notice that is common to the terms involving and . Thus, is as a common factor of those terms and the new state is defined by Replacing (23) into (22) yields
Remark 2. The definition of the state is significantly different with respect to that of the basic adaptive backstepping scheme of [4]. Indeed, involves the vector , and such vector is multiplied by
Remark 3. Important modifications have been introduced up to this point; the introduction of the upper bounds of and (see (13)), the application of Young's inequality (see (15)), and the parameterization including the constant (see (16)) are some of them.
Step 5. Differentiating (23) with respect to time, the following is obtained: where . can be rewritten as where Introducing (2) and (3) into (26) yields The following quadratic form that depends on
and is chosen: Differentiating with respect to time and introducing (24) and (28) the following is obtained: The term is added and subtracted in order to obtain asymptotic convergence of the tracking
error: The control input is defined as follows: where is a user-defined constant. In particular, an adequate choice of could prevent saturations of the control input; for this reason the value of
should be taken from its normal operation range. is established by means of the controller design. Substituting (33) into (32) yields
Step 6. Since , , and are unknown and time varying they should be expressed in terms of their upper bounds. In view of (5), (6), and (7) the term involving the squared brackets can be rewritten as
Substituting (35) into (34) yields This expression indicates that the possible control law for would involve the discontinuous signal sgn. This can be remedied by using Young's inequality, so
that the term appearing in the right side of (35) leads to . For compactness (35) can be rewritten as where is the regression vector whose entries are known, and is the parameter vector, whose
entries are positive, constant, and unknown. The constant has been introduced in order to handle the unknown constant parameter appearing in the term .
Step 7. Because the parameter vector is unknown, it should be expressed in terms of updated parameter vector and updating error. The parameter can be rewritten as where where is the updated parameter
vector provided by the update law, which will be defined later, and is the updating error. Substituting (39) into (37) yields Arranging the term and applying Young's inequality (cf. [37, page 123]) yields The constant is added to prove stability. Substituting (42) into (41) yields Substituting (43) into (34) and arranging yields The following expression can be used for : In view of (33),
the control law for is Substituting (46) into (44) the following is obtained: To handle the effect of the constant , the following quadratic-like function is defined: The term is defined in (29).
Function defined by (48) and (49) has the following properties: Differentiating (48) with respect to time the following is obtained: where From (52) it follows that is nonnegative, so that it can be
introduced in both sides of (47) without changing the sense of the inequality: Combining (51) and (53) yields
Step 8. The following Lyapunov-like function is defined: where and are defined in (20) and (40), respectively. To compute , the time derivative of is computed as Now, differentiating (55) with
respect to time and using (54) and (57) yield To tackle the effect of the terms involving the updating errors and the following update laws are formulated: where and are diagonal matrices whose
elements are user-defined positive constants. From (17) and (52) it follows that and such that . Substituting (59) into (58) yields Although the control law (46) and the update laws (59) have been
formulated, the values of the constants and have not been defined. The constants and are positive constants defined by the user and they must satisfy A simple choice that satisfies the above
requirement is From (52) the following is obtained: From (63), (61), and (60) it follows that Finally, the combination of the above expressions yields The developed controller involves the control
law (46) and the update laws (59). The signals and parameters necessary to implement it are , , , , , , , , and which are given by (9), (23), (17), (38), (27), (52), (29), and (48), respectively. In
addition, , , the diagonal elements of and , , , and are user-defined positive constants. In particular, and must satisfy (61), where is a user-defined positive constant.
Remark 4. Expression (65) indicates that fulfills conditions (TDi), (TDii), and (TDiii) mentioned at the beginning of Section 3.
Remark 5. The developed controller has the following benefits: (i) it does not use upper or lower bounds of the coefficients of model (2) and (3). Indeed, the controller does not use any of the constants
, , , , , and . This implies less modeling effort. (ii) It does not involve discontinuous signals. This implies that the vector field of the closed loop system is locally Lipschitz continuous, so
that trajectory unicity is preserved and sliding motion is absent according to [33]. The locally Lipschitz nature of is important to avoid discontinuous signals in the update law.
Remark 6. The main elements to handle the unknown varying nature of coefficients , , and are as follows: (i) introduce the constant in the parametrization in (16), (17), and (18); (ii) introduce the
relationship between and provided by (6) in (24); (iii) express , , and in terms of their upper bounds in (35); (iv) introduce the constant in the unknown parameter vector (see (37) and (38)); (v)
apply the Young's inequality to the term (see (42)); and (vi) formulate the function which is a truncated version of the quadratic form (see (48)). The vanishing of allows preserving the decreasing
nature of the Lyapunov-like function, as can be noticed from (65). The continuous nature of the derivative of with respect to makes it possible to avoid discontinuous signals in the update law.
Remark 7. The effect of a low value of is analyzed in the following. From (59), (52), (49), and (61) it follows that a low value of implies the following facts: (i) the term is nonzero for longer time lapses, and consequently the update law (59) is active for longer time lapses; the updated parameter increases during longer time lapses, which leads it to reach larger values; (ii) the constant is
smaller, and consequently the values of , , , and increase; and (iii) the chosen values of and have to be lower in order to accomplish condition (61). Therefore, the term becomes larger and from (46)
it follows that takes on larger values, which is interpreted as a bigger control effort.
The developed controller achieves some of the benefits mentioned in the introduction. Sections 4.1 and 4.2 complete the proof of the remaining proposed benefits.
4. Boundedness Analysis
In this section it is proven that the closed-loop signals are bounded if the developed controller is used; the convergence of the tracking error to a residual set is also proven.
4.1. Boundedness of the Closed Loop Signals
Theorem 8 (boundedness of the closed loop signals). Consider the plant model given by (1) which is subject to assumptions (Ai) to (Aiii). The signals and are defined in (9) and (23); , , , and are
defined in (17), (27), and (38), respectively; the signals , , and are defined in (29), (48), and (52), respectively; the constant is defined by (49) and the constants and satisfy (61). If the
controller defined in (46) and (59) is applied, then the signals , , , , and remain bounded.
Proof. From (65) it follows that where From (66) and (55) it follows that Using (56) the following is obtained: and, consequently, and . The upper bound for the tracking error is defined as follows.
Solving (48) for yields Using (71) yields Both expressions of (73) can be combined to obtain Introducing (29) yields where is defined in (67). Therefore and . Because , then which is an upper bound
for the transient behavior of the tracking error .
In the following it is proven that is bounded. From (9), (23), , , , and it follows that and . Therefore from (17), (27), and (38) it follows that , , , and . Finally from (46) it follows that .
Remark 9. Notice that the upper bound of (76) does not involve integral terms, which is an important advantage with respect to controllers that involve the Nussbaum gain method (see [20, 21]).
4.2. Convergence Analysis
Now it is proven that the developed controller induces asymptotic convergence of the tracking error to the residual set , where , with defined by the user.
Theorem 10 (convergence of the tracking error). Consider the plant model given by (1) which is subject to assumptions (Ai) to (Aiii); the signals , , , , , , , , and are defined by (9), (23), (17), (
27), (38), (29), (48), and (52), respectively; the constant is defined in (49) and the constants , satisfy (61). If the controller given by (46) and (59) is applied, then the tracking error
asymptotically converges to a residual set , where .
Proof. In view of (52), inequality (65) can be rewritten as It can be noticed that the term is not continuous, because it involves an abrupt change at ; for this reason, Barbalat's lemma cannot be applied to . To remedy this, (77) can be expressed in terms of a function with a continuous derivative as follows: where Arranging and integrating (79) the following is obtained: Therefore . In order to apply Barbalat's lemma it is necessary to prove that and . Since it follows from (80) that . Differentiating (80) with respect to time yields Notice that is continuous with respect to . Since then . Because , , , , and it follows from (10) and (25) that and . Thus, from (30) it follows that . Because and then it follows from (82) that . Because and Barbalat's lemma (cf. [38, page 76])
indicates that asymptotically converges to zero. From (80) it follows that converges to , where . Furthermore, from (29) it follows that asymptotically converges to , where . Since and , then
asymptotically converges to , where .
Remark 11. The tracking error converges to a residual set whose size is user defined and not altered by the varying parameters.
5. Numerical and Experimental Results
In this section numerical and experimental results are shown. Figure 1 shows the block diagram of the system under study. This system is divided into two major groups: the first is composed of all hardware parts, including physical and electronic components; the second is the software, implemented on a dSPACE platform, where signal acquisition and the control technique are performed.
The hardware is composed of a permanent magnet DC motor (PMDC) with rated power 250 W, rated voltage 42 V DC, rated current 6 A, and maximum speed 4000 RPM. For the acquisition of the motor speed , a 1000-pulses-per-turn encoder was used. A series resistance was used to measure the armature current (). The digital part and the backstepping control technique are implemented on the control and development card dSPACE DS1104. This card is programmed from the Matlab/Simulink platform and it has a graphical display interface called ControlDesk. The controller is implemented in Simulink and is downloaded to the DSP. The sampling rate for all variables ( and ) is set to 4 kHz. The state variable has 12-bit resolution; the controlled variable is sensed by an encoder with 28-bit resolution, and the duty cycle () has 10-bit resolution. At each sampling time (s) the controller uses the measured and to calculate the duty cycle , as follows: (i) the control input is determined according to the control and update laws based on the proposed procedure (see Section 4), (ii) the duty cycle is computed from , and (iii) the duty cycle is transformed into a PWM pulse signal. To obtain simulation results, the parameters of the DC motor (, , , , , , and ) and of the backstepping controller (, , , , , , and ) are entered into the control block by the user as constant parameters. The load torque is time varying and unknown.
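As a rough illustration of the per-sample loop in steps (i)-(iii), a Python sketch; the actual control and update laws of (46) and (59) did not survive this rendering, so control_law below is a hypothetical placeholder:

TS = 1.0 / 4000.0   # sampling period for the 4 kHz rate quoted above

def control_law(speed, current, state):
    # placeholder for the adaptive backstepping control and update laws (46), (59)
    raise NotImplementedError

def sampling_step(speed, current, state):
    u, state = control_law(speed, current, state)  # (i) compute the control input
    duty = min(max(u, 0.0), 1.0)                   # (ii) duty cycle, clipped to [0, 1]
    return duty, state                             # (iii) duty -> PWM done in hardware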
Figure 2 shows a sketch of the Simulink benchmark. The controller uses the measurements of and to compute ; the duty cycle applied to the buck converter is given by . The zero-order hold, quantizer, and delay are included in order to model the signal acquisition and the analog-to-digital signal conversion. The motor and buck converter parameters used in simulation were experimentally measured and are presented in Table 1. Recall that the values of the motor parameters (, , , , , , , and ) and the converter parameters (, , , , , and ) are not used by the control or update laws, neither in simulation nor in the experiments.
Figure 3 shows the desired output and the measured and simulated output when rad/s. In Figures 4(a) and 4(b), the numerical and experimental tracking errors are shown. It can be seen that in simulation the error converges to a residual set rad/s (whose size is given by ), while in the experiment there is a small residual error, probably due to quantization effects, delays, or unmodeled dynamics. Nevertheless, the experimental and numerical results agree.
Figure 5 shows the numerical and experimental controller performance when changes from rad/s to rad/s at . As in the previous example, rad/s in steady state in the simulation, and in the experimental case the results are very close to this bound.
Figure 6 shows the results when the load torque changes from N·m to N·m at s. Notice that the controller successfully achieves tracking, and the steady-state bound is very close to the given value . An estimator for the torque was added only to display the simulated and real values of the load torque; this estimator is not used by the controller.
It can be noticed from Remark 7 that low values of lead to high control effort. For this reason, a small error region (a low value of ) causes saturation and demands a faster response of the actuator. If the actuator cannot respond quickly enough, the error-region condition is not satisfied and the control design is not completely successful. In this way, the definition of is a compromise between output requirements and actuator performance.
6. Conclusions
In all simulations, the output error converges to a residual set defined by the user when the controller designed in this paper is applied to the plant. Small differences between experimental and theoretical results (Figures 4 and 5) are mainly due to hardware considerations and implementation aspects which were not taken into account in the controller design. Some of them are the delay in the control action, quantization effects which do not guarantee a continuous control signal, noise and delays in the measured variables, and inaccuracy in the sensors. Nevertheless, starting from a completely unknown model, experiments and simulations show close agreement, and the experimental results validate the control technique, even in the cases when the set point is changed by 50% of its initial value and when the load is changed by 37% of its initial value. The designer must take into account the differences between experiments and simulations prior to defining the error region.
The controller design based on state adaptive backstepping involves a state transformation that provides two new states. The main elements used to handle the unknown varying behavior of the moment of inertia and the load torque are the introduction of the upper bounds of the model coefficients and of the lower bound of the model coefficient in the parameterization.
To apply Lyapunov theory and demonstrate the stability of the controlled system, a truncated quadratic function (a Lyapunov-like function) was formulated, in such a way that its magnitude and time derivative vanish when the new states reach a target region; this implies adequate properties of its time derivative.
The controller design and proof of boundedness and convergence properties are simpler in comparison to current works that use the Nussbaum gain technique.
This work was partially supported by Universidad Nacional de Colombia—Manizales, Project 12475, Vicerrectoría de Investigación, DIMA, Resolution no. VR-2185.
1. Z. Li, J. Chen, G. Zhang, and M. G. Gan, “Adaptive robust control for DC motors with input saturation,” IET Control Theory & Applications, vol. 5, no. 16, pp. 1895–1905, 2011.
2. J. Linares-Flores, J. Reger, and H. Sira-Ramirez, “Load torque estimation and passivity-based control of a boost-converter/DC-motor combination,” IEEE Transactions on Control Systems Technology, vol. 18, no. 6, pp. 1398–1405, 2010.
3. M. A. Khanesar, O. Kaynak, and M. Teshnehlab, “Direct model reference Takagi-Sugeno fuzzy control of SISO nonlinear systems,” IEEE Transactions on Fuzzy Systems, vol. 19, no. 5, pp. 914–924, 2011.
4. I. Kanellakopoulos, P. V. Kokotović, and A. S. Morse, “Systematic design of adaptive controllers for feedback linearizable systems,” IEEE Transactions on Automatic Control, vol. 36, no. 11, pp. 1241–1253, 1991.
5. Y. Hong and B. Yao, “A globally stable saturated desired compensation adaptive robust control for linear motor systems with comparative experiments,” Automatica, vol. 43, no. 10, pp. 1840–1848, 2007.
6. S. Tong and Y. Li, “Fuzzy adaptive robust backstepping stabilization for SISO nonlinear systems with unknown virtual control direction,” Information Sciences, vol. 180, no. 23, pp. 4619–4640, 2010.
7. F. J. Lin, P. H. Shieh, and P. H. Chou, “Robust adaptive backstepping motion control of linear ultrasonic motors using fuzzy neural network,” IEEE Transactions on Fuzzy Systems, vol. 16, no. 3, pp. 676–692, 2008.
8. F. J. Lin, L. T. Teng, C. Y. Chen, and Y. C. Hung, “FPGA-based adaptive backstepping control system using RBFN for linear induction motor drive,” IET Electric Power Applications, vol. 2, no. 6, pp. 325–340, 2008.
9. A. El Magri, F. Giri, A. Abouloifa, and F. Z. Chaoui, “Robust control of synchronous motor through AC/DC/AC converters,” Control Engineering Practice, vol. 18, no. 5, pp. 540–553, 2010.
10. J. Zhou, C. Wen, and Y. Zhang, “Adaptive backstepping control of a class of uncertain nonlinear systems with unknown backlash-like hysteresis,” IEEE Transactions on Automatic Control, vol. 49, no. 10, pp. 1751–1757, 2004.
11. J. Zhou, C. Wen, and Y. Zhang, “Adaptive backstepping control of a class of uncertain nonlinear systems with unknown dead-zone,” in Proceedings of the IEEE Conference on Robotics, Automation and Mechatronics, pp. 513–518, December 2004.
12. J. Zhou, C. Wen, and W. Wang, “Adaptive backstepping control of uncertain systems with unknown input time-delay,” Automatica, vol. 45, no. 6, pp. 1415–1422, 2009.
13. C. Wen, J. Zhou, and W. Wang, “Decentralized adaptive backstepping stabilization of interconnected systems with dynamic input and output interactions,” Automatica, vol. 45, no. 1, pp. 55–67, 2009.
14. A.-C. Huang and Y.-S. Kuo, “Sliding control of non-linear systems containing time-varying uncertainties with unknown bounds,” International Journal of Control, vol. 74, no. 3, pp. 252–264, 2001.
15. Y. Zhang, P. Y. Peng, and Z. P. Jiang, “Stable neural controller design for unknown nonlinear systems using backstepping,” IEEE Transactions on Neural Networks, vol. 11, no. 6, pp. 1347–1360, 2000.
16. C. P. Bechlioulis and G. A. Rovithakis, “Adaptive control with guaranteed transient and steady state tracking error bounds for strict feedback systems,” Automatica, vol. 45, no. 2, pp. 532–538, 2009.
17. Y. Li, S. Qiang, X. Zhuang, and O. Kaynak, “Robust and adaptive backstepping control for nonlinear systems using RBF neural networks,” IEEE Transactions on Neural Networks, vol. 15, no. 3, pp. 693–701, 2004.
18. J. Na, X. Ren, G. Herrmann, and Z. Qiao, “Adaptive neural dynamic surface control for servo systems with unknown dead-zone,” Control Engineering Practice, vol. 19, no. 11, pp. 1328–1343, 2011.
19. D. Wang and J. Huang, “Neural network-based adaptive dynamic surface control for a class of uncertain nonlinear systems in strict-feedback form,” IEEE Transactions on Neural Networks, vol. 16, no. 1, pp. 195–202, 2005.
20. C.-Y. Su, Y. Feng, H. Hong, and X. Chen, “Adaptive control of system involving complex hysteretic nonlinearities: a generalised Prandtl-Ishlinskii modelling approach,” International Journal of Control, vol. 82, no. 10, pp. 1786–1793, 2009.
21. Y. Feng, H. Hong, X. Chen, and C. Y. Su, “Robust adaptive controller design for a class of nonlinear systems preceded by generalized Prandtl-Ishlinskii hysteresis representation,” in Proceedings of the 7th World Congress on Intelligent Control and Automation (WCICA '08), pp. 382–387, Chongqing, China, June 2008.
22. Y. Feng, C. Y. Su, and H. Hong, “Universal construction of robust adaptive control laws for a class of nonlinear systems preceded by generalized Prandtl-Ishlinskii representation,” in Proceedings of the 3rd IEEE Conference on Industrial Electronics and Applications (ICIEA '08), pp. 153–158, Singapore, June 2008.
23. Y. Feng, Y. M. Hu, and C. Y. Su, “Robust adaptive control for a class of perturbed strict-feedback nonlinear systems with unknown Prandtl-Ishlinskii hysteresis,” in Proceedings of the IEEE International Symposium on Intelligent Control (ISIC '06), pp. 106–111, Munich, Germany, October 2006.
24. H. Du, S. S. Ge, and J. K. Liu, “Adaptive neural network output feedback control for a class of non-affine non-linear systems with unmodelled dynamics,” IET Control Theory and Applications, vol. 5, no. 3, pp. 465–477, 2011.
25. S. S. Ge and J. Wang, “Robust adaptive tracking for time-varying uncertain nonlinear systems with unknown control coefficients,” IEEE Transactions on Automatic Control, vol. 48, no. 8, pp. 1463–1469, 2003.
26. Y. Xudong and J. Jingping, “Adaptive nonlinear design without a priori knowledge of control directions,” IEEE Transactions on Automatic Control, vol. 43, no. 11, pp. 1617–1621, 1998.
27. S. S. Ge and J. Wang, “Robust adaptive neural control for a class of perturbed strict feedback nonlinear systems,” IEEE Transactions on Neural Networks, vol. 13, no. 6, pp. 1409–1419, 2002.
28. R. D. Nussbaum, “Some remarks on a conjecture in parameter adaptive control,” Systems and Control Letters, vol. 3, no. 5, pp. 243–246, 1983.
29. K. Astrom and B. Wittenmark, Adaptive Control, Addison-Wesley, 2nd edition, 1995.
30. S. Tong, C. Liu, and Y. Li, “Fuzzy-adaptive decentralized output-feedback control for large-scale nonlinear systems with dynamical uncertainties,” IEEE Transactions on Fuzzy Systems, vol. 18, no. 5, pp. 845–861, 2010.
31. H. E. Psillakis, “Further results on the use of Nussbaum gains in adaptive neural network control,” IEEE Transactions on Automatic Control, vol. 55, no. 12, pp. 2841–2846, 2010.
32. A. Rincon, F. Angulo, and G. Osorio, “A robust state feedback adaptive controller with improved transient tracking error bounds for plants with unknown varying control gain,” in Applications of Nonlinear Control, chapter 5, pp. 79–98, INTECH, 2012.
33. M. M. Polycarpou and P. A. Ioannou, “On the existence and uniqueness of solutions in adaptive control systems,” IEEE Transactions on Automatic Control, vol. 38, no. 3, pp. 474–479, 1993.
34. J. Slotine and W. Li, Applied Nonlinear Control, Prentice Hall, Englewood Cliffs, NJ, USA, 1991.
35. A. Leva, L. Piroddi, M. Di Felice, A. Boer, and R. Paganini, “Adaptive relay-based control of household freezers with on-off actuators,” Control Engineering Practice, vol. 18, no. 1, pp. 94–102, 2010.
36. M. D. Felice, L. Piroddi, A. Leva, and A. Boer, “Adaptive temperature control of a household refrigerator,” in Proceedings of the American Control Conference (ACC '09), pp. 889–894, St. Louis, Mo, USA, June 2009.
37. H. Royden, Real Analysis, Prentice Hall, Upper Saddle River, NJ, USA, 1988.
38. P. Ioannou and J. Sun, Robust Adaptive Control, Prentice Hall, Upper Saddle River, NJ, USA, 1996. | {"url":"http://www.hindawi.com/journals/aaa/2013/169519/","timestamp":"2014-04-20T18:50:49Z","content_type":null,"content_length":"835000","record_id":"<urn:uuid:e565c970-a8f6-46fc-883a-ff6d5dc6869c>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00248-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Forum Discussions
User Profile for: mow_@_opar.freeserve.co.uk
UserID: 155464
Name: mows
Registered: 12/13/04
Total Posts: 4
Recent Messages Discussion Posted
1 Re: List:{1}, {2,3},{4,5,6}.... comp.soft-sys.math.mathematica Jan 11, 2013 10:22 PM
2 Re: clock puzzle... sci.math.independent Aug 2, 2003 12:33 PM
3 Re: clock puzzle... sci.math.independent Jul 31, 2003 2:41 PM
4 Re: Layman's (simple) question about Wiles, FLT and Taniyama-Shimura sci.math.independent Mar 16, 2002 10:52 AM | {"url":"http://mathforum.org/kb/profile.jspa?userID=155464","timestamp":"2014-04-17T13:18:47Z","content_type":null,"content_length":"11330","record_id":"<urn:uuid:4a2a038c-3e64-4976-b44e-0c26fef3adda>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00471-ip-10-147-4-33.ec2.internal.warc.gz"}
Question about best rational approximations.
December 8th 2011, 08:35 AM #1
Question about best rational approximations.
I have a quick question about best rational approximations of irrational numbers using continued fractions etc. (Continued fraction - Wikipedia, the free encyclopedia) Suppose we have a real number $x$, and we know that some rational number $\frac{a}{b}$ is a best approximation of $x$, in other words a convergent of the continued fraction of $x$.
Now suppose some other rational number $\frac{a'}{b'}$ is such that $x<\frac{a'}{b'}<\frac{a}{b}$. We then know for sure that $b<b'$, but is $\frac{a'}{b'}$ necessarily a best approximation of $x$ as well? It seems to me like this would be true, but I can't think of any good reason why.
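One way to probe this numerically is to hunt for a counterexample with Python's exact fractions; the sketch below finds that 377/120, the mediant of the convergents 22/7 and 355/113 of pi, lies strictly between pi and 22/7 yet is not a best approximation, since 355/113 has a smaller denominator and lies closer:

from fractions import Fraction

x = Fraction(3141592653589793, 10**15)  # rational stand-in for pi, accurate enough here
best = Fraction(22, 7)                  # a convergent of pi, lying above pi
cand = Fraction(377, 120)               # mediant of 22/7 and 355/113

assert x < cand < best                  # cand sits strictly between x and the convergent

# cand is a best approximation only if nothing with denominator <= 120 is closer
nearest = [Fraction(round(x * q), q) for q in range(1, cand.denominator + 1)]
closest = min(nearest, key=lambda f: abs(f - x))
print(closest, abs(closest - x) < abs(cand - x))    # Fraction(355, 113) True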
| {"url":"http://mathhelpforum.com/number-theory/193783-question-about-best-rational-approximations.html","timestamp":"2014-04-19T23:36:16Z","content_type":null,"content_length":"31564","record_id":"<urn:uuid:8cca6899-6c31-4311-b6bd-1354c1504864>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00547-ip-10-147-4-33.ec2.internal.warc.gz"}
8.10 An Engine Of The Orbital Maneuvering System ... | Chegg.com
An engine of the orbital maneuvering system (OMS) on a space shuttle exerts a force of 2.62×10^4 N over a time interval of 4.00 s, exhausting a negligible mass of fuel relative to the shuttle's mass of 9.55×10^4 kg.
What is the y component of the impulse of the force for this time interval of 4.00 s?
What is the y component of the shuttle's change in momentum from this impulse?
What is the y component of the shuttle's change in velocity from this impulse? | {"url":"http://www.chegg.com/homework-help/questions-and-answers/810-engine-orbital-maneuvering-system-oms-spaceshuttle-exerts-force-262-104-atime-interval-q89493","timestamp":"2014-04-18T02:13:29Z","content_type":null,"content_length":"21357","record_id":"<urn:uuid:093d020e-f2a6-4cae-913f-dab4ff60448d>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00390-ip-10-147-4-33.ec2.internal.warc.gz"}
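A worked sketch of the impulse-momentum chain (assuming the SI units N, s, and kg, which the original rendering dropped):

F = 2.62e4   # thrust in newtons (assumed)
t = 4.00     # burn time in seconds (assumed)
m = 9.55e4   # shuttle mass in kilograms (assumed)

J = F * t    # impulse J = F*t = 1.048e5 N*s
dp = J       # impulse-momentum theorem: the change in momentum equals the impulse
dv = dp / m  # change in velocity = dp/m, roughly 1.10 m/s
print(J, dp, dv)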
A new proof of Cayley’s formula for counting labeled trees
Results 1 - 10 of 11
- J. COMBINATORIAL THEORY A , 1998
"... Various enumerations of labeled trees and forests, including Cayley's formula n n\Gamma2 for the number of trees labeled by [n], and Cayley's multinomial expansion over trees, are derived from
the following coalescent construction of a sequence of random forests (R n ; R n\Gamma1 ; : : : ; R 1 ..."
Cited by 38 (18 self)
Various enumerations of labeled trees and forests, including Cayley's formula $n^{n-2}$ for the number of trees labeled by $[n]$, and Cayley's multinomial expansion over trees, are derived from the following coalescent construction of a sequence of random forests $(R_n, R_{n-1}, \ldots, R_1)$ such that $R_k$ has uniform distribution over the set of all forests of $k$ rooted trees labeled by $[n]$. Let $R_n$ be the trivial forest with $n$ root vertices and no edges. For $n \ge k \ge 2$, given that $R_n, \ldots, R_k$ have been defined so that $R_k$ is a rooted forest of $k$ trees, define $R_{k-1}$ by addition to $R_k$ of a single edge picked uniformly at random from the set of $n(k-1)$ edges which when added to $R_k$ yield a rooted forest of $k-1$ trees. This coalescent construction is related to a model for a physical process of clustering or coagulation, the additive coalescent in which a system of masses is subject to binary coalescent collisions, with each pair of masses of
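The construction above is concrete enough to simulate; a rough Python sketch (storing each forest as a parent map, edges directed away from the roots):

import random

def coalescent_forests(n):
    # R_n has n isolated roots; R_{k-1} adds one edge chosen uniformly among the
    # n(k-1) edges that merge two trees of R_k (a root of one tree becomes a child)
    parent = {v: None for v in range(n)}

    def root(v):                      # walk parent pointers up to the tree root
        while parent[v] is not None:
            v = parent[v]
        return v

    yield dict(parent)                # R_n
    for k in range(n, 1, -1):
        roots = [v for v in range(n) if parent[v] is None]
        pairs = [(p, c) for c in roots for p in range(n) if root(p) != c]
        assert len(pairs) == n * (k - 1)
        p, c = random.choice(pairs)   # add edge p -> c, so root c becomes a child of p
        parent[c] = p
        yield dict(parent)            # R_{k-1}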
- Microsurveys in Discrete Probability, number 41 in DIMACS Ser. Discrete Math. Theoret. Comp. Sci , 1997
"... In a Galton-Watson branching process with offspring distribution (p 0 ; p 1 ; : : :) started with k individuals, the distribution of the total progeny is identical to the distribution of the
first passage time to \Gammak for a random walk started at 0 which takes steps of size j with probability p ..."
Cited by 38 (15 self)
In a Galton-Watson branching process with offspring distribution $(p_0, p_1, \ldots)$ started with $k$ individuals, the distribution of the total progeny is identical to the distribution of the first passage time to $-k$ for a random walk started at 0 which takes steps of size $j$ with probability $p_{j+1}$ for $j \ge -1$. The formula for this distribution is a probabilistic expression of the Lagrange inversion formula for the coefficients in the power series expansion of $f(z)^k$ in terms of those of $g(z)$ for $f(z)$ defined implicitly by $f(z) = zg(f(z))$. The Lagrange inversion formula is the analytic counterpart of various enumerations of trees and forests which generalize Cayley's formula $kn^{n-k-1}$ for the number of rooted forests labeled by a set of size $n$ whose set of roots is a particular subset of size $k$. These known results are derived by elementary combinatorial methods without appeal to the Lagrange formula, which is then obtained as a byproduct. This approach unifies an...
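The identity in the first sentence lends itself to a quick Monte Carlo check; the sketch below uses an arbitrary subcritical offspring law (p0, p1, p2) = (0.5, 0.3, 0.2) and k = 2, both hypothetical choices:

import random
from collections import Counter

p, k, trials = [0.5, 0.3, 0.2], 2, 50000

def progeny():                        # total number of individuals ever born
    alive, total = k, 0
    while alive:
        total += alive
        alive = sum(random.choices(range(len(p)), weights=p, k=alive))
    return total

def passage():                        # first passage time to -k of the walk
    pos, steps = 0, 0
    while pos > -k:
        steps += 1
        pos += random.choices([-1, 0, 1], weights=p, k=1)[0]   # step j w.p. p_{j+1}
    return steps

a = Counter(progeny() for _ in range(trials))
b = Counter(passage() for _ in range(trials))
print([(n, a[n], b[n]) for n in range(k, k + 5)])   # the two empirical laws should agree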
- Electronic Journal of Combinatorics
"... Abstract. A proper vertex of a rooted tree with totally ordered vertices is a vertex that is less than all its proper descendants. We count several kinds of labeled rooted trees and forests by
the number of proper vertices. Our results are all expressed in terms of the polynomials n−1 Pn(a, b, c) = ..."
Cited by 9 (0 self)
Abstract. A proper vertex of a rooted tree with totally ordered vertices is a vertex that is less than all its proper descendants. We count several kinds of labeled rooted trees and forests by the number of proper vertices. Our results are all expressed in terms of the polynomials $P_n(a,b,c) = c \prod_{i=1}^{n-1} (ia + (n-i)b + c)$, which reduce to $(n+1)^{n-1}$ for $a = b = c = 1$. Our study of proper vertices was motivated by Postnikov's hook length formula $(n+1)^{n-1} = n!$
"... Abstract — We introduce join scheduling algorithms that employ a balanced network utilization metric to optimize the use of all network paths in a global-scale database federation. This metric
allows algorithms to exploit excess capacity in the network, while avoiding narrow, long-haul paths. We giv ..."
Cited by 5 (1 self)
Abstract — We introduce join scheduling algorithms that employ a balanced network utilization metric to optimize the use of all network paths in a global-scale database federation. This metric allows
algorithms to exploit excess capacity in the network, while avoiding narrow, long-haul paths. We give a two-approximate, polynomial-time algorithm for serial (left-deep) join schedules. We also
present extensions to this algorithm that explore parallel schedules, reduce resource usage, and define tradeoffs between computation and network utilization. We evaluate these techniques within the
SkyQuery federation of Astronomy databases using spatial-join queries submitted by SkyQuery’s users. Experiments show that our algorithms realize near-optimal network utilization with minor
computational overhead.
"... . In this paper we are interesting in the enumeration of rooted labelled trees according to the relationship between the root and its sons. Let Tn;k be the family of Cayley trees on [n] such
that the root has exactly k smaller sons. In a first time we give a bijective proof of the fact that jTn+1;k ..."
Cited by 3 (0 self)
In this paper we are interested in the enumeration of rooted labelled trees according to the relationship between the root and its sons. Let $T_{n,k}$ be the family of Cayley trees on $[n]$ such that the root has exactly $k$ smaller sons. First, we give a bijective proof of the fact that $|T_{n+1,k}| = \binom{n}{k} n^{n-k}$. Moreover, we use the family $T_{n+1,0}$ of Cayley trees for which the root is smaller than all its sons to give combinatorial explanations of various identities involving $n^n$. We relate this family to the enumeration of minimal factorizations of the $n$-cycle $(1, 2, \ldots, n)$ as a product of transpositions. Finally, we use the fact that $|T_{n+1,0}| = n^n$ to prove bijectively that there are $2n^n$ ordered alternating trees on $[n+1]$. Résumé. In this article we are interested in the enumeration of labelled rooted trees, considering a new parameter relative to the order between the root and its sons. Let $T_{n,k}$ be the family of...
, 2001
"... This paper presents a systematic approach to the discovery, interpretation and verification of various extensions of Hurwitz's multinomial identities, involving polynomials defined by sums over
all subsets of a finite set. The identities are interpreted as decompositions of forest volumes define ..."
Cited by 2 (0 self)
This paper presents a systematic approach to the discovery, interpretation and verification of various extensions of Hurwitz's multinomial identities, involving polynomials defined by sums over all subsets of a finite set. The identities are interpreted as decompositions of forest volumes defined by the enumerator polynomials of sets of rooted labeled forests. These decompositions involve the following basic forest volume formula, which is a refinement of Cayley's multinomial expansion: for $R \subseteq S$ the polynomial enumerating out-degrees of vertices of rooted forests labeled by $S$ whose set of roots is $R$, with edges directed away from the roots, is $(\sum_{r \in R} x_r)(\sum_{s \in S} x_s)^{|S|-|R|-1}$
- Math. Bull
"... Abstract. We consider a class of strongly q-log-convex polynomials based on a triangular recurrence relation with linear coefficients, and we show that the Bell polynomials, the Bessel
polynomials, the Ramanujan polynomials and the Dowling polynomials are strongly q-log-convex. We also prove that th ..."
Cited by 2 (1 self)
Abstract. We consider a class of strongly q-log-convex polynomials based on a triangular recurrence relation with linear coefficients, and we show that the Bell polynomials, the Bessel polynomials,
the Ramanujan polynomials and the Dowling polynomials are strongly q-log-convex. We also prove that the Bessel transformation preserves log-convexity.
, 2001
"... Abstract. The Ramanujan polynomials were introduced by Ramanujan in his study of power series inversions. These polynomials have been closely related to the enumeration of trees. In an approach
to the Cayley formula on the number of trees, Shor discovers a refined recurrence relation in terms of the ..."
Abstract. The Ramanujan polynomials were introduced by Ramanujan in his study of power series inversions. These polynomials have been closely related to the enumeration of trees. In an approach to
the Cayley formula on the number of trees, Shor discovers a refined recurrence relation in terms of the number of improper edges, without realizing the connection to the Ramanujan polynomials. On the
other hand, Dumont and Ramamonjisoa independently take the grammatical approach to a sequence associated with the Ramanujan polynomials and have reached the same conclusion as Shor's. Furthermore,
Shor introduces a sequence of polynomials generalizing the numbers mentioned above. It was a great coincidence for Zeng to realize that the Shor polynomials turn out to be the Ramanujan polynomials
through an explicit substitution of parameters. Moreover, Zeng gives two combinatorial interpretations of the recurrence relation of Shor. On the other side of the story, Shor also discovers a
recursion of Ramanujan polynomials which is equivalent to the Berndt-Evans-Wilson recursion under the substitution of Zeng, and asks for a combinatorial interpretation. The objective of this paper is
to present a bijection for the Shor recursion, or equivalently the Berndt-Evans-Wilson recursion, answering the question of Shor. Such a bijection also leads to a combinatorial interpretation of the recurrence
relation originally given by Ramanujan.
, 2001
"... Abstract. The Ramanujan polynomials were introduced by Ramanujan in his study of power series inversions. In an approach to the Cayley formula on the number of trees, Shor discovers a refined
recurrence relation in terms of the number of improper edges, without realizing the connection to the Ramanu ..."
Abstract. The Ramanujan polynomials were introduced by Ramanujan in his study of power series inversions. In an approach to the Cayley formula on the number of trees, Shor discovers a refined
recurrence relation in terms of the number of improper edges, without realizing the connection to the Ramanujan polynomials. On the other hand, Dumont and Ramamonjisoa independently take the
grammatical approach to a sequence associated with the Ramanujan polynomials and have reached the same conclusion as Shor’s. It was a coincidence for Zeng to realize that the Shor polynomials turn
out to be the Ramanujan polynomials through an explicit substitution of parameters. Shor also discovers a recursion of Ramanujan polynomials which is equivalent to the Berndt-Evans-Wilson recursion
under the substitution of Zeng, and asks for a combinatorial interpretation. The objective of this paper is to present a bijection for the Shor recursion, or and Berndt-Evans-Wilson recursion,
answering the question of Shor. Such a bijection also leads to a combinatorial interpretation of the recurrence relation originally given by Ramanujan. 1
, 2006
"... Abstract.Generalizing a sequence of Lambert, Cayley and Ramanujan, Chapoton has recently introduced a polynomial sequence Qn: = Qn(x,y,z,t) defined by Q1 = 1, Qn+1 = [x + nz + (y + t)(n + y∂y)]
Qn. In this paper we prove Chapoton’s conjecture on the duality formula: Qn(x,y,z,t) = Qn(x+nz+ nt,y, −t, ..."
Abstract. Generalizing a sequence of Lambert, Cayley and Ramanujan, Chapoton has recently introduced a polynomial sequence $Q_n := Q_n(x,y,z,t)$ defined by $Q_1 = 1$, $Q_{n+1} = [x + nz + (y + t)(n + y\partial_y)]Q_n$. In this paper we prove Chapoton's conjecture on the duality formula $Q_n(x,y,z,t) = Q_n(x+nz+nt, y, -t, -z)$, and answer his question about the combinatorial interpretation of $Q_n$. Actually we give combinatorial interpretations of these polynomials in terms of plane trees, half-mobile trees, and forests of plane trees. Our approach also leads to a general formula that unifies several known results for enumerating trees and plane trees. | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=1435874","timestamp":"2014-04-16T21:58:48Z","content_type":null,"content_length":"37527","record_id":"<urn:uuid:7a05b9b8-5189-416b-8833-9f8663e8f45a>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00121-ip-10-147-4-33.ec2.internal.warc.gz"}
A rectangular pyramid and a rectangular prism have bases of the same dimensions as shown below. The surface area of the rectangular pyramid is 146 square inches. The height of the rectangular prism is the same as the slant height of the pyramid. What is the surface area of the rectangular prism? 320 in², 336 in², 292 in², 438 in²
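The figure with the base dimensions is not reproduced here; assuming (hypothetically) an 8 in by 6 in base, the given pyramid surface area pins down the slant height and singles out one of the choices:

l, w = 8, 6                   # base dimensions: hypothetical, since the figure is missing
s = (146 - l * w) / (l + w)   # pyramid SA = l*w + (l + w)*s, so s = 7.0
prism = 2 * l * w + 2 * (l + w) * s   # prism whose height equals the slant height
print(s, prism)               # 7.0 292.0, matching the 292 in² choice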
| {"url":"http://openstudy.com/updates/500da55de4b0ed432e101ae1","timestamp":"2014-04-19T22:53:32Z","content_type":null,"content_length":"45284","record_id":"<urn:uuid:757c7915-a280-46dd-91e4-9202f7f047b9>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00278-ip-10-147-4-33.ec2.internal.warc.gz"}
SI Units
The international system (SI) of units, prefixes, and symbols should be used for all physical quantities except that certain special units, which are specified later, may be used in astronomy,
without risk of confusion or ambiguity, in order to provide a better representation of the phenomena concerned. SI units are now used to a varying extent in all countries and disciplines, and this
system is taught in almost all schools, colleges and universities. The units of the centimetre-gram-second (CGS) system and other non-SI units, which will be unfamiliar to most young scientists,
should not be used even though they may be considered to have some advantages over SI units by some astronomers.
General information about SI units can be found in the publications of national standards organisations and in many textbooks and handbooks.
There are three classes of SI units: (a) the seven base units that are regarded as dimensionally independent; (b) two supplementary, dimensionless units for plane and solid angles; and (c) derived
units that are formed by combining base and supplementary units in algebraic expressions; such derived units often have special names and symbols and can be used in forming other derived units. The
units of classes (a) and (b) are listed in Table 1. The units of class (c) of greatest interest to astronomers are given in Table 2 for those with simple names and symbols, and in Table 3 for those
with compound names and symbols. In forming compound names division is indicated by per, while in the corresponding symbols it is permissible to use either a negative index or a solidus (oblique
stroke or slash); thus the SI unit of velocity is a metre per second and the corresponding symbol is m s^-1 or m/s.
The space between the base units is important in such a case since ms^-1 (with no space) could be interpreted as a frequency of 1000 Hz; a space is not necessary if the preceding unit ends in a superscript; a full stop (period) may be inserted between units to remove any ambiguity; the solidus should only be used in simple expressions and must never be used twice in the same compound unit.
Table 1. The names and symbols for the SI base and supplementary units.
Quantity SI Unit: Name Symbol
length metre m
mass kilogram kg
time ^(1) second s
electric current ampere A
thermodynamic temperature kelvin K
amount of substance mole mol
luminous intensity candela cd
plane angle radian rad
solid angle steradian sr
^1 The abbreviation sec should not be used to denote a second of time.
Table 2. Special names and symbols for SI derived units.
Quantity SI Unit: Name Symbol Expression
frequency hertz Hz s^-1
force newton N kg m s^-2
pressure, stress pascal Pa N m^-2
energy joule J N m
power watt W J s^-1
electric charge coulomb C A s
electric potential volt V J C^-1
electric resistance ohm Ω V A^-1
electric conductance siemens S A V^-1
electric capacitance farad F C V^-1
magnetic flux weber Wb V s
magnetic flux density tesla T Wb m^-2
inductance henry H Wb A^-1
luminous flux lumen lm cd sr
illuminance lux lx lm m^-2
Table 3. Examples of SI derived units with compound names.
Quantity SI Unit: Name Symbol
density (mass) kilogram per cubic metre kg m^-3
current density ampere per square metre A m^-2
magnetic field strength ampere per metre A m^-1
electric field strength volt per metre V m^-1
dynamic viscosity pascal second Pa s
heat flux density watt per square metre W m^-2
heat capacity, entropy joule per kelvin J K^-1
energy density joule per cubic metre J m^-3
permittivity farad per metre F m^-1
permeability henry per metre H m^-1
radiant intensity watt per steradian W sr^-1
radiance watt per square metre per steradian W m^-2 sr^-1
luminance candela per square metre cd m^-2
Table 4. SI prefixes and symbols for multiples and submultiples.
Submultiple Prefix Symbol Multiple Prefix Symbol
10^-1 deci d 10 deca da
10^-2 centi c 10^2 hecto h
10^-3 milli m 10^3 kilo k
10^-6 micro μ 10^6 mega M
10^-9 nano n 10^9 giga G
10^-12 pico p 10^12 tera T
10^-15 femto f 10^15 peta P
10^-18 atto a 10^18 exa E
Note: Decimal multiples and submultiples of the kilogram should be formed by attaching the appropriate SI prefix and symbol to gram and g, not to kilogram and kg.
5.12 SI prefixes: Decimal multiples and submultiples of the SI units, except the kilogram, are formed by attaching the names or symbols of the appropriate prefixes to the names or symbols of the
units. The combination of the symbols for a prefix and unit is regarded as a single symbol which may be raised to a power without the use of parentheses. The recognised list of prefixes and symbols
is given in Table 4. These prefixes may be attached to one or more of the unit symbols in an expression for a compound unit and to the symbol for a non-SI unit. Compound prefixes should not be used.
5.13 Non-SI units: It is recognised that some units that are not part of the international system will continue to be used in appropriate contexts. Such units are listed in Table 5; they are either defined exactly in terms of SI units or are defined in other ways and are determined by measurement. Other non-SI units, such as Imperial units and others listed in Table 6, should not normally be used.
Table 5. Non-SI units that are recognised for use in astronomy.
Quantity Unit: Name Symbol Value
time ^(1) minute min or ^m 60 s
time hour h 3600 s = 60 min
time day d 86 400 s = 24 h
time year (Julian) a 31.5576 Ms = 365.25 d
angle ^(2) second of arc " (pi/648 000) rad
angle minute of arc ' (pi/10 800) rad
angle degree o (pi/180) rad
angle ^(3) revolution (cycle) c 2pi rad
length astronomical unit au 0.149 598 Tm
length parsec pc 30.857 Pm
mass solar mass Mo 1.9891 x 10^30 kg
mass atomic mass unit u 1.660 540 x 10^-27 kg
energy electron volt eV 0.160 2177 aJ
flux density jansky ^(4) Jy 10^-26 W m^-2 Hz^-1
^1 The alternative symbol is not formally recognised in the SI system.
^2 The symbol mas is often used for a milliarcsecond (0".001).
^3 The unit and symbols are not formally recognised in the SI system.
^4 The jansky is mainly used in radio astronomy.
^5 The degree Celsius (oC) is used in specifying temperature for meteorological purposes, but otherwise the kelvin (K) should be used.
5.14 Time and angle: The units for sexagesimal measures of time and angle are included in Table 5. The names of the units of angle may be prefixed by 'arc' whenever there could be confusion with the
units of time. The symbols for these measures are to be typed or printed (where possible as superscripts) immediately following the numerical values; if the last sexagesimal value is divided
decimally, the decimal point should be placed under, or after, the symbol for the unit; leading zeros should be inserted in sexagesimal numbers as indicated in the following examples.
2d 13h 07m 15s.259 06h 19m 05s.18 120o 58' 08".26
These non-SI units should not normally be used for expressing intervals of time or angle that are to be used in combination with other units.
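For example, a small formatting routine (a sketch that ignores rounding carries) reproduces the decimal-point-after-symbol convention for angles:

def fmt_angle(deg):
    # format a decimal angle in degrees as sexagesimal, decimal point after the symbol
    sign = "-" if deg < 0 else ""
    d, rem = divmod(abs(deg), 1)
    m, rem = divmod(rem * 60, 1)
    s = rem * 60
    return f'{sign}{int(d):03d}o {int(m):02d}\' {int(s):02d}".{round((s % 1) * 100):02d}'

print(fmt_angle(120.968961))  # 120o 58' 08".26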
In expressing the precision or resolution of angular measurement, it is becoming common in astronomy to use the milliarcsecond as the unit, and to represent this by the symbol mas; this is preferable
to other abbreviations, but its meaning should be made clear at its first occurrence. The more appropriate SI Unit would be the nanoradian (1 nrad = 0.2 mas). In general, the degree with decimal
subdivision is recommended for use when the radian is not suitable and when there is no requirement to use the sexagesimal subdivision. If it is more appropriate to describe an angle in terms of
complete revolutions (or rotations or turns or cycles), then the most appropriate symbol appears to be a letter c; this may be used in a superior position as in 1c = 360o = 2pi rad = 1 rev, but it may be used as in 1 c/s = 1 Hz.
The use of units of time for the representation of angular quantities, such as hour angle, right ascension and sidereal time, is common in astronomy, but it is a source of confusion and error in some
contexts, especially in formulae for numerical calculation. The symbol for a variable followed by the superscript for a unit may be used to indicate the numerical value of that variable when measured
in that unit.
5.15 Astronomical units: The IAU System of Astronomical Constants recognises a set of astronomical units of length, mass and time for use in connection with motions in the Solar System; they are
related to each other through the adopted value of the constant of gravitation when expressed in these units (IAU 1976). The symbol for the astronomical unit of length is au; the astronomical unit of
time is 1 day (d) of 86 400 SI seconds (s); the astronomical unit of mass is equal to the mass of the Sun and is often denoted by Mo, but the special subscript makes this symbol inconvenient for
general use.
An appropriate unit of length for studies of structure of the Galaxy is the parsec (pc), which is defined in terms of the astronomical unit of length (au). The unit known as the light-year is
appropriate to popular expositions on astronomy and is sometimes used in scientific papers as an indicator of distance.
The IAU has used the julian century of 36 525 days in the fundamental formulae for precession, but the more appropriate basic unit for such purposes and for expressing very long periods is the year.
The recognised symbol for a year is the letter a, rather than yr, which is often used in papers in English; the corresponding symbols for a century (ha and cy) should not be used. Although there are
several different kinds of year (as there are several kinds of day), it is best to regard a year as a julian year of 365.25 days (31.5576 Ms) unless otherwise specified.
It should be noted that sidereal, solar and universal time are best regarded as measures of hour angle expressed in time measure; they can be used to identify instants of time, but they are not
suitable for use as precise measures of intervals of time since the rate of rotation of Earth, on which they depend, is variable with respect to the SI second.
5.16 Obsolete units: It is strongly recommended that the non-SI units listed in Table 6 are no longer used. Some of the units listed are rarely used in current literature, but they have been included
for use in the study of past literature. Imperial and other non-metric units should not be used in connection with processes or phenomena, but there are a few situations where their use may be
justified (as in "the Hale 200-inch telescope on Mount Palomar"). The equivalent value in SI units should be given in parentheses if this is likely to be helpful.
Table 6. Non-SI units and symbols whose continued use is deprecated.
Quantity Unit: Name Symbol Value
length angstrom Å 10^-10 m = 0.1 nm
length micron μ 10^-6 m
length fermi 10^-15 m = 1 fm
area barn b 10^-28 m^2
volume cubic centimetre cc 10^-6 m^3
force dyne dyn 10^-5 N
energy erg erg 10^-7 J
energy ^(2) calorie cal 4.1868 J
pressure bar bar 10^5 Pa
pressure stand. atmosphere atm 101 325 Pa
acceleration (grav.) gal Gal 10^-2 m s^-2
gravity gradient eotvos E 10^-9 s^-2
magnetic flux density gauss G corresponds to 10^-4 T
magnetic flux density gamma γ corresponds to 10^-9 T
magnetic field strength oersted Oe corresponds to (1000/4pi) A m^-1
^1 Non-metric units, such as miles, feet, inches, tons, pounds, ounces, gallons, pints, etc., should not be used except in special circumstances.
^2 There are other obsolete definitions and values for the calorie.
The definitions of the SI units and an extensive list of conversion factors for obsolete units are given by Anderson (Physics Vade Mecum, American Institute of Physics 1981). In particular, wavelengths should be expressed in metres with the appropriate SI prefix; e.g., for wavelengths in the visual range the nanometre (nm) should be used instead of the angstrom (Å), which is a source of confusion in comparisons with longer and shorter wavelengths expressed in recognised SI units. The notation of a Greek λ followed by a numerical value (which represents the wavelength in angstroms) should also be abandoned.
The name micrometre should be used instead of micron. In all cases, the spelling metre should be used for the unit, while the spelling meter should be used for a measuring instrument (as in
micrometer). The word kilometre should be pronounced ki-lo-me-tre, not ki-lom-e-ter.
If wavenumbers are used they should be based on the metre, not the centimetre; in any case the unit (m^-1 or cm^-1) should be stated since they are not dimensionless quantities. The uses of frequency
(in Hz) at radio wavelengths and energy (in eV) at X-ray wavelengths are appropriate for some purposes, but they serve to obscure the essential unity of the electromagnetic spectrum, and so it may be
helpful to give the wavelength as well at the first occurrence; the correspondences between these units and wavelength are as follows:
wavelength in metres = 2.997 924 58 x 10^8 / frequency in hertz
or = 1.239 842 4 x 10^-6 / energy in electron-volts
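In code, the two relations above (a sketch):

C = 2.99792458e8       # speed of light in m/s
HC = 1.2398424e-6      # wavelength(m) * energy(eV) product from the relation above

def wavelength_from_frequency(hz):
    return C / hz                       # metres

def wavelength_from_energy(ev):
    return HC / ev                      # metres

print(wavelength_from_frequency(1.42e9))  # ~0.211 m, the 21 cm hydrogen line
print(wavelength_from_energy(1.0e3))      # ~1.24e-9 m, i.e. a 1 keV X-ray photon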
5.17 Magnitude: The concept of apparent and absolute magnitude in connection with the brightness or luminosity of a star or other astronomical object will continue to be used in astronomy even though
it is difficult to relate the scales of magnitude to photometric measures in the SI system. Magnitude, being the logarithm of a ratio, is to be regarded as a dimensionless quantity; the name may be
abbreviated to mag without a full stop, and it should be written after the number. The use of a superscript m is not recommended. The method of determination of a magnitude or its wavelength range
may be indicated by appropriate letters in italic type as in U, B, V. The photometric system used should be clearly specified when precise magnitudes are given. | {"url":"http://iau.org/science/publications/proceedings_rules/units/","timestamp":"2014-04-16T22:02:07Z","content_type":null,"content_length":"71338","record_id":"<urn:uuid:5ff8b64d-43e1-4da6-9df1-3b52bfeb525d>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00416-ip-10-147-4-33.ec2.internal.warc.gz"} |
Off-line compression by greedy textual substitution
Results 1 - 10 of 22
- IEEE TRANSACTIONS ON INFORMATION THEORY , 2005
"... This paper addresses the smallest grammar problem: What is the smallest context-free grammar that generates exactly one given string σ? This is a natural question about a fundamental object
connected to many fields, including data compression, Kolmogorov complexity, pattern identification, and addi ..."
Cited by 24 (0 self)
This paper addresses the smallest grammar problem: What is the smallest context-free grammar that generates exactly one given string σ? This is a natural question about a fundamental object connected to many fields, including data compression, Kolmogorov complexity, pattern identification, and addition chains. Due to the problem's inherent complexity, our objective is to find an approximation algorithm which finds a small grammar for the input string. We focus attention on the approximation ratio of the algorithm (and implicitly, worst-case behavior) to establish provable performance guarantees and to address shortcomings in the classical measure of redundancy in the literature. Our first results are a variety of hardness results, most notably that every efficient algorithm for the smallest grammar problem has approximation ratio at least 8569/8568 unless P = NP. We then bound approximation ratios for several of the best-known grammar-based compression algorithms, including LZ78, BISECTION, SEQUENTIAL, LONGEST MATCH, GREEDY, and RE-PAIR. Among these, the best upper bound we show is O(n^{1/2}). We finish by presenting two novel algorithms with exponentially better ratios of O(log^3 n) and O(log(n/m^*)), where m^* is the size of the smallest grammar for that input. The latter highlights a connection between grammar-based compression and LZ77.
, 2002
"... This thesis considers the smallest grammar problem: find the smallest context-free grammar that generates exactly one given string. We show that this problem is intractable, and so our objective
is to find approximation algorithms. This simple question is connected to many areas of research. Most im ..."
Cited by 9 (0 self)
This thesis considers the smallest grammar problem: find the smallest context-free grammar that generates exactly one given string. We show that this problem is intractable, and so our objective is
to find approximation algorithms. This simple question is connected to many areas of research. Most importantly, there is a link to data compression; instead of storing a long string, one can store a
small grammar that generates it. A small grammar for a string also naturally brings out underlying patterns, a fact that is useful, for example, in DNA analysis. Moreover, the size of the smallest
context-free grammar generating a string can be regarded as a computable relaxation of Kolmogorov complexity. Finally, work on the smallest grammar problem qualitatively extends the study of
approximation algorithms to hierarchically-structured objects. In this thesis, we establish hardness results, evaluate several previously proposed algorithms, and then present new procedures with
much stronger approximation guarantees.
, 2011
"... Let S be a string of length N compressed into a contextfree grammar S of size n. We present two representations of S achieving O(log N) random access time, and either O(n · αk(n)) construction
time and space on the pointer machine model, or O(n) construction time and space on the RAM. Here, αk(n) is ..."
Cited by 9 (0 self)
Let S be a string of length N compressed into a context-free grammar S of size n. We present two representations of S achieving O(log N) random access time, and either O(n · α_k(n)) construction time and space on the pointer machine model, or O(n) construction time and space on the RAM. Here, α_k(n) is the inverse of the kth row of Ackermann's function. Our representations also efficiently support decompression of any substring in S: we can decompress any substring of length m in the same complexity as a single random access query and additional O(m) time. Combining these results with fast algorithms for uncompressed approximate string matching leads to several efficient algorithms for approximate string matching on grammar-compressed strings without decompression. For instance, we can find all approximate occurrences of a pattern P with at most k errors in time O(n(min{|P|k, k^4 + |P|} + log N) + occ), where occ is the number of occurrences of P in S. Finally, we are able to generalize our results to navigation and other operations on grammar-compressed trees. All of the above bounds significantly improve the currently best known results. To achieve these bounds, we introduce several new techniques and data structures of independent interest, including a predecessor data structure, two "biased" weighted ancestor data structures, and a compact representation of heavy-paths in grammars.
, 2004
"... Grammar-based compression algorithms infer context-free grammars to represent the input data. The grammar is then transformed into a symbol stream and finally encoded in binary. We explore the
utility of grammar-based compression of DNA sequences. We strive to optimize the three stages of grammar-ba ..."
Cited by 7 (0 self)
Grammar-based compression algorithms infer context-free grammars to represent the input data. The grammar is then transformed into a symbol stream and finally encoded in binary. We explore the
utility of grammar-based compression of DNA sequences. We strive to optimize the three stages of grammar-based compression to work optimally for DNA. DNA is notoriously hard to compress, and
ultimately, our algorithm fails to achieve better compression than the best competitor.
- Software - Practice and Experience , 2004
"... In this paper we consider the problem of DNA compression. It is well known that one of the main features of DNA sequences is that they contain substrings which are duplicated except for a few
random mutations. For this reason most DNA compressors work by searching and encoding approximate repeats. W ..."
Cited by 6 (0 self)
In this paper we consider the problem of DNA compression. It is well known that one of the main features of DNA sequences is that they contain substrings which are duplicated except for a few random
mutations. For this reason most DNA compressors work by searching and encoding approximate repeats. We depart from this strategy by searching and encoding only exact repeats. However, we use an
encoding designed to take advantage of the possible presence of approximate repeats. Our approach leads to an algorithm which is an order of magnitude faster than any other algorithm and achieves a
compression ratio very close to the best DNA compressors. Another important feature of our algorithm is its small space occupancy which makes it possible to compress sequences hundreds of megabytes
long, well beyond the range of any previous DNA compressor.
- In Proc. of the IEEE Data Compression Conference, 53–62 , 2007
"... ..."
- Proc. 25th Australasian Computer Science Conference , 2002
"... Recently several oJfline data compression schemes have been published that expend large amounts of computing resources when encoding a file, but decode the file quickly. These compressors work
by identifying phrases in the input data, and storing the data as a series of pointer to these phrases. Thi ..."
Cited by 5 (0 self)
Recently several offline data compression schemes have been published that expend large amounts of computing resources when encoding a file, but decode the file quickly. These compressors work by identifying phrases in the input data, and storing the data as a series of pointers to these phrases. This paper explores the application of an algorithm for computing all repeating substrings within a string for phrase selection in an offline data compressor. Using our approach, we obtain compression similar to that of the best known offline compressors on genetic data, but poor results on general text. It seems, however, that an alternate approach based on selecting repeating substrings is feasible. Keywords: strings, offline data compression, textual substitution, repeating substrings
- in Proc. 10th International Symp. on String Processing and Information Retrieval (SPIRE'03), 2003
"... Abstract. Given a text, grammar-based compression is to construct a grammar that generates the text. There are many kinds of text compression techniques of this type. Each compression scheme is
categorized as being either off-line or on-line, according to how a text is processed. One representative ..."
Cited by 3 (3 self)
Abstract. Given a text, grammar-based compression is to construct a grammar that generates the text. There are many kinds of text compression techniques of this type. Each compression scheme is categorized as being either off-line or on-line, according to how a text is processed. One representative tactic for off-line compression is to substitute the longest repeated factors of a text with a production rule. In this paper, we present an algorithm that compresses a text based on this longest-first principle, in linear time. The algorithm employs a suitable index structure for a text, and involves technically efficient operations on the structure.
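The longest-first tactic is easy to prototype; the sketch below substitutes the longest non-overlapping repeated factor with a fresh nonterminal, using a naive search in place of the paper's linear-time index structure:

def longest_repeat(s):
    # longest factor with two non-overlapping occurrences (naive; a suffix
    # tree or suffix array makes this linear time)
    best = ""
    for i in range(len(s)):
        for j in range(i + len(best) + 1, len(s) + 1):
            if s.find(s[i:j], j) != -1:
                best = s[i:j]
            else:
                break
    return best

def greedy_grammar(s):
    # repeatedly replace the longest repeated factor by a new nonterminal
    rules, next_id = {}, 0
    while True:
        rep = longest_repeat(s)
        if len(rep) < 2:
            break
        nt = chr(0x2460 + next_id)   # fresh symbols, assumed absent from the text
        next_id += 1
        rules[nt] = rep
        s = s.replace(rep, nt)
    return s, rules

print(greedy_grammar("abcabcabcabc"))   # ('①①', {'①': 'abcabc'})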
- in "International Journal of Foundations of Computer Science (IJFCS)", 2010. Symbiose 31
"... Abstract. Motivated by grammatical inference and data compression applications, we propose an algorithm to update a suffix array after the substitution, in the indexed text, of some occurrences
of a given word by a new character. Compared to other published index update methods, the problem addresse ..."
Cited by 3 (0 self)
Abstract. Motivated by grammatical inference and data compression applications, we propose an algorithm to update a suffix array after the substitution, in the indexed text, of some occurrences of a
given word by a new character. Compared to other published index update methods, the problem addressed here may require the modification of a large number of distinct positions over the original
text. The proposed algorithm uses the specific internal order of suffix arrays in order to update simultaneously groups of entries, and ensures that only entries to be modified are visited.
Experiments confirm a significant execution-time speed-up compared to constructing the suffix array from scratch at each step of the application.
"... Several diagnostic tracing techniques (e.g., event, power, and control-flow tracing) have been proposed for run-time debugging and postmortem analysis of wireless sensor networks (WSNs). Traces
generated by such techniques can become large, defying the harsh resource constraints of WSNs. Compression ..."
Cited by 2 (1 self)
Several diagnostic tracing techniques (e.g., event, power, and control-flow tracing) have been proposed for run-time debugging and postmortem analysis of wireless sensor networks (WSNs). Traces
generated by such techniques can become large, defying the harsh resource constraints of WSNs. Compression is a straightforward candidate to reduce trace sizes, yet is challenged by the same resource
constraints. Established trace compression algorithms perform unsatisfactorily under these constraints. We propose Prius, a novel hybrid (offline/online) trace compression technique that enables
application of established trace compression algorithms for WSNs and achieves high compression rates and significant energy savings. We have implemented such hybrid versions of two established
compression techniques for TinyOS and evaluated them on various applications. Prius respects the resource constraints of WSNs (5% average program memory overhead) whilst reducing energy consumption on average by 46% and 49% compared to straightforward online adaptations of established compression algorithms and the state-of-the-art trace-specific compression algorithm, respectively.
Context-free grammar
June 27th 2010, 04:04 PM
Context-free grammar
Develop context-free grammars that generate languages:
L1 = {w|w is palindrome in {a,b}^* , w=w^*}
L2 = {ww^* | w is word of {a,b}^*}
My attempt
S -> aSa | bSb | E | a | b
E denotes the empty string
Correct ?
2) I could not. Any tips?
June 28th 2010, 11:26 AM
2) I think it is (ww)^*, which means a word whose first half, read from the beginning to the middle, is the same as its second half. But I cannot generate the language
June 28th 2010, 02:37 PM
1) The syntax of L1 = {w|w is palindrome in {a,b}^* , w=w^*} does not seem right. How can w = w^*? My guess is that one of the following is meant:
(1a) L1 = {w|w is palindrome in {a,b}^*}
Then I think your answer would be correct except that empty strings are not considered palindromes. So I would modify as follows:
S -> aSa | bSb | a | b
(1b) L1 = {w|u is palindrome in {a,b}^* , w=u^*}
Consider that every single character is a palindrome, for example "a" is a palindrome, and so is "b." So this is just all strings over {a,b}^*.
S -> Sa | Sb | E
(1c) L1 = {w*|w is palindrome in {a,b}^*}
This is equivalent to (1b).
2) With similar reasoning to above, I believe this language is all non-empty strings over {a,b}^*. So I get
S -> Sa | Sb | a | b
June 28th 2010, 04:41 PM
OK. Thank you
And the language:
L3 = {w | w regular expression is about {x} }
Which means what: that it is a regular expression over {x}? Can I generate a context-free grammar for it?
June 28th 2010, 06:42 PM
I'm not sure exactly what this means, but I guess it means regular expressions with an alphabet consisting of the single character x, such as x? , x* , x+ , x | x, etc. There isn't a whole lot of
variety, and the most general regex is x*. So I would just write
S -> Sx | E
Maybe check with your book or notes to see if there is some other definition given though.
June 28th 2010, 07:57 PM
Thanks. I think you're right
June 29th 2010, 05:46 AM
I found another language I do not understand
L4 = {w|w is word of {x,y,(,)}^* with balanced parentheses}
What "with balanced parentheses" ?
June 29th 2010, 06:21 AM
() balanced
(()) balanced
()() balanced
(()()) balanced
(() not balanced
)( not balanced
(())) not balanced
potentially useful link.
June 29th 2010, 06:53 AM
Then it is just that:
S -> x | y | () | (S) | E
Correct ?
June 29th 2010, 08:00 AM
June 29th 2010, 08:15 AM
S -> xS | yS | ()S | (S) | E
I had not noticed. Now I think this is correct. Is this right?
June 29th 2010, 08:35 AM
June 29th 2010, 09:12 AM
Now I can generate it:
S -> xS | yS | ()S | (S) | SS | E
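As a quick sanity check of that last grammar (throwaway Python, with E written as the empty string; the pruning bounds are rough but safe for short strings):

from itertools import product

RULES = ["xS", "yS", "()S", "(S)", "SS", ""]   # S -> xS | yS | ()S | (S) | SS | E

def generate(max_len):
    """All terminal strings of length <= max_len derivable from S (brute force)."""
    frontier, seen, out = {"S"}, set(), set()
    while frontier:
        nxt = set()
        for form in frontier:
            i = form.find("S")
            if i == -1:
                out.add(form)
                continue
            for rhs in RULES:
                new = form[:i] + rhs + form[i + 1:]
                if (len(new.replace("S", "")) <= max_len     # not too many terminals
                        and new.count("S") <= max_len + 1    # not too many open S's
                        and new not in seen):
                    seen.add(new)
                    nxt.add(new)
        frontier = nxt
    return out

def balanced(w):
    depth = 0
    for ch in w:
        if ch == "(":
            depth += 1
        elif ch == ")":
            depth -= 1
            if depth < 0:
                return False
    return depth == 0

N = 4
ref = {"".join(p) for L in range(N + 1)
       for p in product("xy()", repeat=L) if balanced("".join(p))}
print(generate(N) == ref)   # True: the grammar matches the spec up to length 4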
June 29th 2010, 09:29 AM
CACR Wind Surge
Wind Surge in a Basin
Strong winds on the surface of a body of water will cause the water to pile up against the downwind side, due to the stresses exerted on the water surface. This applet illustrates this effect for a
water body with the wind blowing directly across it (here left to right). The body of water for this applet is either a closed basin (such as a lake: choose Upwind Boundary: closed) or a constant
depth continental shelf, open to the deep ocean at the upwind end--choose Upwind Boundary: Open. The important variables are the length of the basin (shelf), the still water depth, and the wind
speed. For the case of the basin, at the downstream side, a wall (in red) exists to keep the water in the basin. If it is too low, water will escape the basin and the maximum surge in the basin is
equal to the wall height. In the case of a shelf (the open boundary), this wall will be flooded and the surge at the shoreline will exceed the wall.
Press the Calculate Button to determine the surge. You can edit the variables and recalculate. (Try 120 km/hr wind.) The output variables eta(0) and eta(l) are the deviations from still water level at
the upwind and downwind ends of the basin respectively, in meters. The other outputs are c and xstar. xstar appears for the basin case when the bottom of the basin is blown dry in a strong wind.
xstar denotes the location of the edge of the water.
The equation for the water behavior is given in Dean and Dalrymple, Water Wave Mechanics for Engineers and Scientists, chapter 5, Eq. 5.96: in the steady state the water surface slope balances the applied wind stress,

d(eta)/dx = n tau_s / (rho g (h + eta)),

where h is the still water depth, n is a factor accounting for bottom stress, and the surface wind stress tau_s grows as the square of the wind speed through a dimensionless coefficient of order 10^(-6).
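As a rough cross-check of the applet's closed-basin case, the setup equation above can be integrated numerically. The following Python sketch is illustrative only: the stress coefficient, the bottom-stress factor, and the single-pass volume correction are our assumptions, and neither overtopping nor a dry bed is handled.

import numpy as np

RHO_W = 1025.0   # water density, kg/m^3
G = 9.81
KAPPA = 3.3e-6   # dimensionless surface-stress coefficient (assumed)
N_FAC = 1.15     # factor accounting for bottom stress (assumed)

def wind_setup_closed_basin(length_m, depth_m, wind_ms, npts=2001):
    """Integrate d(eta)/dx = n*tau_s/(rho*g*(h+eta)) across a closed basin,
    then shift eta so the total water volume is conserved."""
    tau_s = KAPPA * RHO_W * wind_ms**2            # surface wind stress, Pa
    x = np.linspace(0.0, length_m, npts)
    dx = x[1] - x[0]
    eta = np.zeros(npts)
    for i in range(1, npts):                      # forward Euler from eta(0)=0
        eta[i] = eta[i-1] + dx * N_FAC * tau_s / (RHO_W * G * (depth_m + eta[i-1]))
    eta -= eta.mean()                             # single-pass mass correction
    return x, eta

x, eta = wind_setup_closed_basin(50e3, 5.0, 120/3.6)  # 50 km, 5 m deep, 120 km/hr
print(f"eta(0) = {eta[0]:.2f} m, eta(l) = {eta[-1]:.2f} m")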
If the surge elevation is not greater than the downwind wall, the total amount of water in the basin is the same before and during the wind. If overtopping occurs, there is less water in the basin.
Further, for strong winds, the bottom can be exposed on the upwind side.
Note that this figure is distorted. The horizontal extent of the figure is the basin length and the vertical extent is the total of the basin still water depth and the wall height that you specified.
If the wind stops suddenly, the basin will then probably seiche, with the water rocking back and forth in the basin. Try using the Seiche calculator to examine this subsequent behavior of the water.
Type in your basin geometry (note length of basin in Seiche is meters, not kilometers) and use Modal Number =1.
Problem: Assess the effect of varying the windspeed on the surge elevations for a given basin geometry. Do the same with the water depth and basin length for a fixed wind speed. Plot your results.
Problem: Examine the influence of the end wall. Find a basin size and wind speed such that a given wall height is overtopped (note: if overtopping occurs, the word 'overtopping' is written next to
the eta(l) value.) Then, increase the wall elevation until overtopping stops. Explain the difference in results.
Note: The case of a constant depth continental shelf with an offshore wind can be examined with the applet, by considering the wall (height set to zero) in the closed basin case as the edge of the
continental shelf and the shoreline would be on the left side of the figure. xstar denotes the distance offshore to the edge of the sea.
Comments: Robert A. Dalrymple
Center for Applied Coastal Research
University of Delaware, Newark DE 19716
Improving consensus structure by eliminating averaging artifacts
BMC Struct Biol. 2009; 9: 12.
Improving consensus structure by eliminating averaging artifacts
Common structural biology methods (e.g., NMR and molecular dynamics) often produce ensembles of molecular structures. Consequently, averaging of 3D coordinates of molecular structures (proteins and
RNA) is a frequent approach to obtain a consensus structure that is representative of the ensemble. However, when the structures are averaged, artifacts can result in unrealistic local geometries,
including unphysical bond lengths and angles.
Herein, we describe a method to derive representative structures while limiting the number of artifacts. Our approach is based on a Monte Carlo simulation technique that drives a starting structure
(an extended or a 'close-by' structure) towards the 'averaged structure' using a harmonic pseudo energy function. To assess the performance of the algorithm, we applied our approach to Cα models of
1364 proteins generated by the TASSER structure prediction algorithm. The average RMSD of the refined model from the native structure for the set becomes worse by a mere 0.08 Å compared to the
average RMSD of the averaged structures from the native structure (3.36 Å for the refined structures and 3.28 Å for the averaged structures). However, the percentage of atoms involved in clashes is greatly reduced (from 63% to 1%); in fact, the majority of the refined proteins had zero clashes. Moreover, a small number (38) of refined structures resulted in lower RMSD to the native protein versus the averaged structure. Finally, compared to PULCHRA [1], our approach produces representative structures of similar RMSD quality, but with far fewer clashes.
The benchmarking results demonstrate that our approach for removing averaging artifacts can be very beneficial for the structural biology community. Furthermore, the same approach can be applied to
almost any problem where averaging of 3D coordinates is performed. Namely, structure averaging is also commonly performed in RNA secondary structure prediction [2], which could also benefit from our approach.
Methods for the experimental or theoretical determination of protein structures often output their results as an ensemble. In the case of experimental data like X-ray crystallography data, the
ensemble represents both the conformational diversity and the inability to resolve the temporal and spatial aspects of the experiment, whereas in the case of computational experiments the ensemble in part represents the uncertainty of interpreting the data [3]. However, there is often a requirement for a single consensus structure. One way to generate this 'consensus' or 'representative
structure' is to calculate the centroid structure by averaging the Cartesian coordinates of the ensemble of superimposed structures.
A series of computational and experimental studies have been performed to rationalize the averaging methodology. Zagrovic et al. [4] proposed the "mean-structure hypothesis" which states that the
geometry of the collapsed unfolded state of small peptides and proteins in an average sense corresponds to the geometry of the native structure at equilibrium. Huang et al. [5] have shown that finding the "averaged structure" from a set of decoys yields structures that are closer to the native structure than most individual structures. Moreover, Zagrovic et al. [6] have shown mathematically that the RMSD between the "averaged structure" and the native structure is smaller than that of most individual structures. Furthermore, it has also been argued that computing average distance matrices and using distance-based root mean square deviation as a metric may be one way to capture the relevant features of ensembles of structures and compare them with other reference structures.
Unlike point-based averaging, where each member is a point, in the averaging of structures the "averaged model" often has unrealistic local geometry, including unphysical bond lengths and angles. In this
regard, several methods have been developed to remove averaging artifacts. Due to the process of protein structure prediction, methods to remove averaging artifacts are most commonly developed in
this context. The 'regularize' function of REFMAC [7] can be used to regularize the bonds and angles. Furthermore, Betancourt and Skolnick [8] developed a clustering approach, called SCAR, that uses
a harmonic potential to refine centroid structures. However, structure prediction results indicate that SPICKER [9] outperforms SCAR in terms of model selection. Furthermore, it has been shown [10]
that 'the models generated by TASSER [11] have incorrect side-chain conformations and poor hydrogen bonding patterns partly because of the on-lattice modelling and the unphysical geometry of the
SPICKER [9] cluster centroid structure'. PULCHRA [1], which combines a conjugate gradient search with a harmonic potential, supersedes both SCAR and SPICKER. Similarly, Kolinski and Bujnicki [12] have introduced an elegant approach using a combination of template-based and de novo modelling followed by hierarchical clustering that employed averaging of very diverse models from threading, which results in consensus structures with improved local and global quality.
In general, averaging artifacts become more pronounced when members of the ensemble are more divergent. This artifact is exacerbated in TASSER due to the fact that it begins with a lattice model and
averaging is performed across clusters of dissimilar structures. Consequently, the averaged structure is often not suitable for detailed atomic model building due to unrealistic bond lengths and
angles and unphysical local geometry. In the same vein, the community-wide experiment on the Critical Assessment of Techniques for Protein Structure Prediction (CASP) also penalizes structures with
unphysical bond lengths and unrealistic geometries. CASP defines two types of clashes based on interatomic distance. The first type of clash involves atoms that are less than 1.9 Å apart, and the other type of
clash involves atoms that are less than 3.6 Å apart. We adopt these criteria in what follows.
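For concreteness, these clash statistics can be computed directly from Cα coordinates. The following is a minimal sketch; skipping sequence-adjacent pairs as bonded, and counting per-atom involvement, are our reading of the criterion rather than CASP's exact definition:

import numpy as np

def clash_fraction(xyz, cutoff):
    """Fraction of Cα atoms involved in at least one pair closer than
    `cutoff` Å; pairs adjacent in sequence are skipped as bonded."""
    n = len(xyz)
    involved = set()
    for k in range(n):
        for l in range(k + 2, n):
            if np.linalg.norm(xyz[k] - xyz[l]) < cutoff:
                involved.update((k, l))
    return len(involved) / n

# e.g. severe and mild clash fractions for a model `coords` (N x 3 array):
# severe, mild = clash_fraction(coords, 1.9), clash_fraction(coords, 3.6)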
Herein, we apply the proposed algorithm for removing averaging artifacts from clusters of structures generated by the TASSER (Threading/ASSembly/Refinement) algorithm [11]. Within TASSER, generated
structures are clustered using SPICKER [9] and the cluster with the highest structural density is selected (this is still true in the current version of TASSER [10]). Subsequently, the centroid model
(called COMBO) that is obtained by averaging all the cluster members of the most densely populated cluster is selected as the predicted structure. In various benchmarks of the TASSER algorithm [13],
the averaged structure (aka, the COMBO model) is generally closest to the native in terms of global RMSD. It is closer to the native structure than all the individual cluster members, including the
medoid (CLOSC model). Hence, TASSER outputs the cluster centroid (COMBO model) as its final model. In this regard, averaged models have also been shown to outperform minimum free energy structures in
the context of RNA secondary structure prediction [2].
Our goal is to generate a structure that is as close as possible to the 'averaged structure' while maintaining realistic bond lengths and angles and local geometry. Unless otherwise stated, the term
bond length refers to 'virtual bond length' between two Cα atoms throughout this report, and bond angle refers to 'virtual bond angle' between any three consecutive Cα atoms. To address this issue,
we have developed a new algorithm, MCORE (Monte CarlO based REfinement) that is designed to generate such structures by minimizing the difference between the 'averaged' and the physically reasonable
structure using a Monte Carlo minimization procedure. We show that our approach is robust and general and can overcome averaging artifacts with minimal reduction of structure quality as assessed by
the RMSD of the resulting model from the native structure. Once the refined Cα model is obtained, then approaches like the one based on Backbone Building from Quadrilaterals proposed by Gront et al.
[14] can be used to complete backbone reconstruction.
The central idea behind our approach is to start from a structure that has physically allowed bond lengths and then minimize the difference between this starting structure and the averaged structure.
In this respect, our methodology consists of two basic components: (1) generation of the starting structure and (2) minimization of this starting structure in the presence of the averaged structure.
Starting Structure
We explore two types of starting structures: (1) a fully extended structure with bond lengths corresponding to the average bond length obtained from the PDB and all φ and ψ = 180°, and (2) a model that is close to the 'averaged structure' but has physically reasonable bond lengths and angles, which we call the 'close-by model'. A typical model of this type is the structure that is closest (based on RMSD) to the 'averaged structure' in an ensemble of proteins. In the case of TASSER, CLOSC models fall into this category. In the case where two structures have the same RMSD to the averaged structure, one of them is chosen at random. Extended structures are required when no 'close-by model' is available.
Energy Function
The pseudo-energy potential, V, in our algorithm is presented in equation (1). The potential consists of three components: a harmonic term for excluded-volume violations, a harmonic term for virtual bond angle violations, and a third term that drives the conformation towards the target structure. Thus, V is given by

$$V = k_{excl}\sum_{k<l}\left(r_{kl}-r_{0\_excl}\right)^{2} + k_{ang}\sum_{i=1}^{N-2}\left(\theta_{i,i+1,i+2}-\theta_{0\_ang}\right)^{2} + k_{clos}\sum_{k=1}^{N}\left(d_{k}^{t}-d_{0\_clo}\right)^{2}\qquad(1)$$

where N is the number of Cα atoms and k_excl, k_ang, k_clos are the weights of the corresponding contributions to V. r_kl is the distance between the k-th and l-th Cα atoms; r_0_excl is the cutoff parameter for excluded-volume violations and is set to 4 Å if r_kl < 4.0 Å, otherwise r_0_excl is set equal to r_kl (that is, the contribution is turned off). θ_{i,i+1,i+2} is the virtual bond angle formed by the i-th, (i+1)-st and (i+2)-nd Cα atoms; θ_0_ang is the cutoff angle, set to 70° if θ_{i,i+1,i+2} < 70°, to 150° if θ_{i,i+1,i+2} > 150°, and to θ_{i,i+1,i+2} otherwise. d_k^t is the distance between the k-th Cα atom of the current conformation and the corresponding Cα atom of the target structure, and d_0_clo is the maximum allowed displacement between corresponding Cα atoms, set to 0.001 Å if d_k^t > 0.001 Å, and to d_k^t otherwise. The parameter values are chosen to be close to those of Oldfield et al. [15]. The values of k_clos, k_excl and k_ang are chosen to be 1.0831, 0.56818 and 0.015, respectively, on the basis of an optimization with MINUIT [16] that maximizes the correlation between the energy function and the RMSD to the native structure, followed by manual adjustment based on empirical observation, over the set of 726 proteins used for training as described in the Data Set. RMSD values are measured on Cα atoms in all cases except where specified.
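A minimal numpy sketch of this pseudo-energy, using the weights quoted above (the function and variable names are ours, and skipping sequence-adjacent pairs in the excluded-volume sum is our assumption, since the ~3.8 Å virtual bond length sits below the 4 Å cutoff):

import numpy as np

K_CLOS, K_EXCL, K_ANG = 1.0831, 0.56818, 0.015   # weights from the text
R0_EXCL, THETA_MIN, THETA_MAX, D0_CLO = 4.0, 70.0, 150.0, 0.001

def bond_angles(xyz):
    """Virtual bond angles (degrees) for consecutive Cα triplets."""
    u = xyz[:-2] - xyz[1:-1]
    v = xyz[2:] - xyz[1:-1]
    cos = np.einsum('ij,ij->i', u, v) / (
        np.linalg.norm(u, axis=1) * np.linalg.norm(v, axis=1))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

def pseudo_energy(xyz, target):
    """V of equation (1) for an N x 3 array of Cα coordinates."""
    # excluded volume: penalize non-bonded Cα pairs closer than 4 Å
    d = np.linalg.norm(xyz[:, None] - xyz[None, :], axis=-1)
    iu = np.triu_indices(len(xyz), k=2)
    v_excl = K_EXCL * np.sum(np.minimum(d[iu] - R0_EXCL, 0.0) ** 2)
    # virtual bond angles: penalize angles outside [70°, 150°]
    th = bond_angles(xyz)
    v_ang = K_ANG * (np.sum(np.minimum(th - THETA_MIN, 0.0) ** 2)
                     + np.sum(np.maximum(th - THETA_MAX, 0.0) ** 2))
    # closeness: pull each Cα toward its counterpart in the target structure
    dt = np.linalg.norm(xyz - target, axis=1)
    v_clos = K_CLOS * np.sum(np.maximum(dt - D0_CLO, 0.0) ** 2)
    return v_excl + v_ang + v_clos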
Move Sets
Another important aspect of a Monte Carlo simulation is the move set that takes the structure from the current conformation to the next one. The choice of move set is critical to the performance of the simulation. We have designed two types of move sets, one local and one global; both preserve the initial bond lengths. A schematic overview of both is depicted in Figure 1. There are two types of local moves: i) one- to five-bead moves that preserve the geometry of the chain outside the fragment whose conformation is changed, and ii) one- to four-bead moves at both ends of the chain. In both cases, the geometry of the chain outside the targeted fragment is preserved. The global move involves a rotation of the chain, which for the i-th residue is a rotation about the bond joining the (i−1)-th and i-th residues. A given Monte Carlo step consists of N−k−1 attempts at a k-bead move (where k = 1 to 5), plus five attempts at each of the l-bead N-terminal and l-bead C-terminal moves (where l = 1 to 4), and one attempt at a global reorientation move. Attempt locations are, of course, randomly chosen.
Schematic diagram of move sets. Illustration of different move sets. The circles represent Cα atoms. The axis joining two black circles in each figure represents the axis of rotation of all other
involved atoms. The solid line represents the orientation ...
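The text does not give explicit formulas for these moves; one plausible realization of a bond-length-preserving local move is a crankshaft rotation of the targeted beads about the axis through their fixed neighbors (a sketch with invented names, not necessarily the authors' implementation):

import numpy as np

def crankshaft_move(xyz, i, k=1, max_angle=0.2, rng=np.random.default_rng()):
    """Rotate beads i..i+k-1 about the axis through beads i-1 and i+k.
    A rigid rotation, so all bond lengths inside and outside are preserved."""
    a, b = xyz[i - 1], xyz[i + k]
    axis = (b - a) / np.linalg.norm(b - a)
    phi = rng.uniform(-max_angle, max_angle)
    c, s = np.cos(phi), np.sin(phi)
    new = xyz.copy()
    for j in range(i, i + k):
        p = xyz[j] - a
        # Rodrigues' rotation formula about 'axis'
        new[j] = a + p * c + np.cross(axis, p) * s + axis * np.dot(axis, p) * (1 - c)
    return new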
We performed computational experiments on the set of 726 proteins described below, in which an extended structure (with bond lengths identical to those of the native structure from which it was generated) was driven towards the corresponding native structure using the algorithm described above. A snapshot of energy vs. number of steps for this set of experiments is shown in Figure 2. The average Cα RMSD of the proteins to their respective native structures for a relatively short (1000 steps) run was 0.06 Å.
Snapshot of energy vs. number of steps. Energy vs. number of Monte Carlo steps for driving the corresponding extended structure to its native structure, for the set of 726 proteins defined in the Data Set.
Convergence Criteria
There are no straightforward convergence criteria for Monte Carlo simulations (MCS). However, two obvious convergence criteria are: (1) allowing a pre-specified total number of steps and (2) allowing the algorithm to proceed until it ceases to make progress. Herein, we use both. Starting from the 726 extended structures, the average final RMSD for a 2000-step run was 0.05 Å; hence, we chose 2000 steps as the fixed length for our simulation. Furthermore, we also devised a mechanism to stop the algorithm when it ceases to make progress. We define that the algorithm ceases to make progress after step j if the following criterion is satisfied for every i with 1 ≤ i ≤ n:

RMSD_j − RMSD_{j−i} < T,   (2)

where RMSD_j is the RMSD of the conformation after j steps, RMSD_{j−i} is the RMSD of the conformation after j−i steps, and T is the tolerance cutoff. Once the step j at which the algorithm ceases to make progress is reached, the simulation is run for an extra x steps. In other words, the simulation is stopped after j+x steps if the RMSD over the last n steps remains within the tolerance region relative to its value at step j. The values of n, x and T were chosen empirically to be 50, 10 and 0.05, respectively. Monte Carlo simulations were performed using the above move set and the standard Metropolis criterion at a temperature of 450 K [17].
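Putting the pieces together, the Metropolis loop with the plateau stopping rule of equation (2) might be sketched as follows. This reuses pseudo_energy and crankshaft_move from the sketches above; the temperature-to-energy conversion is notional, since the text does not specify the energy units, and we read equation (2) as a bound on the absolute RMSD change:

import numpy as np

def rmsd(a, b):
    return np.sqrt(np.mean(np.sum((a - b) ** 2, axis=1)))

def mcore_loop(xyz, target, temperature=450.0, n_win=50, extra=10, tol=0.05,
               max_steps=2000, rng=np.random.default_rng()):
    kT = 0.0019872 * temperature        # kcal/(mol*K) * K -- assumed units
    e = pseudo_energy(xyz, target)
    history, stop_at = [], None
    for step in range(max_steps):
        i = int(rng.integers(1, len(xyz) - 1))
        trial = crankshaft_move(xyz, i, rng=rng)
        e_new = pseudo_energy(trial, target)
        if e_new <= e or rng.random() < np.exp(-(e_new - e) / kT):
            xyz, e = trial, e_new       # Metropolis acceptance
        history.append(rmsd(xyz, target))
        # stop `extra` steps after the RMSD has been flat for n_win steps
        if stop_at is None and len(history) > n_win and all(
                abs(history[-1] - history[-1 - j]) < tol
                for j in range(1, n_win + 1)):
            stop_at = step + extra
        if stop_at is not None and step >= stop_at:
            break
    return xyz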
Data Set
To verify the applicability of the MCORE algorithm, we use it to remove the averaging artifacts in the output of the TASSER algorithm. The data set used for this study consists of 2090 non-homologous single-domain proteins with fewer than 200 residues and a maximum of 35% pairwise sequence identity to each other that cover the Protein Data Bank. All of these proteins have an initial RMSD of the COMBO model (averaged model) against the native protein of less than 6.5 Å; this follows from the fact that predicted models within about 6.0 Å of the native structure are likely to have the same fold as the native structure [18]. In addition, from the TASSER outputs we have the corresponding COMBO and CLOSC structures for each of these proteins. Out of the 2090 proteins, 726 are used for training the model parameters, whereas the remaining 1364 are used for validation. All root mean square deviation (RMSD) values refer to Cα atom comparisons unless otherwise stated.
Results and discussion
Comparison of Two Types of Starting Structures
For the comparison between the two types of starting structure schemes (extended structure and 'close-by' model), we performed computational experiments on the test set of 1364 proteins described in the Data Set. For each starting structure scheme, we ran our algorithm for 100, 200 and 2000 steps (the results are presented in Table 1). It can be observed from the table that starting from close-by models produces better results in all three regimes (i.e., 100, 200 and 2000 steps) relative to the extended models. Hence, for the comparison of our method with the CLOSC and COMBO models, we use the close-by starting scheme. The version of the algorithm with the close-by starting scheme that uses the convergence criterion described in equation (2) is termed MCORE, whereas the version with a fixed number of steps (= 2000) using the same close-by starting scheme is termed MCORE-L.
Comparison of results for two types of starting structures^1
Refinement of COMBO Models
Before comparing the results of the refined models, we also take an opportunity to analyze the RMSD to native of the 1364 COMBO and CLOSC models. Across the dataset, only 100 CLOSC models had lower
RMSD values relative to the native structure compared to the COMBO models, reiterating the advantage of averaged structures in this regard. The average RMSD of COMBO model to the native structure is
3.28 Å, whereas the average RMSD of CLOSC model to the corresponding native structure is 3.55 Å.
Upon application of the MCORE algorithm to refine the COMBO models, it is found that the refined representative structures have similar RMSD values to the native structures, but with far fewer
unphysical characteristics. Figure 3(a) plots the RMSD values of the MCORE to native comparisons versus the COMBO to native comparisons, which demonstrates a strong linear correlation between the two
methods. The RMSD values of the COMBO models are only slightly better than those from MCORE. This point is reinforced by Figure 3(b), which plots the density of RMSD differences between the methods.
The majority of RMSD differences are slightly less than 0.5 Å. In addition, 38 MCORE refined structures had even better RMSD than their corresponding COMBO model. Figure 3(c) plots the fraction of clashes in the MCORE vs. COMBO models. Clearly, the MCORE models have far fewer clashes than their COMBO counterparts. The average percentage of clashes in MCORE refined models is 1.09%, whereas
the average percentage of clashes in COMBO models is 63.0%.
Comparison of COMBO models and MCORE models. (a) Scatter plot of the Cα RMSD of combo models and respective MCORE refined models for a set of 1364 proteins compared to corresponding native structure
(b) Density plot of distribution of RMSD deviations ...
We also compared the RMSD (to native) of the MCORE refined COMBO models to that of the unrefined CLOSC models. Here, it was observed that only 99 of 1364 MCORE refined models had poorer RMSD values
than the corresponding CLOSC models. The average RMSD for MCORE models was 3.36 Å. In addition, for MCORE-L, we were able to obtain an average RMSD of 3.35 Å. Based on the much reduced compute time
of the MCORE algorithm (discussed below), it is satisfying to note that the average RMSDs of the MCORE-L and MCORE models are virtually the same. These results are summarized in Table 2.
Cα RMSD of MCORE and other models compared to the native structure for the set of 1364 proteins, in terms of RMSD, TM-score and percentage of atoms involved in clashes^2.
Overall, it is found that MCORE produces better models, in terms of both RMSD and TM-score [19], than the corresponding CLOSC models. Moreover, the MCORE models are only slightly worse than the averaged COMBO models, which is consistent with our initial problem statement. Note that the larger the TM-score, the better the model. Moreover, if we separate clashes into the two CASP types, it is
more evident that our refined models are much better in terms of clashes as they do not have any atoms involved in clashes that are less than 1.9 Å, whereas COMBO models have 4.5% of the atoms
involved in this regime of clashes.
We also investigated the average number of steps and the running time of the MCORE and MCORE-L algorithms. It was observed that for MCORE the average number of Monte Carlo steps is below 110 (109.21) and
the average running time is 1.88 minutes. Moreover, for MCORE-L, the average running time is 20 minutes. Hence, MCORE can be applied to a wide variety of problems concerning the averaging of
macromolecular structures due to its fast execution time.
We also analyzed some representative proteins that have a high RMSD deviation from the COMBO structure. In Figure 4(a) we present the COMBO model and the native structure, and in Figure 4(b) the MCORE model and the native structure, of protein 1QLE (chain D), which had the largest RMSD deviation from the COMBO structure. To highlight the differences, we magnify the N-terminus region of Figure 4(a) in 4(c) and of 4(b) in 4(d), respectively. As can be seen in the figure, the N-terminus region of the cytochrome C oxidase in the COMBO model is totally unphysical, hence the large RMSD deviation between the MCORE and COMBO models. We also analyzed the virtual bond distances in the model and found two bonds shorter than 0.8 Å and nine bonds shorter than 2.5 Å. We also analyzed other representative structures. As suspected, we found that in most cases where there was a large deviation between the COMBO RMSD and the MCORE RMSD, unphysical bond lengths were involved, which reiterates that there is a trade-off between local geometric correctness and deviation from the target structure.
Representative Rasmol view of a PDB (1QLE:D) where the RMSD of the MCORE model and the corresponding COMBO model was the highest. a) The COMBO model compared to the native structure. b) Refined model compared to the native structure, c) magnified N-terminal ...
Comparison with PULCHRA
For the comparison of the MCORE algorithm with existing approaches, we also compared our results with PULCHRA [1] refinement. PULCHRA is an all-atom reconstruction method that optimizes the Cα positions using a steepest-descent minimization procedure. Figure 5(a) plots the RMSD values of the PULCHRA to native comparisons versus the MCORE to native comparisons. Figure 5(b) plots the fraction of atoms involved in clashes of less than 3.6 Å in the PULCHRA models versus the MCORE models. The average RMSD of the MCORE refined COMBO models was found to be 3.36 Å, as compared to 3.35 Å for PULCHRA. However, in terms of clashes the MCORE models on average have only 1.09% of atoms involved, whereas 3.64% of the atoms in the PULCHRA models are involved in clashes. This difference in clashes is statistically significant, as shown in Table 3 using a standard Z-test. Moreover, if we break down the clashes into those below 1.9 Å and those below 3.6 Å, it is found that the MCORE models have no clashes below 1.9 Å, whereas PULCHRA does (see Table 2). Clashes below 1.9 Å are especially severe obstacles to further refinement of the models. While the MCORE models are slightly worse than those from PULCHRA in terms of RMSD to native, by 0.01 Å, they show a statistically significant improvement in terms of clashes. Moreover, 480 (out of the 1364) MCORE models resulted in better refinement of the COMBO model versus PULCHRA. In addition, for long runs of MCORE (MCORE-L), we were able to obtain an average RMSD of 3.35 Å, exactly the same as that obtained by PULCHRA. Moreover, the MCORE-L models had far fewer clashes than the PULCHRA models (1.2% vs. 3.64%, respectively).
Comparison of MCORE to COMBO and to PULCHRA in terms of RMSD and percentage of atoms involved in the clashes^3.
Comparison of PULCHRA models and MCORE models. (a) Scatter plot of the RMSD of PULCHRA models and respective MCORE refined models for a set of 1364 proteins (b) Scatter plot of the total number of clashes in PULCHRA models and corresponding refined models ...
Furthermore, the MCORE algorithm is comparable to PULCHRA in efficiency, with an average computation time of around a minute. One major advantage of our approach over PULCHRA is robustness: if the input structure is heavily distorted, PULCHRA may fail to converge, whereas MCORE always converges.
All-atom Model Reconstruction
It is essential to have a model with physical bond lengths and bond angles if further analysis is to be performed on the model. Since structure prediction methods often produce Cα-only models, all-atom models must be constructed from the Cα descriptions. In this regard, we built all-atom representations of the MCORE refined Cα models. Backbone atoms were first reconstructed using the method of Milik et al. [20]. Once the backbone atoms are reconstructed, any side-chain packing method [21,22] can be used to build the side-chains; we performed the side-chain reconstruction using one of the most widely used side-chain packing algorithms, SCWRL 3.0 [22]. The MCORE refined models for the set of 1364 proteins had an average all-atom RMSD of 4.19 Å (which is, of course, higher than the value of 3.35 Å for the Cα models). The PULCHRA refined all-atom models on the same dataset had a comparable average value of 4.17 Å.
In this paper, we presented MCORE, a Monte Carlo based algorithm for removing artifacts from averaged structures to improve the quality of the consensus structure. We verified the applicability of the proposed algorithm by using it to refine the COMBO models of a set of 1364 proteins generated by the TASSER algorithm, correcting unphysical bond lengths and bond angles. On average, the RMSD to native of the refined models is 3.36 Å, whereas the RMSD of the COMBO (averaged) models to native is 3.28 Å; the refined models are thus a mere 0.08 Å poorer. On the other hand, the average percentage of atoms involved in clashes is reduced from 63% (for the COMBO models) to only 1.0% in the refined MCORE models. Moreover, slight RMSD gains were obtained by using a version of the MCORE algorithm that samples longer. However, the difference between MCORE-L (the longer version) and MCORE (Table 3) is not statistically significant, emphasizing that our convergence criterion is robust.
We have also generated a framework for producing all-atom models from the Cα-only models by first reconstructing the backbone and then performing side-chain reconstruction using existing methodologies. An obvious extension of this work is to apply MCORE not only to Cα models but also to all-atom models. In essence, the new refinement algorithm helps attain structures with more physical bond lengths and bond angles by overcoming artifacts produced by the averaging of structures. It has to be noted that there is always a trade-off between local geometric
correctness and the deviation from the target structure. Generating averaged structures that are not heavily distorted can minimize this trade-off. These results provide a sound starting model for subsequent analysis of the respective protein structure using molecular mechanics force fields. In addition, this algorithm does not have convergence problems like PULCHRA (which sometimes fails to
converge if the input models are heavily distorted). Although the algorithm was tested for TASSER models only, the presented approach is general and can be applied to remove averaging artifacts
arising from averaging over any ensemble of molecular conformations.
RMSD: Root Mean Square Deviation; TASSER: Threading/ASSembly/Refinement; CASP: Critical Assessment of Techniques for Protein Structure Prediction; NMR: Nuclear Magnetic Resonance; MCORE: Monte Carlo based Refinement; COMBO model: centroid model; CLOSC model: the model closest to the centroid model.
Authors' contributions
DBKC wrote the program and carried out the experiments and authored the manuscript. All authors read and approved the final manuscript.
The author would like to acknowledge Dr. Jeffrey Skolnick at Georgia Institute of Technology for guidance and for providing the computational resources. A significant portion of the work was done while the author was at Georgia Institute of Technology. He also acknowledges Dr. Dennis R. Livesay at the University of North Carolina at Charlotte for proofreading the paper and for helpful insights. Moreover, he would like to acknowledge Dr. Adrian Arakaki, Dr. Liliana Wroblewska, Dr. Hongyi Zhou and Dr. Shashi B. Pandit at Georgia Institute of Technology for stimulating discussions. He would also like to thank Dr. James Mottonen and Luis Carlos Gonzalez for proofreading.
• Rotkiewicz P, Skolnick J. Fast procedure for reconstruction of full-atom protein models from reduced representations. Journal of Computational Chemistry. 2008;29:1460–1465. doi: 10.1002/jcc.20906. [PMC free article] [PubMed] [Cross Ref]
• Ding Y, Chan CY, Lawrence CE. RNA secondary structure prediction by centroids in a Boltzmann weighted ensemble. Rna. 2005;11:1157–1166. doi: 10.1261/rna.2500605. [PMC free article] [PubMed] [
Cross Ref]
• Furnham N, de Bakker PI, Gore S, Burke DF, Blundell TL. Comparative modelling by restraint-based conformational sampling. BMC structural biology. 2008;8:7. doi: 10.1186/1472-6807-8-7. [PMC free
article] [PubMed] [Cross Ref]
• Zagrovic B, Snow CD, Khaliq S, Shirts MR, Pande VS. Native-like mean structure in the unfolded ensemble of small proteins. Journal of molecular biology. 2002;323:153–164. doi: 10.1016/S0022-2836
(02)00888-4. [PubMed] [Cross Ref]
• Huang ES, Samudrala R, Ponder JW. Distance geometry generates native-like folds for small helical proteins using the consensus distances of predicted protein structures. Protein Sci. 1998;7
:1998–2003. doi: 10.1002/pro.5560070916. [PMC free article] [PubMed] [Cross Ref]
• Zagrovic B, Pande VS. How does averaging affect protein structure comparison on the ensemble level? Biophys J. 2004;87:2240–2246. doi: 10.1529/biophysj.104.042184. [PMC free article] [PubMed] [
Cross Ref]
• Murshudov GN, Vagin AA, Dodson EJ. Refinement of macromolecular structures by the maximum-likelihood method. Acta Crystallogr D Biol Crystallogr. 1997;53:240–255. doi: 10.1107/S0907444996012255.
[PubMed] [Cross Ref]
• Betancourt MR, Skolnick J. Finding the needle in a haystack: Educing native folds from ambiguous ab initio protein structure. Journal of Computational Chemistry. 2001;22:339–353. doi: 10.1002/
1096-987X(200102)22:3<339::AID-JCC1006>3.0.CO;2-R. [Cross Ref]
• Zhang Y, Skolnick J. SPICKER: a clustering approach to identify near-native protein folds. Journal of Computational Chemistry. 2004;25:865–871. doi: 10.1002/jcc.20011. [PubMed] [Cross Ref]
• Zhou H, Pandit SB, Lee SY, Borreguero J, Chen H, Wroblewska L, Skolnick J. Analysis of TASSER-based CASP7 protein structure prediction results. Proteins. 2007;69:90–97. doi: 10.1002/prot.21649. [
PubMed] [Cross Ref]
• Zhang Y, Arakaki AK, Skolnick J. TASSER: an automated method for the prediction of protein tertiary structures in CASP6. Proteins. 2005;61:91–98. doi: 10.1002/prot.20724. [PubMed] [Cross Ref]
• Kolinski A, Bujnicki JM. Generalized protein structure prediction based on combination of fold-recognition with de novo folding and evaluation of models. Proteins. 2005;61:84–90. doi: 10.1002/
prot.20723. [PubMed] [Cross Ref]
• Zhang Y, Devries ME, Skolnick J. Structure modeling of all identified G protein-coupled receptors in the human genome. PLoS Comput Biol. 2006;2:e13. doi: 10.1371/journal.pcbi.0020013. [PMC free
article] [PubMed] [Cross Ref]
• Gront D, Kmiecik S, Kolinski A. Backbone building from quadrilaterals: a fast and accurate algorithm for protein backbone reconstruction from alpha carbon coordinates. Journal of Computational
Chemistry. 2007;28:1593–1597. doi: 10.1002/jcc.20624. [PubMed] [Cross Ref]
• Oldfield TJ, Hubbard RE. Analysis of C alpha geometry in protein structures. Proteins. 1994;18:324–337. doi: 10.1002/prot.340180404. [PubMed] [Cross Ref]
• James F. MINUIT Function Minimization and Error Analysis. CERN Program Library Long Writeup. 1998;D506
• Metropolis N, Rosenbluth AW, Rosenbluth MN, Teller AH, Teller E. Equation-of-state calculations by fast computing machines. Journal of Chemical Physics. 1953;21:1087–1092. doi: 10.1063/1.1699114.
[Cross Ref]
• Reva BA, Finkelstein AV, Skolnick J. What is the probability of a chance prediction of a protein structure with an rmsd of 6 A? Fold Des. 1998;3:141–147. doi: 10.1016/S1359-0278(98)00019-4. [
PubMed] [Cross Ref]
• Zhang Y, Skolnick J. Scoring function for automated assessment of protein structure template quality. Proteins. 2004;57:702–710. doi: 10.1002/prot.20264. [PubMed] [Cross Ref]
• Milik M, Kolinski A, Skolnick J. Algorithm for rapid reconstruction of protein backbone from alpha carbon coordinates. Journal of Computational Chemistry. 1997;18:80–85. doi: 10.1002/(SICI)
1096-987X(19970115)18:1<80::AID-JCC8>3.0.CO;2-W. [Cross Ref]
• Dukka Bahadur KC, Tomita E, Suzuki J, Akutsu T. Protein side-chain packing problem: a maximum edge-weight clique algorithmic approach. J Bioinform Comput Biol. 2005;3:103–126. doi: 10.1142/
S0219720005000904. [PubMed] [Cross Ref]
• Canutescu AA, Shelenkov AA, Dunbrack RL., Jr A graph-theory algorithm for rapid protein side-chain prediction. Protein Sci. 2003;12:2001–2014. doi: 10.1110/ps.03154503. [PMC free article] [PubMed
] [Cross Ref]
Calculus and physics ... hahaha
Apr 17, 13 3:41 pm
For 4+2s or 5s, physics and calculus are in the curriculum. For M.Arch. 3s, they tell you that you are to take physics and/or calculus prior to the start of the program and you have to provide proof
of completing them. So be it. I did that, even though I had had calculus before and was rusty at it, to be on the safe side.
Fast forward to a-school. You get into your first structures class. In our case, in the introductory structural behavior course, the prof got into calculus at about the middle of the term, for about ONE week. NO ONE knew what the f**k was going on, and we returned to algebraic computations for problems. It seemed totally tangential to the way the course had been progressing, from its inception, through algebraic calculations. In fact, in the remaining technology sequence, we never saw calculus again. Physics wasn't even necessary, because the concepts are explained
again in these classes, though mechanics would make the first structures course seem more familiar.
At any rate, has anyone seen the use of calculus, primarily, or physics in their courses coded as architecture? I'm thinking NOT. What a joke. But what are you gonna do?
Apr 17, 13 4:02 pm
Common to make people take calc and physics in loads of fields that don't make use of it. I see it as a weed-out mechanism.
I could go on about how it's beautiful to have an appreciation for how the equations actually work before you go on and take the math for granted for the rest of your life, but I'll spare you.
Apr 17, 13 4:08 pm
"NO ONE knew what the f**k was going on"
lol, i feel confident in saying, probably 85% of those who graduated college in architecture had no clue what was going on! in fact, because of my program requirements, i took trig twice, calc twice,
structures I twice, and structures II twice, just to pass them all with a B or better.
on a positive note, i dont need auto cad to determine the length of rolled material (s=r*theta)
Apr 17, 13 4:29 pm
If you can perform the problem solving necessary to pass calculus with a B or better, the process of getting to that stage will make you a more self-disciplined and better problem solver than if you
did not take calculus. It's indirect knowledge, the process - you will just be sharper.
Apr 17, 13 4:41 pm
the process of getting to that stage will make you a more self disciplined and better problem solver than if you did not take calculus. It's indirect knowledge, the process - you will just be
Agreed. Had this discussion in college with a girl in pre-med. She had to take 1-1/2 years of calculus. She was an A student and a past valedictorian. About this, she said "It's not like I'm going to
see a patient and ask to take their derivative." Good point.
I had a rough time with calculus in undergrad. The reasons were several - not thrilled with my curricular choice and it felt weird to be moving into mathematics which I could no longer visualize,
unlike geometry, algebra 2, and trigonometry. When I took it again, it was the only thing I was doing, besides working, and I got an A. I still didn't know what to expect in structures, but since it
was all algebraic and tabular, I did well. I was surprised because structures is what intimidated me most about going off to a-school.
Apr 17, 13 5:21 pm
BTW - structures, as taught to architects, is based on algebraic formulas - SE students use calc
Apr 17, 13 6:23 pm
calculus was taken out of the curriculum just when i started archi school back in the dark ages in canada. we still had numerical mathematics and physics and anyway most of us had calc in high school
so nobody cared one way or the other. structure course was just statics so pretty easy. i enjoyed it.
here in japan the math is still pretty rigorous, but then again the license here means you stamp your own engineering calculations for building approval, if you want to. would be interesting if
north america went to that system too. sure would make better sense of the curriculum.
Apr 17, 13 7:07 pm
I barely passed structures xDD I think it's just there for the odd few who make something of it in the real world. For the rest, it's just there to flex one's brain.
Apr 17, 13 7:25 pm
here in japan the math is still pretty rigorous, but then again the license here means you stamp your own engineering calculations for building approval, if you want too. would be interesting if
north america went to that system too. sure would make better sense of the curriculum.
I don't know. Again, there's too much variability in American a-schools in their curricula. Some have 2 classes, where you barely dip your toes in, and some have 4 classes, with a separate class for
each major building material (steel, concrete, even wood) beyond an introductory one in material behavior. Even with that, one visits beam sizing, column sizing, seismic, et al. only once. For
an architect to do such work, the curriculum would have to be overly beefed up at the expense of a broad base and would require even more classes.
The road to become a structural engineer here in the US is lengthy. A PE exam precedes the SE exam. I think the separation of functions is correct. Architects have their hands very full. I would hate
to throw load tracing on some complex geometries (slanted ones, circular ones, etc.) to the architects, after taking barely a handful of these classes. It's the work of specialists. I'm glad I had
separate courses in the materials because it helps make one more conversant with the SEs. I'm also glad they were distilled to algebraic and tabular computations, and were open-book/open notes, or it
would have been over the top. They were time-consuming enough.
Apr 18, 13 12:24 am
That's surprisingly observant. I would have thought you were an advocate for more difficult requirements to give the license more legitimacy. That is, given your stance otherwise.
We hire an engineer too. I wouldn't want to spend my time doing engineering. The hundreds of pages of calculations, even if they are computer generated, are a real chunk of work. The ability to understand the possibilities in a structural solution is pretty important though. I can see how great my colleagues are at it and suspect it is part of the reason Sejima and Ito get such phenomenal performance from their engineering partners. No idea if calculus matters for that or not... Intuitively I'd say yes, but who knows.
Apr 18, 13 1:14 am
I would have thought you were an advocate for more difficult requirements
There's a big difference between wanting at least a 4 year college architecture degree to license and training students to become structural engineers. Again, look at how many threads on here,
especially around this time, are about prospective students biting their nails as to where they will be accepted and where they should go. They should ultimately be rewarded for that - with a license
... if they want one and do the work. I remember that spring when I went to the mailbox every day ... and the agitation that went with that. The list of courses required to be a properly trained
structural engineer is long, and beyond the scope of an architectural curriculum and I think few architects could effectively serve two masters. However, I believe that the trend to go toward fewer
structural courses (just two) is poor. It's not too hard to fit 3 or 4 structures courses within a 4 year BA/BS degree, let alone a 5 year accredited B.Arch. The American NAAB needs to crack down on
what needs to be taught to accredit programs. I think the minimum should be a studio every term (except for freshmen), plus 2 construction, 2 environ tech, and 3 structures courses ... in addition to
basic history and theory. That may sound staid, but the content can be made relevant and up to date ... via the inclusion of newer technologies and codes, the applicability of sustainability, and
emerging theories.
Apr 18, 13 8:15 am
I'm in hell trying to register for a summer course in calculus to fulfil the ysoa prerequisite before August. The community college won't even let me take it because I don't have any college math
credit. What do other M. Arch I students do? This is a nightmare. Any suggestions?
Apr 18, 13 10:04 am
Some colleges do a Maymester before the regular summer semester starts. Maybe you could take a math course during that so that you could take calculus during the regular summer semester.
Alternatively, if you've had a good deal of math in your previous education, you might find someone at the community college who can get you signed up for calculus. I'd start with the department
chair. The only way I see this working is if you have had trigonometry. This is essentially what I did. I had a good deal of math in high school and was able to start with precalculus in college. A
former teacher with pull at the college was able to help me avoid lower level courses that I didn't need.
Apr 18, 13 11:52 am
Contact other community colleges and explain your situation. It should be ok if you had 3 to 4 years of HS math. Take it at a state school in the evening where they won't check what you've done
before (I had calculus, but enrollment as a non-matriculated student was on your own and no one asked me). Some schools offer a boot-camp in math and physics for their 3 year programs. Univ. of
Colorado Denver has such a summer program for M.Arch. and, I'm sure it's not full, so you might get in as non-matriculated and hang out in the Rockies in your time off. It will NOT be cheap if you
are a non-resident.
But keep harping on the c.c.'s or local satellite 4 years, especially if you had some good math in high school.
Jono Lee
Apr 18, 13 12:29 pm
Thompson Rivers University < google that.
That may solve all your problems.
Jono Lee
Apr 18, 13 12:31 pm
Sorry, to be more specific, they offer distance ed/online courses at the college level. They accepted the courses over at UCLA to fulfill the requirements
Apr 18, 13 3:34 pm
Yes, any information to prospective students which people can provide is helpful. If I recall, some schools on semesters start offering split summer session courses beginning in mid-May. Still,
students should verify their choices for calculus and physics with the school to make sure they will be deemed acceptable. It never hurts to ask.
The Works of Archimedes: Translation and Commentary, Volume 1: The Two Books On the Sphere and the Cylinder
Archimedes was the most creative, the most powerful, and in many ways the most interesting of the mathematicians of the Ancient World. This is the only available English translation of his work.
Ergo, every library needs a copy of this book. Anyone interested in the work of Archimedes will want it too, though they may well be scared away by the price.
I can hear the objections already. "Wait a minute! What about Heath's translation of Archimedes? That's in the MAA's Basic Library List already, and since it is a Dover book, I can even afford a copy!"
Well, Heath's edition is useful, and it has served the English-speaking world well. But consider the first proposition in On Sphere and Cylinder. This is what Heath gives us:
If a polygon be circumscribed about a circle, the perimeter of the circumscribed polygon is greater than the perimeter of the circle.
Let any two adjacent sides, meeting in A, touch the circle at P, Q respectively.
Then [Assumptions, 2]
PA + AQ > (arc PQ)
A similar inequality holds for each angle of the polygon; and, by addition, the required result follows.
And here is Netz's translation:
If a polygon is circumscribed around a circle, the perimeter of the circumscribed polygon is greater than the perimeter of the circle.
For let a polygon — the one set down — be circumscribed around a circle. I say that the perimeter of the polygon is greater than the perimeter of the circle.
For since BAΛ taken together is greater than the circumference BΛ through its containing the circumference while having the same limits, similarly ΔΓ, ΓB taken together than ΔB as well; and ΛK, KΘ taken together than ΛΘ; and ZHΘ taken together than ZΘ; and once more, ΔE, EZ taken together than ΔZ; therefore the whole perimeter of the polygon is greater than the circumference of the circle.
In sum, Heath tells us what (he thinks) Archimedes meant, but feels free to modernize notation and shorten the text. Netz gives us what Archimedes wrote.
Does it matter? Well, it depends what we are trying to do. If we are interested, for example, in how Archimedes dealt with generality, it seems very significant that he worked with a specific polygon
(a pentagon, in fact), enumerating its sides one by one!
Proposition 1 is probably the easiest one in this book; two things should then be noted. First, the difference between Netz's literal translation and Heath's paraphrase gets much bigger as the
complexity of the arguments increases. Second, Netz is considerably harder to read, parse, and absorb.
I think it's worth the effort. Reading Archimedes in Netz's translation, one feels much more clearly how different Greek mathematics is from modern mathematics. Rather than "a fellow of another
college", Archimedes is revealed as an inhabitant of Ancient Syracuse working within the Ancient Greek mathematical tradition. We can understand and admire him, but we also understand how different
he is from us.
In addition, Netz gives us useful extras. He discusses the diagrams as they appear in the textual tradition, noting in particular their variation. (The diagram in Heath is nothing like the diagram in
the manuscripts, it seems.) He also gives us a translation of Eutocius' commentary on these two books, which provides insight on how Archimedes was read and understood (or not) a few centuries later.
Finally, Netz's notes are interesting and different, focusing less on the mathematics and more on Archimedes' thought processes, mode of expression, and goals.
What is missing? Well, the most obvious thing is that this is only the first volume of Netz's translation. We'll have to wait for the rest. In addition, Heath's edition is prefaced by a long
introduction discussing Archimedes' life and work. I hope Netz will undertake that eventually, perhaps after he finishes the translation itself.
Netz gives us only the English text. I would have liked (especially given the price) to have the Greek too. Netz says that he is mostly using Heiberg's text as published by Teubner (when he deviates
from that, he tells us). Unfortunately, that edition is not very easy to obtain. Perhaps once the complete translation is done we can ask Cambridge to produce a version with the Greek text and facing
translation, perhaps without all the notes, for weird folks like me.
For anyone seriously interested in Archimedes and in Greek mathematics, this is the edition to have. Have your library buy them one by one, and the financial pain will be less. And keep your eyes
open for the other volumes.
Fernando Q. Gouvêa is Carter Professor of Mathematics at Colby College, editor of MAA Reviews, and crazy about books. It has taken him three years to write this review, in part because he wanted to
read the book so carefully. | {"url":"http://www.maa.org/publications/maa-reviews/the-works-of-archimedes-translation-and-commentary-volume-1-the-two-books-on-the-sphere-and-the?device=mobile","timestamp":"2014-04-16T04:36:50Z","content_type":null,"content_length":"29787","record_id":"<urn:uuid:a68fefa2-b726-4296-aa3a-702acc59e7ae>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00451-ip-10-147-4-33.ec2.internal.warc.gz"} |
Properties of DWNs
Each delay element of a DWN can be interpreted precisely as a sampled traveling-wave component in a physical system, unlike the delay elements in ladder and lattice digital filters. Due to the
particular bilinear-transform frequency warping used in typical WDFs, the delay elements in WDFs can be precisely interpreted as containing samples of physical traveling waves only at dc and half the sampling rate.
Because simple sampling of traveling waves is used to define DWNs, aliasing can occur if the bandwidths of the physical signals become too large, or if nonlinearities or time-varying parameters
create signal components at frequencies above the Nyquist limit. (The bilinear transform, on the other hand, does not alias.) An advantage of simple sampling is that the frequency axis is preserved
exactly up to half the sampling rate, while in the case of the bilinear transform, the frequency axis is warped so that only dc and half the sampling rate retain their original frequencies.
Due to the precise physical interpretation of DWNs, nonlinear and time-varying extensions are well behaved and tend to remain ``physical'', provided aliasing is controlled. (See Section 6.)
Because delay elements appear in physically meaningful locations in both the forward and reverse signal paths of a DWN, there is no restriction to a reflectively terminated cascade chain of
scattering junctions as is normal in the ladder/lattice filter context. Digital waveguides can be coupled at junctions, cascaded, looped, or branched, to any degree of network complexity. As a
result, much more general network topologies are available, corresponding to arbitrary physical constructions.
Lumped elements can be integrated into DWNs and results from WDF theory can be used to model both linear and nonlinear lumped circuit elements [58,127,21].
The instantaneous power anywhere in a DWN can be made invariant with respect to time-varying filter coefficients, as discussed in Sections 4 and 5. This can be seen as generalizing the normalized
ladder filter [35,92,95].
As a result of the strict passivity which follows directly from the physical interpretation, no instability, limit cycles, or overflow oscillations can occur, even in the time-varying case, as long
as ``passive scattering'' is used at all waveguide junctions [113,92]. As explained in Section 6, passive scattering may be trivially obtained simply by using extended internal precision in each
junction followed by magnitude truncation of all outgoing waves leaving the junction. However, in scattering intensive applications such as the 2D and 3D mesh, magnitude truncation often yields too
much damping due to round-off, and more refined schemes must be used.
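To make the passivity mechanism concrete, here is a minimal sketch (mine, not from the original text; the function names and the 16-bit truncation step are illustrative assumptions) of a normalized two-port lossless scattering junction with magnitude truncation of the outgoing waves, in Python:

import math

def magnitude_truncate(x, step=1.0 / 2**15):
    # Round toward zero on a fixed grid: |result| <= |x|, so truncation
    # can only remove signal energy, never add it.
    return math.trunc(x / step) * step

def normalized_scatter(a, b, theta):
    # Normalized (rotation) form of a two-port scattering junction: in exact
    # arithmetic the outgoing waves satisfy a_out^2 + b_out^2 = a^2 + b^2.
    c, s = math.cos(theta), math.sin(theta)
    a_out = c * a - s * b   # extended-precision computation inside the junction...
    b_out = s * a + c * b
    # ...followed by magnitude truncation of all outgoing waves,
    # which keeps the junction passive even after rounding.
    return magnitude_truncate(a_out), magnitude_truncate(b_out)

# Tiny check: signal energy never grows across the junction.
a, b = 0.6, -0.3
a2, b2 = normalized_scatter(a, b, theta=0.4)
assert a2**2 + b2**2 <= a**2 + b**2 + 1e-12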
The basic characteristics of DWNs can be summarized as follows [95]:
• DWNs are derived by sampling traveling-wave descriptions of distributed physical wave-propagation systems such as strings, acoustic tubes, plates, gases, and solids.
• Each delay element of a DWN has a precise physical interpretation as a sample of a unidirectional traveling wave.
• The frequency axis is preserved up to half the sampling rate (i.e., it is not warped according to the bilinear transform).
• Physically meaningful nonlinear, time-varying extensions are straightforward.
• Aliasing can occur due to nonlinearities, time variation, or inadequate bandlimiting of initial conditions and/or input signals.
• Fully general modeling geometries are available (e.g., in contrast to ladder/lattice filters).
• Lumped models can be simply interfaced to DWNs.
• Overall signal energy can be simply controlled.
• Instability, limit cycles, and overflow oscillations can be suppressed by using ``passive scattering.''
• Sensitivity to coefficient quantization can be minimized.
• A synthesis procedure exists for constructing any single-input, single-output (SISO) transfer function by means of a DWN [111,112].
| {"url":"https://ccrma.stanford.edu/~jos/wgj/Properties_DWNs.html","timestamp":"2014-04-16T17:04:22Z","content_type":null,"content_length":"14824","record_id":"<urn:uuid:3fa48394-7bcd-4786-8e63-fa847dcb80eb>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00555-ip-10-147-4-33.ec2.internal.warc.gz"} |
Converge or diverge plus show why
January 9th 2012, 07:41 AM
Converge or diverge plus show why
show why they con/diverge:
i think she diverges. because the terms get bigger (closer to 1 each) so you are just adding almost 1 + 1 + 1 + 1 ... it tails off to infinity... diverge... but i think i need to use a rigorous proof not just intuition.
i'd say she converges since the denom gets rapidly massive so the whole thing is getting rapidly tiny... i can't see what it converges to though? and how to show it?
as above, i think it's the same.
but i don't know how to show these using the whole epsilon thing. the epsilon thing really bugs me. do i need the epsilon thing? can someone help me to show these things rather than just use intuition... also am i right or wrong???
January 9th 2012, 08:12 AM
Also sprach Zarathustra
Re: Converge or diverge plus show why
show why they con/diverge:
i think she diverges. because the terms get bigger (closer to 1 each) so you are just adding almost 1 + 1 + 1 + 1 ... it tails off to infinity... diverge... but i think i need to use a rigorous proof not just intuition.
i'd say she converges since the denom gets rapidly massive so the whole thing is getting rapidly tiny... i can't see what it converges to though? and how to show it?
as above, i think it's the same.
but i don't know how to show these using the whole epsilon thing. the epsilon thing really bugs me. do i need the epsilon thing? can someone help me to show these things rather than just use intuition... also am i right or wrong???
In the first one $\lim_{r\to\infty} a_r=1$, so... ?
The second one: $|\frac{\sin(r^{2})}{5^{r}}|<\frac{1}{|5^{r}|}$
In the third: use Ratio test - Wikipedia, the free encyclopedia
January 9th 2012, 08:58 AM
Re: Converge or diverge plus show why
In the first one $\lim_{r\to\infty} a_r=1$, so... ?
The second one: $|\frac{\sin(r^{2})}{5^{r}}|<\frac{1}{|5^{r}|}$
In the third: use Ratio test - Wikipedia, the free encyclopedia
1st - divergent by the nonnull test??
2nd - i still don't see enough info. i would try the comparison test, but again i come up short? or is it better to use absolute convergence properties?
January 9th 2012, 03:24 PM
Prove It
Re: Converge or diverge plus show why
show why they con/diverge:
i think she diverges. because the terms get bigger (closer to 1 each) so you are just adding almost 1 + 1 + 1 + 1 ... it tails off to infinity... diverge... but i think i need to use a rigorous proof not just intuition.
i'd say she converges since the denom gets rapidly massive so the whole thing is getting rapidly tiny... i can't see what it converges to though? and how to show it?
as above, i think it's the same.
but i don't know how to show these using the whole epsilon thing. the epsilon thing really bugs me. do i need the epsilon thing? can someone help me to show these things rather than just use intuition... also am i right or wrong???
A necessary (but not sufficient) condition for a series to converge is that the individual terms have to tend to 0.
Therefore, a valid way to show that a series diverges is to show that the individual terms do NOT tend to 0.
For the first
\displaystyle \begin{align*} \lim_{r \to \infty}\frac{r^2 - 1}{r^2 + 1} &= \lim_{r \to \infty}\frac{r^2 + 1 - 2}{r^2 + 1} \\ &= \lim_{r \to \infty}1 - \frac{2}{r^2 + 1} \\ &= 1 - 0 \\ &= 1 \end{align*}
Clearly, the terms do not tend to 0, so the series diverges.
January 9th 2012, 03:31 PM
Prove It
Re: Converge or diverge plus show why
Think about it like this. Suppose you have some series. Since there may be some negative values in it, the sum will never be any greater than the sum of the absolute values of the terms (since
they are all positive). Therefore, by the comparison test, if the "larger series" (the series of absolute values) converges, then so must the "smaller series" (the original series).
So for your second series, by showing that \displaystyle \begin{align*} \sum{\left| \frac{ \sin{\left(r^2\right)} }{ 5^r } \right|} \end{align*} converges, you show \displaystyle \begin{align*} \sum{\frac{\sin{\left(r^2\right)}}{5^r}} \end{align*} also converges.
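And the comparison series is geometric with common ratio \displaystyle \begin{align*} \frac{1}{5} \end{align*}, so it certainly converges:

\displaystyle \begin{align*} \sum_{r = 1}^{\infty}{\left| \frac{1}{5^r} \right|} = \frac{\frac{1}{5}}{1 - \frac{1}{5}} = \frac{1}{4} \end{align*}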
January 9th 2012, 03:41 PM
Re: Converge or diverge plus show why
A necessary (but not sufficient) condition for a series to converge is that the individual terms have to tend to 0.
Therefore, a valid way to show that a series diverges is to show that the individual terms do NOT tend to 0.
For the first
\displaystyle \begin{align*} \lim_{r \to \infty}\frac{r^2 - 1}{r^2 + 1} &= \lim_{r \to \infty}\frac{r^2 + 1 - 2}{r^2 + 1} \\ &= \lim_{r \to \infty}1 - \frac{2}{r^2 + 1} \\ &= 1 - 0 \\ &= 1 \end{align*}
Clearly, the terms do not tend to 0, so the series diverges.
that's what we call the nonnull test. so am i right there (do you think i would need to prove the sequence of terms converges to 1 or can i just state it doesn't converge to 0 by intuition?)
as for the 2nd one... ok i am using the comparison test, and the property of absolute convergence... yes this is one of the properties we are told. i think i get that one now but how do you know
the example you gave is always greater (or equal) to the series in question?
my main problem is knowing what needs to be shown and what can just be stated... :S
see my intuition was correct
but i dont always know how to show it
January 11th 2012, 04:16 AM
Prove It
Re: Converge or diverge plus show why
You should know that \displaystyle \begin{align*} |\sin{X}| \leq 1 \end{align*} for all \displaystyle \begin{align*} X \end{align*}.
\displaystyle \begin{align*} \left| \sin{ \left( r^2 \right) } \right| &\leq 1 \\ \frac{ \left| \sin{ \left( r^2 \right) } \right|}{ \left| 5^r \right| } &\leq \frac{1}{ \left| 5^r \right| } \\ \left| \frac{\sin{\left(r^2\right)}}{5^r} \right| &\leq \left| \frac{1}{5^r} \right| \end{align*}
January 11th 2012, 06:17 AM
Re: Converge or diverge plus show why
of course i know this and i am an idiot for not seeing this was necessary. thanks for the pointer
but wait doesn't $1 \geq |\sin x|$ imply $1 \geq \sin x$ anyway so why do i have to bother with using absolute convergence properties in the first place? can't i just directly use the comparison with the |1/5^r|?
also we would need to show that 1/5^r converges, but how?
(can someone just confirm my suspicion here also - that it is sin r^2 and not just sin r actually makes no difference here? is this just an attempt to deceive?)
| {"url":"http://mathhelpforum.com/differential-geometry/195066-converge-diverge-plus-show-why-print.html","timestamp":"2014-04-16T18:09:35Z","content_type":null,"content_length":"31542","record_id":"<urn:uuid:05183001-0d6a-403e-84ef-e8c0c9554d1d>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00019-ip-10-147-4-33.ec2.internal.warc.gz"} |
Compression of Strings with Approximate Repeats
L. Allison, T. Edgoose and T. I. Dix
We describe a model for strings of characters that is loosely based on the Lempel Ziv model with the addition that a repeated substring can be an approximate match to the original substring; this is
close to the situation of DNA, for example. Typically there are many explanations for a given string under the model, some optimal and many suboptimal. Rather than commit to one optimal explanation,
we sum the probabilities over all explanations under the model because this gives the probability of the data under the model. The model has a small number of parameters and these can be estimated
from the given string by an expectation-maximization (EM) algorithm. Each iteration of the EM algorithm takes O(n^2) time and a few iterations are typically sufficient. O(n^2) complexity is
impractical for strings of more than a few tens of thousands of characters and a faster approximation algorithm is also given. The model is further extended to include approximate reverse
complementary repeats when analyzing DNA strings. Tests include the recovery of parameter estimates from known sources and applications to real DNA strings.
Keywords: pattern discovery, repeats, sequence analysis, hidden Markov model, DNA, data compression.
| {"url":"http://www.aaai.org/Library/ISMB/1998/ismb98-002.php","timestamp":"2014-04-21T05:00:29Z","content_type":null,"content_length":"3114","record_id":"<urn:uuid:9a1932dd-7d98-499a-94b8-7034d486cc2e>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00226-ip-10-147-4-33.ec2.internal.warc.gz"} |
Complex numbers
Determine all complex solutions of the equation z^4 - 4z^2 + 16 = 0 in Cartesian form.
Note that this equation is quadratic in $z^2$. Take $w=z^2$ to turn the equation into $w^2-4w+16=0$. Apply the quadratic formula to get $w=\frac{4\pm\sqrt{16-64}}{2}=\frac{4\pm4\sqrt{3}i}{2}=4\left(\tfrac{1}{2}\pm\tfrac{\sqrt{3}}{2}i\right)$. Note that in complex polar form we have $w_1=4\left[\cos\!\left(\tfrac{\pi}{3}\right)+i\sin\!\left(\tfrac{\pi}{3}\right)\right]$ and $w_2=4\left[\cos\!\left(\tfrac{5\pi}{3}\right)+i\sin\!\left(\tfrac{5\pi}{3}\right)\right]$. To get the four solutions for $z$, evaluate $z_{1,2}=w_1^{1/2}$ and $z_{3,4}=w_2^{1/2}$ by applying DeMoivre's Theorem. Can
you finish this off? | {"url":"http://mathhelpforum.com/pre-calculus/123283-complex-numbers.html","timestamp":"2014-04-20T13:48:04Z","content_type":null,"content_length":"34910","record_id":"<urn:uuid:b6a0ab50-0ed1-45cb-aca8-522b66c2432e>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00056-ip-10-147-4-33.ec2.internal.warc.gz"} |
Algebra Equations
it is related to simplification of algebraic terms
I am trying to help my student, but have been out of school since 1985. I don't remember how to do this problem.
r+13 over 12=1
club a has the first book free and the rest 8 dollars and club c the first two books cost 10 dollars and the rest 6 dollars
I need help solving this
How do you break this problem down step by step to get the answer
this is the question e+12.2=40 Show Steps Please
3x+6 =2 a
My Teacher Asked Me to Do a Project. My Project is about equations obviously. I want to know if 1(2a+4)=2(3a+6) is a 2 Step Equation or a 4 Step Equation.
my son is having trouble solving two step equations, and to be honest i never did this kind of math please help...
I need more help with equations for algebra
I am confused as to how to break down the b^4
solve equation for x
There are 24 boxes total. Pencils are $10 a box and erasers are $4 a box. Total cost is $210. What is the equation?
make this into an equation | {"url":"http://www.wyzant.com/resources/answers/algebra_equations?f=active&pagesize=20&pagenum=3","timestamp":"2014-04-21T01:14:38Z","content_type":null,"content_length":"46869","record_id":"<urn:uuid:c0e4c1cc-46c1-4a34-8979-be190e414d87>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00129-ip-10-147-4-33.ec2.internal.warc.gz"} |
Homework Help
Posted by isha on Sunday, February 28, 2010 at 6:10pm.
find derivative of (8sqrt(x)+(9/2root3(x)))^2
• parentheses required - MathMate, Sunday, February 28, 2010 at 7:13pm
You have not supplied sufficient parentheses to render the expression unambiguous, that probably explains why you get different results from different sources.
Is it
9/(2root3(x)) ?
• calculus - isha, Sunday, February 28, 2010 at 7:19pm
its like 9/2(x)^1/3
• calculus - MathMate, Sunday, February 28, 2010 at 7:34pm
It is still not clear to me if (x)^1/3 is with the numerator or the denominator.
I assume you are transcribing from a type-set fraction where the paretheses around the denominator are understood. When transcribing to a single line (as in this case), you will need to insert
explicitely the parentheses around ALL denominators and numerators to avoid ambiguity.
• calculus - Anonymous, Sunday, February 28, 2010 at 7:42pm
its in denominator
• calculus - MathMate, Sunday, February 28, 2010 at 8:23pm
So we're looking to find the derivative of the expression
y = (8*sqrt(x) + (9/2)/x^(1/3))^2
Using the chain rule, we get
dy/dx = 2(8*sqrt(x)+((9/2)/(x)^(1/3))) * d(8*sqrt(x)+((9/2)/(x)^(1/3)))/dx
By writing
u = 8x^(1/2) + (9/2)x^(-1/3)
we can differentiate u using the power rule:
du/dx = 8(1/2)x^(-1/2) + (9/2)(-1/3)x^(-4/3)
So dy/dx
= 2(8*sqrt(x)+((9/2)/(x)^(1/3))) * (8(1/2)x^(-1/2) + (9/2)(-1/3)x^(-4/3))
after simplification.
Check my work.
• calculus - Anonymous, Sunday, February 28, 2010 at 9:17pm
is this the final answer?
• calculus - MathMate, Sunday, February 28, 2010 at 9:42pm
Yes it is, but you should check my work to make sure you understand how this is done, and that I did not make a mistake.
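One way to check the work, as MathMate suggests, is with a computer algebra system. A minimal sketch using SymPy (my code, not part of the original exchange):

import sympy as sp

x = sp.symbols('x', positive=True)
y = (8*sp.sqrt(x) + sp.Rational(9, 2) / x**sp.Rational(1, 3))**2

# SymPy's derivative, for comparison with the hand computation:
dydx = sp.simplify(sp.diff(y, x))

# The hand-derived answer from the thread, dy/dx = 2*u*du/dx:
u = 8*sp.sqrt(x) + sp.Rational(9, 2) * x**sp.Rational(-1, 3)
du = 4*x**sp.Rational(-1, 2) - sp.Rational(3, 2)*x**sp.Rational(-4, 3)
hand = 2*u*du

# Prints 0 if the two expressions agree:
print(sp.simplify(dydx - hand))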
| {"url":"http://www.jiskha.com/display.cgi?id=1267398627","timestamp":"2014-04-20T06:33:00Z","content_type":null,"content_length":"10469","record_id":"<urn:uuid:13a723a0-6947-4084-9693-4ece86b14b75>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00107-ip-10-147-4-33.ec2.internal.warc.gz"} |
Tailor Your Tables with stargazer: New Features for LaTeX and Text Output
July 15, 2013
By Tal Galili
Guest post by Marek Hlavac
Since its first introduction on this blog, stargazer, a package for turning R statistical output into beautiful LaTeX and ASCII text tables, has made a great deal of progress. Compared to available
alternatives (such as apsrtable or texreg), the latest version (4.0) of stargazer supports the broadest range of model objects. In particular, it can create side-by-side regression tables from
statistical model objects created by packages AER, betareg, dynlm, eha, ergm, gee, gmm, lme4, MASS, mgcv, nlme, nnet, ordinal, plm, pscl, quantreg, relevent, rms, robustbase, spdep, stats, survey,
survival and Zelig. You can install stargazer from CRAN in the usual way:
install.packages("stargazer")
New Features: Text Output and Confidence Intervals
In this blog post, I would like to draw attention to two new features of stargazer that make the package even more useful:
• stargazer can now produce ASCII text output, in addition to LaTeX code. As a result, users can now create beautiful tables that can easily be inserted into Microsoft Word documents, published on
websites, or sent via e-mail. Sharing your regression results has never been easier. Users can also use this feature to preview their LaTeX tables before they use the stargazer-generated code in
their .tex documents.
• In addition to standard errors, stargazer can now report confidence intervals at user-specified confidence levels (with a default of 95 percent). This possibility might be especially appealing to
researchers in public health and biostatistics, as the reporting of confidence intervals is very common in these disciplines.
In the reproducible example presented below, I demonstrate these two new features in action.
Reproducible Example
I begin by creating model objects for two Ordinary Least Squares (OLS) models (using the lm() command) and a probit model (using glm() ). Note that I use data from attitude, one of the standard data
frames that should be provided with your installation of R.
## 2 OLS models
linear.1 <- lm(rating ~ complaints + privileges + learning + raises + critical, data=attitude)
linear.2 <- lm(rating ~ complaints + privileges + learning, data=attitude)
## create an indicator dependent variable, and run a probit model
attitude$high.rating <- (attitude$rating > 70)
probit.model <- glm(high.rating ~ learning + critical + advance, data=attitude, family = binomial(link = "probit"))
I then use stargazer to create a ‘traditional’ LaTeX table with standard errors. With the sole exception of the argument no.space – which I use to save space by removing all empty lines in the table
– both the command call and the resulting table should look familiar from earlier versions of the package:
stargazer(linear.1, linear.2, probit.model, title="Regression Results", align=TRUE, dep.var.labels=c("Overall Rating","High Rating"), covariate.labels=c("Handling of Complaints","No Special Privileges", "Opportunity to Learn","Performance-Based Raises","Too Critical","Advancement"), omit.stat=c("LL","ser","f"), no.space=TRUE)
In the next table, I limit myself to the two linear models, and report 90 percent confidence intervals (using the ci and ci.level arguments). In addition, I use the argument single.row to report the
coefficients and confidence intervals on the same row.
stargazer(linear.1, linear.2, title="Regression Results",
dep.var.labels=c("Overall Rating","High Rating"),
covariate.labels=c("Handling of Complaints","No Special Privileges",
"Opportunity to Learn","Performance-Based Raises","Too Critical","Advancement"), omit.stat=c("LL","ser","f"), ci=TRUE, ci.level=0.90, single.row=TRUE)
To produce ASCII text output, rather than LaTeX code, I can simply set the argument type to “text”:
stargazer(linear.1, linear.2, type="text", title="Regression Results",
dep.var.labels=c("Overall Rating","High Rating"),
covariate.labels=c("Handling of Complaints","No Special Privileges",
"Opportunity to Learn","Performance-Based Raises","Too Critical","Advancement"), omit.stat=c("LL","ser","f"), ci=TRUE, ci.level=0.90, single.row=TRUE)
What Else is New?
The two new features that I have focused on in this blog post, of course, do not exhaust the range of innovations that the new stargazer brings. The package can now output beautiful LaTeX and ASCII
text tables directly into .tex and.txt files, respectively, using the out argument.
Additionally, users have a greater scope for making changes to the table’s formatting. A much-demanded addition to version 4.0 concerns column labels. Using arguments column.labels and
column.separate, users can now add a label to each of the columns in their regression table. Such labels can be used to indicate, among other things, the sub-sample or research hypothesis that a
particular column refers to. In addition, users can also change the caption above the names of the dependent variables (argument dep.var.caption), as well as tinker with the font size in the
resulting table (argument font.size).
More advanced users can now choose whether the LaTeX table should be enclosed within a floating environment (arguments float and float.env), and where the resulting table should be placed within the
LaTeX document (argument table.placement). In this way, they might, for example, create a LaTeX table that is rotated by 90 degrees (when float.env = “sidewaystable”).
Marek Hlavac is a doctoral student in the Political Economy and Government program at Harvard University. If you have any suggestions for future versions of the stargazer package, please contact him at [email protected].
| {"url":"http://www.r-bloggers.com/tailor-your-tables-with-stargazer-new-features-for-latex-and-text-output/","timestamp":"2014-04-18T05:37:10Z","content_type":null,"content_length":"50492","record_id":"<urn:uuid:de8d5b04-b613-4a9d-be7c-0a2c32f9f011>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00173-ip-10-147-4-33.ec2.internal.warc.gz"} |
Statistics help
October 24th 2012, 07:45 PM #1
Oct 2012
Chapel Hill, NC
Statistics help
I cannot get this problem for the life of me. I've been working on it for at least an hour and have produced no work.
It is known that the incomes of subscribers to a particular magazine have a normal distribution with a standard deviation of $6,600. A random sample of 25 subscribers is taken.
a. What is the probability that the sample standard deviation of their incomes is more than $4,000?
b. What is the probability that the sample standard deviation of their incomes is less than $8,000?
Any help, even just a started, would be great.
Re: Statistics help
Hey blind527.
Recall the chi-square distribution where (n-1)s^2/sigma^2 has a chi-square distribution with n-1 degrees of freedom. You are given sigma^2 as 6600^2 so you can look at the distribution for s^2 by
using the appropriate chi-square PDF.
Usually in statistical inference, we do the opposite: we start with s^2 and try to make inferences on sigma^2 but we can do the same thing in reverse and it is in some cases very useful to do the
unconventional thing.
Re: Statistics help
Thank you for that, but I really just don't know how to apply any of this to the problem. I know that sigma^2 is 6600^2, but not sure what I do with anything relating to the problem itself.
Re: Statistics help
So since you have s^2(n-1)/sigma^2 ~ chi-square(n-1) = X^2 you want P(s^2 > 4000^2). Let's do a few transformations:
P(s^2 > 4000^2) implies
P((n-1)s^2 > 4000^2*(n-1)) which implies
P((n-1)s^2/sigma^2 > 4000^2*(n-1)/sigma^2) which implies
P(X^2 > 4000^2*(n-1)/sigma^2) where X^2 has a chi-square distribution with n-1 degrees of freedom.
Since P(X^2 > x) = 1 - P(X^2 < x) you can use a computer to calculate P(X^2 < x) which is just the cumulative probability for a chi-square(n-1) and x is given above in our derivation and can be
calculated since we know n and sigma.
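Here is one way to carry out that computation (a sketch using SciPy; n and sigma are the values given in the problem):

from scipy.stats import chi2

n = 25          # sample size
sigma = 6600.0  # population standard deviation
df = n - 1      # degrees of freedom of (n-1)s^2/sigma^2

# (a) P(s > 4000) = P(X^2 > (n-1)*4000^2/sigma^2)
p_a = chi2.sf(df * 4000.0**2 / sigma**2, df)   # sf is the upper-tail probability 1 - cdf

# (b) P(s < 8000) = P(X^2 < (n-1)*8000^2/sigma^2)
p_b = chi2.cdf(df * 8000.0**2 / sigma**2, df)

print(p_a, p_b)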
Re: Statistics help
I forgot to thank you for your help. Your explanation helped me greatly.
Chapel Hill, NC | {"url":"http://mathhelpforum.com/statistics/206041-statistics-help.html","timestamp":"2014-04-19T10:17:43Z","content_type":null,"content_length":"39254","record_id":"<urn:uuid:9a1932dd-7d98-499a-94b8-7034d486cc2e>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00415-ip-10-147-4-33.ec2.internal.warc.gz"} |
Most General Lossless Feedback Matrices
As shown in §C.15.3, an FDN feedback matrix is lossless if and only if its eigenvalues have modulus 1 and its eigenvectors are linearly independent.
One such class of matrices is the unitary matrices $\mathbf{Q}$, satisfying $\mathbf{Q}^\ast\mathbf{Q}=\mathbf{I}$, where $\mathbf{Q}^\ast$ denotes the Hermitian conjugate (i.e., the complex-conjugate transpose) of $\mathbf{Q}$. When $\mathbf{Q}$ is real, it is an orthogonal matrix, satisfying $\mathbf{Q}^T\mathbf{Q}=\mathbf{I}$, where $\mathbf{Q}^T$ denotes matrix transposition.
All unitary (and orthogonal) matrices have unit-modulus eigenvalues and linearly independent eigenvectors. As a result, when used as a feedback matrix in an FDN, the resulting FDN will be lossless
(until the delay-line damping filters are inserted, as discussed in §3.7.4 below).
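As a quick numerical illustration (a sketch of mine, not from the original page), a random orthogonal matrix built via QR factorization indeed has all of its eigenvalues on the unit circle:

import numpy as np

rng = np.random.default_rng(0)

# QR factorization of a random real matrix yields an orthogonal Q.
q, _ = np.linalg.qr(rng.standard_normal((8, 8)))

eigvals = np.linalg.eigvals(q)
print(np.abs(eigvals))                    # all entries are 1 to machine precision
assert np.allclose(np.abs(eigvals), 1.0)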
| {"url":"https://ccrma.stanford.edu/~jos/pasp/Most_General_Lossless_Feedback.html","timestamp":"2014-04-19T12:05:36Z","content_type":null,"content_length":"9304","record_id":"<urn:uuid:7e4ecb55-b291-4370-91d9-11e52f1f51af>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00396-ip-10-147-4-33.ec2.internal.warc.gz"} |
Seminar Projects
Seminar Projects Performed (1986-1990)
Session Year Student Advisor Title
SP 1990 Diana L. Deaver Unknown The Mathematical Analysis of Deep and Shallow Water Particle Trajectories
SP 1990 Gregory de Vitry John Dooley Roddy's Shakes: Measuring Them.
SP 1990 Hugh Herr Michael Nolan Lower Extremity Prosthetics: The Static Equilibrium Conditions of the Lower Extremity Fluid Prosthetic Socket and the Amputated Limb, with
Applications of Theoretical Results
SP 1990 Ned Longenecker Unknown A Learning Center to Study the Effect of Friction on a Rolling Sphere and its Brachistochrone
SP 1990 Martin Monaghan John Dooley An Investigation in Wing Dynamics
SP 1990 Stacey Ragan John Dooley Ultrasonic Testing of Steel for Cylindrical Defects
FA 1990 John M. Hilton John Dooley Assembling and Observing the Characteristics of a Driven Damped Pendulum
FA 1990 Darren Max Unknown Determination of a Neutron Flux and the Identity of an Unknown Radioactive Material
FA 1990 Glenn Zimmerman John Dooley Diatom Resonance Analysis
SP 1989 Richard Ellmaker Michael Nolan The Effect of Quenched Random Impurities on the Critical Phenomena of a Two Dimensional Ising Ferromagnet
SP 1989 Kimberly F. Haag John Dooley Viscoelasticity
SP 1989 Jeffrey M. Kaufhold Michael Nolan A Mathematical Analysis of the Stability of a Rattleback - A Peculiar Rigid Body
SP 1989 Christopher D. Sloop Joseph Grosh Implementation and Design of a Feasible Digital Seismograph
FA 1989 Louise A. Derose Unknown Methods of Solving Laplace Equation in Two Dimensions
FA 1989 Carolyn Fair Unknown A Vortex Amplifier
FA 1989 Kamala A. Frye John Dooley Index of Refraction of Plasma
FA 1989 Robert Lawson John Dooley Low Temperature Thermometry Using a Gas Thermometer
FA 1989 Robert A. Myers Michael Nolan An Analysis of the Motion of a Tippie Top
FA 1989 Charles J. Root Conrad Motion of Free Falling Rotors
FA 1989 Elise Schlager John Dooley Holographic Diffraction Grating
FA 1989 Shoua Yang Pat Cooney Pattern Processing by Using Two-Dimensional Discrete Fourier Transform
SP 1988 Keith A. Aument Michael Nolan The Probability of Making the Single Ball and Combination Shots in the Game of Pocket Billiards
SP 1988 Matthew T. Buchko Unknown Increasing the Sensitivity of a Thermocouple Vacuum Gauge at Low Pressures
SP 1988 Jeffrey L. Gassert Conrad Normal Modes in Mechanical Systems
SP 1988 Patrick T. Killian Conrad Optical Rotary Dispersion
SP 1988 Timothy P. Kressly John Dooley Displacement Analysis of a Heated Aluminum Block by Double Exposure Holographic Interferometry
SP 1988 Marjorie K. Mcgaughey Michael Nolan Chaos and the "Chaotic" Pendulum
SP 1988 Eric Brian Molz John Dooley Using Time-Dependent Temperature to Calculate Diffusivity
SP 1988 H. David Rosenfeld John Dooley Measurement of Ultrasonic Shear-Wave Attenuation in Y-Ba-Cu-O
SP 1988 Stacy Shank John Dooley Wind Tunnels: A Learning Experience
SP 1988 Stephen R. Waddington Unknown Voltage Generation Through Flow of Liquid in Dielectric Tubes
SP 1988 Kevin R. Witman Unknown Photoelectric Photometry of the Star HD111487
SU 1988 Timothy D. Groff Unknown Frictional Force Dependent on Contact area
FA 1988 Andrew Hershey Michael Nolan Aerodynamic Lift on a Spinning Sphere
FA 1988 Tram M. Tran Michael Nolan Chaos In a Waterwheel
FA 1988 Christiaan Dennis Michael Nolan Achieving the Maximum Concentration Ratio for the Nonimaging Solar Concentrator
SP 1987 Mark A. Allen Zenaida Uy Diffraction Patterns from Surface Water Waves
SP 1987 Irene Campbell Zenaida Uy Relaxation Peak in Ammonium Hydroxide
SP 1987 Michael C. Currao Michael Nolan Graphical Representation of Heat Travel Using Relaxation Methods
SP 1987 Zalini Khan Michael Nolan Algorithmic Solution of the Diffusion Equation
SP 1987 Kevin A. Lafferty Unknown Multimode Optical Waveguides, A Ray Classification
SP 1987 John A. Moon Conrad A Study of the Dispersion of Surface Polaritons on Silver
SP 1987 Jim Seidler Michael Nolan Designing a Light Tracking Device: An Experiment in Microprocessor Control
SP 1987 Paul M. Sier C.W. Price The Zeeman Effect
SP 1987 John E. Slezosky C.W. Price Fourier Analysis of the Bowed String
SP 1987 Curtis D. Snavely Unknown Constructing a Forced Damped Harmonic Oscillator
SP 1987 Jeffrey Way Conrad The 50% Solution
SP 1987 Donald E. Winters Michael Nolan Experimental Analysis of Water Wave Propagation
SP 1987 Mary R. Zelinski John Dooley Sound Waves in Lab, in Theory, in Practice
FA 1987 Troy Herr Michael Nolan Air Resistance, and Cable Tension of the Hammer Throw
FA 1987 James E. Lindemuth Conrad Using Group Theory to Determine the Normal Modes of Oscillation for Three Dynamic Systems
FA 1987 Charles G. Makosky Unknown An Investigation of the Ranque-Hilsch Vortex Tube"
FA 1987 Michael G. Rudler John Dooley A Study of the Coriolis Effect within a Mass Flow Meter
SP 1986 Judith L. Criddle Unknown Thermal Improvements for Windows
SP 1986 Bruce K. LaSala Pat Cooney Rutherford and the Atom: Development of a Computer Based Lesson Plan for use in the High School Classroom
SP 1986 Gregory J. Petrille John Dooley Microwave Communication's Antenna Relay System
SP 1986 Greg A. Walters Michael Nolan Hydraulic Jump in Circular Geometry
SP 1986 J. Michael Winey Unknown Dependence of a Diffraction Pattern on the Angle of Incidence
SP 1986 Carson H. Zirkle John Dooley Holographic Interferometry
FA 1986 Joyce Brown Pat Cooney Detection Systems for Particle-Induced X-ray Emission
FA 1986 Rick Martin John Dooley Diffraction of Light by Acoustic Standing Waves
FA 1986 James F. Ringlein John Dooley Demonstrating Conservation of Momentum in Introductory Physics | {"url":"http://www.millersville.edu/physics/seminars/projects86-90.php","timestamp":"2014-04-17T18:42:23Z","content_type":null,"content_length":"29208","record_id":"<urn:uuid:19e56b02-8ce0-411c-a5bd-85650a06d113>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00560-ip-10-147-4-33.ec2.internal.warc.gz"} |
Amortization Schedules and Principal Prepayment, Part 1: Shortening a 30-Year Mortgage Into 15
I’ve been tinkering around with my mortgage. Have you ever wondered how the monthly payment was determined? It’s called amortization. An amortization schedule is a way to make equal payments over a
period of time, but have the payments split between principal and interest so that the interest paid over time decreases over time along with the loan amount remaining. It is a balancing act to be
fair to both borrower and lender, and you can find a mathematical derivation here.
The most direct way to see where you are on your amortization schedule is to ask your lender to send you a copy. Alternatively, you can generate one yourself by using a mortgage calculator with this
feature. Here is the amortization schedule for a $200,000 loan with a fixed interest rate of 5% over 30 years.
(May not be visible in RSS format. Here is the direct link.)
As you can see, in the beginning most of your payment goes towards interest, and only a little reduces your principal, or outstanding loan amount. As time goes on, your payment stays the same, but
the chunk going towards interest decreases as the principal shrinks.
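If you'd rather compute the schedule yourself than ask your lender, the standard formula is easy to code. Here's a minimal sketch in Python (my code; the inputs match the $200,000, 5%, 30-year example above):

def monthly_payment(principal, annual_rate, years):
    # Equal-payment amortization formula: retires the balance
    # exactly at the final month.
    r = annual_rate / 12.0
    n = years * 12
    return principal * r / (1 - (1 + r) ** -n)

def schedule(principal, annual_rate, years):
    r = annual_rate / 12.0
    payment = monthly_payment(principal, annual_rate, years)
    balance, rows = principal, []
    for month in range(1, years * 12 + 1):
        interest = balance * r
        toward_principal = payment - interest
        balance -= toward_principal
        rows.append((month, payment, interest, toward_principal, balance))
    return rows

rows = schedule(200_000, 0.05, 30)
print(rows[0])   # Month 1: payment ~$1,074, mostly interest
print(rows[-1])  # Month 360: balance ~$0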
Mortgage Principal Prepayment
If you want to pay off the loan in less than 30 years, you'll have to pay more than required. This is known as principal pre-payment. The effect of making such additional payments can be visualized by imagining that it moves you "ahead" in the amortization schedule.
Here's an example using the schedule shown above. Let's say you're just getting ready to make your first payment of $1,074. After that one, you'll still have 359 out of 360 monthly payments left to go! How much money would it take to shave one extra payment off the end? To find that, you just have to look at the principal portion of Month #2, which I highlighted orange: $241.
If you pay $241 additional with your first payment now, you won't have to pay the $1,074 due on Month #360. Why is this? Working backwards, you can confirm that this is pretty much a 5% compounded return on $241 for 30 years, as expected. In addition, you'll be shifted forward to Month #3 on the schedule. So next month your (still required) payment of $1,074 will have a bit more applied towards principal, and a bit less towards interest.
Making a 30-year Mortgage into a 15-year Mortgage
This actually creates an interesting way to shorten your mortgage. What if you kept paying the next month's principal payment on top of your required $1,074 each month? You'd add on $241, then $243, then $245, and so on. Every month you'd shave one month off the end, leaving you with a 15-year mortgage! You can also imagine this as skipping every other payment by just paying the principal and saving the interest.
This can work out nicely because the extra required will start out reasonably low at $241, and increase gradually with time along with your income and/or cashflow.
An alternative is to add $510 to every payment each month to shorten the term to 15 years. Although if you’re sure you want to do that, you might want to just get a 15-year fixed mortgage at a lower
interest rate.
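To convince yourself that paying next month's principal really does cut a 30-year schedule down to 15 years, you can simulate it. A quick sketch (my code, using the same example numbers):

def months_to_payoff(principal, annual_rate, years, pay_next_principal=False):
    r = annual_rate / 12.0
    payment = principal * r / (1 - (1 + r) ** -(years * 12))
    balance, months = principal, 0
    while balance > 0.01:
        months += 1
        balance -= payment - balance * r          # the required payment
        if pay_next_principal and balance > 0.01:
            balance -= payment - balance * r      # plus next month's principal portion
    return months

print(months_to_payoff(200_000, 0.05, 30))                           # 360
print(months_to_payoff(200_000, 0.05, 30, pay_next_principal=True))  # 180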
Read on in Part 2: Return on Investment Verification.
1. Billy S. says:
Your idea to pay the additional principal of the next payment is a wonderful idea for those who have reasonable expectations for salary increases! For example, teachers are paid based on
experience and educational attainment. A new teacher who will be working toward a Master’s Degree or higher would eventually have the extra money to pay the $600-$700 a month extra that would be
required towards the end of the 15 year period.
I would however caution against the 15-year fixed mortgage. Having the option of the lower payment in the 30-year fixed is cheap insurance in case of emergencies (layoff, unexpected child, etc).
One wouldn’t want to risk losing their home because they tried to pay it off too quickly.
2. Bill says:
Or if you want a 5% return on your money, just pay it off. I spent a lot of time thinking about this issue and decided that, being in the 28% tax bracket, the deduction was not worth how much I was spending in interest. I still have a mortgage but I only have to pay for a couple of years, and 90% of my payments are going in my pocket rather than the bank's. I could pay the remainder off but I am using that liquidity as some dry powder for the market woes. Another bonus is that with the Obama stimulus plans, people who either pay no mortgage interest or not enough to offset the standard deduction will still be able to deduct their real estate taxes.
3. Don says:
Technically, that Wikipedia link is not a derivation. It’s an example with a formula. A derivation would be a proof where you derived the formula from principles.
I sort of regularly run this kind of analysis against possible early retirement dates. My absolute earliest retirement date is in 2024, but my mortgage goes until 2035. If I add $120/month, my
mortgage would expire in mid 2029, which might make a nice early retirement date.
4. Jason Unger says:
Wow. That was an extremely simple, straight-forward explanation of how to save tons of money on your mortgage.
As a first-time homebuyer, this is something I’d love to try once we move.
Great work!
5. Robert says:
My lender actually has a calculator that lets me plug in different numbers for "extra monthly principal payment" or "one time payment". It shows how many fewer payments I'd make over the life of the loan and the difference in interest paid over the life of the loan also. And of course if I plug in 0 for additional payments I get the regular schedule.
6. teeej says:
should you count on that much extra income down the road? It seems like if you are planning for the worst, you should pay more now and gradually lessen the amount, so if you lost an income
source, you’d be further ahead of the game. I guess you could continue to make the extra payments from emergency savings…but I guess your priorities might change then too.
7. SJ says:
Hrm, ya know I never thought abt how mortgages work.
That formula is kind of interesting… now if only i rmbr’d enough to figure out how to derive it =)
8. TR says:
I’ve seen this idea before, and some plans go as far as making multiple principal payments in advance. For less drastic reductions which may be more palatable to some, as mentioned you could send
an additional fixed amount each month (figure out what’s reasonable), send one extra full payment per year (which could reduce time on a 30 yr mortgage 6-7 yrs if you are consistent), or go
The last option comes at a cost as most “savings plans” are administered by a third party for an initial sign up fee. They don’t just hand over the money either. The net result of sending through
a plan is very similar to sending a single extra payment on your own per year, just broken up a different way. Still, it will shave time off the mortgage.
I’m interested in trying the amortization method and am hoping I can calculate it using the link(s) posted.
9. Maury says:
Using the rule of 72, and assuming 4% inflation, everything will double in price in approximately 18 years. (Your coffee, jeans, house value, salary etc.)
So a $240 payment now actually has the same purchasing power as a $480 payment in month 216.
Jonathan’s payment at that time would be $518, not $480, so really, it isn’t that much more than just the standard inflation adjustment. While the amounts seem larger, those amounts are 30 years
from now. Just ask an older person what they paid for a house 30 years ago to get a first hand account of inflation!
I personally don’t make extra payments as I prefer my money accessible. I think over 30 years, the stock market will do better than 5%. So I borrow at 5% and hopefully make more by investing than
paying off my house. It’s the cheapest money you can borrow.
An alternative way to think about this is… Has there been a 30 year period where a diversified portfolio did worse than 5%?
10. JimmyDaGeek says:
In case anyone wants to get a headache by deriving the formula, you are solving an algebraic equation where each month’s payment is equal to last month’s payment, AND, where each month’s payment
consists of the interest on last month’s principal plus enough principal to make up the payment. This is why your interest payment goes down each month while your principal payment goes up.
This is all tied up with present value and future value calculations.
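For anyone who wants to be spared the headache: writing B_k for the balance after k payments of M at monthly rate r, the recurrence and its closed form are

\begin{align*}
B_k &= B_{k-1}(1+r) - M, \qquad B_0 = P, \\
B_n &= P(1+r)^n - M\,\frac{(1+r)^n - 1}{r},
\end{align*}

and setting B_n = 0 gives the payment formula M = P\,\dfrac{r(1+r)^n}{(1+r)^n - 1}.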
11. NoMoreWork says:
Let me first say that I don't own a home. However, I've played with the numbers before and think this is an excellent idea. The one benefit you didn't mention is that if you get the 30yr instead of the 15yr and make the extra principal payments, your REQUIRED monthly payment is a lot less. Thus, if you ever fall on hard times it is easy to back off the extra amount for a period of time and not go into default. The 30yr + extra provides a nice safety cushion that doesn't end up costing too much compared to a straight 15yr loan at lower interest.
12. Jonathan says:
Yes, I also value the flexibility of a 30-year mortgage. The “minimum payment” is much lower vs a 15-year, and I can keep X months of it in reserve for tight times.
Don – Yes, I kind of just meant how the number was derived. Here is a true derivation of the formula. I’m not geeky enough to try it myself (anymore).
13. daniel says:
thanks for the post!
14. Strick says:
Or cheap-os like me can do this with a 15-year mortgage and get it paid off in 7.5 years, with the 15 year payment being the fall-back in tighter times.
15. Jeff says:
Believe it or not, there are pros and cons to this. Understand that your 5% interest is only slightly higher than the rate of inflation. That means your money is also devaluing at a slightly slower rate than the loan is being paid off.
That extra $240 you are spending now will likely have similar purchasing power to $1073 30 years from now. Probably slightly less, but if inflation goes up, then it will actually be worth more!
However, 16 years from now, your salary will likely have increased at a much higher rate than the difference between inflation and your interest rate.
In other words, by not making extra payments to your principal you can *often* maximize your total purchasing power throughout the 30 years. It’s an investment either way. It depends upon the
rate of inflation. If inflation goes up drastically the $1073 will be a piece of cake to pay whereas you are spending the $240 of more powerful money that you could have used to actually obtain
other goods.
The constantly changing value of money (and houses for that matter) makes these sort of calculations a little more complicated.
Some food for thought.
For myself the extra money now is far, far more valuable than paying off the mortgage in the future. But everybody’s situation is different.
16. auntie_green says:
I think this is a good idea as long as its not in lieu of other “savings” – ie , if you can still max out your 401(K) and then pay extra to the mortgage, do it. But I wouldn’t pay extra to the
mortgage in lieu of maxing out the 401(k) and putting cash away for a rainy day
17. ChrisMR says:
i just refinanced down from 30 (had 25 left) to 20 and cut my monthly payment by $60.
i made sure i got the amortization table from the bank so i can look at pre-pay options, which I plan to use and track carefully.
my father-in-law still has the amortization table from when he bought his home in the early 80's. he has marks by all his payments and pre-payments. he is long since paid off, but he has kept the page as a reminder of the times and all the work it took to get there.
18. Jules says:
Maybe I should look at the formula, but I don’t really see how paying almost nothing but interest in the beginning is “fair to the borrower.”
19. Tim says:
Most people tend to freak out when they see that over the course of 30 years, the amount of money going toward interest is almost as much as the cost of the home itself (in your case $186,640
goes towards interest over 30 years). But if you can earn more on your money than the 5% (who knows the answer to that really?), then you’re better off keeping the mortgage.
20. auntie_green says:
Jules’ comment might be worth a whole post on its own. “Fair to the borrower” has been getting a ton of press today, with Obama coming out on credit cards (I know, credit cards are different than
mortgages, but “fair to borrower” remains the same in both instances)
I know I’m going to get a LOT of negatives here. But “fair to borrower” irks me. If as a borrower, you don’t like the terms being offered, don’t borrow!
21. Sarah says:
@Auntie: I get what you’re saying about just not borrowing, and agree in principle, but if all the companies have the same terms, what choice does a consumer have?
If I want a house without a loan, it would take me 30 years to save up enough for it! At that rate, I might as well just take out a loan, even with not so great terms.
22. Strick says:
Jules/Auntie/Sarah – amortization is just math. Because a larger balance means more interest, it works out that the first payments are almost all interest. That interest due on the first payment
is whatever it is given the balance and the rate (balance*interest rate/12=X). So the only way for the first payment to have more effect on principal would be to pay more, which you can do by
shortening the period (maybe to a 15 year loan, even more on a 10 year loan, etc.) or paying higher payments at first, which would effectively shorten the length of the loan (extra principal payment),
both of which are an option someone can take. Sure you could just call some of the interest payment principal payment if that makes you feel better, but all the interest is then not getting paid
and would have to be re-capitalized and you’d end up in the same place.
Let's leave the "fair to borrower" language to issues it could logically apply to, like mortgage companies upping the rate on you the day of the closing after you've placed a deposit, crooked
appraisals, etc. Math is not out to get you.
23. JimmyDaGeek says:
I want to add to what Strick wrote. Do you believe it is “fair to borrower” to pay all the interest on the money borrowed? If so, how often?
Contrary to popular delusions, mortgages are not “front-loaded”, regardless that you pay mostly interest in the beginning. When you pay back a typical loan for a house, car, etc. you are paying
interest on the balance owed for the entire previous month. I believe that mortgages calculate this interest by dividing the interest rate by 12 and multiplying it against the current balance,
regardless of the number of days in the month. If you look at your amortization table and make the interest calculation, you will see that your monthly payment also includes enough principal to
make your total payment the same as the last month. There is nothing keeping you from paying more principal, if you want to make things “fairer”
24. Kathy says:
@Sarah: if you want to use an extreme example, the other choice a consumer has is to not buy something you can’t afford – i.e. don’t borrow. It wouldn’t take 30 years to save the entire amount
because you’re not going to spend all that extra money on interest. In fact it would probably take significantly less if you use compounding interest in your favor instead of the banks! You’re
not the victim here – you just want to own a house before you can afford it, plain and simple. That being said, I’m not anti-mortgage, but I’m not going to complain about one either.
On a different note… for those who make the but-I-can-make-more-than-5% argument, that's generally true but I did want to throw in the idea that debt = risk, and there's a hard-to-quantify benefit in being able to ride out a pretty hard time with minimal payments assuming your house is paid off. I think investing before paying off your home is smart, but you have to be aware that you're taking
a risk vs. a level of stability/certainty.
25. Michael says:
Plenty of people say they will take out a 30 year mortgage and be disciplined enough to pay it off early or invest the difference but only a very small percentage actually do it. Those who say
they can make a better return by investing in stock market should consider actual investor behavior, since the average investor only earned a market return of 4.5% between 1987 and 2007 while the
S&P 500 Index averaged 11.8% according to a Dalbar Inc study. If your priority is to have a paid off mortgage by a certain date, then you should get a mortgage that coincides with that goal
whether it is 25, 15 or 10 years. This will force you to be disciplined by paying more toward your mortgage each month as well as increasing your emergency fund to meet the payments in the event
that you can no longer make them for a period of time.
26. Jules says:
Strick, math is out to get me…my jr. high and high school math grades prove it.
Okay, seriously though, I do have a recently diagnosed learning disability in math, so maybe I just don’t get it. But, doesn’t it really only work out to be “fair” if the borrower stays in the
house all 30 years? Granted, I don’t know what the solution would be to make it more fair when it’s impossible to know how long someone will stay in their house.
Auntie Green, sorry I “irked” you. But really, I don’t have $200k in cash laying around, so borrowing for a mortgage was my only option…well, besides a yurt. And Jimmie DaGeek, my paycheck would
be what is preventing me from making more payments on the principal.
27. Kim says:
Bummer that I suggested this in earlier post comments regarding paying down principal and in email to author and didn’t even get a shout out…
28. PTnAZ says:
so my mortgage is for 219 K at 5.0 fixed (bought in march 08). and the county says my house is now only worth 149k. and my time horizon for moving is less than 7 years…. i should still “get
divorced, go bankrupt, and get foreclosed on" while my "ex wife" uses the "settlement money" to purchase a new house, at which point we "reconcile our differences and give the marriage another
shot”… right? i mean that seems to be the only rational thing i can do here…
29. Victor says:
What if I’m planning on selling the house in 3 to 4 years? Is is still worth doing?
30. JimmyDaGeek says:
If you don’t have enough cash to prepay your mortgage on your own, how do you expect MMA to do it for you?
31. T-W says:
Can someone who has actually run the numbers answer the following question:
When comparing a 30 year mortgage to a 15 year mortgage with equal interest rates, would you end up paying the same amount of interest if the 30 year loan is paid off in 15?
Seems like a straightforward question, but when do the payments in a standard 30 year loan reach an equilibrium between the interest and the principal? If it is after the 15th year, wouldn't that
mean that the front half of the loan contains more interest; thus resulting in a greater amount of interest being paid over those first 15 years of the 30 than during the life of the actual 15
year loan? (sorry for being so wordy, just trying to be specific)
Thanks for your help
32. JimmyDaGeek says:
With respect to paying off a 30 yr loan in 15 yrs. If you make the same monthly payment that you would have for a 15 year loan, then the loan will perform exactly like a 15 year loan because you
will pay off the principal at the same rate. You will pay the same amount of interest.
To answer your second question, here is a link where you can play to your heart’s delight: http://dinkytown.com/java/MortgageLoan.html
33. Jonathan says:
Actually, T-W, it all depends on your interest rate. If your 15-year and 30-year had the same interest rate, paying off the 30-year in 15 years would cost the same in total payments.
However, in most cases the 30-year has a higher interest rate, so if you were to make the same monthly payments as on the 15-year mortgage, it would take longer than 15 years to pay off the 30-year mortgage. It might take something like 16 years.
34. Christine says:
man, this is a great post. didn't get around to reading it til just now, but i love the way you explained it. i'll definitely be doing this once i purchase my first home. thanks Jonathan~
35. RJ says:
I’m late to this discussion too, but reading about the “fair to the borrower”, that’s not really the goal of amortization, it’s to provide equal payments to allow the borrower to budget.
The alternative would be to pay the monthly interest, plus a fixed principal payment. To keep the numbers a little simple, use $120K at 5% for 20 years. Amortization gives fixed TOTAL payment of
$795 every month for 20 years. The alternative would give fixed PRINCIPAL payment of $500 ($120K divided by 240 months).
Now it looks “fair”. But instead of $795 the first month you are paying $1,000. The next month $997.92, and it keeps getting smaller each month, but after 10 years you're still at about $750/month.
You might consider this “fair” to the borrower, but it is probably a lot harder to afford.
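To make the comparison concrete, here is the same example in Python (a sketch; RJ's $795 is a rounded figure, the exact amortized payment is about $792):

r, bal, n = 0.05 / 12, 120_000.0, 240
amortized = bal * r / (1 - (1 + r) ** -n)       # fixed TOTAL payment
# fixed-principal alternative: $500 principal plus interest on what remains
fixed = [bal / n + (bal - k * bal / n) * r for k in range(n)]
print(round(amortized, 2))     # ~791.95 every month
print(round(fixed[0], 2))      # 1000.0 in month 1
print(round(fixed[120], 2))    # 750.0 after 10 years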
36. Aria says:
I have a $200k mortgage and $100k cash. Shall I use $100k cash to pay the principal, or shall I keep the cash in the bank or other investments?
37. JimmyDaGeek says:
It depends.
Are you secure in your job? What are your thoughts about the economy for the next 20 years or so? You didn't tell us how long your mortgage is now.
People who pay down their mortgage quickly like the idea of being debt-free. Some people are unsure about our economy and would prefer to go with the sure thing of paying off debt. There is no sure thing in investing. If I had that chunk, I would be looking at good dividend-paying stocks to help pay the interest. Hopefully, it would be a wash, tax-wise, with the dividend income offset by the mortgage interest. I believe that dividend income gets special tax treatment right now, so you might come out slightly ahead.
38. Aria says:
Thank you JimmyDaGeek for your comment. My mortgage actually consists of 2 loans – $150k @ 6.5% and $50k @ 3% (silent loan). I'm at payment 26 of my 30-year loan. I work for the city government, and so far I think my job is secure. I cannot refinance if I don't pay off the 2nd silent loan. My thinking is to pay down the principal of my 1st loan, so I don't pay so much interest. But my friend suggests I pay off the 2nd loan and do the refinance. However, the 2nd loan's interest rate is 3% and it's a silent loan (I don't need to pay it till the 1st loan is all paid off). I wonder whether I should pay down the principal of my 1st loan, or pay off the 2nd loan and do the refinance. Which one saves me more money (interest- and tax-wise)? I can't seem to figure it out mathematically. Help! Thanks a lot.
39. Cara says:
what I don't understand is why they don't just allow us to pay ALL principal first and then make payments for the interest based on how long it took us to pay off the principal, or they could add $10,000 or a flat rate onto our mortgage. I would much rather pay $10,000 than $175,000 for a $79,000 condo that I would rather not be paying on for the rest of my life. But as we all know, the banks run this country and will not likely change. In the meantime I will probably change to a 15-year mortgage.
40. Sam says:
Calculate the combined interest of the two loans and you will know if a refinance will save you money. Based on the numbers you provided, 150k @ 6.5% and 50k @ 3%, the combined rate is about 5.6%.
So, if you refinance at a better rate than 5.6%, which you can in the current market, you will come out ahead.
Hope it helps.
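For reference, the straight balance-weighted average of the two rates works out as follows (it ignores the deferred payments on the silent loan):

blended = (150_000 * 0.065 + 50_000 * 0.03) / 200_000
print(blended)   # 0.05625 -> about 5.6%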
41. Adena says:
I was told that it is possible to pay off the principal at the end of the loan, #360 for a 30 year, to eliminate that interest. It is much cheaper, but as your income grows, and you pay from the
“end of the mortgage on principal”, it’s the same, but cheaper and will eventually end up in the middle at 15 years. True??
42. market_works says:
Would you reasonably expect to make more than 5 percent returns in the stock market? Most who do a little bit of homework and invest on their own comfortably make more than 10 percent with small sums in the market.
Not a good idea to prepay.
[Maxima] problem solving radical equations
Barton Willis willisb at unk.edu
Mon Jan 29 08:35:36 CST 2007
maxima-bounces at math.utexas.edu wrote on 01/29/2007 05:07:22 AM:
> Trying to solve a system of equations with radicals, i get an error
> (%i1) solve([x+3*y=5,sqrt(x+y)-1=y]);
> `algsys' cannot solve - system too complicated.
> -- an error. Quitting. To debug this try debugmode(true);
(%i1) load(topoly)$
(%i2) e : [x+3*y=5,sqrt(x+y)-1=y]$
(%i3) ep : map('topoly,e);
(%o3) [3*y+x-5=0,y^2+y-x+1=0]
(%i4) sol : algsys(ep,[x,y]);
(%o4) [[x=11-6*sqrt(2),y=2*sqrt(2)-2],[x=6*sqrt(2)+11,y=-2*sqrt(2)-2]]
(%i5) for si in sol do print(float(subst(si,e)));
The function topoly potentially makes the solution set larger. It seems
that sol[1] is a solution, but sol[2] isn't. The float method for
checking solutions is crude, but I think that a symbolic check
would involve denesting square roots. Maybe you can get
Maxima to check the solutions symbolically.
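A symbolic cross-check is also possible outside Maxima; for example, the following Python/SymPy sketch (not part of the original session) denests the square roots and confirms that only the first candidate satisfies the radical equation:

import sympy as sp
from sympy import sqrtdenest

x, y = sp.symbols('x y')
expr = sp.sqrt(x + y) - 1 - y                      # residual of sqrt(x+y) - 1 = y
candidates = [
    {x: 11 - 6*sp.sqrt(2), y: 2*sp.sqrt(2) - 2},   # sol[1]
    {x: 6*sp.sqrt(2) + 11, y: -2*sp.sqrt(2) - 2},  # sol[2]
]
for sol in candidates:
    val = sp.simplify(sqrtdenest(expr.subs(sol)))
    print(sol, val == 0)   # True for sol[1], False for sol[2]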
More information about the Maxima mailing list | {"url":"http://www.ma.utexas.edu/pipermail/maxima/2007/004970.html","timestamp":"2014-04-20T08:34:34Z","content_type":null,"content_length":"3730","record_id":"<urn:uuid:2fa86937-ceb6-4d92-907d-336087c22014>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00554-ip-10-147-4-33.ec2.internal.warc.gz"} |
£299.99 in dollars
On the Low Wave Number Behavior of Two-Dimensional Scattering Problems for an Open Arc
R. Kress
R. Kress: Inst. für Num. und Ang. Math. der Univ., Lotzestr. 16-18, D - 37083 Göttingen
Abstract: The low wave number asymptotics for the solution of the Dirichlet problem for the two-dimensional Helmholtz equation in the exterior of an open arc is analyzed via a single-layer integral
equation approach. It is shown that the solutions to the Dirichlet problem for the Helmholtz equation converge to a solution of the Dirichlet problem for the Laplace equation as the wave number tends
to zero provided the boundary values converge.
Keywords: Helmholtz equation, exterior boundary value problems, integral equation methods, low wave number limits, cosine substitution
AcceleRate Financial
Calculators

Deposits Needed for a Future Sum

The "deposit needed for a future sum" simulation will calculate the deposit amount you need to make in order to reach a specified future sum. It can be used to calculate how much you should save in an RRSP in order to meet your future retirement needs or to determine how much you need to regularly save in order to make a down payment on a house or automobile.

This model assumes the interest rate remains constant, deposits are made at the end of each period, and the compounding of interest takes place at the end of the first specified period and each compounding period thereafter.
The Desired Future Amount ($)
Number of Deposits per Year
Annual Interest Rate
Compound Periods per Year (1, 2, 4, or 12)
Number of Years
Calculations performed by Member Solutions are for illustration purposes only and are not guaranteed. See your credit union for exact figures. | {"url":"http://www.acceleratefinancial.ca/rates/calculators/deposit_futureSum.aspx","timestamp":"2014-04-18T18:12:40Z","content_type":null,"content_length":"18696","record_id":"<urn:uuid:af2e5195-5867-4856-a375-7959f59b4ffd>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00274-ip-10-147-4-33.ec2.internal.warc.gz"} |
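The arithmetic behind the calculator is the standard future-value-of-annuity formula; the sketch below shows one way to implement it (variable names and the sample figures are illustrative, not taken from the calculator):

def deposit_needed(future_sum, annual_rate, deposits_per_year, years,
                   compounds_per_year):
    # effective rate per deposit period, derived from the compounding frequency
    i = (1 + annual_rate / compounds_per_year) ** (compounds_per_year / deposits_per_year) - 1
    n = deposits_per_year * years
    return future_sum * i / ((1 + i) ** n - 1)

print(round(deposit_needed(50_000, 0.05, 12, 10, 12), 2))   # ~322.01 per month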
You may discuss homework problems with other students, but you have to prepare the written assignments yourself. Late homework will be penalized 10% per day.
Please combine all your answers, the computer code and the figures into one file, and submit a copy to your dropbox folder.
Grading scheme: 10 points per question, total of 30.
Due date: 11:59 PM January 21, 2014 (Tuesday evening). | {"url":"http://nbviewer.ipython.org/url/www.stanford.edu/class/stats191/notebooks/Assignment1.ipynb","timestamp":"2014-04-17T01:44:10Z","content_type":null,"content_length":"15246","record_id":"<urn:uuid:97b24a22-ff51-411b-b384-bf856e8f208d>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00283-ip-10-147-4-33.ec2.internal.warc.gz"} |
Area of a region
June 25th 2009, 03:52 PM
Area of a region
Hey all,
so i want the area in grey like the image shows, calculating it with a double integral, in polar coordinates.
So for the lower half i put:
r = [-2sin theta , 2cos theta]
theta = [-pi/4 , 0]
for the upper half its almost the same.
r = [2sin theta , 2cos theta]
theta = [0 , pi/4]
Then i set up the integral with those limits and r dr dtheta.
The solution according to the person who made the exercise is 2. But i somehow can't get that. Is there anything wrong here or just some mistake on the calculation of the integrals?
EDIT: my upper half's result is 1, so it's the lower one that is screwed, unless this is some kind of coincidence.
EDIT2: Sorry to bother. I found the mistake. It was indeed on the lower half. "r" can't take negative values, so it takes the same limits as in the upper half; only the range of theta is different.
Thanks! Maybe this will help anyone anyway.
June 25th 2009, 04:41 PM
Ok, back here again =)
How to put the grey area in polar coordinates, to use as a double integral?
Theta will go [0, pi/4]
Then there should be two functions of "r", one for the left and one for the right side of the area. I'm not managing to find those.
June 26th 2009, 02:21 AM
It isn't hard. You just need to imagine some triangles.
You know that tan(theta) = y/x and r = sqrt(x²+y²). You know x (1 for the left side, 2 for the right side).
You can then express y as x * tan(theta), so r = sqrt(x² + (x*tan(theta))²) -> voilà
June 26th 2009, 04:30 AM
Thank you pedro, but I'm not sure that will do it though, since for the integral we can't have any x or y. R should only be a function of theta.
June 26th 2009, 06:21 AM
Ok, i found the solution. It's actually very simple.
The larger triangle is limited laterally by x = 2, the smaller one by x = 1.
x = r cos(theta)
so r cos(theta) = 2
this gives r = 2 / cos(theta)
(To simplify, 1/cos(theta) = sec(theta).)
So the values for each variable are
theta: [0, pi/4]
r: [sec(theta), 2 sec(theta)]
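For completeness (not in the original thread), evaluating the double integral with these limits:

A = ∫ from 0 to pi/4, ∫ from sec(theta) to 2sec(theta) of r dr dtheta
  = ∫ from 0 to pi/4 of (1/2)(4sec^2(theta) - sec^2(theta)) dtheta
  = (3/2)[tan(theta)] from 0 to pi/4
  = 3/2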
Please Help
a and b are ints, so a/b is rounded down to 0. Make them float or double or add a cast sum = float(a)/b;
I just tried that.
If I enter the number of terms n to be odd,
the output screen shows the sum = 0.22222,
and if I enter the number of terms to be even,
then the output screen shows 0.162393,
which is the sum of the first term only.
That's because you never change the values of a and b. You just keep adding or subtracting 5/13 from sum.
I just noticed there is a bunch of mess in your code.
The iostream header should be without ".h", and the math header in C++ is called <cmath> (also without ".h"). conio.h is not standard, nor is it needed. main must be int, not void, and cin and cout are in the std namespace. Your loop runs n+1 cycles, so n+2 terms are computed. The count variable is useless, as it is (in your code) equal to i+1.
This web site contains executable versions of the test problems included in the book C. A. Floudas, P. M. Pardalos et al., "Handbook of Test Problems in Local and Global Optimization," published by Kluwer Academic Publishers, and in the book C. A. Floudas and P. M. Pardalos, "A Collection of Test Problems for Constrained Global Optimization Algorithms."
These test problems arise in literature studies and in a wide spectrum of applications: pooling/blending operations, heat exchanger network synthesis, phase and chemical reaction equilibrium, robust stability analysis, batch plant design under uncertainty, chemical reactor network synthesis, parameter estimation and data reconciliation, conformational problems in clusters of atoms and molecules, pump network synthesis, trim loss minimization, homogeneous azeotropic separation systems, and dynamic optimization problems in reactor network synthesis and parameter estimation.
The algebraic test problems are available in the AMPL modeling language.
All test problems can be downloaded from this web site. The files are organized by chapter, and links to each chapter in the book(s) are included below.
The chapters include, in order:
- Integer Programming problems, Quadratic Assignment problems, Maximum Clique problem
- Separable Quadratic constraints, Complementarity-type constraints, Integer-type constraints
- Pooling and Blending problems, Separation Sequencing problems, Heat Exchanger Network Design problems, Multicommodity Network Flow problems
- Phase and Chemical Equilibrium problems
- Process Design problems, Stability Analysis problems
- Process Design problems, Phase and Chemical Equilibrium problems, Computational Chemistry problems, VLSI Chip Design problems, Portfolio Optimization problems
- Various Economics, Civil Engineering, and Chemical Engineering problems
- Nash Equilibrium, Walrasian Equilibrium, and Traffic Assignment problems
- Combinatorial Optimization problems, Control Theory problems
- Process and Network Synthesis problems, Molecular Design problems
- Satisfiability problems, Traveling Salesman problem, Assignment problems, Graph Coloring problems, Maximum Clique problem, Steiner problems in Networks
- Multiple Steady State Identification problems, Locating All Azeotropes problems
- Optimal Control problems, Parameter Estimation for Dynamic Models, Reactor Network Synthesis problems
okay, so when i press my mc (random1) i would like it to go to a random frame (but only a random one among frames 2, 18, and 24) for different maps in my game.
This is the formula i have right now but it doesn't work. (The gotoAndStop call works fine if i put a plain frame number where the Math.random part is, so i'm not sure what i was supposed to put instead of math.random.)
I've also tried other things like ((random (2)), (random (18)), (random (24))) and also (random (2, 18, 24)) and none of them seem to work =/
any help would be greatly appreciated!
thanks, Andrew
thanks, Andrew
ActionScript Code:
this.random1.onRelease = function(){
gotoAndStop(math.random( 2, 18, 24 ));// want it to choose a random frame of 2, 18 or 24 | {"url":"http://www.actionscript.org/forums/showthread.php3?p=1081536","timestamp":"2014-04-19T06:55:34Z","content_type":null,"content_length":"81882","record_id":"<urn:uuid:017cd4b5-7f03-4f06-9edd-fa54caee1128>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00137-ip-10-147-4-33.ec2.internal.warc.gz"} |
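// One possible fix (not from this thread): use Math.random() with a capital M,
// which returns a number in [0, 1), and pick a random entry out of an array of
// the allowed frame numbers.
var frames:Array = [2, 18, 24];
this.random1.onRelease = function(){
    gotoAndStop(frames[Math.floor(Math.random() * frames.length)]);
};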
Reference for complexity of primitive polynomials
What is the fastest known way to check if a given polynomial of degree $n$ in $F_{2}[X]$ is primitive?
In response to Greg Kuperberg's answer: if we know the factorization of $2^{n} - 1$, then what is the complexity?
You first check that its roots lie in $\mathbb{F}_{2^n}$ by computing $X^{2^n}$ mod the polynomial $p(X)$ and checking that you get $X$. Then you want to know that the roots don't lie in a subfield,
i.e., that $p(X)$ is irreducible. So for each maximal divisor $d$ of $n$, compute $\text{gcd}(p(X),X^{2^d}-X)$ and check that you get 1. Then you want to know that a root of $p$ has maximal order. So
for each maximal divisor $d$ of $2^n-1$, check that $X^d$ mod $p(X)$ is not 1. The hardest step is to find the maximal divisors of $2^n-1$, which requires the prime factorization of $2^n-1$. If you
don't know that, then you are probably sunk. | {"url":"http://mathoverflow.net/questions/81084/reference-for-complexity-of-primitive-polynomials","timestamp":"2014-04-18T00:48:41Z","content_type":null,"content_length":"52634","record_id":"<urn:uuid:f62fe818-ab19-409f-b253-b4b1a457059a>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00585-ip-10-147-4-33.ec2.internal.warc.gz"} |
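As a concrete illustration of this procedure, here is a sketch in Python (not from the original thread; polynomials over $\mathbb{F}_2$ are encoded as integers with bit $i$ holding the coefficient of $X^i$, and the prime factorization of $2^n-1$ is assumed to be supplied):

def polymod(a, p):
    # reduce polynomial a modulo p over GF(2)
    dp = p.bit_length()
    while a.bit_length() >= dp:
        a ^= p << (a.bit_length() - dp)
    return a

def polymulmod(a, b, p):
    # multiply a*b modulo p over GF(2)
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a = polymod(a << 1, p)
    return polymod(r, p)

def polypowmod(a, e, p):
    # square-and-multiply exponentiation modulo p over GF(2)
    r = 1
    while e:
        if e & 1:
            r = polymulmod(r, a, p)
        a = polymulmod(a, a, p)
        e >>= 1
    return r

def polygcd(a, b):
    while b:
        a, b = b, polymod(a, b)
    return a

def prime_divisors(m):
    ps, d = [], 2
    while d * d <= m:
        if m % d == 0:
            ps.append(d)
            while m % d == 0:
                m //= d
        d += 1
    if m > 1:
        ps.append(m)
    return ps

def is_primitive(p, n, factors):
    # factors: the prime divisors of 2^n - 1
    x = 0b10
    if polypowmod(x, 1 << n, p) != x:        # roots must lie in F_{2^n}
        return False
    for r in prime_divisors(n):              # no factor in a proper subfield
        if polygcd(p, polypowmod(x, 1 << (n // r), p) ^ x) != 1:
            return False
    order = (1 << n) - 1                      # a root must have maximal order
    return all(polypowmod(x, order // q, p) != 1 for q in factors)

print(is_primitive(0b1011, 3, [7]))   # x^3 + x + 1 -> True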
Computational and Mathematical Methods in Medicine
Volume 2013 (2013), Article ID 327613, 13 pages
Research Article
Fast and Automatic Ultrasound Simulation from CT Images
Key Laboratory of Photoelectronic Imaging Technology and System of Ministry of Education of China, School of Optics and Electronics, Beijing Institute of Technology, Beijing 10081, China
Received 17 July 2013; Accepted 28 August 2013
Academic Editor: Yunmei Chen
Copyright © 2013 Weijian Cong et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any
medium, provided the original work is properly cited.
Ultrasound is currently widely used in clinical diagnosis because of its fast and safe imaging principles. Because the anatomical structures present in an ultrasound image are not as clear as in CT or MRI, physicians usually need advanced clinical knowledge and experience to distinguish diseased tissues. Fast simulation of ultrasound provides a cost-effective way to train physicians and to correlate ultrasound images with the underlying anatomic structures. In this paper, a novel method is proposed for fast simulation of ultrasound from a CT image. A multiscale method is developed to enhance tubular structures so as to simulate the blood flow. The acoustic response of common tissues is generated by weighted integration of adjacent regions on the ultrasound propagation path in the CT image, from which parameters, including attenuation, reflection, scattering, and noise, are estimated simultaneously. The thin-plate spline interpolation method is employed to transform the simulation image between polar and rectangular coordinate systems. The Kaiser window function is utilized to produce the integration and radial blurring effects of multiple transducer elements. Experimental results show that the developed method is fast and effective, allowing realistic ultrasound images to be generated quickly. Given that the developed method is fully automatic, it can be utilized for ultrasound-guided navigation in clinical practice and for training purposes.
1. Introduction
The imaging principle behind ultrasound is that the ultrasound wave is reflected or refracted by different amounts at different tissues inside the human body. Given that the shape, density, and structure of different organs vary, the reflected and refracted echoes can be used to reconstruct the anatomical structure of human tissues. Based on the wave pattern and image features, combined with anatomical and pathological knowledge, the texture and pathological characteristics of a specific organ can be quantified for medical diagnosis.
Over the past decades, the ultrasound imaging technique has played an increasingly important role in clinical diagnosis. As a fast and safe method of imaging, ultrasound is an ideal imaging modality for real-time image-guided navigation in minimally invasive surgery [1–3]. However, the ultrasound image is usually mixed with a high level of noise, and the anatomical structure is not as clear as in CT and MRI [4]. Hence, a successful ultrasound doctor has to possess a large amount of anatomical knowledge as well as considerable clinical experience. Currently, ultrasound clinical training is usually done under the guidance of experts who operate on real patients. Such training is time consuming and costly. Moreover, for some operations requiring careful manipulations, such as abscess drainage and tissue biopsy, incorrectly performed operations can cause great suffering to the patient or even lead to a number of complications. By comparison, the ultrasound simulation technique provides an economical and efficient way of observing and acquiring ultrasound images [5–7].
Currently, two categories of ultrasound simulation methods exist. The first involves the 3D ultrasound volume generated by a series of 2D ultrasound images, wherein the section slices of ultrasound
images are generated from the location and direction of the ultrasound detector. Henry et al. [8] constructed the ultrasound volume from real images of a typical patient in offline preprocessing. The
ultrasound image is then generated by considering both the position of the virtual probe and the pressure applied by this probe on the body. The system was successfully used to train physicians to
detect deep venous thrombosis of the lower limbs. Weidenbach et al. [9] calculated a 2D echocardiographic image from preobtained 3D echocardiographic datasets that are registered with the heart model
to achieve spatial and temporal congruency. The displayed 2D echocardiographic image is defined and controlled by the orientation of the virtual scan plane. Such a simulation method requires the 3D
ultrasound volume data to be acquired in advance, thus guaranteeing good image quality and high-speed scanning of the image slice. However, this method cannot simulate the image outside the 3D volume
data and 3D ultrasound images are also quite difficult to obtain using general ultrasound devices.
The second method involves the ultrasound being simulated from volume data, such as CT or MRI images. Shams et al. [10] simulated ultrasound images from 3D CT scans by breaking down computations into
a preprocessing and a run-time phase. The preprocessing phase generates fixed-view 3D scattering images, whereas the run-time phase calculates view-dependent ultrasonic artifacts for a given aperture
geometry and position within a volume of interest. Based on the method of Shams, Kutter et al. [11] used a ray-based model combined with speckle patterns derived from a preprocessed CT image to
generate view-dependent ultrasonic effects, such as occlusions, large-scale reflections, and attenuation. In his method, Graphics Processing Unit (GPU) was introduced for speed acceleration. Reichl
et al. [12] estimated ultrasound reflection properties of tissues and modified them into a more computationally efficient form. In addition, they provided a physically plausible simulation of
ultrasound reflection, shadowing artifacts, speckle noise, and radial blurring. Compared with the ultrasound volume-based method, the source image is easy to obtain and the calculation is comparably
robust for the CT- and MRI-based method [13]. However, given that the imaging principles are totally different for CT, MRI, and ultrasound, this kind of simulation is more complicated than the ultrasound volume-based method. Moreover, the method is time consuming during preprocessing and intensity calculations. On the other hand, the CT- and MRI-based method can conveniently obtain the ultrasound image at any angle and position, and the simulated ultrasound can also be fused with the CT or MRI. Hence, the CT- and MRI-based method can provide a more comprehensive understanding of the underlying anatomy.
In this paper, a novel method is developed for the simulation of an ultrasound image from CT volume datasets. A multiscale method is proposed to simulate blood flow and to enhance tubular structures
in the CT image [14]. The thin-plate spline [15–17] interpolation method is utilized to transform images between the sector and rectangle diagram. Differences of adjacent regions in terms of
radiation are subjected to weighted integration in the CT image to obtain a realistic simulation of the acoustic response of common tissues. Finally, based on reflection and attenuation principles of
ultrasound, the Kaiser window function [18] is used to overlay simulated images from different transducer elements and the rectangular diagram is mapped into the sector diagram to guarantee a
simulated ultrasound image with high validity and calculation speed.
The advantages of our algorithm are twofold: first, because the tubular structures in the CT image are strengthened by the multiscale enhancement method, the simulated vessels in the ultrasound are more realistic than those produced by commonly used methods. Second, because the response coefficient of the ultrasound is calculated from the intensity differences of adjacent regions along the ultrasound propagation path, the complexity of the simulation procedure is greatly reduced.
2. Method
The developed method comprises the following four main parts.
(1) Multiscale Vascular Enhancement. A multiscale method is employed to enhance tubular structures in the CT volume data. Through this process, intensities unlikely to belong to vascular trees are effectively removed. The output image used for subsequent processing is a weighted integration of the source and the enhanced images.
(2) Thin-Plate Spline Mapping. As ultrasound is generally presented as a sector diagram, with a coordinate system different from the rectangular coordinates used for CT images, the thin-plate spline interpolation method is used for the transformation between sector and rectangular diagrams to achieve a smooth mapping between the two.
(3) Acoustic Model Construction. The acoustic model is constructed via a weighted function of adjacent regions on the ultrasound propagation path.
(4) Kaiser Window Analysis. The ultrasound emitter is generally composed of multiple transducer elements. A Kaiser window filter is utilized to obtain a realistic simulation effect and to simulate the fusion effects of all independent elements. To guarantee the clarity of the simulated ultrasound, linear scaling is applied to the final results to stretch the ultrasound intensity to 256 gray levels. The processing flow diagram is shown in Figure 1.
(1) Multiscale Vascular Enhancement. When there is relative motion between the ultrasound source and the reflecting body, the received signal frequency differs from the frequency transmitted by the source (the Doppler effect); therefore, vessels can be clearly imaged in ultrasound. In a CT image, by contrast, the difference in CT values between the vasculature and its neighboring tissues is almost negligible when no contrast material is perfused into the vasculature of interest to enhance its visibility against the background. This makes it very difficult to distinguish the vasculature of interest from the neighboring tissues. Therefore, direct simulation of an ultrasound sector from a CT image cannot achieve realistic blood vessel visualization. In this paper, we utilize the multiscale enhancement method developed in [19] to strengthen vasculatures; then, by calculating the intensity difference between adjacent voxels on the ultrasound propagation path, the response coefficient can be quantified.
The multiscale enhancement approach essentially filters tube-like geometrical structures. Since vessel sizes vary over a large range, a measurement scale covering a certain range must be defined. To examine the local behavior of an image $I$, its Taylor expansion in the neighborhood of a point $x_0$ can be written as
$$I(x_0 + \delta x_0, s) \approx I(x_0, s) + \delta x_0^{T}\, \nabla_{0,s} + \delta x_0^{T}\, H_{0,s}\, \delta x_0, \tag{1}$$
where $\nabla_{0,s}$ and $H_{0,s}$ are the gradient vector and Hessian matrix of the image computed at $x_0$ at scale $s$. To calculate these differential operators of $I$, we use the concepts of linear scale space theory, in which differentiation is defined as a convolution with derivatives of Gaussians:
$$\frac{\partial}{\partial x} I(x, s) = s^{\gamma}\, I(x) * \frac{\partial}{\partial x} G(x, s), \tag{2}$$
where the $D$-dimensional Gaussian is defined as
$$G(x, s) = \frac{1}{(2\pi s^{2})^{D/2}}\, \exp\!\left(-\frac{\lVert x \rVert^{2}}{2 s^{2}}\right). \tag{3}$$
The parameter $\gamma$ defines a family of normalized derivatives and helps in the unbiased comparison of the response of differential operators at various scales.

The idea behind eigenvalue evaluation of the Hessian is to extract the principal directions in which the local second-order structure of the image can be decomposed. Three orthonormal directions, invariant up to a scaling factor when mapped by the Hessian matrix, are extracted by eigenvalue decomposition. Let $\lambda_k$ be the eigenvalue with the $k$th smallest magnitude, so that $|\lambda_1| \le |\lambda_2| \le |\lambda_3|$. In particular, a voxel belonging to a vessel region is characterized by $|\lambda_1|$ being small (ideally zero) and by $\lambda_2$ and $\lambda_3$ being of large magnitude and equal sign (the sign indicates the brightness or darkness of the structure). To conclude, for an ideal tubular structure in a 3D image,
$$|\lambda_1| \approx 0, \qquad |\lambda_1| \ll |\lambda_2|, \qquad \lambda_2 \approx \lambda_3. \tag{4}$$
The polarity is indicated by the signs of $\lambda_2$ and $\lambda_3$. In regions with high contrast compared to the background, the norm of the eigenvalues becomes larger, since at least one of the eigenvalues is large. The following combination of the components defines a vesselness function:
$$\mathcal{V}(s) = \begin{cases} 0, & \text{if } \lambda_2 > 0 \text{ or } \lambda_3 > 0, \\[4pt] \left(1 - \exp\!\left(-\dfrac{R_A^{2}}{2\alpha^{2}}\right)\right) \exp\!\left(-\dfrac{R_B^{2}}{2\beta^{2}}\right) \left(1 - \exp\!\left(-\dfrac{S^{2}}{2 c^{2}}\right)\right), & \text{otherwise}, \end{cases} \tag{5}$$
with $R_A = |\lambda_2| / |\lambda_3|$, $R_B = |\lambda_1| / \sqrt{|\lambda_2 \lambda_3|}$, and $S = \sqrt{\sum_{j} \lambda_j^{2}}$, where $\alpha$, $\beta$, and $c$ are thresholds which control the sensitivity of the line filter to the measures $R_A$, $R_B$, and $S$. The vesselness measure is analyzed at different scales $s$, and the maximum response over the scales is taken. For 2D images, we propose the following vesselness measure, which follows from the same reasoning as in 3D:
$$\mathcal{V}(s) = \begin{cases} 0, & \text{if } \lambda_2 > 0, \\[4pt] \exp\!\left(-\dfrac{R_B^{2}}{2\beta^{2}}\right) \left(1 - \exp\!\left(-\dfrac{S^{2}}{2 c^{2}}\right)\right), & \text{otherwise}, \end{cases} \tag{6}$$
where $R_B = \lambda_1 / \lambda_2$ is the blobness measure in 2D and accounts for the eccentricity of the second-order ellipse.
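For concreteness, a minimal 2D implementation of this multiscale filter might look as follows (the sigma range and the thresholds beta and c are assumed tuning values, not parameters reported in this paper):

import numpy as np
from scipy import ndimage

def vesselness_2d(img, sigmas=(1.0, 2.0, 4.0), beta=0.5, c=15.0):
    """Multiscale 2D vesselness; keeps the maximum response over scales."""
    img = img.astype(np.float64)
    best = np.zeros_like(img)
    for s in sigmas:
        # scale-normalized second derivatives (factor s**2)
        Hxx = s**2 * ndimage.gaussian_filter(img, s, order=(0, 2))
        Hyy = s**2 * ndimage.gaussian_filter(img, s, order=(2, 0))
        Hxy = s**2 * ndimage.gaussian_filter(img, s, order=(1, 1))
        # closed-form eigenvalues of the symmetric 2x2 Hessian
        tmp = np.sqrt((Hxx - Hyy)**2 + 4.0*Hxy**2)
        l1 = 0.5 * (Hxx + Hyy - tmp)
        l2 = 0.5 * (Hxx + Hyy + tmp)
        swap = np.abs(l1) > np.abs(l2)             # enforce |l1| <= |l2|
        l1, l2 = np.where(swap, l2, l1), np.where(swap, l1, l2)
        Rb2 = (l1 / (l2 + 1e-12))**2               # blobness/eccentricity term
        S2 = l1**2 + l2**2                          # second-order structureness
        v = np.exp(-Rb2 / (2*beta**2)) * (1.0 - np.exp(-S2 / (2*c**2)))
        v[l2 > 0] = 0.0                             # bright tubes on dark background
        best = np.maximum(best, v)
    return best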
(2) Thin-Plate Spline Mapping. As ultrasound and CT images are presented in polar and rectangular coordinate systems, respectively, a transformation between these two diagrams is necessary for the simulation processing. In this paper, the thin-plate spline interpolation method is utilized to achieve these transformations.

The basic idea of the thin-plate spline is that a space transformation can be decomposed into a global affine transformation and a local non-affine warping component [20]. Assuming that we have two sets of corresponding points $X = \{x_i\}$ and $Y = \{y_i\}$, $i = 1, \ldots, K$, the energy function of the thin-plate spline can be defined as
$$E(f) = \sum_{i=1}^{K} \lVert y_i - f(x_i) \rVert^{2} + \lambda \iint \left[\left(\frac{\partial^{2} f}{\partial x^{2}}\right)^{2} + 2\left(\frac{\partial^{2} f}{\partial x\, \partial y}\right)^{2} + \left(\frac{\partial^{2} f}{\partial y^{2}}\right)^{2}\right] dx\, dy, \tag{7}$$
where $f$ is the mapping function between point sets $X$ and $Y$. The first term in the previous equation measures how closely the two point sets match, whereas the second term is a smoothness constraint; $\lambda$ controls the degree of warping. When $\lambda$ is close to zero, corresponding points are matched exactly. For this energy function, a minimizer exists for any fixed $\lambda$, which can be formulated as
$$f(x_i) = x_i\, d + \phi(x_i)\, w, \tag{8}$$
where $d$ is a $(D+1) \times (D+1)$ affine transformation matrix, $\phi(x_i)$ is a $1 \times K$ vector decided by the spline kernel, with entries $\phi_b(x_i) = \lVert x_b - x_i \rVert^{2} \log \lVert x_b - x_i \rVert$, and $w$ is a $K \times (D+1)$ non-affine warping matrix. When we combine (7) and (8), we have
$$E(d, w) = \lVert Y - X d - \Phi w \rVert^{2} + \lambda\, \operatorname{tr}\!\left(w^{T} \Phi\, w\right), \tag{9}$$
where $X$ and $Y$ are the concatenated (homogeneous) point sets and $\Phi$ is the $K \times K$ matrix formed from the kernel values. Thus, QR decomposition can be utilized to separate the affine and non-affine warping spaces:
$$X = \begin{bmatrix} Q_1 & Q_2 \end{bmatrix} \begin{bmatrix} R \\ 0 \end{bmatrix}, \tag{10}$$
where $Q_1$ is a $K \times (D+1)$ matrix, $Q_2$ is a $K \times (K-D-1)$ matrix, both with orthonormal columns, and $R$ is an upper triangular matrix. The final solution for $w$ and $d$ can be obtained as
$$w = Q_2 \left(Q_2^{T} \Phi\, Q_2 + \lambda I\right)^{-1} Q_2^{T} Y, \qquad d = R^{-1} Q_1^{T} (Y - \Phi w). \tag{11}$$
Through thin-plate spline interpolation, the transformation between the polar and rectangular coordinate systems can be achieved. Although the thin-plate spline method is somewhat time consuming compared to the commonly used bilinear or trilinear interpolation methods, it guarantees comparative homogeneity in both the radial and tangential directions. A common problem for nonparametric mappings between polar and rectangular coordinate systems is that the resolution in the tangential direction is homogeneous, while it gradually decreases in the radial direction from the center to the outer part of the sector. The main merit of the proposed thin-plate spline mapping method is that it keeps maximum uniformity over the whole diagram.
Figure 2 shows the mapping principle between the sector and the rectangle. The ultrasound image is generally presented as a sector, as shown in Figure 2(a), and the intersection angle is defined as the field of view (FOV), which is usually set as a constant once the device is calibrated. The penetration depth of the ultrasound is defined as the radial distance between the inner and outer arcs, with radii $r_1$ and $r_2$, and is determined by the strength of the acoustic wave. Figure 2(b) shows the rectangular section image extracted from the CT image; the marked points are the constructed correspondences, and $M$ and $N$ represent the numbers of samples along the radial and tangential directions. It is obvious that the resolution of the simulated ultrasound is determined by $M \times N$.
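A compact numerical sketch of this classical TPS fit follows (kernel $U(r) = r^{2}\log r$; the function names and the regularization parameter are ours, not from the paper):

import numpy as np

def tps_kernel(r2):
    # U(r) = r^2 log r, written via r^2 to avoid log(0) at coincident points
    return np.where(r2 == 0.0, 0.0, 0.5 * r2 * np.log(np.maximum(r2, 1e-300)))

def fit_tps(src, dst, lam=0.0):
    """Solve for TPS warp coefficients mapping 2D points src -> dst."""
    n = len(src)
    d2 = ((src[:, None, :] - src[None, :, :])**2).sum(-1)
    K = tps_kernel(d2) + lam * np.eye(n)
    P = np.hstack([np.ones((n, 1)), src])
    A = np.zeros((n + 3, n + 3))
    A[:n, :n], A[:n, n:], A[n:, :n] = K, P, P.T
    b = np.zeros((n + 3, 2))
    b[:n] = dst
    return np.linalg.solve(A, b)   # n warping weights, then 3 affine rows

def apply_tps(params, src, pts):
    d2 = ((pts[:, None, :] - src[None, :, :])**2).sum(-1)
    U = tps_kernel(d2)
    P = np.hstack([np.ones((len(pts), 1)), pts])
    return U @ params[:len(src)] + P @ params[len(src):]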
(3) Acoustic Model Construction. Large differences are observed in the acoustic resistances of different tissues. Thus, ultrasound at the interfaces of different tissues usually undergoes reflection, refraction, and absorption. If the resistance difference between two tissues is greater than 0.1%, reflection is produced [21]. The acoustic resistance of a certain organ can be calculated as $Z = \rho c$, where $\rho$ is the density and $c$ represents the propagation speed of the ultrasound.

The reflection coefficient and transmission coefficient at the interface of two organs with acoustic resistances $Z_1$ and $Z_2$ can be calculated by the following equations [22]:
$$R = \frac{I_r}{I_i} = \left(\frac{Z_2 - Z_1}{Z_2 + Z_1}\right)^{2}, \qquad T = \frac{I_t}{I_i} = \frac{4 Z_1 Z_2}{(Z_1 + Z_2)^{2}}, \tag{12}$$
where $I_i$, $I_r$, and $I_t$ are the wave intensities of the incident ultrasound, the reflected ultrasound, and the transmitted ultrasound, respectively.
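As a quick numerical check of (12), consider the following sketch (the impedance values are typical textbook figures in MRayl, not values from the paper):

def reflection_transmission(z1, z2):
    """Intensity reflection/transmission coefficients at a planar interface, Eq. (12)."""
    r = ((z2 - z1) / (z2 + z1)) ** 2
    return r, 1.0 - r

# typical impedances: soft tissue ~1.63, bone ~7.8
print(reflection_transmission(1.63, 7.8))   # ~(0.43, 0.57): about 43% reflected

This reproduces the roughly 43% reflection at bone-tissue interfaces cited below.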
The reflection is generally produced on the interfaces of two organs. Hence, edge detection is imperative for acquiring boundary information. Currently, several stable edge detection methods exist,
such as Roberts, Sobel, Prewitt, and LOG operators, which have been widely used in medical image processing. For these methods, the detection of the edge is based on the analysis of the intensity
relationship of neighboring points. Moreover, if a certain angle exists between the propagation and edge directions, reflection will occur. If the propagation direction is parallel to the edge
direction, the ultrasound will transmit directly, and no reflection occurs. Hence, the propagation angle must be considered in the calculation of the acoustic response.
However, a considerable amount of random speckles occur in the ultrasound image, and the correct noise generation is important for the realistic simulation of ultrasound images. When the scatter
phenomenon of ultrasound is produced inside the human body, the backwaves with different phases generally interfere with one another. Hence, speckles are generated [23, 24]. Random noises are
generated and superimposed onto the simulated image. As for the CT image, several factors, including amount of radiation, performance of data acquisition unit, and image reconstruction procedure, can
also introduce noise in the simulated ultrasound [24].
Intensity differences of adjacent regions are used for the calculation of the response coefficient to obtain a realistic simulation of ultrasound. Specifically, the response coefficient of a certain
region is determined by consecutive regions on the ultrasound propagation direction. The following three conditions have to be considered.(1) Adjacent regions are not on the interface. For such a
condition, the calculation sample point is inside the same organ. Hence, the difference in the CT values of these two regions is small, yielding a small response coefficient.(2) Adjacent regions are
on the edge of the interface. If the propagation direction is parallel to the edge direction, the adjacent regions will both be located on the edge, thus yielding small CT value variations. If the
angle between the propagation and edge directions increases gradually, the CT value variation will increase and consequently increase the response coefficient. By this method, the interface effect of
the response coefficient can be calculated only by adjacent regions, and the imaging angle between edge and ultrasound propagation directions need not be calculated. (3) Adjacent regions are on the
noise area. In such situations, the difference in the CT values is usually large, thus yielding a large response coefficient. Therefore, the noises of the ultrasound can be simulated by the intensity
difference in the CT image.
Acoustic resistance is generally known to be proportional to the CT value [25]. Hence, the weight of adjacent regions $i$ and $i+1$ along the propagation path can be written as
$$w(i, i+1) = \left(\frac{\mu_{i+1} - \mu_{i}}{\mu_{i+1} + \mu_{i}}\right)^{2}, \tag{13}$$
where $\mu_i$ denotes the CT value of region $i$. However, bone-tissue interfaces reflect 43% and air-tissue interfaces reflect 99% of the incident beam [26]. Hence, (13) cannot be applied to tissues like bone and air.
(4) Kaiser Window Analysis. In the acoustic response model, the strength of the reflected sound wave increases as the angle between the incident sound wave and the surface normal at the interface decreases, as described by the Lambert cosine law:
$$I_o = I_i\, R \cos\theta, \tag{14}$$
where $I_i$ and $I_o$ represent the acoustic intensities before and after the interaction at the medium interface, $R$ represents the reflection coefficient, and $\theta$ represents the intersection angle between the incident ultrasound and the normal vector of the interface. When ultrasound is transmitted in a medium, its energy decreases with the propagation distance; this phenomenon is called ultrasound attenuation. For an ultrasound wave of a given frequency, the energy attenuation follows the power law principle, which can be formulated as [28]
$$I(d) = I_0\, e^{-\alpha d}, \tag{15}$$
where $d$ is the propagation distance and $I(d)$ represents the acoustic intensity after the wave has propagated a distance $d$ in the medium. According to the Lambert cosine law, the intensity of the acoustic response can be calculated as
$$I_r = I_i\, R\, \lvert v \cdot n \rvert, \tag{16}$$
where $v$ is the unit vector in the direction of the ultrasound beam, $n$ is the surface normal at the interface, and $\lvert\cdot\rvert$ is the absolute value operator. The attenuation of the transmitted ultrasound is then obtained by the following equation:
$$I_t(d) = I_i\,(1 - R)\, e^{-\alpha d}. \tag{17}$$
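Putting (13)–(17) together for a single scan line, a simplified ray-marching sketch could look like this (the HU offset, attenuation coefficient, and step size are assumed values, not parameters from the paper):

import numpy as np

def simulate_ray(ct_samples, alpha=0.5, step=0.001):
    """Echo strength per sample along one scan line of CT values."""
    mu = np.asarray(ct_samples, dtype=np.float64) + 1024.0   # shift HU positive
    echo = np.zeros(len(mu))
    intensity = 1.0
    for i in range(len(mu) - 1):
        r = ((mu[i + 1] - mu[i]) / (mu[i + 1] + mu[i] + 1e-9)) ** 2   # Eq. (13)
        echo[i] = intensity * r                                        # reflected part
        intensity *= (1.0 - r) * np.exp(-alpha * step)                 # transmit + attenuate
    return echo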
Suppose that there are multiple independent transducer elements and that the strength of the ultrasound received by element $k$ is $I_k$. The total received ultrasound strength at a sampling region $x$ can be calculated as
$$I(x) = \sum_{k=1}^{N} w(k)\, I_k(x) \cos\theta_k, \tag{18}$$
where $d_{\min}$ is the minimum distance between the transducer elements and the sampling region, $\Delta d$ is the distance interval of adjacent transducer elements, and $\theta_k$, the angle between transducer element $k$ and the sampling region $x$, can be written as
$$\theta_k = \arctan\!\left(\frac{(k-1)\,\Delta d}{d_{\min}}\right), \tag{19}$$
where $N$ is the number of active transducer elements and $w(k)$ is a weight calculated by the Kaiser window. The discrete probability density of the Kaiser window can be written as
$$w(k) = \frac{I_0\!\left(\beta \sqrt{1 - \left(\frac{2k}{N-1} - 1\right)^{2}}\right)}{I_0(\beta)}, \qquad 0 \le k \le N-1, \tag{20}$$
where $I_0$ represents the zeroth-order modified Bessel function of the first kind, $\beta$ is the parameter that determines the shape of the window, and $N$ is the window length.
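NumPy provides this window directly; a minimal sketch of the weighted superposition in (18) follows (the element count and beta are illustrative):

import numpy as np

n_elements, beta = 9, 6.0            # illustrative values, not from the paper
w = np.kaiser(n_elements, beta)      # the window of Eq. (20)
w /= w.sum()                         # normalize the element weights
# given per-element echoes `rays` with shape (n_elements, n_samples):
# combined = w @ rays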
3. Experimental Results
The developed method is applied to a series of CT images obtained from the PLA General Hospital to investigate the performance and accuracy of the proposed simulation algorithm. The images were acquired with a 64-slice Philips CT scanner, and the resolution is 512 × 512 × 394. The algorithm is implemented in the C++ programming language.
3.1. Evaluation of Multiscale Enhancement
Figure 3 shows the effectiveness of the ultrasound simulation with multiscale enhancement, which is compared with the direct simulation of the CT image. Figure 3(a1) is the volume rendering of the
original image. The gray scales of vascular trees are very close to their surrounding tissues, especially for small vessel segments and bones. If ultrasound is directly simulated from this image,
vessels will mix with the neighboring tissues and will be difficult to detect visually. Figure 3(b1) is the volume rendering of the vascular structure processed by the multiscale enhancement method.
The vascular structures are effectively extracted, and small vessel segments can be visually inspected. Figure 3(c1) is the superimposition of the original image and the enhanced vascular structure. Evidently, the vascular trees are effectively strengthened, and they can easily be separated from the surrounding tissues, so that the vascular structures can be distinguished clearly.

Figures 3(a2), 3(b2), and 3(c2) are selected transverse section slices of the original CT image, corresponding to Figures 3(a1), 3(b1), and 3(c1), respectively. Figure 3(a3) is the direct simulation result of Figure 3(a2), whereas Figure 3(c3) is the simulation result of the enhanced image in Figure 3(c2). In Figure 3(a2), large vascular segments and the liver have comparatively higher gray scales than their neighboring tissues, and small vessels at the liver boundaries mix with the liver tissue. If the ultrasound image is directly simulated from this image, such an intensity distribution results in a large deviation in the simulated blood flow. In Figure 3(c2), the vasculatures are filled with low intensity values, which appear as black circular areas (compare Figure 3(c1)). Figures 3(a3) and 3(c3) show the simulated results of Figures 3(a2) and 3(c2), respectively, whereas Figures 3(a3)(1), 3(c3)(1), 3(a3)(2), and 3(c3)(2) show two magnified regions of interest at the same locations in Figures 3(a2) and 3(c2). Evidently, the blood vessels are effectively enhanced in Figure 3(c3), which is very close to a real ultrasound image.
Figure 4 shows a comparison of the simulated ultrasound images from direct simulation and from multiscale-enhanced simulation. Figure 4(a) is the directly simulated ultrasound, whereas Figure 4(b) is the simulated result with multiscale enhancement. Vascular structures are clearly enhanced in Figure 4(b), where they appear as black holes in the image, and the size of each hole indicates the dimension of the vasculature. The details of the enhanced ultrasound image are also clearer than those of the directly simulated image. Figures 4(a1) and 4(b1) show the magnified details of the
rectangle strip in Figures 4(a) and 4(b), respectively. Based on the ellipse areas shown in this figure, the differences between these two figures can be clearly observed. Figure 4(c) shows the
intensity distribution of the selected strips of Figures 4(a) and 4(b) in the horizontal direction. The intensity difference of these two images reaches nearly 35 gray scale levels, and the location
of the maximum corresponds exactly to that of the vascular structures on the x-axis. Clearly, direct use of the CT image as a scattering map results in a repetitive scattering pattern through which
hardly any structures are recognizable. However, the tubular structure enhancement method can effectively strengthen vascular structures, and a realistic acoustic transmission pattern is simulated
and visualized.
3.2. Multiple Transducer Elements Simulation
The reflected signals of ultrasound are integrated along the active wavefront at a specified depth, controlled by the Kaiser window function, which results in a more realistic reflection. Figure 5 shows the evaluation results of the multiple-transducer-element simulation. Figure 5(a) shows an extracted sector section of the CT image, Figure 5(b) gives the rectangular section image transformed by the thin-plate spline, Figure 5(c) is the simulated ultrasound with one active element based on the acoustic transmission model, and Figure 5(d) is the simulated result with multiple active elements using the Kaiser window function. Figures 5(e1) and 5(e2) show two magnified regions of interest in Figure 5(d).
The thin-plate spline is very effective for the transformation of images between sector and rectangular shapes, for which smooth warping is achieved. Moreover, the highly reflective areas in the ultrasound are located around the boundaries of tissues. The vasculatures can be easily identified in both ultrasounds, with a single (Figure 5(c)) and with multiple (Figure 5(d)) transducer elements. The difference between Figures 5(c) and 5(d) is that the edges between tissue boundaries in Figure 5(c) are significantly sharper than those in Figure 5(d); the more realistic ultrasound is achieved by the multiple-transducer-element simulation. From Figures 5(e1) and 5(e2), the vascular structures in the liver can be identified explicitly.
3.3. Evaluation of Ultrasound Simulation
Although a series of calculations is applied in the simulation of ultrasound, image generation is still very efficient in terms of computation. The calculation complexity of the proposed method is determined by the sampling rate along the radial and tangential directions; it is not correlated with the FOV or the penetration depth. To evaluate the efficiency of the proposed method, three low-cost personal PCs with different processing capacities are employed to simulate ultrasound with different sampling rates. The sampling rates are taken as 150 × 100, 200 × 150, 300 × 200, 350 × 250, 400 × 300, 450 × 350, 500 × 400, 550 × 450, and 600 × 500, while the processing platforms are as follows: (a) Intel Core i5-2410, 4 × 2.3 GHz, 8 GB RAM, Ubuntu 12.10 (64-bit); (b) Intel Core i7-860, 4 × 2.8 GHz, 8 GB RAM, Ubuntu 12.10 (64-bit); (c) Intel Core i7-2600, 4 × 3.4 GHz, 8 GB RAM, Ubuntu 12.10 (64-bit).
Figure 6 compares the calculation frame rates for the platforms and sampling rates above. The calculation efficiency decreases gradually as the sampling rate increases for all platforms. When the sampling rate is 200 × 100, the calculation frame rates reach about 42.2, 37.9, and 33.8 fps; however, when the sampling rate is about 600 × 500, the calculation frame rates are about 11.4, 10.5, and 9.6 fps. Evidently, a higher-performance PC obtains a faster simulation speed.
To investigate the performance of the proposed ultrasound simulation algorithm, it is applied to the realistic brain phantom created from polyvinyl alcohol cryogel (PVA-C) by Chen et al. [28]. PVA-C is a material widely used in the validation of image processing methods for segmentation, reconstruction, registration, and denoising because of its mechanical similarity to soft tissues. The phantom was cast into a mold designed using the left hemisphere of the Colin27 brain dataset and contains deep sulci, a complete insular region, and an anatomically accurate left ventricle. The authors released the CT, MRI, and ultrasound images of the phantom. All volumes have a size of 339 × 299 × 115, together with the corresponding imaging angles of the ultrasound. Because ultrasound and CT images from the same imaging view can be obtained simultaneously, the fidelity of the proposed algorithm can be effectively evaluated by comparing the simulated ultrasound with the corresponding real phantom ultrasound.
Figure 7(a) provides photos of the elastic Colin27-based brain phantom mold and the PVA-C phantom. Figure 7(b) gives the volume rendering of the CT image of the phantom. Figures 7(c1) to 7(c4) give CT image slices from two different angles, while Figures 7(d1)–7(d4) provide the real ultrasound images of the phantom corresponding to the CT slices. Figures 7(e1)–7(e4) give the simulation results for the CT slices produced by the algorithm proposed in this paper. It can be seen that our method is very effective, yielding realistic simulations of the ultrasound images.
3.4. Visualization System
In this paper, an application system is developed for displaying the simulated ultrasound in 2D and 3D using different visualization techniques. Figure 8 shows a screenshot of the visualization area of the developed system. The three leftmost images illustrate the axial plane (Figure 8(a)), the coronal plane (Figure 8(b)), and the sagittal plane (Figure 8(c)) along the normal direction of the ultrasound transducer. The top right figure shows the volume rendering of the original CT image (Figure 8(d)), Figure 8(e) shows the extracted section plane of the CT image, and Figure 8(f) is the simulated ultrasound. The ultrasound image is generated according to the location and direction of the transducer and can be displayed with the volume rendering and the three orthographic views of the CT image. The ultrasound image is generated quickly, and the parameters for simulation and visualization can be adjusted interactively from the user interface.
The developed simulation system comprises four main visualization function modules, as follows. (1) The position and orientation of the virtual probe can be set interactively by dragging the mouse in the 3D view or the three orthogonal views, whereas the FOV and the minimum and maximum penetration depth (PD) can be adjusted in the control panel. (2) The transparency and color mapping of the volume rendering can be adjusted by controlling multipoint thresholds on the histogram distribution. (3) The window level and window width of the CT slices in the different views can be adjusted simultaneously using the slider bar. (4) Each view in the display window can be maximized to full-screen mode and reset to its default.
Figure 9 gives the final simulation results for three sections of abdominal CT images. The first row shows the extracted sector CT images, while the second row gives the corresponding simulated ultrasound. The internal structure of the liver is visualized clearly. In the CT slices, the spine can be visually detected in the lower-left parts, as marked by the circles. In the simulated images, the regions beyond the spine are displayed as black empty areas: the acoustic wave is absorbed by the bone and cannot be transmitted to the lower parts of the simulated image. Our algorithm thus effectively reproduces this ultrasonic propagation phenomenon.
4. Conclusion and Discussion
The ultrasound simulation technique not only provides a cheap and efficient way of training doctors in the study of the anatomic structure of the human body but can also be used to validate the registration accuracy of ultrasound navigation systems. In this paper, a novel framework is proposed for fast ultrasound simulation and visualization. A multiscale method is utilized to enhance the tubular structures of the CT image and to obtain a realistic simulation of the vascular structure. Seamless transformations between sector and rectangular shapes are then achieved using the thin-plate spline interpolation method. The parameters of the acoustic response are based on the intensity difference ratio of adjacent regions for acoustic wave propagation in a piecewise homogeneous medium and can be calculated quickly. Moreover, the detected edge information on different tissues is combined with random noise to simulate the acoustic response of the region of interest. Speckle noise and blurring are also added to the simulated ultrasound, resulting in an image that can be updated quickly according to user-defined parameters. Finally, the Kaiser window function is employed to simulate the integration effects of multiple transducer elements. The experimental results show that realistic simulations are obtained: aside from soft tissues and bones, vasculatures can be clearly observed in the simulated ultrasound. The efficiency evaluation experiments show that the proposed simulation method is also very fast; the average frame rate of the proposed ultrasound simulator is approximately 20 fps (SM = 300, FOV = 75°), which is better than the 16 fps rate commonly used in clinical radiology. However, the quantitative evaluation of ultrasound simulation techniques remains difficult for three main reasons. First, it is difficult to obtain the accurate imaging angle of a handheld ultrasound probe. Second, it is very difficult to control the degree of pressure on soft tissues during imaging, as different pressures lead to different imaging depths. Third, the imaging quality of the ultrasound is strictly correlated with the adjustable parameters of the transducer elements. Hence, it is very difficult to obtain ultrasound with predefined imaging parameters that could be evaluated against the anatomic structures in the CT image. To date, the most commonly used evaluation method for ultrasound simulation is visual comparison by physicians in clinical practice. In this paper, the effectiveness of the developed method is quantified on a realistic brain phantom, and the experimental results were assessed by experts from the ultrasonic department at the General Hospital of the People's Liberation Army, China.
An interesting application of the proposed method is its use in training for different ultrasound examinations or ultrasound-guided procedures. During a training session, the simulated ultrasound can be displayed with the model constructed from the CT image to provide an anatomical context to the trainee. Vascular enhancement and scattering image simulation are time consuming and require a cluster of CPUs to be practical; a GPU implementation of the algorithm would considerably accelerate the simulation and meet the higher requirements of fine-resolution simulation. In this paper, all acquisition parameters can be adjusted interactively during simulation, including ultrasound frequency, ultrasound intensity, FOV, PD, and speckle noise size. Hence, the proposed simulation method is highly convenient for simulating different imaging conditions.
Conflict of Interests
The authors declare that they have no conflict of interests.
Acknowledgments

This work was supported by the National Basic Research Program of China (2010CB732505, 2013CB328806), Key Projects in the National Science & Technology Pillar Program (2013BAI01B01), and the Plan of Excellent Talent in Beijing (2010D009011000004).
Does this series converge (1 - (log n)/n)^n
December 6th 2012, 09:16 AM
Re: Does this series converge (1 - (log n)/n)^n
Hmm.. How about this..

Take as a given that $\left\{1 - \frac{\log n}{n}\right\}$ is an increasing sequence with $\lim_{n \to \infty} \left(1 - \frac{\log n}{n}\right) = 1$. (It is true: see here.)

For $|r| < 1$ we know that $\sum_{n=0}^\infty r^n = \frac{1}{1-r}$, but that same sum diverges for any other r.

Fix $r \in (0,1)$. Then, by definition of limit, there exists $N \in \mathbb{N}$ such that for each $n \geq N$, $\left(1 - \frac{\log n}{n}\right) > r$. Thus for any $r \in (0,1)$, there are an infinite number of terms such that $\left(1 - \frac{\log k}{k}\right)^k > r^k$. Hence, $\sum\left(1 - \frac{\log n}{n}\right)^n$ is greater than any (monic) convergent geometric series, and is therefore divergent.
Anyone? I'm afraid that this post has had so many replies that nobody wants to read it, but I haven't got a usable answer. Plato gave a good hint maybe, but I don't follow it..
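For what it is worth, here is a standard way to settle the question, sketched under the assumption that log means the natural logarithm. Since

$\left(1 - \frac{\log n}{n}\right)^n = \exp\left(n \log\left(1 - \frac{\log n}{n}\right)\right) = \exp\left(-\log n + O\left(\frac{(\log n)^2}{n}\right)\right) \sim \frac{1}{n}$,

the terms behave like those of the harmonic series, so by the limit comparison test with $\sum \frac{1}{n}$ the series diverges.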
What methods exist to prove that a finitely presented group is finite?
Suppose I have a finitely presented group (or a family of finitely presented groups with some integer parameters), and I'd like to know if the group is finite. What methods exist to find this out? I
know that there isn't a general algorithm to determine this, but I'm interested in what plans of attack do exist.
One method that I've used with limited success is trying to identify quotients of the group I start with, hoping to find one that is known to be infinite. Sometimes, though, your finitely presented
group doesn't have many normal subgroups; in that case, when you add a relation to get a quotient, you may collapse the group down to something finite.
In fact, there are two big questions here:
1. How do we recognize large finite simple groups? (By "large" I mean that the Todd-Coxeter algorithm takes unreasonably long on this group.) What about large groups that are the extension of finite
simple groups by a small number of factors?
2. How do we recognize infinite groups? In particular, how do we recognize infinite simple groups?
(For those who are interested, the groups I am interested in are the symmetry groups of abstract polytopes; these groups are certain nice quotients of string Coxeter groups or their rotation subgroups.)
gr.group-theory co.combinatorics at.algebraic-topology
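For small candidate presentations, coset enumeration can at least be tried directly. Below is a rough sketch using SymPy's FpGroup; the presentation is purely illustrative, and note that order() may simply fail to terminate when the group is infinite.

from sympy.combinatorics.free_groups import free_group
from sympy.combinatorics.fp_groups import FpGroup

F, a, b = free_group("a b")
# <a, b | a^2 = b^3 = (a*b)^5 = 1>, the (2,3,5) rotation (von Dyck) group
G = FpGroup(F, [a**2, b**3, (a*b)**5])
print(G.order())  # 60: Todd-Coxeter coset enumeration terminates here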
9 Answers
The theory of automatic groups may be a help here. There is a nice package written by Derek Holt and his associates called kbmag (available for download here: http://www.warwick.ac.uk/~mareg/download/kbmag2/ ). A previous answer mentioned groebner bases. The KB in kbmag stands for Knuth-Bendix, which is a string rewriting algorithm that can be considered to be a non-commutative generalization of groebner bases. There is a book "Word Processing in Groups" by Epstein, Cannon, Levy, Holt, Paterson and Thurston that describes the ideas behind this approach. It's not guaranteed to work (not all groups have an "automatic" presentation) but it is surprisingly effective.
You may be in luck. It turns out that finitely generated Coxeter groups are automatic: Brink and Howlett (1993). "A finiteness property and an automatic structure for Coxeter groups". Mathematische Annalen – Victor Miller Feb 5 '10 at 15:56
If a discrete group is amenable and has Kazhdan's Property (T), then it is finite.
I'm not sure it would help you, but it's a technique.
This technique was used by Margulis in his original proof of his Normal Subgroup Theorem. It's since been used in a couple of other Normal Subgroup Theorems, which have been applied to prove simplicity of some infinite groups. See for example "Lattices in products of trees" by Burger and Mozes, and "Simplicity and superrigidity of twin building lattices" by Caprace and Rémy.
This is sort of a sideways look at your question:
There's software called "Heegaard" by John Berge that takes as input a finite presentation and attempts to find a corresponding Heegaard splitting of a 3-manifold which has that fundamental
group. It seems to be fairly effective. There are algorithms to produce triangulations of Heegaard splittings available (Hall and Schleimer for example). So you could take the presentation,
find (if possible) the Heegaard splitting, produce the triangulation and then use software like Regina and SnapPea to analyze the geometry of those manifolds. There's a lot of heuristics
there and also some serious algorithms. All the links to the various packages and their documentation are available here: http://www.math.uiuc.edu/~nmd/computop/
So for groups that are the fundamental groups of 3-manifolds at least, there's a decent toolkit to play with.
As an example, consider testing to see if a group is trivial. Step 1: Heegaard could get stuck. Step 2: if Heegaard finds a splitting, you triangulate it and pass it to Regina. Step 3: Regina has an algorithm to recognise a triangulated 3-sphere, so it will tell you whether or not your group is trivial.
There is no algorithm to tell if a finitely presented group is finite, but in principle there is a procedure which will terminate if your group is finite, and tell you which group it is. You
can recursively list all finite groups (e.g. by group tables), and therefore presentations for them. You can recursively perform all Tietze transformations on your group presentation, and
check at each stage whether it agrees with one of the finite group presentations you have computed (imagine doing this in parallel or alternating the steps of the two recursive procedures). This will eventually tell you whether your group is finite if it is. But of course this is completely impractical, and I realize this isn't what you want. The uncomputable thing is to prove that a group is not finite.
One can also sometimes use Fox calculus, which describes the abelianization of a finite-index normal subgroup of $G$. If this abelianization is infinite, your group is infinite.
Johnson's "Presentations of Groups", chapter 12, describes this in detail.
Also see this thread for some examples of other techniques: group-pub
Regarding part 2 of your question - "In particular, how do we recognize infinite simple groups?" - I think the answer is that it depends on which infinite simple group you're looking at!
Some famous examples:-
• Higman's original construction of an infinite simple group starts with a group with no non-trivial finite quotients. (Roughly, you construct one of these by building in a pair of
conjugate elements which would have to have different orders in a finite quotient.) You then proceed to take the quotient by a maximal proper normal subgroup. The result can't be
finite, because that would be a non-trivial finite quotient! (There was some discussion of this here.)
• Thompson's groups T and V contain elements of infinite order!
• Tarski Monsters are infinite because of Sylow's Theorems. Every proper subgroup is of prime order p, so Sylow's Theorems tell you that if it were finite then it would be cyclic, which it isn't by construction.
Do you have a particular reason to think that your groups are simple? What do you know about the kernel of the map from the Coxeter group?
EDIT: Just wanted to emphasize that of course, of the examples listed, only Thompson's Groups happen to be finitely presented. Finitely presented infinite simple groups are pretty special.
The groups I'm dealing with right now certainly aren't simple -- each one has at least one known finite quotient -- but I'm interested in the general question anyway. – Gabe Cunningham
Dec 4 '09 at 2:48
Right. I assumed from the "simple" flavour of your question that you weren't interested in answering your question by looking for infinite quotient/finite overgroups (which is the usual
way of approaching these things). Another possibility is that if your kernel satisfies some sort of "small cancellation" condition then you can sometimes prove that the quotient is
infinite. But I've no idea how you make that work on a Coxeter group. – HJRW Dec 4 '09 at 4:00
I suppose Groebner bases can be used to compute the size of a group with generators and relations, just as they can be used to compute the size of a commutative or noncommutative algebra
with generators and relations. This certainly would not work in all cases, but in some simple enough cases it will. In particular, when your group is actually finite, you will eventually
discover this with Groebner bases, though the computation time may be impracticable for a human, or even for a computer. When your group is infinite, Groebner bases will sometimes tell you it is, but sometimes they wouldn't.
Do you mean some non-commutative version of Groebner bases? To which algebra do they belong? – mathreader Dec 4 '09 at 0:18
I am not sure that I understand your question. Of course, you need noncommutative Groebner bases, but you don't necessarily need any algebra, though you may think about one if it makes
Actually, there might be a better way, possibly, with a special notion of a Groebner basis particularly suited for the group case. I would look for it in the literature, starting from
Teo Mora's papers. – Leonid Positselski Dec 4 '09 at 0:30
Your approach of finding infinite quotients is certainly a standard one. There is, however, a slight tweaking of it that helps in the event that this approach breaks down - search through some low index subgroups. If any of these have infinite homomorphic images then your group must possess an infinite subgroup and thus must also be infinite. In Magma the command "LowIndexSubgroups" can do this and I suspect something similar works in GAP, Matlab etc.
As with the other techniques this is not a sure-fire 100% guaranteed method, but it is sometimes useful.
Simplicity of an infinite group is a much harder question to address. Needless to say that if I was a betting man then I would certainly put money on your group not being simple.
There is a software called Magnus (http://www.grouptheory.org/magnus) which gives you some insight into how it may be implemented. I do not know anything about the details, but it has some documentation, and works pretty well ;-)
Increasing And Decreasing Function In Calculus
3.3 Increasing & Decreasing Functions and the 1st Derivative Test Calculus Home Page Problems for 3.3 Title: intro (1 of 10)
3.3 Increasing and Decreasing Functions and the First Derivative Test Calculus Guidelines for Finding Intervals on Which a Function is Increasing or Decreasing
A Increasing and Decreasing Functions A function f is increasing over the interval (a,b)if f (x1)< f (x2)whenever x1 <x2 in the interval(a,b). ... Calculus and Vectors – How to get an A+ 4.1
Increasing and Decreasing Functions
3.1 Increasing and Decreasing Functions A. Increasing and Decreasing Functions (Informal Definitions) A function is increasing if its graph is rising as you scan it from left to right.
Calculus Maximus Notes 5.3T: Inc, Dec, 1st Deriv Test Page 2 of 6 Example 1: The graph of a function f x defined on 3,5 is shown. List the open intervals over which the function is increasing,
decreasing, and/or constant.
1 Criteria for Increasing / Decreasing Functions If f'(x) > 0 for all x in (a,b), then f is increasing on (a,b). Definition If f'(x0) = 0, we say that x0 is a critical point of f.
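As a quick illustration of this criterion, here is a sketch using SymPy; the function f(x) = x^3 - 3x below is an arbitrary choice, not taken from any of the documents listed here.

import sympy as sp

x = sp.symbols('x', real=True)
f = x**3 - 3*x
fp = sp.diff(f, x)                                # f'(x) = 3x^2 - 3
print(sp.solve(fp, x))                            # critical numbers: [-1, 1]
print(sp.solve_univariate_inequality(fp > 0, x))  # intervals where f is increasing
print(sp.solve_univariate_inequality(fp < 0, x))  # intervals where f is decreasing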
3.1. Increasing and Decreasing Functions; Relative Extrema Increasing and Decreasing Functions Let f(x)be a function defined on the interval a <x <b, and let
Kuta Software - Infinite Calculus Name_____ Intervals of Increase and Decrease Date_____ Period____ For each problem, find the x-coordinates of all critical ... intervals where the function is
increasing and decreasing. 1)
Applied Calculus Lesson 5.1: Increasing and Decreasing Functions Increasing and Decreasing Functions Let f be a function defined on some interval.
10-9-2005 Increasing and Decreasing Functions A function f increases on a interval if f(a) < f(b) whenever a < b and a and b are points in the interval.
Increasing/Decreasing Functions First Derivative Test Second Derivative Test Concavity 1. The first derivative tells us where a function is increasing or decreasing on an interval
SECTION 3.3 Increasing and Decreasing Functions and the First Derivative Test 181 The First Derivative Test After you have determined the intervals on which a function is increasing or decreasing,
MTH 141 Applied Calculus Name:_____ Worksheet: Part 5 Segment 1: Solving Inequalities; Increasing, Decreasing Functions 1. For the function y 2x2 11x 5, solve the inequality y ≤0.
Increasing and Decreasing Functions Consider the following two graphs Increasing: Decreasing: ... If a function f has a local minimum or local maximum at x = c, ... Business Calculus Author: Brian E.
How can we say a function is increasing or decreasing at an x value which makes the derivative equal zero?” is a perennial favorite on the Advanced Placement Calculus listserve. From a logical point
of view,
Increasing and Decreasing Functions A function f is increasing on an interval if for any two numbers x1 and x2 in the interval, x1 < x2 implies f(x1) < f(x2)
Increasing, Decreasing, and Constant Functions Definition A ... and x2 are two points in (a,b) with x1 < x2. A function is decreasing on an open interval (a,b), if f(x1) > f(x2) whenever x1 and x2
are two points ... College Algebra is designed to begin preparing students for the calculus sequence.
AP® CALCULUS AB 2012 SCORING GUIDELINES Question 6 ... Is the speed of the particle increasing, decreasing, or neither at time t = 4? ... symbolic derivative for the given velocity function,
correctly using the chain rule.
To help you decide whether a function is increasing, decreasing, or constant on an interval, you can evaluate the function for several values of x. However, calculus is needed to determine, for
certain, all intervals on which a function is increasing, decreasing, or
Increasing and Decreasing Functions ... any two numbers x1 and x2 in the interval, x1 < x2 implies f (x1) < f (x2) Likewise, for decreasing How to prove a function is increasing or decreasing using
Calculus ... See Guidelines for Finding Intervals on Which a Function is Increasing or Decreasing ...
3.1 INCREASING AND DECREASING FUNCTIONS; RELATIVE EXTREMA 1) Increasing and Decreasing Functions If f(x) is a function defined on the interval (a,b), and x
I First derivative test for maxima and minima Definitions: Let c be a point in the domain of a function f(x). • f(x) has a local minimum (or relative minimum) at c
Increasing and Decreasing Functions, Min and Max, Concavity studying properties of the function using derivatives – Typeset by FoilTEX – 1
Calculus Chapter 4 2/59 4.1 Applications of the First Derivative • Definition: A function f is increasing (decreasing) on an interval (a,b)ifforanytwonumbersx1 and x2 in
Calculus Maximus WS 5.3: ... Determine the increasing and decreasing open intervals of the function fx x x( )=(−31)4/5 1/5( ) ... Function Sign of !gc( ) a) gx f x( ) ...
decreasing. Increasing means the function values are going up as x goes up. ... Calculus Activities © 2004 TEXAS INSTRUMENTS INCORPORATED 1. Draw vertical lines on the graph where the function
changes from increasing to decreasing and from decreasing to increasing. 2.
3.1. Increasing and Decreasing Functions; Relative Extrema Increasing and Decreasing Functions Let f(x) be a function defined on the interval a < x < b, and let
Calculus-Increasing/Decreasing Functions 1. Let a. Find where f is increasing, where it is decreasing and where it is constant. b. Sketch the graph of the function in the space at right. Note the
indicted window size. 2. ...
MA 131 Lecture Notes Chapter 4 Calculus by Stewart 4.1) Maximum and Minimum Values 4.3) How Derivatives Affect the Shape of a Graph A function is increasing if its graph moves up as x moves to the
right and is decreasing if its graph
dicating where the function is increasing and decreasing beneath the last number line. In particular, ... The calculus tells you what features you should expect to see in a graph. Without it, you’re
simply fooling around and hoping you get something
AP Calculus AB Chapter 5 Note Packet 5.1: Increasing and Decreasing Functions Review: If on an interval, then is _____ on the interval.
1 Sep 1910:25 AM Test for Increasing and Decreasing Functions Let f be a continuous function on the closed interval [a,b] and differentiable
I The derivative test for increasing and decreasing functions Method for finding where a function f is increasing/decreasing 1. Find all critical points of f. (That is, find all points in the domain where
f'(x) = 0 or f'(x) does not exist.) 2.
Derivative and Concavity Tests.notebook 1 October 01, 2013 Test for Increasing and Decreasing Functions Let f be a continuous function on the closed interval [a,b] and differentiable
Title: Increasing/Decreasing Functions Subject: SMART Board Interactive Whiteboard Notes Keywords: Notes,Whiteboard,Whiteboard Page,Notebook software,Notebook,PDF,SMART,SMART Technologies ULC,SMART
Board Interactive Whiteboard
Increasing/Decreasing Functions Activity: Investigating a common quadratic model. ... the object above the earth is given by a quadratic function of the time t (in seconds) elapsed since the object
was shot or thrown into the air.
Intervals of Increasing & Decreasing . Find the first derivative of the function, f. Find critical numbers. Set the first derivative equal to zero
... A function is always increasing or decreasing Point of inflexion: A point at which the curve is neither concave upwards nor downwards, ... Chapter 2 Geometrical Applications of Calculus 61
Higher Derivatives A function can be differentiated several times:
4.3 Connecting f ' and f '' with the graph of f Calculus Example: Find where the function g(x) = x^2 e^x is increasing and decreasing, then find any local extrema and absolute
discussed using the ideas of calculus (limits and derivatives). A function f is a rule that assigns to each element x in a set D exactly one element, ... gives you information on whether a function
is increasing or decreasing without having to look at a sketch. Local and Absolute Extrema
Section 3.3 Increasing/Decreasing & The 1st Derivative Test Day 1 Investigation 1 ... Global min(s): Global max(s): Local min(s): Local Max(s): Intervals of Increasing: Intervals of Decreasing:
Derivative Function: Derivative Graph: Critical Numbers: Investigation 2 (Calculator): f(x) = 2sin(0 ...
Larson/Edwards Calculus/Brief Calculus: An Applied Approach, 8e Notetaking Guide IAE ... In order to find the intervals on which a function f is decreasing or increasing, you need to find the
critical numbers, the points where fx()
... This is the definition of the derivative of the function B : T ;cos T ... AB Calculus Multiple Choice Test Fall 2012 Page 2 ... and is: increasing decreasing increasing increasing ...
Going Up? By Lin McMullin One of the most frequently asked questions on the AP Calculus Electronic Discussion Group is, “If a function is increasing or decreasing on an interval should the endpoints
Increasing and Decreasing Functions Corollary Suppose that f a, bis continuous on [ ] ... Find the critical numbers and the open intervals on which the function is increasing or decreasing. 1. x4 32
2. x1 3 3. x1 4. 2 4 fx x x 5. 2 1 fx x x 6. x1 7. x2 3 8.
Objective: Determine intervals on which a function is increasing or decreasing. Find the open intervals on which the function 2 3 x fx x is increasing or decreasing.
Limits - Cornerstone of calculus ... increasing decreasing Walking uphill ) function is increasing ... How can we determine intervals where a function is increasing and decreasing from the equation
of the function? If f'(x) goes from positive to negative ...
Rahn © 2010 Material developed from Paul Forester’s Calculus Graphs of Functions For each of the following functions: ... C If the function is increasing or decreasing at x = 1, use the graph and/or
table values
Intervals on Which a Function is Increasing/Decreasing . Definition: A function is increasing on an interval (a, b) if, for any two numbers . x 1 ... decreasing. We can use calculus to determine
intervals of increase and intervals of decrease. A function can change from increasing to decreasing ...
92.131 Calculus 1 Geometry of Functions For Questions 1-4: i) On what intervals is the function f (x) increasing? Decreasing? ii) What are the relative
Deriving Moment of Inetia using just linear dynamics
Yes. Consider a mass being accelerated as it rotates in a circle.
Its tangential acceleration is:
[itex]a = r\alpha[/itex]
where [itex]\alpha[/itex] is the angular acceleration, so the force is [itex]F = ma = mr\alpha[/itex]. Multiply both sides by r:
[itex]Fr = mr^{2}\alpha[/itex]
Take this sum of all masses:
[itex]\sum r^{2}dm[/itex]
Or another way:
The force on a small element dm is:
[itex]dF = r\frac{d\omega}{dt}dm[/itex]
then the torque on this small mass dm is:
[itex]d\tau= rdF=r^{2}\frac{d\omega}{dt}dm[/itex]
integrating this over the total mass gives the total torque:
[itex]\tau=\int r^{2}dm\frac{d\omega}{dt}[/itex]
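As a quick numerical sanity check of that last integral, here is a sketch assuming a uniform rod of mass M and length L rotating about one end, whose moment of inertia has the closed form ML^2/3:

import numpy as np

M, L, N = 2.0, 1.5, 100_000
r = np.linspace(0.0, L, N)      # positions of the mass elements along the rod
dm = np.full(N, M / N)          # equal mass elements summing to M
I_num = np.sum(r**2 * dm)       # discrete version of the integral of r^2 dm
print(I_num, M * L**2 / 3)      # both ~1.5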
Hope it helps
GED Preparation
A new section of each course starts monthly. If enrolling in a series of two or more courses, please be sure to space the start date for each course at least two months apart.
Week 1
Wednesday - Lesson 01
How many subjects does the GED® test cover, and how much time do you get for each part of the test? Where do you go to take the test? Do you get to use a calculator? Will this class teach everything
from four years of high school? In this lesson, you'll learn the answers to all these questions and more.
Friday - Lesson 02
Reasoning Through Language Arts is the longest of the four GED® test modules, so we'll spend three lessons on it. In this lesson, I'll give you an overview of the RLA, and then we'll focus on your
reading skills. Together we'll map out a strategy for mastering the reading passages of the RLA.
Week 2
Wednesday - Lesson 03
Today we'll continue preparation for the Reasoning Through Language Arts test module, but we'll shift gears a bit. It's time to explore the parts of speech, punctuation, capitalization, and sentence
structure. What you learn today will prepare you for the editing part of the RLA test module and help you with the writing we'll do in later lessons.
Friday - Lesson 04
It's time to use what you learned about reading strategies in Lesson 2 and grammar in Lesson 3 to create an extended response, which is a type of essay. The extended response has a high point value,
so you'll want to ace it!
Week 3
Wednesday - Lesson 05
In this first of three math lessons, I'll introduce you to the test content and the online calculator. We'll also review the concept of "order of operations," which tells you what to do and in what
order when you have a multi-step problem to solve. More than half the math questions cover algebra topics, so we'll jump right into the language of algebra. By the end of this lesson, you'll be
simplifying and evaluating expressions and solving equations and inequalities that have one variable.
Friday - Lesson 06
Lesson 6 applies the algebra you learned in Lesson 5 to formulas and dimensions in geometry. You'll use formulas for perimeter, area, volume, and surface area. These lessons work in real life as well
as on the test: By the end of this lesson, you'll know how to figure your gas mileage, the cost per unit and the sale price of an item you're buying, and the area and perimeter of your yard.
Week 4
Wednesday - Lesson 07
In this lesson, you'll bring your basic algebra skills to a higher level and combine them with geometry concepts. You'll learn how to plot points, graph lines, and figure out the slope of lines on a
coordinate grid. Then you'll factor and solve a quadratic equation. Last, I'll show you how to use tables to solve word problems algebraically. There's a lot going on in this lesson, but I know
you're up to the challenge!
Friday - Lesson 08
In our first science lesson, we'll tackle the theme of energy and related systems. Besides investigating the cosmos and learning physics laws, we'll practice some math that relates to science. The
science test also features two short-answer items, and I'll help you prepare for those.
Week 5
Wednesday - Lesson 09
Ever wonder why you have blue eyes and your siblings have brown eyes? Ever watched TV news reports about the effects of hurricanes, tornadoes, or earthquakes? Then you're ready for this science
lesson. Today we'll tackle the first science theme: Human Health and Living Systems.
Friday - Lesson 10
Understanding your government is an important part of being a good citizen. As you prepare for the social studies test, you'll learn about events that shaped the history and economy of the United
Week 6
Wednesday - Lesson 11
Explore the second social studies theme as you learn to analyze political cartoons, master the basics of economics, and learn more about 20th-century history.
Friday - Lesson 12
We're almost done with this course! Use the checklists in this lesson to make sure you're ready for the GED® test. You'll also find tips and techniques to make test day less stressful. And we'll talk
about your plans for the future, whether that's college or another path.
Geometry has been an essential element in the study of mathematics since antiquity. Traditionally, we have also learned formal reasoning by studying Euclidean geometry. In this book, David Clark
develops a modern axiomatic approach to this ancient subject, both in content and presentation.
Mathematically, Clark has chosen a new set of axioms that draw on a modern understanding of set theory and logic, the real number continuum and measure theory, none of which were available in
Euclid's time. The result is a development of the standard content of Euclidean geometry with the mathematical precision of Hilbert's foundations of geometry. In particular, the book covers all the
topics listed in the Common Core State Standards for high school synthetic geometry.
The presentation uses a guided inquiry, active learning pedagogy. Students benefit from the axiomatic development because they themselves solve the problems and prove the theorems with the instructor
serving as a guide and mentor. Students are thereby empowered with the knowledge that they can solve problems on their own without reference to authority.
This book, written for an undergraduate axiomatic geometry course, is particularly well suited for future secondary school teachers.
In the interest of fostering a greater awareness and appreciation of mathematics and its connections to other disciplines and everyday life, MSRI and the AMS are publishing books in the Mathematical
Circles Library series as a service to young people, their parents and teachers, and the mathematics profession.
Request an examination or desk copy.
Titles in this series are co-published with the Mathematical Sciences Research Institute (MSRI).
Undergraduate students interested in geometry and secondary education.
"An interesting and singular approach of the Euclidean geometry is contained in this book ... [The] book covers all the topics listed in the common core state standards for high school synthetic
geometry ... [T]he didactical approach of the large collection of problems, solutions and geometrical constructions is very important to consider it as a good textbook for teaching and learning
synthetic geometry."
-- Mauro Garcia Pupo, Zentralblatt MATH
Posts by kenzie
Total # Posts: 43
Paula bought a ski jacket on sale for $6 less than half its original price. She paid $88 for the jacket. What was the original price?
Tickets to a school play were $5 for adults and $3 for students. A total of 247 tickets were sold for a profit of $1000. How many adult tickets and student tickets were sold?
The Achilles tendon, which connects the calf muscles to the heel, is the thickest and strongest tendon in the body. In extreme activities, such as sprinting, it can be subjected to forces as high as
13 times a person s weight. According to one set of experiments, the ave...
College Algebra
(3x-1)/(x^2-10x+26) Find all (real) zeros of the function. Find all (real) poles of the function. The function has... A horizontal asymptote at 0? A non-zero horizontal asymptote? A slant asymptote? None of the above
Mass is 64.0 and volume is 16.0. What is the density?
What type of sentence pattern is "Traditional gas-powered cars are harmful to the environment." ?
describe a triangle with sides of 9in., 4in. and 6in.
I guess I just don't know what value to use for A in the Arrhenius equation.
At room temperature (about 20C) milk turns sour in about 64 hours. In a refrigerator at 3C, milk can be stored for about three times as long before turning sour. (i) Determine the approximate
activation energy for the reaction that causes milk to sour, and (ii) estimate how lo...
Nitrogen monoxide reacts with hydrogen gas to form nitrogen gas and water (vapor). Write a balanced equation for this reaction. the reaction is experimentally found to be (approximately) first-order
in H2 and second-order in NO. Write down the form of the experimentally-determ...
If you're solving for y, your answer is y = x/2 + 1. If you're solving for x, your answer is x = 2y - 2.
what variable are you solving for?
I agree with that..
American Government
1.)unreasonable 2.)racial 3.)reasonable 4.)unnecessary These are the choices...
American Gov't
Sweet, thanks (:
American Gov't
_____ declared that "separate educational facilities are inherently unequal." Is it Brown vs. Board of Education?
what are the next 3 numbers after -1,+4,-9,+16
Thank You both! =D
Rewrite each sentence below, replacing all clichés with more creative or straightforward expressions. The tried and true methods quite often can't be beat.
Well what is the formula for volume? Find out that, then plug in the numbers and you have your answer.
If the scale model of an 82 inch long table is 10.25 inches long, what is the scale used to create the model?
find sin of theta if cos is less than 0 and cot equals 3
The banner will be 40 ft plus half the banner's length. How long will the banner be?
Define characteristics of living things
Nursing School
Also how many grams CO2 removed?
Nursing School
How many grams of lithium hydroxide are needed to remove 4500 grams of carbon dioxide by a submarine atmospheric regulator that acts according to the following formula? 2LiOH + CO2 → Li2CO3 + H2O
y/5 - 8 = -12. Solve the equation please :(
Simplify (-4)^4. The second 4 is the exponent.
Write an equation in slope-intercept form using the following information: slope = 3, y-intercept = -5
Simplify (-4)4. The second 4 is the coefficient. Plzz HELP
solve 2(t-4)=3- (3t +1)
Is this a direct variation? y = -2x + 6
Generate ordered pairs for the function for x= -2, -1, 0, 1, 2
so sorry didn't mean to put math. I think it's G
What change you should be made in this sentence:Her parents thought she might need help copeing with her loss of her arm, but Bethany proved them wrong. F. Change parents to parent's G. Change
copeing to coping H. Delete the comma J. Change them to her
538 divided by -63
Social Studies
Thank you for answering, Ms. Sue. But I found more details on the other page in my book after I asked this. They helped by spreading religion and learning about land resources so they knew where the
good places were to start settlements. Again thanks for answering. -Kenzie
Social Studies
How did missionaries help New France expand?
A job is shared by 4 workers, W, X, Y, and Z. Worker W does 1/4 of the total hours. Worker X does 1/3 of the total hours. Worker Y does 1/6 of the total hours. What fraction represents the remaining
hours allocated to person Z?
Simplify the following: (4/7M)^2(49M)(17p)(1/34p^5)
algebra 1
5 = n/2
In the addition problems below, each letter represents the same digit in both problems. Replace each letter with a different digit, 1-9, so that both addition problems are true. (There are two possible answers.)

  A B C      A D G
+ D E F    + B E H
-------    -------
  G H I      C F I
A sample revised prelude for numeric classes
Dylan Thurston dpt@math.harvard.edu
Sun, 11 Feb 2001 22:27:53 -0500
Thanks for the comments!
On Mon, Feb 12, 2001 at 12:26:35AM +0000, Marcin 'Qrczak' Kowalczyk wrote:
> I don't like the fact that there is no Powerful Integer Integer.
Reading this, it occurred to me that you could explictly declare an
instance of Powerful Integer Integer and have everything else work.
> Then the second argument of (^) is always arbitrary RealIntegral,
Nit: the second argument should be an Integer, not an arbitrary
> > > class (Real a, Floating a) => RealFrac a where
> > > -- lifted directly from Haskell 98 Prelude
> > > properFraction :: (Integral b) => a -> (b,a)
> > > truncate, round :: (Integral b) => a -> b
> > > ceiling, floor :: (Integral b) => a -> b
> Should be RealIntegral instead of Integral.
Yes. I'd actually like to make it Integer, and let the user compose
with fromInteger herself.
> Perhaps RealIntegral should be called Integral, and your Integral
> should be called somewhat differently.
Perhaps. Do you have suggestions for names? RealIntegral is what
naive users probably want, but Integral is what mathematicians would
use (and call something like an integral domain).
> > > class (Real a, Integral a) => RealIntegral a where
> > > quot, rem :: a -> a -> a
> > > quotRem :: a -> a -> (a,a)
> > >
> > > -- Minimal definition: toInteger
> You forgot toInteger.
Oh, right. I actually had it and then deleted it. On the one hand,
it feels very implementation-specific to me, comparable to the
decodeFloat routines (which are useful, but not generally
applicable). On the other hand, I couldn't think of many examples
where I really wouldn't want that operation (other than monadic
numbers, that, say, count the number of operations), and I couldn't
think of a better place to put it.
You'll notice that toRational was similarly missing.
My preferred solution might still be the Convertible class I mentioned
earlier. Recall it was
class Convertible a b where
convert :: a -> b
maybe with another class like
class (Convertible a Integer) => ConvertibleToInteger a where
toInteger :: a -> Integer
toInteger = convert
if the restrictions on instance contexts remain. Convertible a b
should indicate that a can safely be converted to b without losing any
information and maintaining relevant structure; from this point of
view, its use would be strictly limited. (But what's relevant?)
I'm still undecided here.
Dylan Thurston
Calculus/Polar Integration
Integrating a polar equation requires a different approach than integration under the Cartesian system, hence yielding a different formula, which is not as straightforward as integrating the function $f(x)$ in Cartesian coordinates.
In creating the concept of integration, we used Riemann sums of rectangles to approximate the area under the curve. However, with polar graphs, one can use sectors of circles with radius r and angle
measure dθ. The area of each sector is then (πr²)(dθ/2π), and the sum of all the infinitesimally small sectors' areas is $\frac{1}{2} \int_{a}^{b} r^2\,d\theta$. This is the form to use to integrate
a polar expression of the form $r=f(\theta)$ where $(a, f(a))$ and $(b, f(b))$ are the ends of the curve that you wish to integrate.
Integral calculus
Let $R$ denote the region enclosed by a curve $r = f(\theta)$ and the rays $\theta=a$ and $\theta=b$, where $0<b-a<2\pi$. Then, the area of $R$ is
$\frac12\int_a^b r^2\,d\theta.$
This result can be found as follows. First, the interval $[a,b]$ is divided into $n$ subintervals, where $n$ is an arbitrary positive integer. Thus $\delta\theta$, the length of each subinterval, is equal
to $b-a$ (the total length of the interval), divided by $n$, the number of subintervals. For each subinterval $i=1,2,\ldots,n$, let $\theta_i$ be the midpoint of the subinterval, and construct a
circular sector with the center at the origin, radius $r_i=f(\theta_i)$, central angle $\delta\theta$, and arc length $r_i\delta\theta$. The area of each constructed sector is therefore equal to $\tfrac12 r_i^2\delta\theta$. Hence, the total area of all of the sectors is
$\sum_{i=1}^n \tfrac12 r_i^2\,\delta\theta.$
As the number of subintervals $n$ is increased, the approximation of the area continues to improve. In the limit as $n\to\infty$, the sum becomes the Riemann integral.
Using Cartesian coordinates, an infinitesimal area element can be calculated as $dA$ = $dx\,dy$. The substitution rule for multiple integrals states that, when using other coordinates, the Jacobian
determinant of the coordinate conversion formula has to be considered:
$J = \det\frac{\partial(x,y)}{\partial(r,\theta)} = \begin{vmatrix} \frac{\partial x}{\partial r} & \frac{\partial x}{\partial \theta} \\ \frac{\partial y}{\partial r} & \frac{\partial y}{\partial \theta} \end{vmatrix} = \begin{vmatrix} \cos\theta & -r\sin\theta \\ \sin\theta & r\cos\theta \end{vmatrix} = r\cos^2\theta + r\sin^2\theta = r.$
Hence, an area element in polar coordinates can be written as
$dA = J\,dr\,d\theta = r\,dr\,d\theta.$
Now, a function that is given in polar coordinates can be integrated as follows:
$\iint_R g(r,\theta) \, dA = \int_a^b \int_0^{r(\theta)} g(r,\theta)\,r\,dr\,d\theta.$
Here, R is the same region as above, namely, the region enclosed by a curve $r=f(\theta)$ and the rays $\theta=a$ and $\theta=b$.
The formula for the area of $R$ mentioned above is retrieved by taking $g$ identically equal to 1.
Polar integration is often useful when the corresponding integral is either difficult or impossible to do with the Cartesian coordinates. For example, let's try to find the area of the closed unit
circle. That is, the area of the region enclosed by $x^2 + y^2 = 1$.
In Cartesian
$\int_{-1}^1 \int_{-\sqrt{1-x^2}}^{\sqrt{1-x^2}} \, dy \, dx = 2\int_{-1}^1 \sqrt{1-x^2} \, dx$
In order to evaluate this, one usually uses trigonometric substitution. By setting $\sin\theta = x$, we get both $\cos\theta = \sqrt{1-x^2}$ and $\cos\theta\,d\theta = dx$.
\begin{align}\int\sqrt{1-x^2}\,dx &= \int \cos^2\theta\,d\theta\\ &= \int \frac{1}{2} + \frac{1}{2} \cos 2\theta\,d\theta\\ &= \frac{\theta}{2} + \frac{1}{4}\sin2\theta+c = \frac{\theta}{2} + \frac{1}{2}\sin\theta\cos\theta+c\\ &= \frac{\arcsin x}{2}+\frac{x\sqrt{1-x^2}}{2}+c\end{align}
Putting this back into the equation, we get
$2\int_{-1}^1\sqrt{1-x^2}\,dx = 2\left[\frac{\arcsin x}{2}+\frac{x\sqrt{1-x^2}}{2}\right]_{-1}^1 = \arcsin 1-\arcsin(-1) = \pi$
In Polar
To integrate in polar coordinates, we first realize $r = \sqrt{x^2 + y^2} = \sqrt{1} = 1$ and in order to include the whole circle, $a=0$ and $b=2\pi$.
$\int_{0}^{2\pi} \int_{0}^1 r\,dr\,d\theta = \int_0^{2\pi} \left[\frac{r^2}{2}\right]_0^1\,d\theta = \int_0^{2\pi} \frac{1}{2}\,d\theta = \left[\frac{\theta}{2}\right]_0^{2\pi} = \frac{2\pi}{2} = \pi.$
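To sanity-check this numerically, here is a sketch using SciPy; dblquad integrates over the inner variable (here r) first, and the integrand is just the Jacobian r:

import math
from scipy import integrate

area, err = integrate.dblquad(lambda r, theta: r, 0, 2 * math.pi,
                              lambda theta: 0, lambda theta: 1)
print(area, math.pi)  # both ~3.14159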
An interesting example
A less intuitive application of polar integration yields the Gaussian integral
$\int_{-\infty}^\infty e^{-x^2} \, dx = \sqrt\pi.$
Try it! (Hint: multiply $\int_{-\infty}^\infty e^{-x^2} \, dx$ and $\int_{-\infty}^\infty e^{-y^2} \, dy$.)
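If you want a numerical check of the claimed value before (or after) working the trick, here is a sketch; quad handles the infinite limits directly:

import math
from scipy import integrate

val, err = integrate.quad(lambda x: math.exp(-x * x), -math.inf, math.inf)
print(val, math.sqrt(math.pi))  # both ~1.7724539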
Compound inequality
A compound inequality is a statement in which two inequalities are connected by the word "and" or the word "or".
For example, x > 5 and x ≤ 7 and x ≤ -1 or x > 7 are examples of compound inequalities
When the word that connects both inequalities is "and", the solution is any number that makes both inequalities true
When the word that connects both inequalities is "and", the solution is any number that makes both inequalities true.
When the word that connects both inequalities is "or", the solution is any number that makes either inequality true.
The graphs for x ≥ 2 and x < 4 should look like this:
Putting the two graphs together gives the following:
The solution is that portion of the graph where you see red and blue at the same time( Or the portion shaded twice)
If you pull this out from the graph above, we get:
This means that the solution is any number between 2 and 4, to include 2, but not 4.
Notice that the open circle (in red) means that 4 is not included
Graph x ≥ - 2 and x > 1
The graphs for x ≥ - 2 and x > 1 should look like this:
Putting the two graphs together gives the following:
The solution is that portion of the graph where you see red and blue at the same time( Or the portion shaded twice)
If you pull this out from the graph above, we get:
The solution is any number after 1
However, if we twist the same problem above and graph x ≥ -2 or x > 1, it is a different story.
Since the "or" means either, the solution will be the shaded area that include both inequalities.
The solution is thus any number after -2
Graph x > 2 or x < -3
Here it is!
However, if for the same problem right above I replace "or" by "and", there will be no solutions for this compound inequality.
Look carefully again at the graph right above and you will see that blue and red don't meet. That is why they have nothing in common and thus no solutions.
Marin Mersenne
Marin Mersenne was a French theologian who was also an accomplished amateur mathematician, scientist, and philosopher. He is best known for two things. First, his name has been attached to a class of
prime numbers called Mersenne primes. Second, he was indirectly responsible for many scientific achievements due to his extensive correspondence and collaboration with prominent scientists,
mathematicians, and philosophers.
In 1588, Mersenne was born into a family of laborers and attended grammar school at the College of Mans. He studied at the Jesuit College at La Fleche from 1604 to 1609. At La Fleche he befriended
fellow student Rene Descartes. The two would remain close friends and colleagues for the rest of their lives.
Beginning in 1609, Mersenne studied theology at the Sorbonne. He did so for two years, and then he joined the Minims, a religious order whose members focused on prayer, study, and scholarship. In
1611 he began his novitiate for the Minims at Nigeon, near Paris. The novitiate is a training and testing period that one had to go through before becoming a full member of the order. In 1612, in
Meaux, he completed his novitiate. He travelled to Paris and in October 1612 at the Place Royale he was made a priest.
The order sent him to be a professor of philosophy at the Minim convent in Nevers from 1614 until 1620, at which point he returned to Paris, where he would live for the rest of his life, other than a
few trips. The church supported him for the most part, although in later years, Jacques Hallé helped Mersenne out with money and by granting access to his library.
Mersenne's early publications were theological in nature, consisting of studies against Atheism and Scepticism. Almost all of his later work was scientific in nature, however, and it is for this work
that he is remembered.
In 1644, Mersenne published Cogitata Physico-Mathematica. In the preface of that document, Mersenne made a claim about a class of prime number that had been identified many years before. These primes
were of the form
2^n-1 (where n is a positive integer)
For a time it had been thought that if n was prime, then the resulting number would be prime. A counterexample of that (n = 11) had been published in 1536. Mersenne's claim was that for any positive
n less than 258, if n = 2, 3, 5, 7, 13, 17, 19, 31, 67, 127 or 257, then the resulting number would be prime. Otherwise, he claimed, the number would be composite (that is, not prime). Mersenne was
not able to verify all of these via calculation, because his computer was in the shop, but he was up front about the fact that he obviously hadn't checked them. At the time, nobody else could easily
check them either. He was later shown to be wrong in more than one way. Not only did some of his values of n actually produce non-prime numbers (n = 67, for example), but also he had missed some
values of n that produced prime numbers (n = 61, for example). In any case, although his list turned out to be incorrect and incomplete, his name still somehow became attached to the numbers. This
type of prime number became known as a Mersenne prime.
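Checking entries on Mersenne's list is easy today; here is a sketch of the Lucas-Lehmer test in Python (a test developed long after Mersenne's time):

def is_mersenne_prime(p):
    # 2**p - 1 is prime iff s(p-2) == 0, where s(0) = 4 and
    # s(k+1) = s(k)**2 - 2 (mod 2**p - 1), for odd prime p
    if p == 2:
        return True
    m = (1 << p) - 1            # the Mersenne number 2**p - 1
    s = 4
    for _ in range(p - 2):
        s = (s * s - 2) % m
    return s == 0

# Mersenne's list wrongly included n = 67 and missed n = 61:
print(is_mersenne_prime(61), is_mersenne_prime(67))  # True False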
Mersenne made far more significant contributions by collaborating with others, or by guiding them. During one part of Descartes' life, he was becoming less focused on serious pursuits. Mersenne
reined him in and got him back to work on philosophy. Mersenne also defended both Descartes and Galileo from religious attacks, as well as translating some of Galileo's work into French. It was
these translations that made Galileo's work known outside of Italy. In addition, Mersenne continued some of Galileo's research in acoustics, which in turn prompted Galileo to make further advances in
the field. Mersenne did some experiments using a pendulum to keep time, and suggested this use to Christiaan Huygens, who went on to create the world's first pendulum clock. He also tried to expose alchemy and astrology as the unscientific practices that they were.
He corresponded with many of the people who would later make up the French Academy, as well as a host of other people. Because there were no journals or regular meetings of Europe's scientists,
mathematicians, or philosophers, Mersenne was playing a crucial role. He communicated with everyone, exchanged ideas, gave suggestions, and it is because of this that his indirect contributions are
difficult to gauge. He discussed such diverse subjects as mathematics, philosophy, music theory, physics, acoustics, and whatever else his associates happened to be pursuing. Among those he
associated and corresponded with are Fermat, Pascal, Gassendi, Roberval, Beaugrand, Descartes, Huygens, Pell, Galileo, Torricelli, Peiresc, Beeckman, van Helmont, Hobbes, and Battista. After his
death in 1648, letters from over 78 different people were found in his chambers in Paris. With his passing he gave one final gift, having previously asked that an autopsy be performed on his body in
the interest of science.
• http://galileo.imss.firenze.it/vuoto/imerse.html
• http://www.utm.edu/research/primes/mersenne/index.html
• http://www-gap.dcs.st-and.ac.uk/~history/Mathematicians/Mersenne.html
• http://www.newadvent.org/cathen/10209b.htm
• http://es.rice.edu/ES/humsoc/Galileo/Catalog/Files/mersenne.html | {"url":"http://everything2.com/title/Marin+Mersenne?showwidget=showCs1359377","timestamp":"2014-04-16T20:00:06Z","content_type":null,"content_length":"35303","record_id":"<urn:uuid:b10ca361-48d2-4fff-87be-3054c5d294e8>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00053-ip-10-147-4-33.ec2.internal.warc.gz"} |
Largest num, smallest num with sentinel
OK, I am getting one error in my code that just won't let me finish the program. I am supposed to write a program with 3 choices: A, B, C. A prints the largest number out of a defined set of numbers entered by the user, B prints the smallest number, using a sentinel loop to exit the program, and C exits the program. Here is the code I have. Also, how would I convert this to switch statements?
//Write a program that displays a menu with the following choices to the user.
// A - Find the largest # with a known quantity of numbers
// B - Find the smallest # with an unknown quantity of numbers
// C - Quit
//Please enter your choice ___
//This menu needs to be repeatedly displayed until the user chooses to quit.
//If A is chosen, you should ask the user how many numbers he wants to enter. If he enters 5, you should read 5 numbers from him.
//You should then display the largest number he entered. Use a for loop to read these and display the largest.
//If B is chosen, you should keep reading numbers no matter what until the user enters -99.
//Then you should display the smallest number he entered not counting the -99.
//Hint - you only need to store the largest or smallest number, not every number that is input.
#include <iostream>
using namespace std;

const int SENTINEL = -99; //Sentinel to exit loop

int main()
{
    int numbers; //variables
    int large;
    int small;
    int smallest;
    int counter;
    int largest;
    int next;
    char letter;

    cout << "A - Find the largest # with a set quantity of numbers. \n"; //program initiation
    cout << "B - Find the smallest # with an unknown quantity of numbers. \n";
    cout << "C - Quit Program. \n";
    cout << "Please enter your choice: ";
    cin >> letter; //input

    while (letter != 'C'); //start of sentinel controlled while loop
    {
        cout << "The letter you entered is " << letter << endl;
        if (letter == 'A') //First if loop for A
        {
            cout << "Enter the amount of positive numbers to be compared. \n";
            cin >> numbers;
            for (counter = 1; counter < numbers; counter++) //This counts +1 on the counter
            {
                cout << "Enter a number." << endl;
                cin >> large; //stored first number
                next = large; //stored number in case of largest
                largest = large; //stored for largest overall
            }
            for (counter = 1; counter < numbers; counter++) //This counts +1 on the counter
            {
                cout << "Enter another number." << endl;
                cin >> large;
                if (large >= next) //loop to determine largest
                    largest = large;
                else
                    largest = next;
            }
            cout << "The largest number entered was. \n";
            cout << largest << endl; //output of largest number entered
        }
        if (letter == 'B') //Second if loop for B
        {
            //tried adding another open bracket here for this part of the loop and still got a syntax error for bottom else
            cout << "Enter a number, the smallest number entered overall will be displayed. \n To exit enter -99" << endl;
            cin >> small;
            next = small; //spot to store input number
            do
            {
                cout << " Please enter another number" << endl;
                cin >> small; //input of next number entered
                if (small > next) //looop for output of smallest number entered
                    smallest = small;
                else
                    smallest = next;
            } while (small != -99); //control loop exit
        }
        else (letter == 'C') //Keep getting a syntax error here don't know why
        //I have tried removing this line completly also still get syntax error on else
        // tried brackets after the else if
    }
    return 0;
}
Here is one implementation:
#include <iostream>
using namespace std;

const int SENTINEL = -99; //Sentinel to exit loop

int main()
{
    int numbers; //variables
    int large;
    int small;
    int smallest = 0;
    int counter;
    int largest = 0;
    int next = 0; // initialised: case 'B' tests it before the first read
    char letter;
    bool quit = false;

    while (!quit) // repeat the menu until the user chooses C
    {
        cout << "A - Find the largest # with a set quantity of numbers. \n"; //program initiation
        cout << "B - Find the smallest # with an unknown quantity of numbers. \n";
        cout << "C - Quit Program. \n";
        cout << "Please enter your choice: ";
        cin >> letter; //input

        switch (letter)
        {
        case 'A':
            cout << "Enter the amount of positive numbers to be compared. \n";
            cin >> numbers;
            cout << "Enter numbers:";
            cin >> large; // first number seeds the running maximum
            largest = large;
            for (counter = 0; counter < numbers - 1; counter++)
            {
                cin >> next;
                if (largest < next)
                    largest = next;
            }
            cout << "The largest number entered was: \n";
            cout << largest << endl; //output of largest number entered
            break;
        case 'B':
            cout << "Enter numbers, the smallest number entered overall will be displayed. \n To exit enter -99" << endl;
            cin >> small; // first number seeds the running minimum
            smallest = small;
            next = small;
            while (next != SENTINEL)
            {
                cin >> next;
                if (smallest > next && next != SENTINEL)
                    smallest = next;
            }
            cout << "The smallest number entered was: \n";
            cout << smallest << endl; //output of smallest number entered
            break;
        case 'C':
            quit = true;
            break;
        }
        cout << endl;
    }
    return 0;
}
I don't know if it can satisfy your requirement.
You, my friend, are wonderful. I have been working on a solution for this for, ummmmm, 3 weeks now. I couldn't figure out the complete system of it till I got it working, and now I get it. Much appreciated.
With pleasure, and if you have any questions about it, we can think them over together.
| {"url":"http://www.cplusplus.com/forum/beginner/1813/","timestamp":"2014-04-19T04:21:56Z","content_type":null,"content_length":"15812","record_id":"<urn:uuid:4d3a06c7-5fa2-4bf3-9ca9-fc6a55b67b7f>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00660-ip-10-147-4-33.ec2.internal.warc.gz"}
Fairview, NJ Algebra 1 Tutor
Find a Fairview, NJ Algebra 1 Tutor
...I'm currently in a Virology course at Columbia that has a large focus on the molecular level, mostly pertaining to the genomes of various viruses. I have worked in a service-based IT department
for the past four years. We focus on software support for both Macintosh and Windows based operating ...
25 Subjects: including algebra 1, chemistry, calculus, geometry
I am a certified Math Teacher with more than 8 years' experience with the NYCDOE. I have a high NYS Regents exam passing rate in Integrated Algebra and Geometry. Let me help your child become
successful in areas such as Pre-algebra, Algebra I, Algebra II, or Geometry!
4 Subjects: including algebra 1, geometry, algebra 2, prealgebra
...I also have experience tutoring middle school students in American and world history. Due to my after school tutoring experience, I have learned to adapt my tutoring techniques to a variety of
student's specific needs and subjects. I also have writing and editing experience from being an editor of my university's student paper and an editor for my university's literary magazine.
27 Subjects: including algebra 1, English, reading, writing
...I've helped undergraduate college students with all of these subjects, as well as adults returning to school during their careers: MBA at NYU, prerequisites for a course of study in Nursing,
and more. If you’re in a class that’s not on this list, ask me, I probably do that too! A little personal info: I like music, reading, and writing as hobbies.
12 Subjects: including algebra 1, calculus, physics, MCAT
...For 7 years, I worked as a case manager, outreach worker, and supervisor at the Hetrick-Martin Institute, providing services to runaway, homeless, and at-risk youth Currently, I am privately
tutoring a special needs child in piano. I am extremely patient and empathetic, and get along great wi...
29 Subjects: including algebra 1, reading, English, vocabulary | {"url":"http://www.purplemath.com/fairview_nj_algebra_1_tutors.php","timestamp":"2014-04-16T04:49:52Z","content_type":null,"content_length":"24292","record_id":"<urn:uuid:43ce252b-d3a3-4ae5-b41e-ae986717c73e>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00161-ip-10-147-4-33.ec2.internal.warc.gz"} |
How to Stay Solvent Longer Than the Market Can Stay Irrational
COMMENTS (14 to date)
This just turns out to be a hedge instead of a unique strategy, b/c you have to do something with the balance of your assets
August 28, 2009 10:38 AM
Isn't this a more conservative variant of Taleb's Black Swan story? He favors mostly low risk assets with a lot of small bets on far out of the money outcomes.
August 28, 2009 10:42 AM
The problem is that bubbles are irrational. Just when you think the top is in, the security runs and runs. In your example, your 5 year figure will be wrong by 10 years. It takes 15 years to get back
to 50, and you are out of the market or became bullish by year 7 or 8.
August 28, 2009 11:17 AM
The real problem with this approach is margin. If your short position pops, you will need to use some of that assumed principal to cover the unrealized loss in your short (not double down), so that
your broker or clearing exchange will allow you to put on an even bigger position.
Under Reg T, you need 150% of the short position you want to take, at the time you want to take it. So in the example above, if you've got $5,000 in your account and you put on a $10,000 short
position, and the position is $2,000 underwater, you need to allocate another $7,000 in your account to put on another $10,000 short. In addition to paying margin interest.
There can be other issues, such as locate issues, so you are bought in because the party who loaned you the shares to short (even in ETFs, occasionally) wants them back. You probably know better than
me the negative convexity aspects of a callable asset.
August 28, 2009 11:37 AM
I see problems with this.
First, you assume that it goes back to 50, rather than a -50% decline. In practice, I would believe that the fundamental value would increase over time so that a -50% decline is more reasonable than
a return to 50. Hence you give yourself a greater return on the short side than is probably likely.
Second, you are selling short a fixed amount of money, rather than a fixed percentage of your portfolio. This means you are not operating a martingale strategy. Under your strategy, the more your
portfolio declines, the more you are willing to bet (as a percent of assets) that you there will be a correction. Since probabilities are binomial, the probability of getting a correction given that
you haven't had one yet after 4-5 years goes very high. The real world market is not necessarily like that.
Basically, the returns are going to be good the way you set up the problem.
August 28, 2009 11:40 AM
Not satisfied with a line I wrote. I know the events are independent so that in each period it is a 50% probability, but as the number of years increases, the probability that at least one of them
will have a correction increases very sharply.
August 28, 2009 11:45 AM
Inflation is a problem for such scheme. Like maybe things just stabilize for a few years.
Does anybody have a good scheme to capture dividends while trading capital risk for potential capital loss?
August 28, 2009 12:04 PM
"John Hall" (a real name? :-) stated the things rather clearly. My addition is a general remark: yes, if you are more prudent, you are more likely to outlast the market's irrationality, but, OTOH,
no one got famous for eking out a 5% return (your total capital is $100k, and if you are right the first year, your stated profit is $5k).
August 28, 2009 1:04 PM
Floccina I think you can do that with single stock futures.
They only pay off with the price of the stock at some future date but you don't get the dividends.
So you can buy a stock and short the future and you get the dividends. Of course you pay for this and the difference in the spot price and the futures contract is the discounted expected value of the dividends.
I think the futures market is pretty light, so if you can't short your individual stock you can short the S&P index, but then you have basis risk.
In general I think the "The market can stay irrational longer than you can stay solvent" critique is way overstated. There are tons of ways to short the market without blowing up.
The first and easiest is to move your current stocks into cash. You can hold that position forever.
You can also hold $100 in cash and short futures such that the market would have to go to zero before your capital was eaten up by margin calls. You can't get knocked out of that position either. The
key is to avoid big leverage.
August 28, 2009 1:33 PM
When I say cash I mean short-term interest-bearing securities or accounts, not the actual green stuff.
August 28, 2009 1:36 PM
You have stumbled upon a crude version of the Kelly Formula:
August 28, 2009 2:35 PM
You just inspired me to sell a futures contract on my (otherwise underwater mortgage) house!
August 28, 2009 3:45 PM
As Ray mentioned above, there is a logically compelling formula for how much of your money you should bet each year, and it's called the Kelly formula or the Kelly bet, Kelly strategy, etc.
August 28, 2009 10:28 PM
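For readers who have not met it, the simplest textbook statement of the criterion the two comments above are pointing at (standard symbols, nothing defined in this thread): on a wager that pays $b$-to-1 with win probability $p$ and loss probability $q = 1 - p$, the growth-optimal fraction of the bankroll to stake is

$f^{*} = \dfrac{bp - q}{b}$

which is negative (meaning: don't bet) whenever the edge $bp - q$ is.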
| {"url":"http://econlog.econlib.org/archives/2009/08/how_to_stay_sol.html","timestamp":"2014-04-19T22:20:11Z","content_type":null,"content_length":"43571","record_id":"<urn:uuid:6da5e6a5-fb49-44fd-b226-8f22d367d175>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00638-ip-10-147-4-33.ec2.internal.warc.gz"}
Power Tip 57: Design a flyback primary switch snubber
How to best control the voltage stress on the primary switch in a single-ended flyback converter (shown in Figure 1) is a multi-faceted problem. You have to solve a combination of technical issues
while still keeping an eye on the overall cost. You have to:
• Limit the MOSFET voltage stress to an acceptable level
• Discharge the leakage inductance very quickly to maintain good efficiency (see Power Tip 17)
• Minimize circuit losses due to adding the snubber
• Avoid impacting the power supply dynamics
The lowest cost approach to solve these issues is shown in Figure 1 of Power Tip 17 and consists of a standard recovery diode, capacitor and loading resistor. The circuit works by transferring excessive transformer leakage energy onto the snubber capacitor and dissipating it over the switching period. Unfortunately, in this approach there is always energy dissipated in the snubber resistor, regardless of output power. In each switching cycle, the voltage on the capacitor will always be recharged to at least the reflected output voltage. This degrades the efficiency, particularly at light loads.
Figure 1 of this power tip presents an alternative circuit approach, which replaces the resistor/capacitor with a resistor (R1) and zener diode (D1). When the FET turns off, the drain voltage rises to the point that the diodes conduct to discharge the leakage inductance of the transformer. The rate at which the current discharges is set by the difference between the reflected output voltage and the clamp voltage. Note that, as Power Tip 17 points out, for best efficiency it is critical to discharge the leakage inductance energy as quickly as possible. In choosing values, first consider the MOSFET voltage rating and derating criterion to determine a suitable maximum voltage stress on the MOSFET. Then choose the zener voltage to be above the reflected output voltage so that it does not continue to conduct after the leakage inductance has been reset. Next, size the resistor/zener combination so that you do not exceed the allowed MOSFET voltage stress at high line and maximum current.
Figure 1: This FET clamp provides good light load efficiency.
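As a rough sizing aid, a standard flyback-clamp approximation (not from this article; here $L_{lk}$ is the transformer leakage inductance, $I_{pk}$ the peak primary current, $f_{sw}$ the switching frequency, $V_{cl}$ the clamp voltage and $V_{R}$ the reflected output voltage) puts the power the clamp network must dissipate at about

$P_{clamp} \approx \tfrac{1}{2} L_{lk} I_{pk}^{2} f_{sw} \cdot \dfrac{V_{cl}}{V_{cl} - V_{R}}$

which makes the trade-off explicit: a higher clamp voltage resets the leakage faster and dissipates less, at the price of more voltage stress on the MOSFET.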
Now trade circuit ringing for efficiency. In Figure 2, resistor R1 has been shorted so that the zener solely sets the voltage stress on the MOSFET. At turn-off, the drain voltage flies up and the leakage inductance current is discharged with a constant voltage, which provides the fastest discharge and best efficiency. However, once the leakage inductance is discharged, the drain voltage rings around the reflected output plus input voltage, which creates a couple of concerns. One obvious concern is electromagnetic interference (EMI), as this 4 MHz ringing creates common-mode currents in the power transformer and increases the power-line filtering needed. The second issue is related to the choice of controllers. There are a number of integrated circuits (ICs) that eliminate secondary-side measurement of the output voltage and rely on the primary bias winding voltage to provide a representative sample of the output. With this type of controller, the ringing can result in poor output-voltage regulation accuracy.
Figure 2: High voltage zener clamp discharges leakage inductance quickly to improve efficiency.
If the ringing is an issue, reduce the zener voltage to approximately the reflected output voltage and add series resistance to increase the peak drain voltage. Figure 3 shows the waveforms from the circuit shown in Figure 1. The yellow trace is the drain voltage and the red is the voltage at the junction of D3 and R1. The difference between the two voltages is proportional to the leakage inductor current. The drain voltage starts at a high voltage and reduces the differential voltage and, hence, the leakage inductance current to zero. So when the diode turns off, there is little voltage difference between the drain voltage and the reflected output voltage. Consequently, there is little ringing. Unfortunately, with this approach you pay an efficiency penalty; in this case it was about two percent. As Power Tip 17 points out, the longer it takes to discharge the leakage inductance, the worse the efficiency will be. In Figure 2, the leakage was discharged in 70 ns, while it took 160 ns in Figure 3.
Figure 3: Series resistance reduces EMI.
To summarize, RCD clamps are the simplest way to snub a flyback. However, with an RCD clamp, the light-load losses suffer from continuous power dissipation. If light-load loss is an issue, consider a
snubber with a Zener, which only dissipates power when it is needed. An abrupt zener provides the best efficiency; but it can cause unacceptable ringing. The best trade-off may be using a reduced
zener voltage along with a series resistance.
Please join us next month when we take a look at some classic power supply layout mistakes.
For more information about this and other power solutions, visit: www.ti.com/power-ca.
About the author
Robert Kollman
is a senior applications manager and distinguished member of technical staff at Texas Instruments. He has more than 30 years of experience in the power electronics business and has designed magnetics
for power electronics ranging from sub-watt to sub-megawatt with operating frequencies into the megahertz range. Robert earned a BSEE from Texas A&M University and an MSEE from Southern Methodist University. | {"url":"http://www.eetimes.com/author.asp?section_id=183&doc_id=1280601","timestamp":"2014-04-17T04:05:16Z","content_type":null,"content_length":"132987","record_id":"<urn:uuid:0484e36c-3e97-4a9e-8a44-41429d1a4992>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00183-ip-10-147-4-33.ec2.internal.warc.gz"}
Find Cells That Sum Up To Given Values
I have a column (e.g. A) with hundreds of numbers. I'd like to be able to write a number in a cell (e.g. B4), and have Excel find and report the coordinates of the cells in column A that sum up to the value I wrote in B4. I also would like to be able to "hit a key" and see the next possible set of result-cells.
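Underneath, this is the classic subset-sum search. A minimal sketch of the enumeration in plain C++ (not a worksheet macro; the data is made up, and in Excel the indices would map to row numbers in column A):
#include <iostream>
#include <vector>

// Print every subset of vals whose elements sum to target.
// Assumes non-negative values; exact comparison is fine for the
// integer-valued demo data (currency would want a small tolerance).
void search(const std::vector<double>& vals, double target,
            std::size_t start, std::vector<std::size_t>& picked)
{
    if (target == 0 && !picked.empty()) {
        for (std::size_t i : picked) std::cout << "A" << i + 1 << " ";
        std::cout << "\n";                 // one matching set per line
        return;
    }
    for (std::size_t i = start; i < vals.size(); ++i) {
        if (vals[i] > target) continue;    // too big to help, skip it
        picked.push_back(i);
        search(vals, target - vals[i], i + 1, picked);
        picked.pop_back();
    }
}

int main()
{
    std::vector<double> column = { 12, 5, 30, 7, 18 }; // stand-in for column A
    std::vector<std::size_t> picked;
    search(column, 35, 0, picked);         // prints "A1 A2 A5" and "A2 A3"
    return 0;
}
The "hit a key for the next set" behaviour would just mean pausing between printed lines; inside Excel itself, this kind of search is usually handed to the Solver add-in.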
Column SUM Produces #N/A (Vlookup To Find Values That I Want To SUM)
I have a column using Vlookup to find values that I want to SUM.
Some of the look up values produce a #N/A and result in a total sum of #N/A.
How do I get the SUM of a column of numbers when not all of the cell values are in fact numbers?
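One common route is to stop the #N/A at the source: wrapping each lookup as =IFERROR(VLOOKUP(...),0) (available from Excel 2007 on) makes failed lookups contribute 0, so a plain =SUM() over the column works again. (Sketched from the description; the 0 fallback is an assumption about what a missing value should count as.)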
Macro: Find Duplicate Values & Sum Values. Per Day Basis
I have data that looks like this:
day# id amount
1 51810 0..............
How do you write an Excel macro that looks at the number in the first column (day #) and finds all the duplicate id#s in the second column that are in day 1 and adds the amounts together in the 3rd column, then writes the first column number (day#), second column number (id#) and the third column (sum of the amounts of duplicate id#) to a new worksheet. Then the macro would loop through day #2 and do the same thing. Notice that the values in the id column are unique in this data set below; this is how I would like the data to look. I have accomplished this in a pivot table, but my problem is I need a csv file to export the final data into an external database, which is why I need a macro.....
Formula To Find The Sum Of Values That Were NOT Equal To My Quoted Values
Trying to find the sum of all cells in the array described in the formula that are equal to the values inside the quotations. I used this exact (as far as I can tell) formula to find the sum of
values that were NOT equal to my quoted values and it worked just fine. Any ideas why formula 'A' will not work but formula 'B' does work? I have a feeling I'm missing something simple here!
Formula A - Does not work:
=SUMPRODUCT(--('Master Lead Sheet'!$J$2:$J$10000=$B2),--('Master Lead Sheet'!$N$2:$N$10000="REJECTED"),--('Master Lead Sheet'!$N$2:$N$10000="CONDITIONED"),--('Master Lead Sheet'!$N$2:$N$10000=
Formula B - Works:
=SUMPRODUCT(--('Master Lead Sheet'!$J$2:$J$10000=$B2),--('Master Lead Sheet'!$N$2:$N$10000<>"No Answer"),--('Master Lead Sheet'!$N$2:$N$10000<>"Disconnected"),--('Master Lead Sheet'!$N$2:$N$10000<>"Wrong Number"),--('Master Lead Sheet'!$N$2:$N$10000<>"EMAILED"),--('Master Lead Sheet'!$N$2:$N$10000<>"needs to be emailed"),--('Master Lead Sheet'!$N$2:$N$10000<>"Refund"),--('Master Lead Sheet'!
Sum The Values In Cells Based On The Values Other Column
I want to sum the values in cells E2:P110 based on the values column D. The
values in D are formulas resulting in something that appears to match D112 in
some cases. I'm using the following equation:
My problem is that D2:D10 have a formula in them and it's not matching. If
I enter the result of the formula, all is good. How should I deal with this?
With Loop + Find Method And Sum The Negative Values
I have a very large worksheet (row count maxed in 2007, and then some), for which I need to do the following: search column A for a string that will occur many times, and then check the 10 cells that
follow in its row for negative values, dropping some sort of indicator in the 11th (shading it red or something would be fine). An additional bonus would be if the 10 cells that possibly contain a
negative could be summed (the sum could serve as the indicator?). If no negative is found, nothing need be done, and the macro should chug along searching A for the next reference to this string.
My hope was to do a sort of "With Range("A:A"), .Find("MyString")", save position as StartPos, do the 10-cell row checking in a nested IF or For (though the For would take a long while, checking each
cell individually), then doing a .FindNext after StartPos until = StartPos (does .FindNext loop back to the top?). The formatting of the indicator cell in the 12th cell in each relevant row doesn't
really matter, it's more just for jumping to critical rows.
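For the indicator itself, a plain worksheet formula may already be enough: something like =SUMIF(B2:K2,"<0") in the 11th column returns the sum of the row's negatives (and 0 when there are none), which doubles as the requested sum-as-indicator; the Find/FindNext macro would then only be needed for jumping between rows. (The B2:K2 range is an assumption about where the 10 cells sit.) As for the question in the post: FindNext does wrap back to the start of the range, which is why the usual pattern saves the address of the first hit and stops when it comes around again.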
Forcing Cells To ALWAYS Find MIN And MAXIMUM Values From A Specific Range Of Cells
I'm working on a project for my company. We make plastic tanks and for quality control we want to start recording the thickness of the tanks in different areas/zones of each tank.
Attached to this message is an Excel sheet that I've been working on. From "Sheet 1", it records inputted thicknesses into WorkSheet "1098". On the top of "1098", it shows all of the recordings, and
just below that are the "10 Most Recent Entries".
Right below the "10 Most Recent Entries", there are formulas to calculate the Min and Max Values. Whenever a new entry is recorded, the selected cells for the Min and Max formulas change. Is there a
way to force the cells to always stay the same?
How Do I Sum Up Values Only In Cells That Are Color Filled
This thing has been driving me crazy for the past few days. Please help so I can go to sleep.
I have several columns with numerical data. When certain criteria are met, a person manually makes some of the column fields a green fill color via the color fill button.
Let's say I have data in cells E4 to E14
Below, I have a total field (auto Sum function used to total all),
And, another row with The Colored Green totals.
How do I enter the appropriate code to total only the fields that are green?
I have found some info on various sites but I have been unsuccessful in implementing it.
I attach a sample sheet, I was playing around a bit, you can ignore the fill color red, just deal with the green fields.
Formula To Sum Values Based On 2 Cells
I have three columns of data. I want to give the average sales per person per year but I don't know how to combine two columns in the formula.
I want the output to be something like:
Average Sales Per Year
For example, I want to give 2006 sales for Sarah.
I know that I could add an extra column to my first table which concatenates the Manager and Year. HOWEVER, In my example it would be fine but in my sales report there is a lot more data with a lot
more going on. I don't want to have additional hidden columns that people might stumble upon and change).
I want to use formula and NOT code as other people will be using the sheet and making changes to it. I want them to simply be able to autofill any additional rows rather than me having to update the formulas.
I have attached my example.
Add The Sum Of Cells By Color- With Different Values
I have this fantasy footy comp I run and every week I enter player-by-player scores. To make it easier for me I need a strange formula. Basically, can you make a formula to sum cells based on text colour?
Every week the 3 players with the highest scores get votes. Eg highest gets 3 votes, second highest 2 votes and 3rd highest 1 vote. After each game I look thorough the players and change the colour
of the 3 best to make them stand out. Green on black for highest, yellow on black for 2nd highest and red on black for third highest.
Is there a way, at the end of each week, to have the cell (for example cell V43 in my sample) update or add up all the 3pts, 2pts and 1pts that player has accumulated throughout the season? In my example I have just hand-counted them and inputted them.
Determine Which Values/Cells Sum To A Total Amount
Is there a function, or how would I write a VBA macro, to figure out the following: I have 86 items, all with a different price, which come to a total of $348,359.33. Is there a way to figure out which combination of the 86 entries will give me a total of $57,673.36?
Find And Replace Different Values In A Range Of Cells
I would like to be able to replace all cell values in a range of 20c by 20r (i.e. 400 cells). In all cases the condition would be the same (find all cells with a value greater than zero), but
then replace with different values.
e.g. Cells with value >0 in range CX119:DQ138 replace with "NT", then cells with value of >0 in range DR119:EK138 replace with "NU"
I thought you could do it with find and replace by just selecting that range of cells but can't see how to set the conditional >0 bit.
Find If Duplicate Values Exist In A Column, Concatenate Cells And Then Delete
I want to do, is search column A for claim numbers that match. When I do have a matching claim number, I want to concatenate the original cells ownership field with the said matching cells ownership
field (or move into a column in the same row, I can always concatenate later). Once that is complete, I want to delete the row I took the information out of.
I want to join this data in ArcGIS, but as of right now, it's not a 1-to-1 relationship, so only a relate works. That doesn't help me as I want to display claims by ownership, and this can vary per
claim. Company A may have 100% on one claim, and then split another claim 50% with Company B.
This causes a double entry on the claim field in this current spreadsheet I have, which requires me to clean it up by making multiple columns of ownership vs. an additional row for shared ownership.
My problem:
Column A Column B
1235555 Company A (50%)
1235555 Company B (50%)
1235556 Company A (100%)
1235557 Company A (33%)
1235557 Company B (33%)
1235557 Company C (33%)
What I would like to see
Column A Column B Column C Column D
1235555 Company A (50%) Company B (50%)
1235556 Company A (100%)
1235557 Company A (33%) Company B (33%) Company C (33%)
Find Current Date On Several Sheets & Convert Surrounding Cells To Values
I keep track of values in a workbook. I accumulate them on a daily basis (business days) and keep track of the older values.
On the first sheet I have all current values automatically displayed.
All subsequent sheets contain the values for the different locations (>60) by one location per one sheet with multiple entries per location.
Most of the values do not change daily. So I copy the values from the previous day and paste them to the current day’s fields (the row below yesterday's values).
Today’s date (and prior dates as well as subsequent dates) are in column A, the values to be copied are in column B through AZ. With over 60 sheets this job becomes very tedious very quickly...
What I would like to be able to do, with a click of a button, is to go into each sheet (except the first one), go to the current date (in column A), select the field to the right of that date (in
column B), go up one field, select both fields (today and last business day) and go from B to AZ (or A to AY in relative terms) copy all those entries, go down one field (to the same row as today’s
date) and paste the content. Then repeat that for every following sheet…
As the date field that I am looking for goes down one field with each day, I cannot use fixed points to copy and paste from, but have to use the date field as an anchor from which to find the proper cells.
I do have some values in the following day's fields, that is why I need to copy two rows and not just the values from the previous day...
Find Dates Between Monthly Range And Sum Another Cells Results That Are In A Range
I'm trying to make a by month spreadsheet that has all twelve month ranges starting in for a3. in a3 it would have the start date and in a4 it would have the end date. I'm trying to locate all of the
dates between those two dates and pull in the profit ammounts from another sheet, the results would be in row 5. I would also like to pull in the loss amounts and have them in row 6. All
corresponding with the date range in rows 3 and 4.
Find, Meet Condition, Sum Range, Deduce, Find Next
I have spent 40 hours and still didn't find a solution. Please help, I need it!!
I have to find all articles with the same code (222). The first one has Q=100, the second one Q=250.
Sold Q=150.
(I am talking about 5000 rows, 400 different or same articles per month, over 12 months.)
222 ----------100---------0
333---------- and so on
First I have to deduct from the first one it finds (down to 0 at most... it cannot be negative); after finding another one it deducts the rest, which means 50.
Is there any kind of formula with this capability?
If it is poorly written, please let me know and I'll give more info.
I am not an expert in Excel, but I have tried variations of the SUMIF and VLOOKUP functions, and I always get stuck deducting the whole quantity from all of the same (222) articles.
Multiply Values In Range With Values In Another & Sum Results
I have two named ranges 'wrkNRP' and 'wrkQTY'.
Instead of totalling each range separately, I need a way (within VBA) to go through every value in both ranges and multiply them together, then record the total - e.g.
wrkNRP has the values 10, 20, 30, 40
wrkQTY has the values 10, 20, 30, 40
Then I need a way to do (10*10)+(20*20)+(30*30)+(40*40)
Is this possible WITHOUT adding an additional column?
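If the two named ranges are the same shape, the worksheet function already does this elementwise multiply-and-total, and it can be called from VBA without touching the sheet; roughly total = Application.WorksheetFunction.SumProduct(Range("wrkNRP"), Range("wrkQTY")) (a one-line sketch, assuming both names refer to equal-sized ranges). On the sheet itself, =SUMPRODUCT(wrkNRP,wrkQTY) gives the same (10*10)+(20*20)+(30*30)+(40*40) result.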
Autofind & Sum (select The Respective Cells And SUM)
To fill in the ??? in attached file, I select the respective cells and SUM. Would there be a way to automate as:
For each entry in column D, Excel picks out itself the names in column-A where-ever they come, picks the corresponding values from column-B, sums them and reproduces the summation in column E.
Sum Function: Sum Any Values
i have a column with the following values...45, 60, 35, 20, 10, 35, 28, & 17. in the next column, i want to put a "1" next to the values listed (45, 60, etc.) if their sum is less than 100...the
formula needs to be flexible enough so that it does not sum any values that another "1" above it has already summed.
using the values above, a "1" would be placed next to 45, 35, and 28.
Sum- To Add Left Values And Right Values
I have got two values like 63/59 in one cell and 18/11 in another cell. I want to add the left values and the right values. I have done this using the LEFT function, but the problem is that sometimes the values are single digits, and then the formula stops working because of the position of the "/".
Sum Of Cumulative Values Of Several Column Values
I have 2 columns of data. The following code compares the values in 2 cells from each column. If the value in column A cell 1 is higher than that in column B cell 1, then the function returns the value of column A cell 1. If the value of column B cell 1 is higher than the value of column A cell 1, then the function adds the following cells in column A until the cumulative value of the values in column A is higher than the value in column B cell 1.
Function sumofA(valueA, valueB)
AddValueAcounter = 1
ValueBcounter = 1
sumA = 0
sumofA = 0
cummalativeValueA = 0
finalValueA = 0
If valueA(AddValueAcounter) > valueB Then
sumA = ValueA(AddValueAcounter)
AddValueAcounter = AddValueAcounter + 1
The above example compares column A, cell 1 and column B, cell 1 and then adds column A, cells 2, 3 and 4 to give the cumulative value of 16. As this is higher than 15 (cell 1, column B), the function ends. I need it to continue until the end of column A. So, in this example I need the function to begin again with column A, cell 5 and column B, cell 2, and so on. I need the cumulative values of A for each time the function is run to be added together, along with the values in the A cells should the value be higher than that in cell B, to produce one overall value.
Add Sum To End Of Range To Sum All Cells Above
I'm getting a Type Mismatch (error 13) with the following:
Range("I1").End(xlDown).Offset(1, 0) = "= Sum(" & Range("I2", Range("I2").End(xlDown)) & ")"
Basically I want to do an autosum on the first blank cell for a list of numbers in a column.
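The mismatch likely comes from concatenating a Range object into the string: a multi-cell range coerces to an array of values, not to text. Building the formula from the range's address is the usual fix, roughly Range("I1").End(xlDown).Offset(1, 0).Formula = "=SUM(" & Range("I2", Range("I2").End(xlDown)).Address & ")" (sketched against the posted line, not tested here).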
Find Same Values In Different And Unequal Cell Ranges And Refer To Values
I have data similar to what is shown in the following:
The idea is to compare the cells of the first column with the third column. Where the same letters/words exist, the corresponding value of the first column should be added to the second column (where no letter exists equally, the space remains empty), so it will look like this:
the third column always will have at least the same letters as the first column, but new letters/entries can occur.
Sum If, But The Sum Range Is Separate Cells?
I want to do a simple SUMIF. My range is simple and so is my criteria, but the actual range of cells to sum is not in one continuous row or column. Can I do the sum range as separate cells?
Determine Values In Cells: Get The Lower Value Between Two Cells And Have The Lower Valued Cell Highlighted
I'm trying to get the lower value between two cells and have the lower-valued cell highlighted. I have over 43 thousand lines of data to go through and I was wondering if there was a quicker way to do this. For example, cell A1 is $4.25 and cell A2 is $5.25; I want cell A1 to be highlighted. Is there a way?
Macro To Find And Sum
I have the sheet below and I am trying to have a code that will find all similar funds and sum their numbers in range ("C2:L17").
[Source worksheet omitted: row 1 holds the fund codes AMT111, AMT120, AMT121, AMT131, AMT161, AMT111, AMT999, AMT170, AMT179 and AMT135, with the amounts to sum in C2:L17; the cell values were garbled in the copy.]
The result of the code should look like this:
AMT11* 492277
AMT12* 185123
AMT13* 501950
AMT16* 169212
AMT17* 599399
AMT99* 2507913
SumIf Function To Sum Cells When Other Cells Begin With Certain Characters
I want to use the SumIf function to sum cells when other cells begin with certain characters.
I've toyed with a few ideas of how this could work, but I don't know how to specify that the cells need to begin with certain characters. The cells that would be the criteria and the ones that would be summed come out of an Oracle database (and I have no control over the way they're pulled out - yet), so the beginning characters are connected to extremely unique information, and I don't want that to be included in the if part, for obvious reasons.
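For what it's worth, SUMIF's criteria argument accepts wildcards, which covers the begins-with case directly: =SUMIF(A:A,"ABC*",B:B) sums column B wherever column A starts with ABC, and the prefix can come from a cell via =SUMIF(A:A,C1&"*",B:B). (The ranges and the ABC prefix here are placeholders, not from the post.)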
Find And Sum Based On Criteria
I have a sheet with 3 columns - Line, Product and Amount. I have to calculate the amount based on which line it is, and which product.
I.e. I want to calculate all the amounts in Line 1 except if the product is 3, 23 or 31 (just an example - the range can be longer).
In the workbook added I have shown the problem. I have tried the SUMPRODUCT formula, but can't get it right!
Find Identical Lists And Sum
I need a formula to sum column C if data in column A and Column B are found anywhere else in the list. For Example
Column D would show
Find Sum In List Of Of Numbers
I have a list of numbers in a column and I need to find which numbers
when summed together equal a figure. I have a list of invoice amounts
that I need to match up with payments (the payments are always made for
several invoices so I need to come up with sums of several invoices to
get to this payment amount).
An example would be I have this in the following section (A1:A10):
I need to find which combination of these figures would sum $1,173.76.
Find Last Repeated Value & Sum Value
In the attached file, column A has IDs. In the attached example I have used 2 IDs, 141020061 & 151020062. I need to find the last entry of each ID and sum the value from column F. That is, the last entry for ID 141020061 is 40500 and for 151020062 is 0, so the total should be 40500.
Find Sum Of Unique Entries
I am using excel 2007.
I am trying to find the sum of unique entries in a table such as below for each respective date.
The results should appear as (date and unique count):
1/01/2010 1
2/01/2010 2
3/01/2010 1
4/01/2010 3
5/01/2010 1
What formula would I use to calculate this unique count?
Calculating Sum Of Cells Only Where Adjacent Cells Are Blank
I am constantly editing this (we currently have over 100 accounts) and therefore the totals are changing. I have a formula for Total but I need formulas for the other two, based on when the cells in
columns F and J are blank or have dates in them: For active, the total is the sum of all numbers in column M but only when there is a date in column F and a BLANK in column J. For yet to enter, the
total is the sum of all the numbers in column M but only when both column F and column J are blank. At the moment, my accounts run from row 6 to row 142, with the first line of totals in row 145,
however this is constantly expanding.
SUM Some Values
I have two tables. Table 1:
Table two:
a | 10
a | 20
b | 50
b | 60
a | 5
c | 1000
c | 2
b | 8
d | 9
d | 2
a | 78
What I want is that the sum of the values in table 2 appear in table 1. So table 1 looks like this:
a | 113
b | 118
c | 1002
d | 11
Somebody who can help me with this? It is much more complicated, but when somebody can tell me how to do this I can fix the complicated version by myself.
Find And Match And Automatically Sum The Units
I have a formula to add the volume (units) for a customer. The formula is:
Is there a formula that I can use instead of the one above that will find the customer number and automatically sum the units?
Find The Sum Of The Quotes Based On Each Brand
I'm looking to create a monthly sales report based on brand and month.
I have done it in the past using a load of if statements checking for monthly totals and then adding up the column.
But there must be an easier way.
I have 4 brands, so in January I want to find the sum of the quotes based on each brand.
'A' contains the date
'J' contains the brand - say A, B, C or D
'M' contains the quote value
I want to firstly count the number of enquiries per brand per month (these are all entries without a quote value).
Then I want to count the number of quotes per brand per month (these are the entries with a quote value).
Then I want to find the total value of the quotes per brand, per month.
Find Last Cell In The Range And Sum It Via VBA
I need help to complete the following partial code.
I am trying to determine the last cell for the Non-ILEC piece and then want to insert one row below the last Non-ILEC cell, insert "Total BS Non-ILEC", and sum the "H" range just like the way it's done in the partial code. The partial code I have posted is working perfectly fine with regard to the ILEC piece. I need to further add the Non-ILEC piece to the partial code to accomplish my task ....
Sum The Total And Find The Average Cost
I need a formula that will scan column A (Code) and total the like items, also adding up column B (Qty) where there is a number greater than 1. Then add the prices ($) together and divide by the summed quantity.
In other words, find the average price for the total of each item.
A B C
Code Qty $
PH06003000 1 1504.8
PH06003000 1 1582.24
PH06003000 1 1606
PH06003000 1 1504.8
PH06003000 2 3009.6
PH06003000 1 1504.8
PH06003000 1 1504.8
PH06003000 1 1504.8
PH06024000 1 2499.2
PH06024000 1 2499.2
PH06024000 1 1896.07
PH06024000 2 3909.66
PH06024000 1 2240.7
PH06024000 1 2259.4
PH06024000 15 30030
PH06024070 1 2039.4
PH06024070 1 1958.66
PH06025670 1 2521.2
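A pair of SUMIFs usually handles this kind of weighted average without helper columns: =SUMIF(A:A,"PH06003000",C:C)/SUMIF(A:A,"PH06003000",B:B) returns total dollars over total quantity for one code, i.e. the average price per unit (column letters assumed from the sample above). Swapping the literal code for a cell reference lets it fill down a list of unique codes.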
Sorting Values: Find Points With Similar "y" Values
I have x coordinates in column 1 with corresponding y coordinates in column 2. From my data of x,y coordinates I want to find points with similar y values. In my data I have defined groups of numbers, i.e. I have a set of numbers with y values around 30 (+-10), then a new group with y values around 60 (+-10), and so on... Sometimes the groups are not totally separated; there could be a few points with y values between 30 and 60. These points can be grouped with the closest group of coordinates (30 or 60). Each group of x,y coordinates could be copy-pasted in the columns to the right (columns 3 and up).
So what I want to do is find a group of coordinates. This group will have at least 40 individual points +-10 from a group mean. The coordinates need to be sorted out from the data and put in separate columns.
VARIABLE SUM Of Values
Using only basic formulas (no VBA, then), I need to solve the problem of a VARIABLE SUM of values.
Starting always from the value 1:
- if "control" is "x": the formula has to write the value 2 into its "memory", and in the next row the new value will be the sum of the two values (1+2), so 3;
- if "control" is "y": in the next row we repeat 1.
From this point:
- if "x": we add to the series another value, which will be the last used value, and in the next row the new value will be the sum of the left value with the right value of the series (at this moment 1+3), so 4;
- if "y": the formula has to delete from its "memory" the two values that formed the previous sum (in other words the outer values of the series), and in the next row the new value will be the sum of the leftmost remaining value with the rightmost remaining value of the series kept in "memory". If only one value remains, in the next row we'll write that single value.
N.B.: if "control" is "z": the formula must hold the sum of values in "memory" and write that sum in the next row, but without acting on it, because all the operations are "suspended"! When "control" returns to "x" or "y", the operations of summing or cancelling will start again.
At the end: when does everything restart from the beginning of a new session with the value 1, "forgetting" entirely what happened above?
There are 3 cases:
- when all the values are deleted, in the next row a new session will restart with the value 1, "forgetting" entirely what happened above;
- when the result of the sum is >= $A$2, in the next row a new session will restart with the value 1, "forgetting" entirely what happened above;
- when the value of the column "heart" is >= $B$2, in the next row a new session will restart with the value 1, "forgetting" entirely what happened above.
Excuse my English, and if you need any clarification… just ask!
No need to do this in only one column; you can use all the intermediate columns you may need.
After spending 3 weeks on this, I really hope that someone could help me solve this VARIABLE SUM of values.
Sum IF (2 Equal Values)
I've created the following function that chooses the maximum value from a set of cells then inserts the appropriate row number (within a table) into a new cell.
=IF(J27=0,"?",IF(J27=J19,1,IF(J27=J20,2,IF(J27=J21,3,IF(J27=J22,4,IF(J27=J23,5,IF(J27=J24,6,IF(J27=J25,7,IF(J27=J26,8)))))))))
It's working fine until I have 2 cells with the highest value. The above statement is entering the first cell that meets the criteria in the new cell but ignores the fact there maybe 2 (or more) of
the same value.
How can I get both (or all) to be entered in the same cell? Is there a better way, maybe highlighting all the rows in the table that equal the max figure?
Sum Based On Corresponding Values
I would like to sum certain cells in a column based upon a specific value in a nearby column in the same row (e.g. 1, 2, or 3). Is there a formula for this?
Sum Current Values
I would like to sum the column at the bottom but I get a circular reference error. I think this is because of the formulas, but is there a way to sum based on the actual values in the column?
Sum Of Several Vlookup Values
I need a macro to calculate the order value, i.e. when I fill in a qty against any code, a macro would execute, get the rate of that code from the rate file worksheet, multiply that value by the filled-in qty, and display it. Also, when I fill in a qty against another code, the macro should perform the same procedure, but in this case it would add the value to the last value and show the combined total value for the order.
Sum The Values Are Less Than To Specific Value
I want to sum the values less than 1y, 2y, 3y and so on. I define 1y to be values less than 18-Mar-09, 2y to be less than 16-Jun-10 but greater than 18-Mar-09, 3y to be the 3y value, 4y the 4y value, and so on.
1D 09-Jan-08 79
1W 15-Jan-08 172
1M 08-Feb-08 -288
2M 10-Mar-08 6,835
75D 19-Mar-08 2,945
H8 18-Jun-08 4,050
M8 17-Sep-08 1,160
U8 17-Dec-08 -1,557
Z8 18-Mar-09 -6,189
H9 17-Jun-09 -5,868
M9 16-Sep-09 -5,241
U9 16-Dec-09 -3,655
Z9 17-Mar-10 -3,525
H0 16-Jun-10 -2,132
3Y 10-Jan-11 -37,902
4Y 09-Jan-12 -26,380 | {"url":"http://excel.bigresource.com/Find-Cells-That-Sum-up-to-Given-Values-DLYcEsZ0.html","timestamp":"2014-04-16T19:07:23Z","content_type":null,"content_length":"77710","record_id":"<urn:uuid:f2f4d8ca-ca0e-414d-9214-f78b4543a106>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00502-ip-10-147-4-33.ec2.internal.warc.gz"}
Converting a Polar Equation to Rectangular
April 5th 2011, 08:29 AM
Hello, I do need help with a particular problem in my homework.
The problem is theta = 4pi/3.
I need to convert this equation to a rectangular equation.
I get so antsy seeing theta = instead of r =. (Worried)
Thanks in advance to anyone who responds. (Happy)
April 5th 2011, 09:36 AM
Hopefully, you know that $x = r\cos(\theta)$ and $y = r\sin(\theta)$. From those, $\frac{\sin(\theta)}{\cos(\theta)} = \tan(\theta) = \frac{y}{x}$ and so $y = \tan(\theta)\,x$. Here $\tan\left(\frac{4\pi}{3}\right) = \sqrt{3}$, so the rectangular equation is $y = \sqrt{3}\,x$ (strictly, only the ray with $x \le 0$, since $\theta = \frac{4\pi}{3}$ points into the third quadrant). | {"url":"http://mathhelpforum.com/pre-calculus/176898-converting-polar-equation-rectangular-print.html","timestamp":"2014-04-23T12:21:09Z","content_type":null,"content_length":"7562","record_id":"<urn:uuid:1da4787f-5f1d-447b-9ba1-a23aaf1374f3>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00495-ip-10-147-4-33.ec2.internal.warc.gz"}