content
stringlengths 86
994k
| meta
stringlengths 288
619
|
|---|---|
How to perform atomic sums on floats
05-15-2010, 04:52 AM #1
Junior Member
Join Date
May 2010
How to perform atomic sums on floats
I am trying to get the following kernel to properly add up the global ids. Of course this is pointless, but it illustrates something I am trying to make work in a larger kernel. Basically my
kernels perform a fair amount of calculations, but the end result that I want to get back is a small array of various totals. Performing these totals in a parallel fashion is that does not seem
to be working. If I execute the following kernel, with a fixed number of work units, I would like to always get the same result. Say for 100 work units, I would expect 0+1+2+3+ .... + 99.
However, every time I run the kernel, I get a different number.
Is this what mem_fence is attempting to solve? Or is there some other technique I need to use. The number I total needs to be floating point. I also tried putting the mem_fence
kernel void AtomicSum(
global write_only float* c )
int index = get_global_id(0);
c[0] += (float)index;
Re: How to perform atomic sums on floats
You are right, i would expect as well a 1+2+3......+99 ?
did you try barrier(CLK_GLOBAL_MEM_FENCE); instead of mem_fence() ?
Re: How to perform atomic sums on floats
The code is not right... and does not perform summation that you want. The problem is in te way that you perform de operation, you are telling to OpenCL that every workItem update the same
portion of memory, so when a workItem needs to do the operation, the copy of the portion of memory that reads it can be not the real one, in other words, the operation is not secuentially.
Basically, there are two forms of doing this operation, one is assign the operation to ONE workItem, and the other is using some method of reduction, in the last you perform a division of the
Re: How to perform atomic sums on floats
No, it is doing what I want it to... I want it to update the same area of memory. Can't OpenCL share memory, I thought thats what mem_fence was for.
Basically I am trying to write the equiv of:
float sum = 0;
for(int i=0;i<100;i++)
So thats why I am updating the same piece of memory. I perform a complex calculation thousands upon thousands of times. But I do not need to know the individual results. Just the sum. Are you
saying I need to allocate a very large buffer and never touch the same piece of memory twice? I got it to work that way, but it takes way too much memory.
Re: How to perform atomic sums on floats
No.... you don't understand me. let's put an example. You have one piece of memory for the result value, and a piece of memory with some values. When you have to sum all this values and return a
result value, each workItems read the peace of memory of the result and adds the corresponding value, but............ the memory that is reading at that time, was reading by another workItems
that are doing the same operation. So, everyone do the sum, and then each one stores his value with an incorrect result!
The atomics operation in OpenCL Specification do this operations in the correct way, in other words, secuentially adds the same portion of memory and update it. They are slow.
Another thing, the barriers are use for synchronize the work in workItems of the same workGroup. But the order that the workItems execute after the barrier it's not specified, so the barrier in
your code is wrong, it's telling that only can go on with the execution, only if all workItem in the same workGroup perform ALL the operations behind de barrier.
Re: How to perform atomic sums on floats
You can use atomic_cmpxchg function and C union to achieve it for floating point
you can implement several "reduce" steps in your program to aggregate large dataset in parallel manner to avoid concurrence and produce result on last "reduce" step http://en.wikipedia.org/wiki/
05-16-2010, 05:42 AM #2
Junior Member
Join Date
May 2010
05-16-2010, 08:50 AM #3
Junior Member
Join Date
Mar 2010
05-16-2010, 01:01 PM #4
Junior Member
Join Date
May 2010
05-16-2010, 03:47 PM #5
Junior Member
Join Date
Mar 2010
12-19-2011, 12:13 PM #6
Junior Member
Join Date
Dec 2011
|
{"url":"http://www.khronos.org/message_boards/showthread.php/6494-How-to-perform-atomic-sums-on-floats?p=26120","timestamp":"2014-04-18T18:31:48Z","content_type":null,"content_length":"50351","record_id":"<urn:uuid:fb11ab62-6c68-4879-be78-6652ae32529f>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00543-ip-10-147-4-33.ec2.internal.warc.gz"}
|
An Experimental and Theoretical Analysis of Ultrasound-Induced Permeabilization of Cell Membranes
• We are sorry, but NCBI web applications do not support your browser and may not function properly.
More information
Biophys J. May 2003; 84(5): 3087–3101.
An Experimental and Theoretical Analysis of Ultrasound-Induced Permeabilization of Cell Membranes
Application of ultrasound transiently permeabilizes cell membranes and offers a nonchemical, nonviral, and noninvasive method for cellular drug delivery. Although the ability of ultrasound to
increase transmembrane transport has been well demonstrated, a systematic dependence of transport on ultrasound parameters is not known. This study examined cell viability and cellular uptake of
calcein using 3T3 mouse cell suspension as a model system. Cells were exposed to varying acoustic energy doses at four different frequencies in the low frequency regime (20–100 kHz). At all
frequencies, cell viability decreased with increasing acoustic energy dose, while the fraction of cells exhibiting uptake of calcein showed a maximum at an intermediate energy dose. Acoustic spectra
under various ultrasound conditions were also collected and assessed for the magnitude of broadband noise and subharmonic peaks. While the cell viability and transport data did not show any
correlation with subharmonic (f/2) emission, they correlated with the broadband noise, suggesting a dominant contribution of transient cavitation. A theoretical model was developed to relate
reversible and irreversible membrane permeabilization to the number of transient cavitation events. The model showed that nearly every stage of transient cavitation, including bubble expansion,
collapse, and subsequent shock waves may contribute to membrane permeabilization. For each mechanism, the volume around the bubble within which bubbles induce reversible and irreversible membrane
permeabilization was determined. Predictions of the model are consistent with experimental data.
One of the critical elements of medical therapy is effective and targeted delivery of drugs into cells and tissues. The lipid bilayer of cell membranes poses the primary barrier to transport of low-
as well as high-molecular weight molecules into cells (Stein, 1986). Among the methods proposed to enhance cellular drug delivery are biological approaches including viruses for gene therapy (
Johnson-Saliba and Jans, 2001), physical methods including electroporation (Canatella and Prausnitz, 2001), chemical methods such as cationic lipids (Brown et al., 2001), and drug conjugates (Fischer
et al., 2001). Another approach to enhancing cellular drug delivery involves the use of ultrasound to transiently disrupt cell membranes.
The primary advantage of ultrasound is that as a physical, rather than a chemical approach, the enhancement is likely to be broadly applicable to a variety of drugs and cell types. Furthermore, based
on the available methodologies to focus ultrasound in the body (Kremkau, 1998), ultrasound-mediated drug delivery may be targeted to designated regions. Ultrasound-enhanced delivery into cells has
been demonstrated in vitro by uptake of extracellular fluid, drugs, and DNA into cells (Bao et al., 1997; Fecheimer et al., 1987; Guzman et al., 2001; Kim et al., 1996; Koch et al., 2000; Miller and
Quddus, 2000; Miller et al., 1996, 1998; Saad and Hahn, 1992; Ward and Wu, 1999; Ward et al., 2000; Williams, 1973; Wu et al., 2002) and plant tissues (Zhang et al., 1991). Although exciting
applications of ultrasound in drug delivery have been demonstrated, there is limited information available to guide the selection of optimal ultrasound conditions and even less information is
available on the mechanism by which ultrasound achieves membrane permeabilization.
Effect of ultrasound on cell membrane permeability has been investigated under a variety of intensities (or pressure amplitudes) and frequencies (Bao et al., 1997; Guzman et al., 2001; Ward et al.,
2000). However, a systematic investigation of the dependence of transport on ultrasound frequency and intensity is yet to be done. This is one of the objectives of this study.
Ultrasound-mediated bioeffects are generally believed to be caused by cavitation (Miller et al., 1996). Acoustic cavitation involves the creation and oscillation of gas bubbles in a liquid (Leighton,
1997). Cavitation bubbles may exhibit sustained growth and oscillations over several acoustic cycles (stable cavitation) or violent growth and collapse in less than a cycle (transient or inertial
cavitation) (Leighton, 1997). Potentially, both stable and transient cavitation may induce membrane permeabilization. Liu et al. (1998) reported that disruption of red blood cell membranes by
ultrasound correlates better with the occurrence of stable cavitation. On the other hand, other investigators (Everbach et al., 1997; Miller et al., 1996) postulated that ultrasound-induced cell
damage results from inertial (transient) cavitation. However, a systematic dependence of membrane permeabilization on ultrasound or cavitation parameters is not yet known.
Since bio-effects related to acoustic cavitation are inversely related to ultrasound frequency (Mitragotri et al., 1995; Tezel et al., 2001), low-frequency ultrasound should be more effective in
enhancing membrane permeability. Accordingly, we designed a study focused on assessing the dependence of membrane permeabilization in low-frequency regime (20 kHz–100 kHz). In addition, we also
performed acoustic spectroscopy to determine two cavitation-related parameters (subharmonic peak amplitude indicative of violent stable cavitation and broadband noise indicative of transient bubble
collapse). A theoretical model to describe cavitation-mediated membrane permeabilization is also presented.
Cell preparation
Effect of low-frequency ultrasound on membrane permeabilization was assessed using 3T3 mouse cells (ATCC, Manassas, VA). Cells were cultured as a monolayer in a humidified atmosphere with 95% air and
5% CO[2] at 37°C in Dulbecco's modified Eagle's medium (Sigma-Aldrich, St. Louis, MO) with 10% fetal bovine serum and 100 μg/ml penicillin-streptomycin. Cells were harvested before each experiment
with versene followed by digestion using trypsin/EDTA (Cellgro, Herndon, VA). Cells were washed with DMEM medium and resuspended in Dulbecco's modified Eagle's medium in 12-well plates (Corning Inc.,
Corning, NY, well diameter of 2.3 cm) at concentrations varying between 7 × 10^5 cells/ml and 9 × 10^5 cells/ml. Two milliliters of cell suspension were used in each experiment.
Ultrasound application
Ultrasound was applied at four frequencies 20 kHz, 57 kHz, 76 kHz, and 93 kHz. For each frequency, a custom-built transducer was used to generate ultrasound (PiezoSystems Inc., Cambridge, MA). The
transducers were designed by sandwiching ceramic crystals between two metal resonators of appropriate lengths. A signal generator (Tektronix CFG-280, Beaverton, OR) along with an amplifier
(Krohn-Hite 7500, Avon, MA) was used to drive the transducers. The electric power applied to the transducer was measured using a sampling wattmeter (Clarke-Hess 2330, New York, NY). The frequency of
the electrical signal was matched with the resonant frequency of each transducer. Transducers were calibrated using laser interferometry and hydrophone measurements using methods described by Tezel
and co-workers (Tezel et al., 2001). The transducers were directly immersed in the cell suspension (Fig. 1 A). The tip of the transducer was located at the center of the well. The cross-sectional
area of all transducers was 0.78 cm^2. A 100% duty cycle was chosen for ultrasound application.
(A) A schematic representation of the setup used for ultrasound application to cell suspension. (B) A schematic representation of the setup used for acoustic spectroscopy.
Just before ultrasound application, a solution of a fluorescent dye, calcein (MW = 623 Da, Molecular Probes, Eugene, OR) was added to the wells. The amount of calcein was such that the final
concentration of calcein in the well was 50 μM. Ultrasound was applied to each well for times in the range of 10–180 s. At the end of ultrasound application, transducer was removed from the well and
the cell suspension was collected. Cells were centrifuged and washed several times with the medium to remove calcein from the extracellular space. These cells were then observed under a microscope to
determine the fraction of cells into which calcein had penetrated. For this purpose, a 20 μl cell suspension was placed on a microscope slide and was imaged using a fluorescence microscope (Axiovert
25 Inverted Microscope, Zeiss). The images were captured under a constant exposure, illumination and gain (charge-coupled device camera, Optronics, Goleta, CA). These images usually showed a
heterogeneous population.
To quantify the cells exhibiting transport, images of identical volumes of solutions containing various concentrations of calcein (0.5–50 μM in PBS) were captured under identical exposure and were
compared to images of cells. Using these images, cells were classified into three categories: those exhibiting minimal transport (intracellular calcein concentrations between 0 and 0.5 μM; that is,
between 0 and 1% of equilibrium concentration), moderate transport (intracellular calcein concentrations between 0.5 μM and 5 μM; that is, between 1 and 10% of equilibrium concentration), and high
transport (intracellular calcein concentrations between 5 μM and 50 μM; that is, between 10 and 100% of equilibrium concentration). Although calcein standards were made in PBS and not in the cell
cytoplasm, a comparison of calcein fluorescence in two media is feasible to a first approximation. The confidence in using PBS solution as a standard is supported by the observation that the
fluorescence in the cell population exhibiting highest transport is comparable to that of a 50 μM calcein solution. Under an ultrasound condition where relatively high transport is observed (for
example, 20 kHz, 0.8 W/cm^2, 30 s), ~38% of cells exhibited low transport, 27% cells exhibited moderate transport, and 8% cells exhibited high transport. The total number of cells exhibiting
transport was thus 73%, while the remaining 27% cells were nonviable. For the purpose of quantifying transport, we report cells exhibiting high transport (intracellular calcein concentration in the
range of 10–100% of equilibration). Under each ultrasound condition, at least 500–600 cells were counted to determine the fraction of cells exhibiting transport.
Under each ultrasound condition, viability of cells was also assessed using trypan blue. At the end of ultrasound exposure, cells were stained with trypan blue. These cells were observed under the
microscope and the fraction of dead cells was counted. Each measurement was performed based on 20 μl of cell suspension.
Acoustic spectroscopy
Cavitation generated by ultrasound application was measured using acoustic spectroscopy. This method for monitoring cavitation involves the detection of bubble activity through measurement of the
pressure spectrum of the acoustic field (Liu et al., 1998; Neppiras, 1968; Tezel et al., 2002). If the driving acoustic field is a continuous wave of frequency f, the acoustic pressure field
scattered by the bubble contains special components of harmonic frequency (2f, 3f, etc.), subharmonic frequency (for example, f/2) and ultraharmonic frequency (for example, 3f/2) (Neppiras, 1968;
Shankar et al., 1999). At higher ultrasound intensities, transient cavitation is induced and results in the elevation of broadband noise. Measurements of subharmonic pressure amplitude as well as
broadband noise were performed using a hydrophone (Model TC 4013, Reson, Goleta, CA). The bandwidth of the hydrophone is 1 Hz–170 kHz (−10 dB). The hydrophone diameter is 0.5 cm and the length is ~2
cm. The transducer diameter is 0.8 cm. Due to its large size, the hydrophone cannot be placed in the well. Accordingly, a separate chamber was used for measuring acoustic spectrum (Fig. 1 B). The
diameter of this chamber was comparable to the well diameter, but the height was ~5 cm. The hydrophone was placed directly underneath the transducer and the chamber was filled with the cell culture
medium. The transducer was completely immersed under the liquid. The output of the hydrophone was analyzed using a dynamic signal analyzer (Hewlett-Packard 3562A, Everett, WA). Detailed methods of
measurements of acoustic spectrum are described by Tezel et al. (2002). Analysis of subharmonic emission in this article was performed using f/2 component. Peak amplitude of subharmonic component was
measured by continuously averaging the acoustic spectrum until a steady value (within 10%) was reached. Broadband noise was measured over a frequency range of 1 Hz–100 kHz using the same hydrophone
and method. The spectrum was averaged until a steady value (within 10%) was reached.
Effect of low-frequency ultrasound on cell viability and calcein transport
Intracellular calcein concentration among the entire cell population after ultrasound application was heterogeneous and ranged from 0 to ~50 μM. Such heterogeneity of transmembrane transport upon
ultrasound application is consistent with literature reports (Guzman et al., 2001; Kodama and Takayama, 1998). Investigation of the origin of the heterogeneity is beyond the scope of this article. To
quantify transport data under such heterogeneous conditions, Guzman and co-workers divided the cell population into three categories, cells exhibiting minimal transport (~1% equilibration), cells
exhibiting high transport (close to equilibration), and cells exhibiting intermediate transport (~10% equilibration) (Guzman et al., 2001). On the other hand, some investigators (Kodama et al., 2000)
quantified transport in terms of fraction of cells exhibiting any detectable fluorescence.
In our study, as stated earlier, a large fraction of cells exhibited minimal transport (<1% of equilibration) and about the same fraction exhibited moderate transport (between 1% and 10% of
equilibration). A small fraction exhibited high transport (intracellular concentration between 10 and 100% of equilibration). We performed quantitative data analysis based on this fraction of cells.
Although this choice of intracellular concentration is somewhat arbitrary, we believe that the general conclusion of the dependence of transport on ultrasound parameters is insensitive to the choice
of this threshold. With this choice of the threshold, the highest fraction of cells exhibiting transport under the range of ultrasound parameters explored was ~6–8%. A choice of a higher
concentration threshold reduced the fraction of cells deemed permeable, thereby increasing the error in the analysis. On the other hand, a reduction of the threshold decreased the sensitivity of the
dependence of transport on ultrasound parameters. Incorporation of concentration threshold in data analysis is discussed later in the manuscript. It is important to note that the cells exhibiting the
presence of intracellular calcein correspond to the fraction of the cell population that was reversibly permeabilized. Calcein delivered into cells which were irreversibly permeabilized is removed
during the washing procedure.
Fig. 2 A shows the variation of cell viability, V, with ultrasound energy density, E (E = It, where, I is ultrasound intensity in W/cm^2 and t is total application time in seconds) at four
frequencies. The data at each frequency were obtained at a variety of intensities in the range of 0–3 W/cm^2 and application times in the range of 0–180 s. Scaling of bio-effects of ultrasound with
total energy dose has been previously reported for ultrasound-mediated skin permeability and cell membrane permeabilization (Guzman et al., 2001; Mitragotri et al., 2000a,b; Tezel et al., 2001).
Specifically, Tezel and co-workers reported that the effect of low-frequency ultrasound on skin permeability scales with the total energy density (Tezel et al., 2001). Similarly, Guzman et al. (2001)
showed that the effect of ultrasound on cell membrane permeabilization also scaled with ultrasound energy density. This relationship between ultrasound-induced bio-effect and energy density
facilitates the analysis since it allows for the combination of the dependence of bio-effect on three ultrasound parameters: intensity, application time, and duty cycle, into a single parameter—that
is, energy density.
(A) Variation of cell viability (V) with ultrasound energy density at four frequencies (♦, 20 kHz; B) Variation of ...
At each frequency, cell viability, V, decreased with increasing energy dose. The energy density at which viability drops below 50% is ~10 J/cm^2, 45 J/cm^2, 40 J/cm^2, and 60 J/cm^2, respectively at
20 kHz, 57 kHz, 76 kHz, and 93 kHz. The absolute values of these energy densities are likely to depend on various parameters including transducer geometry and liquid volume that were held constant in
this study. Hence, relevance of the absolute values of these energy densities to membrane permeabilization should be carefully performed. However, the data clearly show that the energy density
required for achieving low viability increases with increasing frequency.
Fig. 2 B shows the dependence of the fraction of cells exhibiting calcein uptake (that is, reversibly permeabilized, T) on ultrasound energy density at the same four frequencies. At each frequency,
the fraction of fluorescent cells exhibited an optimum with respect to ultrasound energy. The highest fraction of cells exhibiting transport was between 6 and 8% for most frequencies. While this
might represent a low level of transport efficiency, it has to be remembered that a high threshold was set for determining transport. Furthermore, the fraction of cells exhibiting transport may be
further optimized.
The energy density corresponding to maximum calcein delivery increased with increasing frequency. For ultrasound at 20 kHz, 57 kHz, 76 kHz, and 93 kHz, the energy density corresponding to peak
delivery was respectively 25 J/cm^2, 40 J/cm^2, 40 J/cm^2, and 75 J/cm^2. Once again, these energy values are likely to be system-specific. It is interesting that although the dependence of viability
and intracellular calcein delivery on ultrasound energy density is clearly different for different frequencies, the maximum fraction of cells reversibly permeabilized (~6–8%) is nearly independent of
the frequency. The absolute fraction of cells exhibiting transport will change if the threshold concentration is changed. However, the dependence of transport on ultrasound parameters will remain
qualitatively the same (data not reported).
Dependence of viability and calcein transport on cavitation parameters
Acoustic cavitation has been shown to play an important role in several ultrasonically-mediated bio-transport problems (Liu et al., 1998; Miller et al., 1996; Mitragotri et al., 1995; Suslick, 1989).
Cavitation manifests itself in at least two modes; stable cavitation (slow, periodic oscillations of gas bubbles) and transient cavitation (rapid, violent growth and collapse of gas bubbles; Suslick,
1989). The first step in identifying the detailed mechanisms of ultrasound-mediated membrane permeabilization is to identify which type of cavitation is responsible for this phenomenon. Cavitation
generated by ultrasound application was measured using acoustic spectroscopy. Energy density associated with each type of cavitation (E[bb] for broad band noise or E[sh] for subharmonic emission) was
determined using the equation, E = (P^2/ρc)t, where P is the amplitude of subharmonic in case of stable cavitation (P[sh]) in Pa and broadband noise (P[bb]) in the case of transient cavitation and t
is the ultrasound application time in seconds.
Fig. 3 shows the dependence of viability on transient cavitation energy for all four frequencies shown in Fig. 2 A. As expected, the cell viability decreases with increasing transient cavitation
energy. However, interestingly, the dependence of viability on transient cavitation energy appears to be given by a single function regardless of the frequency. It is important to remember that the
absolute value of the broadband energy measured in our experiments is highly likely to depend on the experimental system. Accordingly, interpretation of the absolute value of cavitation energy should
not be attempted at this stage. The most important conclusion of the data shown in Fig. 3 is that cell viability is close to unity when no broadband noise is observed and decreases with increasing
broadband noise. This result supports the hypothesis that ultrasound-mediated membrane permeabilization is mediated by transient cavitation. This result is consistent with the data of Everbach and
co-workers, who reported that hemolysis by 1 MHz ultrasound correlates with transient cavitation (Everbach et al., 1997).
Variation of cell viability (V) with broadband energy density at four frequencies (♦, 20 kHz; ...
Fig. 4 shows the variation of fraction of cells reversibly permeabilized as a function of transient cavitation energy. Once again, transport of calcein correlates well with broadband noise. Fig. 5
shows the fraction of viable cells that exhibited transport (T/V) as a function of transient cavitation energy density at four frequencies. At each frequency, this fraction increased monotonically
and approached unity at high energy densities. This is understandable, inasmuch as at higher energies we expect that the entire cell population is affected by the ultrasound. Accordingly, the entire
cell population would be divided into only two categories, the population that is reversibly permeabilized and the population that is irreversibly permeabilized. Since the fraction of cells
permeabilized irreversibly is deemed nonviable, the fraction of viable cells exhibiting calcein transport should approach unity.
Variation of cell population fraction exhibiting calcein transport (T) with broadband energy density, E[bb], at four frequencies (♦, 20 kHz; ...
Variation of the ratio of cell population fraction exhibiting calcein transport to viable cell fraction (T/V) with broadband energy density, E[bb], at four frequencies (♦, 20 kHz; ...
Fig. 6, A and B respectively, show the dependence of viability (V) and fraction of cells reversibly permeabilized (T) as a function of subharmonic energy for four frequencies. There appears to be no
unique correlation between either viability or transport with subharmonic energy density.
(A) Variation of the cell viability (V) with subharmonic energy density at four frequencies (♦, 20 kHz; ...
Inertial or transient cavitation corresponds to violent collapse of bubbles leading to high local pressures and temperatures (Suslick, 1989). Inertial cavitation has been suggested to play an
important role in ultrasound-induced membrane permeabilization (Miller et al., 1996). However, the precise mechanisms through which inertial cavitation affects membrane permeability are not known.
Two possible mechanisms, including shock waves produced upon bubble collapse and membrane deformation induced due to radial bubble velocities, are considered in the following analysis.
Membrane disruption due to shock waves
Shock waves with amplitudes in the range of 10–1000 bar have been shown to induce membrane disruption and other biological effects (Delius, 1997; Kodama et al., 2000, 2002; Mayer et al., 1990;
Williams et al., 1999). Critical amplitudes for cell and tissue damage due to shock waves have been found to vary based on the experimental system (Kodama et al., 2002; Raeman et al., 1994; Sonden et
al., 2000). Single shock waves of amplitudes of up to 3 kbar have been found sufficient to induce reversible membrane permeabilization but not lethal disruption (Kodama et al., 2000).
It is also known that shock waves with amplitudes approaching or exceeding 1000 bar are generated at the end of bubble collapse in an ultrasound field (Pecha and Gompf, 1999). Pressures inside a
collapsing bubble and the subsequent amplitude of the shock wave have been determined through theoretical calculations as well as through experiments. Theoretical estimates assuming adiabatic
collapses of bubbles have yielded pressures of >10 kbar inside bubbles at the minimum bubble radius (Vichare et al., 2000). However, experimental measurements of these pressures have proved
challenging. This reflects the fact that the high pressures are observed in a narrow space and time domain (Pecha and Gompf, 1999). Nonetheless, direct or indirect experimental measurements of
maximum pressures in collapsing bubbles have yielded values in the range 1.7–73 kbar (Matula et al., 1998; Pecha and Gompf, 1999; Wang et al., 1999). Such high pressures are accompanied by shock
waves that propagate spherically around the center of bubble collapse. The precise mechanisms by which shock waves affect the cells and tissues are not known, although a number of attempts have been
made to gain a better understanding. Lokhandwalla and Sturtevant performed a theoretical analysis of shock wave-induced cell membrane disruption (Lokhandwalla and Sturtevant, 2001). They argued that
the spatial and temporal gradients in shock wave amplitude induce membrane deformation and subsequent disruption. Role of stress gradient in shock wave-mediated membrane disruption has also been
stated by Doukas et al. (1995). Howard and Sturtevant also argued that shock waves induce membrane strain and the magnitude of the strain is directly proportional to the shock wave amplitude and the
duration of the shock wave (Howard and Sturtevant, 1997). On the other hand, Kodama and co-workers argued that shock waves permeabilize membrane by inducing relative displacement between the cell and
the surrounding fluid (Kodama et al., 2000).
Cell membranes possess relatively low tolerance to membrane stretching. The critical value of area strain, ΔA/A, where A is the original membrane area, and ΔA is the stress-induced increase in area
necessary for membrane disruption, has been reported to be ~0.02–0.03 for red cell membranes (Evans et al., 1976; Netz and Schick, 1996). Critical strain of membranes may vary depending on the
loading rate. However, in the absence of this information, a range of 0.01–0.03 was used as a representative range. As will be shown later, these strains are easily exceeded during exposure of cells
to cavitation-mediated shock waves. Accordingly, shock wave-induced membrane disruption is highly likely to play an important role in cellular delivery.
Membrane disruption due to bubble wall motion
Shear stresses have also been suggested to play a significant role in ultrasound-mediated membrane permeabilization (Wu, 2002; Wu et al., 2002). Lokhandwalla et al. theorized that membrane
deformation induced by radial bubble motion plays a dominant role in membrane deformation (Lokhandwalla and Sturtevant, 2001). An estimate of membrane deformation induced by bubble motion can be
performed following their approach. During the expansion stage of transient cavitation, bubbles grow rapidly from an initial radius, R[o], to a radius, R[max] in less than half the acoustic cycle and
violently collapse thereafter. For example, Wu and Roberts calculated that transient cavities in water exposed to ultrasound at 26.5 kHz grow to a radius of 37 μm in ~16 μs (starting from an initial
radius of ~5 μm) and collapse in ~3 μs (Wu and Roberts, 1993). Thus, the average bubble velocities during bubble growth and collapse in this case are respectively 2 m/s and 12 m/s. Membrane
deformation induced by these velocities in a cell located at a distance, r, from the center of the bubble has been described by Lokhandwalla as U[b] is the bubble wall velocity, R[b] is the bubble
radius, and τ is the time of expansion or collapse (Lokhandwalla and Sturtevant, 2001). As will be shown later, the critical area strains of 0.03 can be exceeded during exposure of cells to
velocities generated by bubble wall motion. Accordingly, membrane deformation due to bubble wall motion also needs to be considered in describing membrane disruption.
Other mechanisms, including interactions of cells with stable cavities, collisions of bubbles with cells, transducer-induced microstreaming in the absence of bubbles, and chemical effects of
cavitation, are not considered in this analysis. Analysis of the importance (or lack thereof) of these mechanisms in membrane permeabilization has been discussed in the literature (Miller et al.,
1996). These mechanisms were excluded from this analysis primarily due to the preliminary evidence presented by the acoustic spectroscopy data that transient cavitation is responsible for transport
under conditions used in this study.
Permeabilization of cell membranes (either due to shock waves or due to bubble motion) may occur from interaction with a single bubble or a series of bubbles. We first analyze the scenario that
membrane permeabilization is induced by a single collapse.
Single bubble interaction model
Let us assume that during application of ultrasound at a given frequency and intensity for a certain time, a total of M transient cavitation events take place. We now introduce two radii, r[1] and r
[2] (r[1] < r[2]). The value of r[1] is chosen such that cells located within a sphere of radius r[1] around the bubble are irreversibly permeabilized due to high shock wave amplitude or high
deformations induced by bubble wall motion. The value of r[1] is likely to be different in both cases. The value of r[2] is chosen such that for the cells located outside a sphere of radius r[2]
around the bubble are not affected by bubbles. Cells located in a region within the radii r[1] and r[2] are assumed to be reversibly permeabilized.
Consider the first collapse of a transient bubble in a volume, v, of liquid, in which N cells are suspended The number of cells, r[1] around the bubble is given as follows:
where R[b] is the bubble radius. Hence, the cell viability at the end of the 1st collapse, α[1], is given as follows:
Similarly, the fraction of cells exhibiting reversible permeabilization at the end of the first cavitation event, β[1], is given by the following equation:
Assuming that distribution of bubbles and cells in suspension is random, it can be shown that the cell viability, V[M], and fraction of cells exhibiting reversible permeabilization, T[M], after M
cavitation events, are respectively given by the following equations:
where, Eqs. 4 and 5 can be respectively simplified to the following:
Equation 6 can be substituted in Eq. 7 to arrive at the following equation:
Equations 6 and 8 offer simple equations to relate viability and transport to the number of cavitation events. Equation 6 predicts that cell viability decreases monotonically with the number of
cavitation events, while the transport exhibits a maximum with respect to the number of cavitation events. This is apparent by differentiating Eq. 7 as follows:
The number of cavitation events, M[max], for which T[M] is maximum is given by the following:
The parameters, λ and μ, may depend on several parameters including ultrasound frequency. Determination of these parameters is discussed later. Before comparing the model predictions with
experimental data, we evaluated the possibility of membrane permeabilization due to multiple cavitation events. This calculation is necessary to confirm that the assumption made in the earlier
analysis, that cells are permeabilized by a single cavitation event, is valid.
Consider a suspension of N cells in a liquid volume, v. The probability that a cell is located within a radius r[1] of a collapse is given by the following equation:
Accordingly, after the occurrence of M transient cavitation events the probability, p[1], that a cell experiences at least one event is given by the following equation:
Assuming that the cavitation events occur randomly in the cell suspension and that the cells are well mixed, the probability, p[j], that a cell is located within a radius r[1] of j cavitation events,
when a total of M events have taken place during the entire period of ultrasound application, is given by the following equation:
As will be shown later, under a typical ultrasound condition used in this study (for example, 20 kHz and 10 J/cm^2, application time of 10 s) the value of λ is ~10^−7 and M of ~10^6. With these
parameters, values of p[j] are ~10^−1, 10^−3, and 10^−5, respectively for j = 1, 2, and 3. Thus, the probability of a cell residing within a radius of r[1] of multiple bubbles simultaneously or
sequentially during an application of 10 s is significantly lower than that for a single event. Accordingly, reversible or irreversible permeabilization of cell membranes is hypothesized to occur
through interaction with a single bubble.
To make quantitative predictions based on the model equations, information is required on two parameters, λ and μ, which in turn depend on r[1] and r[2]. Furthermore, Eqs. 6 and 7 relate V[M] and T
[M] to the number of cavitation events, M. However, experimental data in Figs. 1 and and22 show the dependence of viability and transport on ultrasound energy density, E. Accordingly, to directly
compare the model predictions with experimental data, a relationship between E and M is necessary.
We assume that an approximate relationship between the number of cavitation events and energy density can be written as follows:
where M is the number of transient cavitation events, E is the energy density (J/cm^2), and A[t] is the transducer area (cm^2). κ is a constant (number of bubbles per Joule of acoustic energy).
Energy density, E, is related to intensity, I, and application time, t as E = It. Since no system-specific information (for example, liquid volume, nuclei concentration, etc.) in included in Eq. 14,
the parameter κ is system-specific and not a universal constant. Eq. 14 simply states that the number of cavitation events per unit time is proportional to ultrasound intensity. The validity of Eq.
14 can be justified by previous reports of Mitragotri and co-workers who showed that the number of pits on aluminum foil per unit time at a constant frequency increases proportionally to ultrasound
intensity (higher than cavitation threshold; Mitragotri et al., 2000a). Our direct measurements of the number of cavitation events using hydrophone measurements also support a direct relationship
between the number of cavitation events and ultrasound intensity when the intensity is well beyond the cavitation threshold (unpublished data). By using Eq. 14, Eqs. 4 and 5 can be modified to the
T[M] and V[M] in Eqs. 4 and 5 have been changed to T and V respectively, to reflect the fact that these parameters are now a function of energy and not M. Fig. 7, A–D show fits of Eqs. 15 and 16 to
experimental data. Both equations correctly predict the trends shown in experimental data (Fig. 7, A–D). Specifically, cell viability decreases exponentially with energy density while transport
exhibits a maximum with respect to energy density. By fitting Eqs. 15 and 16 to experimental data in Fig. 7, A–D, values for λκ and μκ were obtained. The values of λκ at 20 kHz, 57 kHz, 76 kHz, and
93 kHz were respectively 0.065, 0.03, 0.031, and 0.012. The values of μκ at 20 kHz, 57 kHz, 76 kHz, and 93 kHz were respectively 0.088, 0.036, 0.038, and 0.014. Equations 15 and 16 fit well to the
experimental data (r^2 > 0.9 for Eq. 15 and r^2 > 0.7 for Eq. 16). Estimated errors in fitted parameters were <20%. Plots of V against λκA[t]E and V + T against μκA[t]E showed that data for all four
frequencies can be defined by the same trend, that is, Eqs. 15 and 16 respectively (data not plotted).
(A–D) Experimental data on viability of 3T3 cells at four frequencies and various energy densities between 0 and 150 J/cm^2 (A: 93 kHz, B: 76 kHz, C: 57 kHz, and D: 20 kHz). ○ corresponds to
transport fraction and • corresponds ...
To further understand the relevance of λ, μ, and κ in membrane permeabilization, individual determination of these parameters is necessary. Note that the model described so far allows determination
of λκ and μκ but not λ, μ, and κ individually. Once one of the three parameters (λ, μ, or κ) is independently determined, the others can be calculated. We chose to determine λ through independent
Lambda is related to the radius of the sphere within which cell membranes are irreversibly disrupted during a single collapse, r[1] depends on the mechanism by which bubbles disrupt the membranes. As
discussed earlier, two mechanisms of membrane disruption are considered. The first mechanism includes disruption mediated by the shock wave originating at the end of bubble collapse and the second
mechanism includes disruption mediated by radial bubble motion during expansion and collapse of transient cavities. If r[1] can be independently determined, individual values of λ, μ, and κ can be
determined. Estimation of r[1] for shock wave-mediated membrane disruption is discussed in Appendix 2. Estimation of r[1] for bubble-motion mediated membrane disruption is discussed in Appendix 3.
Estimation of r[1] in both cases requires a knowledge of important cavitation parameters including minimum and maximum bubble radii (R[min] and R[max]), and collapse pressure, P[o]. Determination of
these parameters is discussed in Appendix 1. Cavitation parameters determined through analysis in Appendix 1 are summarized in Table 1.
Model parameters related to cavitation bubble collapse
Table 2 shows values of λ, μ, and κ for shock wave-mediated membrane permeabilization for three representative values of [c], 0.03, 0.02, and 0.01. These values illustrate several interesting
features. The values of λ and μ are close to each other (at the same value of [c]), which is consistent with the hypothesis that the mechanisms responsible for reversible and irreversible membrane
permeabilization are similar, and the difference in the pressure necessary to induce reversible and irreversible membrane disruption is small. It is important to note that the value of μ (and hence r
[2]) is determined based on the transport as determined by an intracellular calcein concentration of at least 5 μM. Since this choice of critical intracellular concentration is somewhat arbitrary,
the value μ is subject to change based on the choice of the threshold. By using various threshold concentrations in the range of 0–50 μM, a set of μ values can be determined. As the threshold
concentration increases from 0 to 50 μM, the value of μ decreases from infinity to λ. Determination of the functional relationship between μ and threshold concentration is beyond the scope of this
Model parameters for shock wave-mediated membrane disruption assuming three critical values of ΔA/A (or [c]) for membrane disruption
The model predicts that the rate of transient cavitation events in cell suspensions is in the range of mid 10^3 to high 10^4 collapses per second per Joule of acoustic energy at 20 kHz depending on
the choice of value of critical strain. Although the range of κ reported in Table 2 appears very high, it should be realized that uncertainties in the estimation of cavitation parameters are usually
high owing to the extreme sensitivity to parameters. This is also true for experimental characterization of cavitation events, where small changes in system parameters yield substantial variability
in experimental results. Using a representative number of κ as 5 × 10^4, the ratio of cells:number of collapses per second is ~20:1 (total number of cells in suspension of ~10^6 and κ ~5 × 10^4
collapses per second). It is difficult to independently confirm whether the number of collapses determined by the model is accurate. However, an analysis based on the energies of bubble expansion and
collapse (see Appendix 4) yields numbers that appear reasonable. Kappa decreases with increasing frequency. At 93 kHz, κ values are predicted to be in the range of low 10^2 to low 10^3, depending on
the value of critical strain.
Table 3 shows λ, μ, and κ, assuming that bubble wall motion is responsible for membrane disruption. Once again, κ decreases with increasing frequency. The predicted number of collapse events is
generally higher than that in the case of shock wave-mediated membrane disruption. The model predicts that the number of cavitation events per unit energy density is about low 10^4 to low 10^5/s at
20 kHz. In this mode of membrane disruption, the effectiveness of a cavitation bubble in inducing reversible or irreversible membrane permeabilization decreases inversely with the cube of the
distance between the cell and bubble (Appendix 3). Interestingly, such a dependence of membrane permeabilization on distance has been reported by Ward et al. (2000) based on experimental
Model parameters for membrane disruption by bubble expansion or early moments of collapse assuming three critical values of ΔA/A (or [c]) for membrane disruption
The values r[1] and r[2] respectively depict the “destructive zone” and “working zone” around a transient cavitation bubble. For values of 0.02, the values of r[1] and r[2] are O(100 μm). Values of r
[1] and r[2] are close to each other, suggesting a narrow window of space within which the cells are reversibly permeabilized. Furthermore, the values of r[1] and r[2] are smaller than the average
distance between the bubble collapse and cell, which may explain the heterogeneity in transport, at least in part.
Based on the agreement of the theory with experimental data, it is difficult to ascertain whether shock waves or radial bubble velocities are entirely responsible for membrane permeabilization by
themselves. The effective distances around the bubble and number of collapses are comparable in both cases. Since the effects of bubble expansion, collapse, and shock wave on cell membranes are
occurring at different stages of collapse, their effects can be additive and collectively responsible for membrane permeabilization. The stresses acting during shock waves are exceedingly high
although their effective time is very short (nanoseconds). On the other hand, the stresses encountered during bubble motion in the expansion and collapse are comparatively low, but the times over
which these stresses act are relatively long (microsecond). The membrane strain induced during both stages is predicted to be of the same order of magnitude. This is also clear from Eqs. A6 and A8,
which show that the maximum strain induced in each case (that is, at r = R[min] for shock waves and r = R[b] for bubble motion), is ΔA/A ~1. Based on the values of parameters reported in Tables 2 and
and3,3, the contribution of shock wave-mediated permeabilization is likely to be higher than bubble-mediated permeabilization for smaller values of [c]. This originates from the fact that the strain
induced by shock waves decreases as 1/r, whereas the strain induced by bubble wall decreases as 1/r^3. To resolve the role of shock waves versus bubble wall motion within one-order-magnitude, a more
accurate determination of model parameters R[max], R[min], and [c] is necessary.
The model presented here provides two outcomes. First, it correlates cell viability and transport to fundamental parameters including number of collapses, collapse pressure, and bubble wall
velocities. Second, it provides an analysis of the importance of various stages of cavitation in membrane permeabilization. The model parameters are physical and can be directly related to bubble
dynamics. The model parameters also allow quantification of the “destruction” zone and “working” zone around a cavitation bubble.
Effect of low-frequency ultrasound on viability and calcein transport of 3T3 cells was investigated. Viability decreased monotonously with increasing energy density at each frequency. At a given
energy density, viability increased with increasing frequency. At each frequency, transport efficiency exhibited a maximum with respect to energy density. The energy density corresponding to maximum
transport increased with increasing frequency. Viability as well as transport efficiency correlated with the energy density of broadband noise energy regardless of the frequency. These results
support the role of transient cavitation in ultrasound-mediated membrane permeabilization. A mathematical model was developed to relate the effect of ultrasound with the number of transient
cavitation events. The model also allowed assessment of the role of various stages of transient cavitation, including bubble expansion, collapse, and subsequent shock wave formation, in reversible as
well as irreversible membrane permeabilization. Bubble expansion and collapse, as well as shock wave, were found to contribute toward membrane permeabilization.
This work was supported by a grant from the Whitaker Foundation.
Mechanics of bubble collapse and determination of related parameters
During its growth, the bubble radius increases isothermally and reaches a value of R[max] before collapsing adiabatically. Assuming that the pressure inside the bubble just before adiabatic collapse
is P[i], the pressure inside the bubble (including gas and vapor pressure) at the end of the collapse, P[o] (assumed equal to the amplitude of the emitted shock wave), is given by the following
where R[max] is the bubble radius just before the initiation of collapse and R[min] is the bubble radius at the end of the collapse (that is the radius just before the initiation of bubble rebound).
Gamma is the ratio of specific heats. Both R[max] and R[min] may vary with ultrasound frequency and intensity. Measurements of R[max] have been challenging, inasmuch as the bubbles exist at this
radius only transiently. Among the few measurements of R[max] that have been reported in the literature include those of Ashokkumar and co-workers, who reported a R[max] value of 56 μm at 23 kHz at a
driving pressure of 1.3 bar (Ashokkumar et al., 2002), and those of Didenko and Suslick, who reported a value of 28.9 μm reported for 52 kHz and a pressure amplitude of 1.5 bar (Didenko and Suslick,
R[max] has been related to the frequency and pressure amplitude by the following approximate equation (Colussi et al., 1998; Mason and Lorimer, 1988):
where, R[max] is in μm and f is in kHz. P[a] is the acoustic pressure amplitude in bar. Although R[max] may be calculated for each frequency and pressure amplitude, we used average pressure
amplitudes for calculations. This was feasible inasmuch as the pressure amplitudes used in this study were in a relatively narrow range. The average pressure amplitudes, P[a], used in this study at
20 kHz, 57 kHz, 76 kHz, and 93 kHz, are respectively 1.2 (+/− 0.35) bar, 1.8 (+/− 0.28) bar, 2.3 (+/− 0.6) bar, and 2.7 (+/− 0.6) bar. Using these pressure amplitudes and Eq. A2, calculated values of
R[max] for 20 kHz, 57 kHz, 76 kHz, and 93 kHz are respectively 30 μm, 38 μm, 42 μm, and 41 μm. These numbers are consistent with available literature data. Specifically, an R[max] value of 30 μm at
20 kHz and a pressure amplitude of 1.2 bar is consistent with that reported by a number of investigators (Hilgenfeldt and Lohse, 1999; Matula, 1999; Storey and Szeri, 2000). Calculations based on Eq.
A2 are also consistent with an R[max] value of 28.9 μm reported for 52 kHz and a pressure amplitude of 1.5 bar (Didenko and Suslick, 2002), a value of 8.3 μm reported for 300 kHz and a pressure
amplitude of 2 bar (Colussi et al., 1998), as well as other experimental measurements (Ohl et al., 1999).
The pressures inside the bubble just before beginning of the collapse, P[i], are substantially smaller than the surrounding pressure due to expansion. Assuming the bubble expansion to be isothermal,
the pressure in the bubble just before the beginning of the collapse is given by the following equation (Prosperetti and Hao, 1999):
where σ is the surface tension, and R[o] is the initial bubble radius before the expansion phase. Utilization of Eq. A3 is limited by the lack of experimental data on R[o]. Furthermore, Eq. A3
assumes that the mass of the bubble remains the same during the expansion. However, water vapor enters the bubble during expansion and at the maximum bubble radius, the bubble may contain nearly 90%
water vapor (Storey and Szeri, 2000). Accordingly, the pressure inside the bubble just before collapse can be assumed to be equal to the vapor pressure of water at 25°C (0.03 bar (Perry and Green,
1973). Calculations of Brenner and co-workers for cavitation bubbles at 26.5 kHz and 1.2 bar are in excellent agreement with this assumption (Brenner et al., 2002).
Values of R[min] may be theoretically estimated by solving Rayleigh-Plesset equation. Vichare and co-workers estimated the minimum bubble radius, assuming that the limiting radius is reached when
bubble wall velocity reaches the speed of sound in water (~1500 m/s; Vichare et al., 2000). Numerical simulations have yielded a R[min] value of ~1 μm for 20 kHz and a pressure amplitude of 1.2 bar (
Brenner et al., 2002). Hilgenfeldt and Lohse (1999) reported numerical calculations on the dependence of R[min] on ultrasound frequency. Using their data, predicted values of R[min] for 20 kHz, 57
kHz, 76 kHz, and 93 kHz are respectively 1 μm, 1.4 μm, 1.6 μm, and 1.8 μm. These values of R[min] have been determined at a constant pressure amplitude of 1.2 bar. Although use of these R[min] values
are strictly applicable to a pressure amplitude of 1.2 bar, approximate values of P[o] can be determined using these values of R[min]. Collapse pressures determined using Eq. A1 are 48 kbar, 31 kbar,
27 kbar, and 22 kbar at 20 kHz, 57 kHz, 76 kHz, and 93 kHz, respectively. These pressures have been calculated assuming that the vapor does not condense in the bubble during collapse. The pressures
would be lower if the vapor does condense. The collapse pressures determined by Eq. A1 compare well with the available experimental data for frequencies ~20 kHz. Specifically, at a frequency of 20
kHz, Pecha and Gompf experimentally determined the collapse pressure of ~40–60 kbar (Pecha and Gompf, 1999). Other indirect measurements of collapse pressures have yielded values in the range of
1.7–73 kbar (Holzfuss et al., 1998; Matula et al., 1998; Wang et al., 1999; Weninger et al., 1997). The error in P[o] at other frequencies is difficult to estimate inasmuch as the errors in R[min]
are difficult to estimate. Furthermore, experimental reports of collapse pressures at these frequencies were not found in the literature. Due to the uncertainty in determining collapse pressures at
frequencies other than 20 kHz, we considered an alternative approach. We assumed that the minimum bubble radius for a given value of R[max] is determined primarily by gas compressibility; that is,
the ratio R[max]/R[min] is nearly independent of frequency and acoustic pressure amplitude for a given value of R[max]. This assumption is likely to be valid when the collapse time is much smaller
than the acoustic period. As shown in Appendix 2, the collapse time of a bubble possessing an R[max] value of 40 μm at an acoustic pressure amplitude, P[a], of 1 bar, is ~2.6 μs. This time is much
smaller than the acoustic time periods (50 μs, 17 μs, 13 μs, and 10 μs at 20 kHz, 57 kHz, 76 kHz, and 93 kHz, respectively). This assumption is also valid in this study since the values of R[max] are
close to each other (see Table 1).
Since the confidence in the estimated value of collapse pressure at 20 kHz is higher than that at other pressures (due to independent confirmation of R[max], R[min], and P[o] with literature data
under conditions identical to those used in this study), we assume that the estimated collapse pressure at 20 kHz is more accurate. Using 48 kbar as a reference value, the corresponding value of R
[max]/R[min] is 30. Accordingly, R[min] values at 57 kHz, 76 kHz, and 93 kHz under pressure amplitudes shown in Table 1 are 1.3 μm, 1.4 μm, and 1.5 μm, respectively. These values are listed in Table
1 and are used in further calculations. The P[o] and R[min] values determined in both methods are comparable (P[o] = 48 kbar versus 22–48 kbar, and R[min] = 1–1.5 μm versus 1–1.8 μm).
Determination of r[1] for shock wave-mediated membrane disruption
Several reports of shock wave-mediated cell lysis can be found in the literature (Delius, 1997; Kodama et al., 2000, 2002; Sonden et al., 2000; Williams et al., 1999; Zhong et al., 1999, 1998). The
amplitudes of shock waves used in these studies range from 100 to 1000 bar and the pulse duration was typically on the order of microseconds.
Membrane damage upon exposure to shock waves may occur through shock-induced relative particle displacement, compressive failure, tensile loading, or shear strains. While all these mechanisms may
potentially responsible for membrane disruption, the first mechanism provides the simplest explanation for it. The damage potential of the shock wave depends on the spatial gradient of pressure and
duration of the pulse (Lokhandwalla and Sturtevant, 2001). It can be shown that the strain, , in a section of the material of thickness, Δr, exposed to a shock wave is given by the following equation
(unpublished data; this equation can also be derived from the analysis presented by Lokhandwalla and Sturtevant, 2001):
where Δτ is the duration of the shock wave, ρ is the liquid density, and c is the velocity of sound. By choosing Δr such that it approximately corresponds to spatial width of the shock wave, Δτ can
be related to Δγ by Δτ = Δr/c and Eq. A4 can be rewritten as follows:
The amplitude of the shock wave decreases rapidly during its radial propagation. The amplitude of the shock wave decreases as 1/r during spherical propagation (Matula et al., 1998) Accordingly, Eq.
A5 can be modified as follows:
where P[o] is the shock wave amplitude at its origin, that is, r = R[min]. By equating ΔA/A to and defining r[1] as r at which ΔA/A = [c] (critical strain necessary to irreversibly disrupt the
membrane), r[1] can be calculated as follows:
Values of P[o] and R[min] are listed in Table 1. The r[1] values calculated using Eq. A7 for three values of [c] are listed in Table 2. The remaining parameters of the model—that is, μ and κ—can now
be determined.
Determination of r[1] for membrane disruption mediated by bubble wall motion
Lokhandwalla and co-workers performed an analysis of membrane deformation due to bubble wall motion (Lokhandwalla, et al., 2001). Membrane deformation was related to the bubble radius, R[b], and
radial velocity, U[b], by the following equation,
where r is the distance of the bubble from the cell and τ is the time for which cells are exposed to bubble motion. Using a critical strain for membrane disruption of [c] (Evans et al., 1976), Eq. A8
can be rewritten to describe r[1] as
Since bubble motion during expansion and collapse periods is very different, r[1] values for two cases are separately determined. Furthermore, since the bubble radius as well as bubble wall velocity
are continuously changing throughout the lifetime of the cavity, values of r[1] are determined using average values of bubble radius and wall velocity. During the expansion period, the bubble grows
from an initial radius R[o] to R[max] in approximately one half acoustic cycle or faster. Accordingly, average bubble velocity may be described by the following equation,
where τ[a] is the acoustic time period. The average bubble radius, R[b], during this expansion phase and the time of expansion, is given by Eqs. A11 and A12, respectively:
While deriving Eqs. A11 and A12, it is implicitly assumed that R[max] R[o]. Using Eqs. A10–A12, Eq. A9 can be rewritten as follows:
Using Eq. A13, λ-values associated with bubble expansion were calculated and are shown in Table 3. The range corresponds to the limiting values estimated using [c] = 0.01 and 0.03.
A similar analysis can be performed for bubble collapse. Since the bubble velocities during the final stage of the collapse are drastically different than those during the most of the collapse,
separate analysis is performed during both cases. The collapse time for a bubble is related to R[max] by the following equation (Mason and Lorimer, 1988):
where ρ is the liquid density (1000 kg/m^3) and P[s] is the surrounding pressure (P[s] = P[a] + 1 bar). Using U[b] ~ R[max]/τ and R[b] ~ (R[max] + R[min])/2, average value of λ was calculated (note
that inasmuch as R[max] R[min], the latter has been neglected). With these assumptions, r[1] is given by the following equation:
The r[1]- and λ-values for this condition are comparable to those determined for bubble expansion, and are not separately shown.
During the final stage of collapse, where R[b] ~ R[min], the bubble wall velocity approaches 1500 m/s. A bubble may exist in this stage for ~50 ns (Brenner et al., 2002). With this information,
values of r[1] and λ were calculated using Eq. A9. These calculations yielded r[1] values of typically 15 μm. Although these values are significant, they are much smaller than the r[1]-values
associated with shock waves and bubble motion before final stages. Accordingly, these are not discussed in detail.
Analysis of the energies associated with bubble expansion and collapse can be performed to justify the κ-values determined by the model. The energy necessary for isothermal expansion of a cavity from
a radius, R[o], to a radius of R[max] and the energy available upon adiabetic collapse of the cavity are given by Eqs. A16 and A17, respectively (Vichare et al., 2000):
where R[o] is the initial bubble radius, and P[i] is the bubble pressure before collapse. At an intensity of 0.5 W/cm^2 and a frequency of 20 kHz (where, the pressure amplitude is 1.2 bar and R[max]
is 30 μm, assuming R[o] ~ 2 μm), W[iso] is ~3 nJ/bubble and W[adi] is ~29 nJ/bubble. Noting that W[iso] is the work done by the cavity on the surrounding W[adi] is the work done on the cavity by the
surrounding, the net work done on the cavity is ~26 nJ. This value, in combination with a κ of 1 × 10^5 per J, predicts that ~0.3% of the acoustic energy is converted into creating transient
cavitation energy. Even lower conversion efficiencies are predicted for κ of 5 × 10^3.
List of model parameters
Unstretched area of cell membranes (cm^2)
Fraction of cells irreversibly permeabilized during the first cavitation event
Fraction of cells reversibly permeabilized during the first cavitation event
Velocity of sound (m/s); 1500 m/s, unless otherwise mentioned
Increase in cell membrane area (cm^2)
Number of cavitation events per unit energy dose (J^−1)
Volume fraction around a bubble within which cells are irreversibly permeabilized
Volume fraction around a bubble within which cells are reversibly permeabilized
Number of cells irreversibly permeabilized during the first cavitation event
Probability that a cell is located in a sphere of radius r[1] around a bubble
Probability that a cell is located within a sphere r[1] of at least one cavitation event
Acoustic pressure amplitude (bar or Pa)
Bubble pressure before collapse, (bar or Pa)
Probability that a cell is located within a sphere r[1] of j cavitation events
Total pressure around the bubble just before collapse initiation (bar or Pa)
Pressure far away from the bubble (bar or Pa)
Liquid density (kg/m^3); 1000 kg/m^3, unless otherwise mentioned
Radius of sphere around the bubble such that the cells located in this sphere are irreversibly permeabilized (μm)
Radius of sphere around the bubble such that the cells located between spheres of radii r[1] and r[2] are reversibly permeabilized (μm)
bubble-liquid surface tension (N/m)
Fraction of cells reversibly permeabilized after exposure to an energy dose, E
τ or Δτ
Time for which a cell experiences shock wave or shear stress (seconds)
Ultrasound application time (seconds)
Acoustic time period (second)
Bubble collapse time (second)
Fraction of cells reversibly permeabilized after M cavitation events
Radial velocity of bubble wall (m/s)
Cell viability after exposure to an energy dose E
Cell viability after M cavitation events
Work of adiabatic bubble collapse
Work of isothermal bubble expansion
• Ashokkumar, M., J. Guan, R. Tronsen, T. Matula, J. W. Nuske, and F. Grieser. 2002. Effect of surfactants, polymers, and alcohols on single bubble dynamics and sonoluminescence. Phys. Rev. E. 65
:046310. [PubMed]
• Bao, S., B. Thrall, and D. Miller. 1997. Transfection of a reporter plasmid into cultured cells by sonoporation in vitro. Ultrasound Med. Biol. 23:953–959. [PubMed]
• Brenner, M. P., S. Hilgenfeldt, and D. Lohse. 2002. Single-bubble sonoluminescence. Biophys. J. 74:425–484.
• Brown, M. D., A. G. Schatzlein, and I. F. Uchegbu. 2001. Gene delivery with synthetic (nonviral) carriers. Int. J. Pharm. 229:1–21. [PubMed]
• Canatella, P. J., and M. R. Prausnitz. 2001. Prediction and optimization of gene transfection and drug delivery by electroporation. Gene Ther. 8:1464–1469. [PubMed]
• Colussi, A. J., L. K. Weavers, and M. R. Hoffmann. 1998. Chemical bubble dynamics and quantitative sonochemistry. J. Phys. Chem. 102:6927–6934.
• Delius, M. 1997. Minimal static excess pressure minimises the effect of extracorporeal shock waves on cells and reduces it on gall stones. Ultrasound Med. Biol. 23:611–617. [PubMed]
• Didenko, Y. T., and K. S. Suslick. 2002. the energy efficiency of formation of photons, radicals, and ions during single-bubble cavitation. Nature. 418:394–397. [PubMed]
• Doukas, A. G., D. J. McAuliffe, S. Lee, V. Venugopalan, and T. J. Flotte. 1995. Physical factors involved in stress-wave-induced cell injury: role of stress gradient. Ultrasound Med. Biol. 21
:961–967. [PubMed]
• Evans, E. A., R. Waugh, and L. Melnik. 1976. Elastic area compressibility modulus of red cell membrane. Biophys. J. 16:585–595. [PMC free article] [PubMed]
• Everbach, E. C., I. R. Makin, M. Azadniv, and R. S. Meltzer. 1997. Correlation of ultrasound-induced hemolysis with cavitation detector output in vitro. Ultrasound Med. Biol. 23:619–624. [PubMed]
• Fecheimer, M., J. F. Hoylan, S. Parker, J. E. Sisken, F. Patel, and S. Zimmer. 1987. Transfection of mammalian cells with plasmid DNA by scrape loading and sonication loading. Proc. Natl. Acad.
Sci. USA. 84:8463–8467. [PMC free article] [PubMed]
• Fischer, P. M., E. Krausz, and D. P. Lane. 2001. Cellular delivery of impermeable effector molecules in the form of conjugates with peptides capable of mediating membrane translocation.
Bioconjug. Chem. 12:825–841. [PubMed]
• Guzman, H. R., D. X. Nguyen, S. Khan, and M. R. Prausnitz. 2001a. Ultrasound-mediated disruption of cell membranes. I. Quantification of molecular uptake and viability. J. Acoust. Soc. Am. 110
:588–596. [PubMed]
• Guzman, H. R., D. X. Nguyen, S. Khan, and M. R. Prausnitz. 2001b. Ultrasound-mediated disruption of cell membranes. II. Heterogenous effects on cells. J. Acoust. Soc. Am. 110:597–606. [PubMed]
• Hilgenfeldt, S., and D. Lohse. 1999. Predictions for upscaling sonoluminescence. Phys. Rev. Lett. 82:1036–1039.
• Holzfuss, J., M. Ruggerberg, and A. Billo. 1998. Shock wave emissions of a sonoluliminescing bubble. Phys. Rev. Lett. 81:5434–5437.
• Howard, D., and B. Sturtevant. 1997. In vitro study of the mechanical effects of shock-wave lithotripsy. Ultrasound Med. Biol. 23:1107–1122. [PubMed]
• Johnson-Saliba, M., and D. A. Jans. 2001. Gene therapy: optimising DNA delivery to the nucleus. Curr. Drug Targets. 2:371–399. [PubMed]
• Kim, H. J., J. F. Greenleaf, R. Kinnick, J. Bronk, and M. Bolander. 1996. Ultrasound-mediated transfection of mammalian cells. Hum. Gene Ther. 7:1339–1346. [PubMed]
• Koch, S., P. Pohl, U. Cobet, and N. G. Rainov. 2000. Ultrasound enhancement of liposome-mediated cell transfection is caused by cavitational effects. Ultrasound Med. Biol. 26:897–903. [PubMed]
• Kodama, T., A. G. Doukas, and M. R. Hamblin. 2002. Shock wave-mediated molecular delivery into cells. Biochem. Biophys. Acta. 1542:186–194. [PubMed]
• Kodama, T., M. R. Hamblin, and A. G. Doukas. 2000. Cytoplasmic delivery with shock waves: importance of impulse. Biophys. J. 79:1821–1832. [PMC free article] [PubMed]
• Kodama, T., and K. Takayama. 1998. Dynamic behavior of bubbles during extracorporeal shock-wave lithotripsy. Ultrasound Med. Biol. 24:723–738. [PubMed]
• Kremkau, F. W. 1998. Diagnostic Ultrasound: Principles and Instruments. W. B. Sanders, editor. Philadelphia.
• Leighton, T. 1997. The Acoustic Bubble. Academic Press, San Diego.
• Liu J., T. Lewis, and M. Prausnitz. 1998. Non-invasive assessment and control of ultrasound-mediated membrane permeabilization. Pharm. Res. 15(6). [PubMed]
• Lokhandwalla, M., J. A. McAteer, J. C. Williams, and B. Sturtevant. 2001. Mechanical hemolysis in shock wave lithotripsy (SWL). II. In vitro cell lysis due to shear. Phys. Med. Biol. 46
:1245–1264. [PubMed]
• Lokhandwalla, M., and B. Sturtevant. 2001. Mechanical haemolysis in shock wave lithotripsy (SWL). Phys. Med. Biol. 46:413–437. [PubMed]
• Mason, T. J., and J. P. Lorimer. 1988. Sonochemistry: Theory, Applications and Uses of Ultrasound in Chemistry. John Wiley & Sons, New York.
• Matula, T., I. M. Hallaj, R. O. Cleveland, and L. A. Crum. 1998. The acoustic emissions from single-bubble sonoluminescence. J. Acoust. Soc. Am. 103:1377–1382.
• Matula, T. J. 1999. Inertial cavitation and single-bubble sonoluminescence. Phil. Trans. R. Soc. Lond. A. 357:225–249.
• Mayer, R. E., S. Schenk, S. Child, C. Norton, C. Cox, C. Hartman, C. Cox, and E. Carstensen. 1990. Pressure threshold for shock wave induced renal damage. J. Urol. 144:1505–1509. [PubMed]
• Miller, D., and J. Quddus. 2000. Sonoporation of monolayer cells by diagnostic ultrasound activation of contrast-agent gas bodies. Ultrasound Med. Biol. 26:661–667. [PubMed]
• Miller, D., A. R. Williams, J. Morris, and W. Chrisler. 1998. Sonoporation of erythrocytes by lithotripter shockwaves in vitro. Ultrasonics. 36:947–952. [PubMed]
• Miller, M. W., D. L. Miller, and A. A. Brayman. 1996. A review of in vitro bioeffects of inertial ultrasonic from a mechanistic perspective. Ultrasound Med. Biol. 22:1131–1154. [PubMed]
• Mitragotri, S., D. Edwards, D. Blankschtein, and R. Langer. 1995. A mechanistic study of ultrasonically enhanced transdermal drug delivery. J. Pharm. Sci. 84:697–706. [PubMed]
• Mitragotri, S., J. Farrell, H. Tang, T. Terahara, J. Kost, and R. Langer. 2000a. Determination of the threshold energy dose for ultrasound-induced transdermal drug delivery. J. Control. Rel. 63
:41–52. [PubMed]
• Mitragotri, S., D. Ray, J. Farrell, H. Tang, B. Yu, J. Kost, D. Blankschtein, and R. Langer. 2000b. Synergistic effect of ultrasound and sodium lauryl sulfate on transdermal drug delivery. J.
Pharm. Sci. 89:892–900. [PubMed]
• Neppiras, E. 1968. Subharmonic and other low frequency emission from bubbles in sound irradiated liquids. J. Acoust. Soc. Am. 46:587–601.
• Netz, R. R., and M. Schick. 1996. Pore formation and rupture in fluid bilayers. Phys. Rev. E. 53:3875–3885. [PubMed]
• Ohl, C.-D., T. Kurz, R. Geisler, O. L. Lindau, and W. Lauterborn. 1999. Bubble dynamics, shock waves, and sonoluminescence. Phil. Trans. R. Soc. Lond. A. 357:269–294.
• Pecha, R., and B. Gompf. 1999. Microimplosions: cavitation collapse and shock wave emission on a nanosecond time scale. Phys. Rev. Lett. 84:1328–1330. [PubMed]
• Perry, R. H., and D. W. Green. 1973. Chemical Engineering Handbook. McGraw-Hill Book Company, New York.
• Prosperetti, A., and Y. Hao. 1999. Modelling of spherical gas bubble oscillations and sonoluminescence. Phil. Trans. R. Soc. Lond. A. 357:203–223.
• Raeman, C. H., S. Z. Child, D. Dalecki, R. Mayer, K. J. Parker, and E. L. Crstensen. 1994. Damage to murine kidney and intestine from exposure to the fields of a piezoelectric lithotripter.
Ultrasound Med. Biol. 20:589–594. [PubMed]
• Saad, A. H., and G. M. Hahn. 1992. Ultrasound-enhanced effects of adriamycin against murine tumors. Ultrasound Med. Biol. 18:715–723. [PubMed]
• Shankar, P. M., P. D. Krishna, and V. L. Newhouse. 1999. Subharmonic backscattering from ultrasound contrast agents. J. Acoust. Soc. Am. 106:2104–2110. [PubMed]
• Sonden, A., B. Svenssin, N. Roman, H. Ostmark, B. Brismar, J. Palmblad, and B. Kjellstrom. 2000. Laser-induced shock wave endothelial cell injury. Lasers Surg. Med. 26:364–375. [PubMed]
• Stein, W. D. 1986. Diffusion and Transport Across Cell Membranes. Academic Press, Orlando, Florida.
• Storey, B. D., and A. J. Szeri. 2000. Water vapor, sonoluminescence and sonochemistry. Proc. R. Soc. Lond. A. 456:1685–1709.
• Suslick, K. S. 1989. Ultrasound: Its Chemical, Physical and Biological Effects. VCH Publishers, New York. [PubMed]
• Tezel, A., A. Sens, and S. Mitragotri. 2002. Investigations of the role of cavitation in low-frequency sonophoresis using acoustic spectroscopy. J. Pharm. Sci. 91:444–453. [PubMed]
• Tezel, A., A. Sens, J. Tuscherer, and S. Mitragotri. 2001. Frequency dependence of sonophoresis. Pharm. Res. 18:1694–1700. [PubMed]
• Vichare, N., P. Senthilkumar, V. Moholkar, P. R. Gogate, and A. B. Pandit. 2000. Energy analysis of acoustic cavitation. Ind. Eng. Chem. Res. 39:1480–1486.
• Wang, Z. Q., R. Pecha, B. Gompf, and W. Eisenmenger. 1999. Single bubble sonoluminiscence: investigations of the emitted pressure wave with a fiber optic probe hydrophone. Phys. Rev. E. 59
• Ward, M., and J. Wu. 1999. Ultrasound-induced cell lysis and sonoporation enhanced by contrast agents. J. Acoust. Soc. Am. 105:1951–2957. [PubMed]
• Ward, M., J. Wu, and J.-F. Chiu. 2000. Experimental study of the effects of optison concentration on sonoporation in vitro. Ultrasound Med. Biol. 26:1169–1175. [PubMed]
• Weninger, K. R., B. P. Barber, and S. J. Putterman. 1997. Pulsed MIE scattering measurements of the collapse of a sonoluminescencing bubble. Phys. Rev. Lett. 78:1799–1802.
• Williams, A. R. 1973. A possible alteration in the permeability of ascites cell membranes after exposure to acoustic microstreaming. J. Cell Sci. 12:875–885. [PubMed]
• Williams, J. C., J. F. Woodward, M. A. Stonehill, and A. P. Evan. 1999. Cell Damage by lithographer shock waves at high pressure to preclude cavitation. Ultrasound Med. Biol. 25:1445–1449. [
• Wu, C. C., and P. H. Roberts. 1993. Shock-wave propagation in a sonoluminescing gas bubble. Phys. Rev. Lett. 70:3424–3427. [PubMed]
• Wu, J. 2002. Theoretical study on shear stress generated by microstreaming surrounding contrast agents attached to living cells. Ultrasound Med. Biol. 28:125–129. [PubMed]
• Wu, J., J. P. Ross, and J.-F. Chiu. 2002. Reparable sonoporation generated by microstreaming. J. Acoust. Soc. Am. 111:1460–1464. [PubMed]
• Zhang, L., L. Cheng, N. Xu, M. Zhao, C. Li, J. Yuan, and S. Jia. 1991. Efficient transformation of tobacco by ultrasonication. Biotechnology. 9:996–997.
• Zhong, P., I. Ciaonta, S. Zhu, and F. Cocks. 1998. Effects of tissue constraint on shock wave-induced bubble expansion in vitro. J. Acoust. Soc. Am. 104:3126–3129. [PubMed]
• Zhong, P., H. Lin, and E. Bhogte. 1999. Shock wave-inertial microbubble interaction: methodology, physical characterization, and bioeffect study. J. Acoust. Soc. Am. 105:1997–2009. [PubMed]
Articles from Biophysical Journal are provided here courtesy of The Biophysical Society
• Compound
PubChem Compound links
• PubMed
PubMed citations for these articles
• Substance
PubChem Substance links
Your browsing activity is empty.
Activity recording is turned off.
See more...
|
{"url":"http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1302870/?tool=pubmed","timestamp":"2014-04-18T21:03:10Z","content_type":null,"content_length":"177208","record_id":"<urn:uuid:39773f39-6302-44aa-8769-41e5d5f8f42e>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00015-ip-10-147-4-33.ec2.internal.warc.gz"}
|
[FOM] Gamma_0
T.Forster@dpmms.cam.ac.uk T.Forster at dpmms.cam.ac.uk
Mon Dec 13 13:33:53 EST 2010
As Nick says, there are indeed other issues, but my question did not
concern them, and was rather: can someone explain to me in what respect the
omega-sequence of gammas whose sup is gamma_0 is less predicative than the
sequence of iterates of lower gammas that one creates in order to obtain
later lower gammas? Specifically it is alleged that one needs the second
number class to be a set in order to prove the existence of worders of N of
length gamma_0. I would be grateful to anyone who can shed light on this
2010, Nik Weaver wrote:
>Thomas Forster forwarded the comment
>> I can see how to prove the existence of wellorders of the naturals
>> of all smaller lengths without using power set (or Hartogs'..) but
>> it seems to me that i can do the same for Gamma_0 itself.
>Is the writer really sure about the first point? Uncontroversially
>predicative systems typically don't get very far beyond \epsilon_0.
>The usual error in this kind of claim involves failing to distinguish
>between different versions of the concept "well-ordered".
>Let's say we have a total ordering < of the natural numbers that we
>know is well-ordered: every nonempty subset has a smallest element
>for <. Suppose also that the property P is progressive for <, i.e.,
>for every number a we have that [(forall b < a)P(b)] implies P(a).
>Can we conclude that P(a) holds for all a? Yes, we can prove this
>by considering the set {a: P(a) is false}. If this set is nonempty
>then it has a smallest element, which contradicts progressivity of <.
>Therefore the set must be empty.
>*However*, this argument requires some sort of comprehension axiom in
>order to form the set {a: P(a) is false}. If the property P involves
>quantification over the power set of the natural numbers, then
>predicativists would reject the relevant comprehension axiom and so
>would not be able to make the stated inference.
>This error is so common, I am willing to bet that the writer has made
>an illegitimate inference of this type in his well-ordering proof.
>I explain this issue in more detail in Sections 7 and 8 of my paper
>"What is predicativism?", and I discuss it at great length throughout
>the entire first half of the paper "Predicativity beyond Gamma_0".
>Both are available on my website at
>Nik Weaver
>Math Dept.
>Washington University
>St. Louis, MO 63130 USA
>nweaver at math.wustl.edu
More information about the FOM mailing list
|
{"url":"http://www.cs.nyu.edu/pipermail/fom/2010-December/015174.html","timestamp":"2014-04-20T18:49:05Z","content_type":null,"content_length":"5331","record_id":"<urn:uuid:35b20b19-47cb-4b1d-96d7-d53ef234b646>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00355-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Implementing the CountSummary Procedure
In my last post, I described and demonstrated the
procedure to be included in the
package that I am in the process of developing.
This procedure generates a collection of graphical data summaries for a count data sequence, based on the
, and
functions from the
function generates both the
Poissonness plot
and the
negative-binomialness plot
discussed in Chapters 8 and 9 of
Exploring Data in Engineering, the Sciences and Medicine
These plots provide informal graphical assessments of the conformance of a count data sequence with the two most popular distribution models for count data, the Poisson distribution and the
negative-binomial distribution.
As promised, this post describes the
code needed to implement the
procedure, based on these functions from the
The key to this implementation lies in the use of the
package, a set of low-level graphics primitives included in base
As I mentioned in my last post, the reason this was necessary - instead of using higher-level graphics packages like lattice or ggplot2 - was that the vcd package is based on grid graphics, making it
incompatible with base graphics commands like those used to generate arrays of multiple plots.
package was developed by Paul Murrell, who provides a lot of extremely useful information about both
graphics in general and grid graphics in particular on his
home page
, including the article “Drawing Diagrams with R,” which provides a nicely focused introduction to grid graphics.
The first example I present here is basically a composite of the first two examples presented in this paper.
Specifically, the code for this example is:
pushViewport(viewport(width = 0.8, height = 0.4))
grid.text("This is text in a box")
The first line of this R code loads the grid package and the second tells this package to clear the plot window; failing to do this will cause this particular piece of code to overwrite whatever was
there before, which usually isn’t what you want. The third line creates a viewport, into which the plot will be placed. In this particular example, we specify a width of 0.8, or 80% of the total plot
window width, and a height of 0.4, corresponding to 40% of the total window height. The next two lines draw a rectangular box with rounded corners and put “This is text in a box” in the center of
this box. The advantage of the grid package is that it provides us with simple graphics primitives to draw this kind of figure, without having to compute exact positions (e.g., in inches) for the
different figure components. Commands like grid.text provide useful defaults (i.e., put the text in the center of the viewport), which can be overridden by specifying positional parameters in a
variety of ways (e.g., left- or right-justified, offsets in inches or lines of text, etc.). The results obtained using these commands are shown in the figure below.
The code for the second example is a simple extension of the first one, essentially consisting of the added initial code required to create the desired two-by-two plot array, followed by four
slightly modified copies of the above code. Specifically, this code is:
grid.roundrect(width = 0.8, height=0.4)
grid.text("Plot 1 goes here")
grid.roundrect(width = 0.8, height=0.4)
grid.text("Plot 2 goes here")
grid.roundrect(width = 0.8, height=0.4)
grid.text("Plot 3 goes here")
grid.roundrect(width = 0.8, height=0.4)
grid.text("Plot 4 goes here")
Here, note that the first “pushViewport” command creates the two-by-two plot array we want, by specifying “layout = grid.layout(nrow=2,ncol=2)”. As in initializing a data frame in R, we can create an
arbitrary two-dimensional array of grid graphics viewports – say m by n – by specifying “layout = grid.layout(nrow=m, ncol=n)”. Once we have done this, we can use whatever grid commands – or
grid-compatible commands, such as those generated by the vcd package – we want, to create the individual elements in our array of plots. In this example, I have basically repeated the code from the
first example to put text into rounded rectangular boxes in each position of the plot array. The two most important details are, first, the “pushViewport” command at the beginning of each of these
individual plot blocks specifies which of the four array elements the following plot will go in, and second, the “popViewport()” command at the end of each block, which tells the grid package that we
are finished with this element of the array. If we leave this command out, the next “pushViewport” command will not move to the desired plot element, but will simply overwrite the previous plot.
Executing this code yields the plot shown below.
The final example replaces the text in the above two-by-two example with the plots I want for the CountSummary procedure. Before presenting this code, it is important to say something about the
structure of the resulting plot and the vcd commands used to generate the different plot elements. The first plot – in the upper left position of the array shown below – is an Ord plot, generated by
the Ord_plot command, which does two things. The first is to generate the desired plot, but the second is to return estimates of the intercept and slope of one of the two reference lines in the plot.
The first of these lines is fit to the points in the plot via ordinary least squares, while the second – the one whose parameters are returned – is fit via weighted least squares, to down-weight the
widely scattered points seen in this plot that correspond to cases with very few observations. The intent of the Ord plot is to help us decide which of several alternative distributions – including
both the Poisson and the negative-binomial – fits our count data sequence better. This guidance is based on the reference line parameters, and the Ord_estimate function in the vcd package transforms
these parameter estimates into distributional recommendations and the distribution parameter values needed by the distplot function in the vcd package to generate either the Poissonness plot or the
negative-binomialness plot for the count data sequence. Although these recommendations are sometimes useful, it is important to emphasize the caution given in the vcd package documentation:
“Be careful with the conclusions from Ord_estimate as it implements just some simple heuristics!”
In the
procedure, I use these results both to generate part of the text summary in the upper right element of the plot array, and to decide which type of plot to display in the lower right element of this
Both this plot and the Poissonness reference plot in the lower left element of the display are created using the
command in the
I include the Poissonness reference plot because the Poisson distribution is the most commonly assumed distribution for count data – analogous in many ways to the Gaussian distribution so often
assumed for continuous-valued data – and, by not specifying the single parameter for this distribution, I allow the function to determine it by fitting the data.
In cases where the Ord plot heuristic recommends the Poissonness plot, it also provides this parameter, which I provide to the
function for the lower right plot. Thus, while both the lower right and lower left plots are Poissonness plots in this case, they are generally based on different distribution parameters.
In the particular example shown here – constructed from the “number of times pregnant” variable in the Pima Indians diabetes dataset that I have discussed in several previous posts (available from
UCI Machine Learning Repository
) – the Ord plot heuristic recommends the negative binomial distribution.
Comparing the Poissonness and negative-binomialness plots in the bottom row of the above plot array, it does appear that the negative binomial distribution fits the data better.
Finally, before examining the code for the CountSummary procedure, it is worth noting that the vcd package’s implementation of the Ord_plot and Ord_estimate procedures can generate four different
distributional recommendations: the Poisson and negative-binomial distributions discussed here, along with the binomial distribution and the much less well-known log-series distribution. The distplot
procedure is flexible enough to generate plots for the first three of these distributions, but not the fourth, so in cases where the Ord plot heuristic recommends this last distribution, the
CountSummary procedure displays the recommended distribution and parameter, but displays a warning message that no distribution plot is available for this case in the lower right plot position.
The code for the CountSummary procedure looks like this:
CountSummary <- function(xCount,TitleString){
# Initial setup
# Set up 2x2 array of plots
# Generate the plots:
# 1 - upper left = Ord plot
OrdLine = Ord_plot(xCount, newpage = FALSE, pop=FALSE, legend=FALSE)
OrdType = Ord_estimate(OrdLine)
# 2 - upper right = text summary
OrdTypeText = paste("Type = ",OrdType$type,sep=" ")
if (OrdType$type == "poisson"){
OrdPar = "Lambda = "
else if ((OrdType$type == "nbinomial")|(OrdType$type == "nbinomial")){
OrdPar = "Prob = "
else if (OrdType$type == "log-series"){
OrdPar = "Theta = "
OrdPar = "Parameter = "
OrdEstText = paste(OrdPar,round(OrdType$estimate,digits=3), sep=" ")
TextSummary = paste("Ord plot heuristic results:",OrdTypeText,OrdEstText,sep="\n")
# 3 - lower left = standard Poissonness plot
distplot(xCount, type="poisson",newpage=FALSE, pop=FALSE, legend = FALSE)
# 4 - lower right = plot suggested by Ord results
if (OrdType$type == "poisson"){
distplot(xCount, type="poisson",lambda=OrdType$estimate, newpage=FALSE, pop=FALSE, legend=FALSE)
else if (OrdType$type == "nbinomial"){
prob = OrdType$estimate
size = 1/prob - 1
distplot(xCount, type="nbinomial",size=size,newpage=FALSE, pop=FALSE, legend=FALSE)
else if (OrdType$type == "binomial"){
distplot(xCount, type="binomial", newpage=FALSE, pop=FALSE, legend=FALSE)
Message = paste("No distribution plot","available","for this case",sep="\n")
This procedure is a function called with two arguments: the sequence of count values, xCounts, and TitleString, a text string that is displayed in the upper right text box in the plot array, along
with the recommendations from the Ord plot heuristic. When called, the function first loads the vcd library to make the Ord_plot, Ord_estimate, and distplot functions available for use, and it
executes the grid.newpage() command to clear the display. (Note that we don’t have to include “library(grid)” here to load the grid package, since loading the vcd package automatically does this.) As
in the previous example, the first “pushViewport” command creates the two-by-two plot array, and this is again followed by four code segments, one to generate each of the four displays in this array.
The first of these segments invokes the Ord_plot and Ord_estimate commands as discussed above, first to generate the upper left plot (a side-effect of the Ord_plot command) and second, to obtain the
Ord plot heuristic recommendations, to be used in structuring the rest of the display. The second segment creates a text display as in the first example considered here, but the structure of this
display depends on the Ord plot heuristic results (i.e., the names of the parameters for the four possible recommended distributions are different, and the logic in this code block matches the
display text to this distribution). As noted in the preceding discussion, the third plot (lower left) is the Poissonness plot generated by the distplot function from the vcd package. In this case,
the function is called only specifying ‘type = “poisson”’ without the optional distribution parameter lambda, which is obtained by fitting the data. The final element of this plot array, in the lower
right, is also generated via a call to the distplot function, but here, the results from the Ord plot heuristic are used to specify both the type parameter and any optional or required shape
parameters for the distribution. As with the displayed text, simple if-then-else logic is required here to match the plot generated with the Ord plot heuristic recommendations.
Finally, it is important to note that in all of the calls made to Ord_plot or distplot in the CountSummary procedure, the parameters newpage, pop, and legend, are all specified as FALSE. Specifying
“newpage = FALSE” prevents these vcd plot commands from clearing the display page and erasing everything we have done so far. Similarly, specifying “pop = FALSE” allows us to continue working in the
current plot window until we notify the grid graphics system that we are done with it by issuing our own “popViewport()” command. Specifying “legend = FALSE” tells Ord_plot and distplot not to write
the default informational legend on each plot. This is important here because, given the relatively small size of the plots generated in this two-by-two array, including the default legends would
obscure important details.
No comments:
|
{"url":"http://exploringdatablog.blogspot.com/2012/09/implementing-countsummary-procedure.html","timestamp":"2014-04-20T06:22:20Z","content_type":null,"content_length":"95600","record_id":"<urn:uuid:f2f87ab1-3ae5-4a90-ae1a-713517208018>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00425-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Laszlo's Valve Output Stage with Lundahl transformer - diyAudio
diyAudio Member
Join Date: Mar 2004
Location: Budapest, Hungary
Laszlo's Valve Output Stage with Lundahl transformer
I start a new thread with details of my Transformer/Tube Output Stage, not wanted to spoil the "Passive-Active I/V stage" thread any longer.
The topology is based on the SRPP (Shunt Regulated Push Pull) arrangement. Both tube halves operate in grounded cathode mode, and are driven push-pull. The low input impedance of the cathodes seen by
the secondary is transformed back to the primary. The impedance transformation is the turns ratio squared, so the DAC output can see very low load impedance (around 5 ohms).
The circuit is designed for two TDA1541A DACs, one is driven with inverted binary input, so they give push-pull output currents of 8 mA peak-to-peak.
The bias currents in the primary windings flow in opposite direction, so the transformer core will not be magnetized. Same is true for the secondaries.
I built the circuit with Lundahl LL1678 transformers and Mullard E88CC tubes. The measurement results at 2 V RMS output signal are (Left and Right channel):
THD = 0.03% / 0.018%
IMD (250 Hz + 7 kHz, 4:1) = 0.038% / 0.02%
S/N linear = 69 dB / 62 dB
S/N A weighted = 75.3 dB / 72.6 dB
Frequency response:
10 Hz = -0.4 dB
100 kHz = -1.5 dB
Only one half of the primary is connected to a single DAC currently (there is 2 mA bias current flowing through the coil). The 0 dB output signal from a test CD is 2.5 V RMS with the DAC connected.
The listening tests are very positive so far.
|
{"url":"http://www.diyaudio.com/forums/digital-source/100297-laszlos-valve-output-stage-lundahl-transformer.html","timestamp":"2014-04-17T03:32:09Z","content_type":null,"content_length":"77807","record_id":"<urn:uuid:7c1f1a20-f6e6-49ef-87c7-270944807d08>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00346-ip-10-147-4-33.ec2.internal.warc.gz"}
|
You Mean There's Math After Calculus?
0 Comments
As a mathematician and advocate for mathematics education, there are several reactions that I often get whenever I discuss what I do. One of the most interesting of which is the following statement,
some version of which I have heard an numerous occasions: "You do college math. Is that like really hard calculus?"
This question is interesting to me because it reflects several aspects of our cultural understanding of mathematics. The average person in our society has very little knowledge of what math looks
like after high school. We all live through the progression from Algebra I to Geometry to Algebra II. Some take a class called Pre-calculus or Trigonometry or Advanced Functions. Even fewer make it
through Calculus in high school. In any event, the high school math student lives in a world where Calculus is the pinnacle of all mathematics. Only the smartest seniors take Calculus. How could
there be math that is harder than Calculus? What would that math even look like?
The reality is that Calculus is just one more step in a long progression of subjects that together make up the body of modern mathematical knowledge. These subjects are usually taken in the
designated order because understanding the concepts taught in one course requires mastery of the concepts and techniques taught in all of the preceding courses. You have to have mastered the tools
from Geometry and Algebra II in order to have any hope of really understanding Pre-calculus. Similarly, you have to have completed a couple semesters of calculus before jumping into Differential
Equations, which often follows Calculus.
Below is a diagram laying out the progression of math courses that a typical math major will take before finishing college. Some schools will offer additional classes or call these ones by different
names, but the basics are shown below. Most graduate schools offer deeper versions of each of the top rung classes listed here, as well as very specific research courses that delve into narrow
corners of Algebra, Analysis, and Topology.
Linear Algebra is the study of vector spaces, which is a fancy name for the usual 1-, 2-, and 3- dimensional spaces that we learn about in high school. We discuss the properties inherent to such
spaces, and how they can be generalized to higher dimensions and new types of "vectors". The tools of the trade are matrices. We use matrices to efficiently solve very large systems of equations and
to create linear approximations for more complicated functions. Computer programmers use matrices for numerous applications, including most three dimensional animations. Any animation in which you
see the scenery move as if you are turning, looking up, etc. is using matrices to simulate those changes in perspective.
Differential Equations is the intersection of Calculus and the solving of equations. The equations that we learn to solve in differential equations have variables that themselves represent functions
or the rates of change of functions.
Consider a 50 gallon tank full of pure water. Suppose that we open a spigot that pours into the top of the tank a mixture that is 10 % salt by volume, at a rate of 5 gallons per minute. From a second
spigot at the bottom of the tank we extract the salt mixture at a rate of 5 gallons per minute. The volume in the tank remains constant, but the salt content changes with time. How much salt is in
the tank after 10 minutes? After 50?
Questions such as these are very easy to ask, but very difficult to answer without a solid understanding of calculus. Differential equations are the bread and butter of modern engineering. Without
the tools and strategies learned in this course we could have no fancy bridges, jet airplanes, or small hand-held electronic devices.
Proof-writing is the course in which students learn the art of writing truly rigorous math proofs. While the math content covered in a proof-writing course will change from school to school, the
emphasis is always on helping students make the transition from computation-oriented courses such as Calculus and Differential Equations to the proof-based nature of the higher level courses. This
course is generally where students first encounter math research in any meaningful way.
Abstract Algebra is the study of number systems and how they work. We discuss the structures that different types of number systems must have in order to to be consistent. We also look at exotic
examples of number systems that behave very strangely. For instance, we might examine a number system in which the multiplication operation fails to be commutative.
Real Analysis is the technical study of the details behind Calculus. So yes, this is like really hard calculus. In this course we develop rigorous proofs of the concepts of limits, derivatives, and
integrals. We then develop versions of these concepts for more abstract number systems.
Complex Analysis brings the same tools of limits, derivatives, and integrals to bear in the study of the complex numbers. This is also like really hard calculus. Complex calculus.
Topology is the study of shapes when we remove the notion of distance. Put another way, topology studies the properties of shapes that remain when we make them of rubber that we can stretch and smash
to our hearts content. We also define in very rigorous terms what we mean by geometric notions such as open and closed sets, connectedness, compactness, etc. We then have fun applying all of these
notions to exotic spaces where things do not turn out to work like we might expect them to.
Non-Euclidean Geometry is the study of geometric structures in which our basic postulates from Euclidean geometry are tweaked slightly. Exotic spaces result, and many of the theorems that we take for
granted in Euclidean geometry begin to fail. For instance, we no longer get the sum of angle in a triangle adding up to 180 degrees.
|
{"url":"http://www.brainmaker.net/1/post/2012/07/you-mean-theres-math-after-calculus.html","timestamp":"2014-04-16T13:12:16Z","content_type":null,"content_length":"22613","record_id":"<urn:uuid:9d9d9cbc-2389-44cd-ba09-48fcb5cd2d7b>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00196-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Queens, NY Algebra 2 Tutor
Find a Queens, NY Algebra 2 Tutor
...I supervised a department of more than 30 mathematics teachers in one of New York City's largest high schools for more than 20 years. I've been involved in mathematics education for my entire
career and have had more than 25 years of successful experience as a tutor of middle school, high school...
9 Subjects: including algebra 2, geometry, algebra 1, precalculus
...I have experience tutoring pre-algebra and have a bachelor's degree in physics. I have tutored pre-calculus both privately and for the Princeton Review. I have a bachelor's degree in physics.
20 Subjects: including algebra 2, English, algebra 1, grammar
I have over 20 years experience in tutoring, primarily in high school and introductory college level physics. I know that physics can be an intimidating subject, so I combine clear explanations
of the material with strategies for how to catch mistakes without getting discouraged. Keeping a good attitude can be a key part of mastering physics.
18 Subjects: including algebra 2, reading, physics, calculus
...You may wonder what is the relationship between engineering and education (such as tutoring). I will say "Yes", there is a potential relation in these two areas. As an engineer, I had to do
lots of mathematics and science from the very beginning of my life. Since then, I have studied in a group and helped others to solve difficult problems and help them with their homework and
7 Subjects: including algebra 2, English, reading, algebra 1
...My overall grade in Russian and Russian literature in high school was 5 (the highest possible grade), and I received a Gold Medal for outstanding academic performance. I continue to actively
use Russian in my everyday life. Besides, my daughter attends the School at Russian Mission in the Unite...
24 Subjects: including algebra 2, physics, GRE, Russian
Related Queens, NY Tutors
Queens, NY Accounting Tutors
Queens, NY ACT Tutors
Queens, NY Algebra Tutors
Queens, NY Algebra 2 Tutors
Queens, NY Calculus Tutors
Queens, NY Geometry Tutors
Queens, NY Math Tutors
Queens, NY Prealgebra Tutors
Queens, NY Precalculus Tutors
Queens, NY SAT Tutors
Queens, NY SAT Math Tutors
Queens, NY Science Tutors
Queens, NY Statistics Tutors
Queens, NY Trigonometry Tutors
|
{"url":"http://www.purplemath.com/Queens_NY_Algebra_2_tutors.php","timestamp":"2014-04-18T15:54:25Z","content_type":null,"content_length":"24106","record_id":"<urn:uuid:f2907b8b-cd20-4920-aaa4-a7277fc3fd8b>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00505-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Zero, the Hero & Zerona Puppet
with Treat Bag
Zero, the Hero Puppet is a great, little guy who comes to school every ten days to help us in learn how to count to 100! He also helps us with learning the concept of place value. We start on the
very first day of school. I purchased a 1-100 number poster from the teacher's supply store, and we marked off each day until we reached the 100th day. This was included as part of our opening,
calendar activities. I found that it really helped the children understand the concept of zero being a place holder, counting to 100 and place value concepts.
We also used little sticks to count the days and had a place value pocket chart to store the sticks until we had 10 sticks. There was a ones, tens and hundreds pocket on this chart (three different
pockets.) You can make your own pocket chart or use one that is available at teacher's supply stores. You can use straws or popsicle sticks, too. We would bundle the sticks with a rubber band each
time we had 10 and move them to the next pocket. When we had 10 sets of sticks in the tens pocket (100) we would celebrate the 100th day of school. We all looked forward to the 100th day of school
when Zero, the Hero needed a friend to help him since there are two zeros in 1-0-0! So, he brought his friend, Zerona--the girl puppet.
Zero, the Hero puppet
When Zero, the Hero puppet comes to school we use the poem:
Zerona Puppet
Zerona is the puppet who helps Zero! She comes to help him when he needs two zeros on the 100th day of school. She can be the wife, friend or sister for Zero, the hero. She has a crown with jewels
and a sparkly cape! Both puppets are available for purchase. These hand puppets are made with felt and are approximately 13 ½" x 9 ½". You will also receive a laminated poster like the one above -- 8
½" x 11" with Zerona. Each puppet is $25.
11 x 14 inches. The bag is available for purchase for $25.
There is a 10% discount if you would like to purchase Zero, Zerona and the Zero, the Hero bag! See below.
10% Discount for purchasing all three---- Zero, the Hero puppet, Zerona Puppet and the Treat Bag!
These little poems were posted on line years ago, and I don't have an author of the poems to give due credit!
Zero, the Hero Poems
I'm Zero the Hero and I'm here to say,
"Hooray, hooray on your 10th day!"
Congratulations on your very first zero.
That's the favorite number of Zero the Hero.
You've put a zero band around 10 straws.
And now we need lots of applause.
I'm Zero the Hero - Let's cheer today.
Hooray, hooray for our 10th day!
Good-by for today, here's a treat that's round and cool.
I'll be back in another 10 days of school.
I'm Zero the Hero and I'm here to say,
"Hooray, hooray on your 20th day!"
Congratulations on your 2nd zero.
That's the favorite number of Zero the Hero.
You have 2 zero bands and 2 bundles of straws.
And now we need lots of applause.
The 10's cup has 2 bundles - that's plenty.
Now we can count by 10s to 20.
I'm Zero the Hero - Let's cheer today.
Hooray, hooray for our 20th day!
Good-by for now, here's a round treat again
Can you count 20 of them?
I'm Zero the Hero and I'm here to say,
"Hooray, hooray, on your 30th day!"
Congratulations on your 3rd zero.
That's the favorite number of Zero the Hero.
You have 3 zero bands and 3 bundles of straws.
And now we need lots of applause.
The 10's cup has 3 bundles - WOW!
Let's count by 10s to 30 - right now!
I'm Zero the Hero - Let's cheer today
Hooray, hooray for our 30th day!
Good-by for now, here's a treat from me.
It's round like a zero, what can it be?
I'm Zero the Hero and I'm here to say,
"Hooray, hooray on your 40th day!"
Congratulations on your 4th zero.
That's the favorite number of Zero the Hero.
You have 4 zero bands and 4 bundles of straws.
And now we need lots of applause.
The 10's cup has 4 bundles - WOW!
Let's count by 10s to 40 - right now!
I'm Zero the Hero - Let's cheer today
Hooray, hooray for our 40th day!
Good-by for now, here's a treat from me.
It's round like a zero, what can it be?
I'm Zero the Hero and I'm here to say,
"Hooray, hooray on your 50th day!"
Congratulations on your 5th zero.
That's the favorite number of Zero the Hero.
You have 5 zero bands and 5 bundles of straws.
And now we need lots of applause.
The 10's cup has 5 bundles - WOW!
Let's count by 10s to 50 - right now!
I'm Zero the Hero - Let's cheer today
Hooray, hooray for our 50th day!
Good bye for now, here's a treat from me.
It's round like a zero, what can it be?
I'm Zero the Hero and I'm here to say,
"Hooray, hooray on your 60th day!
Congratulations on your 6th zero
That's the favorite number of Zero the Hero.
You have 6 zero bands and 6 bundles of straws.
And now we need lots of applause.
The 10's cup has 6 bundles - WOW!
Let's count by 10s to 60 - right now!
I'm Zero the Hero - Let's cheer today
Hooray, hooray for our 60th day!
Good-by for now, here's a treat from me.
It's round like a zero, what can it be?
I'm Zero the Hero and I'm here to say,
"Hooray, Hooray on your 70th day!"
Congratulations on your 7th zero.
That's the favorite number of Zero the Hero.
You have 7 zero bands and 7 bundles of straws.
And now we need lots of applause.
The 10's cup has 7 bundles - WOW!
Let's count by 10s to 70 - right now!
I'm Zero the Hero - Let's cheer today
Hooray, hooray for our 70th day!
Good-by for now, here's a treat from me.
It's round like a zero, what can it be?
I'm Zero the Hero and I'm here to say,
"Hooray, hooray on your 80th day!"
Congratulations on your 8th zero.
That's the favorite number of Zero the Hero.
You have 8 zero bands and 8 bundles of straws.
And now we need lots of applause.
The 10's cup has 8 bundles - WOW!
Let's count by 10s to 80 - right now!
I'm Zero the Hero - Let's cheer today
Hooray, hooray for our 80th day!
Good-by for now, here's a treat from me.
It's round like a zero, what can it be?
I'm Zero the Hero and I'm here to say,
"Hooray, hooray on your 90th day!"
Congratulations on your 9th zero.
That's the favorite number of Zero the Hero.
You have 9 zero bands and 9 bundles of straws.
And now we need lots of applause.
The 10's cup has 9 bundles - WOW!
Let's count by 10s to 90 - right now!
I'm Zero the Hero - Let's cheer today
Hooray, hooray for our 90th day!
Good-by for now, here's a sweet treat.
See you next time. 100 is neat!
I'm Zero the Hero and I'm here to say,
"Hooray. Hooray, on your 100th day!"
Congratulations on your 10th zero.
That's the favorite number of Zero the Hero.
The 10's cup has 10 bundles - WOW!
Let's count by 10s to 100 - right now!
I'm Zero the Hero - Let's cheer today
Hooray, hooray for our 100th day!
Good-by my friends, here's the last treat.
See you next year. 100 is neat!
|
{"url":"http://www.kinderteacher.com/ZeroTheHero.htm","timestamp":"2014-04-20T05:42:52Z","content_type":null,"content_length":"41488","record_id":"<urn:uuid:403bbb26-50d0-48ff-a4cf-e68924960f52>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00323-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Computer-assisted Learning
Concepts & Techniques
Tell me and I'll forget.
Show me and I might remember.
But involve me and I will understand.
Computer-assisted learning is to convey a vast amount of information in a very short period of time. It is a powerful method of reinforcing concepts and topics first introduced to you through
your textbook, and discussion in the classroom. Computer-assessed learning enables you in a powerful way to comprehend complex concepts.
To search the site, try Edit | Find in page [Ctrl + f]. Enter a word or phrase in the dialogue box, e.g. "parameter " or "linear " If the first appearance of the word/phrase is not what you are
looking for, try Find Next.
Companion Sites:
The Value of Performing Experiment: If the learning environment is focused on background information, knowledge of terms and new concepts, the learner is likely to learn that basic information
successfully. However, this basic knowledge may not be sufficient to enable the learner to carry out successfully the on-the-job tasks that require more than basic knowledge. Thus, the
probability of making real errors in the business environment is high. On the other hand, if the learning environment allows the learner to experience and learn from failures within a variety of
situations similar to what they would experience in the "real world" of their job, the probability of having similar failures in their business environment is low. This is the realm of
simulations-a safe place to fail.
The appearance of management science software is one of the most important events in decision making process. OR/MS software systems are used to construct examples, to understand the existing
concepts, and to discover useful managerial concepts. On the other hand, new developments in decision making process often motivate developments of new solution algorithms and revision of the
existing software systems. OR/MS software systems rely on a cooperation of OR/MS practitioners, designers of algorithms, and software developers.
The major change in learning this course over the last few years is to have less emphasis on strategic solution algorithms and more on the modeling process, applications, and use of software.
This trend will continue as more students with diverse backgrounds seek MBA degrees without too much theory and mathematics. Our approach is middle-of-the-road. It does not have an excess of
mathematics nor too much of software orientation. For example, we lean how to formulate problems prior to software usage. What you need to know is how to model a decision problem, first by hand
and then using the software to solve it. The software should be used for two different purposes.
Personal computers, spreadsheets, professional decision making packages and other information technologies are now ubiquitous in management. Most management decision-making now involves some form
of computer output. Moreover, we need caveats to question our thinking and show why we must learn by instrument. In this course, the instrument is your computer software package. Every student
taking courses in Physics and Chemistry does experimentation in the labs to have a good feeling of the topics in these fields of study. You must also perform managerial experimentation to
understand the Management Science concepts and techniques.
Learning Objects: My teaching style deprecates the 'plug the numbers into the software and let the magic box work it out' approach.
Computer-assisted learning is similar to the experiential model of learning. The adherents of experiential learning are fairly adamant about how we learn. Learning seldom takes place by rote.
Learning occurs because we immerse ourselves in a situation in which we are forced to perform. You get feedback from the computer output and then adjust your thinking-process if needed.
Unfortunately, most classroom courses are not learning systems. The way the instructors attempt to help their students acquire skills and knowledge has absolutely nothing to do with the way
students actually learn. Many instructors rely on lectures and tests, and memorization. All too often, they rely on "telling." No one remembers much that's taught by telling, and what's told
doesn't translate into usable skills. Certainly, we learn by doing, failing, and practicing until we do it right. The computer assisted learning serve this purpose.
1. Computer assisted learning is a collection of experimentation (as in Physics lab to learn Physics) on the course software package to understand the concepts and techniques. Before using the
software, you will be asked to do a simple problem by hand without the aid of software. Then use the software to see in what format the software provides the solution. We also use the
software as a learning tool. For example, in order to understand linear programming sensitivity analysis concepts, you will be given several managerial scenarios to think about and then use
the software to check the accuracy of your answers.
2. To solving larger problems which are hard to do by hand.
Unfortunately, the first objective is missing in all management science/operations research textbooks.
What is critical and challenges for you is to lean the new technology, mainly the use of software within a reasonable portion of your time. The learning curve of the software we will be using is
very sharp.
We need caveats to question our thinking and show why we must learn-to by instrument that in this course is your computer package. Every student taking courses in Physics and Chemistry does
experimentation in the labs to have a good feeling of the topics in these fields of study. You must also perform experimentation to understand the Management Science concepts. For example, you
must use your computer packages to perform the "what if" analysis. Your computer software allows you observe the effects of varying the "givens".
You will be engaged in thinking process of building models rather than how to use the software to model some specific problems. Software is a tool, it cannot substitute for your the thinking
process. We will not put too much focus on the software at the expense of leaning the concepts. We will lean step-by-step problem formulation, and managerial interpretation of the software
Managerial Interpretations: The decision problem is stated by the decision maker often in some non-technical terms. When you think over the problem, and finding out what module of the software to
use, you will use the software to get the solution. The strategic solution should also be presented to the decision maker in the same style of language which is understandable by the decision
maker. Therefore, just do not give me the printout of the software. You must also provide managerial interpretation of the strategic solution in some non-technical terms.
Computer Package: WinQSB
Learn by Doing: We will use WinQSB package as a computer-assisted learning tool to gain a good "hands-on" experience on concepts and techniques used in this course. These labs experimentation
will enhance your understanding of the technical concepts covered in this course.
The QSB (Quantitative Systems for Business) is developed and maintains by Yih-Long Chang. This software package contains the most widely used problem-solving algorithms in Operations Research and
Management Science (OR/MS). The WinQSB is the Windows version of the QSB software package runs under the CD-ROM Windows. There is no learning-curve for this package, you just need a few minutes
to master its useful features.
The WinQSB Decision Support Software for MS/OM is available from the John Wiley & Sons publisher, ISBN 0-471-40672-4, 2003.
Further Reading:
Chang Y-L., QSB+: Quantitative Systems for Business Plus, Prentice Hall, 1994.
This course uses a professional software called WinQSB, available at: WinQSB Software.
WinQSB Installation Instructions
1. Create a folder (directory) named WinQSB
2. Open the CD-ROM files, and save this file to the folder created in step 1.
3. Run the WinQSB.exe program first (You must execute these programs in the proper sequence). Go to the file location (where you saved it step 2) and double click (or click on file-open) to
execute the file. Follow the prompts and extract the files to the WinQSB folder that you created in step 1.
4. Reboot your system.
5. You will now have a list of files (both executable and support files) in the WinQSB folder that you created in step 1. To use the Linear Program model, for example, click (or double-click) on
the file called LP-ILP.exe.
If you have no access to any computer outside, you may use the university computer network. The QSB is available on the the University NT server. To reach the system you need an NT account. To
obtain your NT account see the Technical Assistance (TA) at the lower level of Business Center. After obtaining your username and a password then you can access the NT system. To reach the QSB,
click on Start, choose the Business School Applications, then click on the Shortcut to QSB, or QSB. Then, pick-up the application you wish. All QSB applications are therein.
How to Use LINDO and Interpret Its Output for Linear Programs
Computer always solves real world linear programs mostly using the simplex method. The coefficients of the objective function are known as cost coefficients (because historically during World War
II, the first LP problem was a cost minimization problem), technological coefficients, and the RHS values. This is a perfect way to learn the concepts of sensitivity analysis. As a user, you have
the luxury of viewing numerical results and comparing them with what you expect to see.
The widely used software for LP problems is the Lindo package. A free Windows version can be downloaded right from LINDO's Home page at LINDO, http://www.lindo.com.
Caution! Before using any software, it is a good idea to check to see if you can trust the package.
Here is an LP Software Guide for your review.
Lindo is a popular software package, which solves linear programs. The LP/ILP application of WinQSB does the same operations as Lindo does, but in a much easier to use fashion.
The name LINDO is an abbreviation of Linear INteractive Discrete Optimization. Here the word "discrete" means jumping from one basic feasible solution (BFS) to the next one rather than crawling
around the entire feasible region in search of the optimal BFS (if it exists).
Like almost all LP packages, including WinQSB, Lindo uses the simplex method. Along with the solution to the problem, the program will also provide ordinary sensitivity analysis of the Objective
Function Coefficients (called Cost Coefficients) and the Right-hand-side (RHS) of the constraints. Below is an explanation of the output from the LINDO package.
Suppose you wish to run the Carpenter's Problem. Bring up the LINDO (or your WinQSB) package. Type in the current window as follow:
MAX 5X1 + 3X2
S.T. 2X1 + X2 < 40
X1 + 2X2 < 50
1. The objective function should not contain any constant. For example, Max 2X1 + 5 is not allowed.
2. All variables must appear in the left side of the constraints, while the numerical values must appear on the right side of the constraints (that is why these numbers are called the RHS
3. All variables are assumed to be nonnegative. Therefore, do not type in the non-negativity conditions.
If you wish to get all Simplex Tableaux, then
□ Click on "Reports" and then choose "Tableau", then click on "Solve" and choose "Pivot" then choose "OK", "Close", "Cancel", continue in this manner until you get the message "Do? Range
(Sensitivity) Analysis". Select "Yes", if you wish. After minimizing the current window, you will see the output that you may print for your managerial analysis.
□ Otherwise, click on "Solve", then choose "Solve".
It is good practice to copy the LP problem from your first window and then paste it at the top of the output page.
On the top of the page is the initial tableau, and across the top of tableau are the variables. The first row in the tableau is the objective function. The second row is the first constraint. The
third row is the second constraint, and so on until all constraints are listed in the tableau.
Following the initial tableau is a statement that indicates the entering variable and the exiting variable. The exiting variable is expressed as which row the entering variable will be placed.
The first iteration tableau is printed next. Entering statements and iterations of the tableau continue until the optimum solution is reached.
The next statement, `LP OPTIMUM FOUND AT STEP 2' indicates that the optimum solution was found in iteration 2 of the initial tableau. Immediately below this is the optimum of the objective
function value. This is the most important piece of information that every manager is interested in.
In many cases you will get a very surprising message: "LP OPTIMUM FOUND AT STEP 0." How could it be step 0. Doesn't first have to move in order to find out a result.....? This message is very
misleading. Lindo keeps a record of any previous activities performed prior to solving any problem you submit in its memory. Therefore it does not show exactly how many iterations it took to
solve your specific problem. Here is a detailed explanation and remedy for finding the exact number of iterations: Suppose you run the problem more than once, or solve a similar problem. To find
out how many iterations it really takes to solve any specific problem, you must quit Lindo and then re-enter, retype, and resubmit the problem. The exact number of vertices (excluding the origin)
visited to reach the optimal solution (if it exists) will be shown correctly.
Following this is the solution to the problem. That is, the strategy to set the decision variables in order to achieve the above optimal value. This is stated with a variable column and a value
column. The value column contains the solution to the problem. The cost reduction associated with each variable is printed to the right of the value column. These values are taken directly from
the final simplex tableau. The value column comes from the RHS. The reduced cost column comes directly from the indicator row.
Below the solution is the `SLACK OR SURPLUS' column providing the slack/surplus variable value. The related shadow prices for the RHS's are found to the right of this. Remember: Slack is the
leftover of a resource and a Surplus is the excess of production.
The binding constraint can be found by finding the slack/surplus variable with the value of zero. Then examine each constraint for the one which has only this variable specified in it. Another
way to express this is to find the constraint that expresses equality with the final solution.
Below this is the sensitivity analysis of the cost coefficients (i.e., the coefficients of the objective function). Each cost coefficient parameter can change without affecting the current
optimal solution. The current value of the coefficient is printed along with the allowable increase increment and decrease decrement.
Below this is the sensitivity analysis for the RHS. The row column prints the row number from the initial problem. For example the first row printed will be row two. This is because row one is
the objective function. The first constraint is row two. The RHS of the first constraint is represented by row two. To the right of this are the values for which the RHS value can change while
maintaining the validity of shadow prices.
Note that in the final simplex tableau, the coefficients of the slack/surplus variables in the objective row give the unit worth of the resource. These numbers are called shadow prices or dual
prices. We must be careful when applying these numbers. They are only good for "small" changes in the amounts of resources (i.e., within the RHS sensitivity ranges).
Creating the Non-negativity Conditions (free variables): By default, almost all LP solvers (such as LINDO) assume that all variables are non-negative.
To achieve this requirement, convert any unrestricted variable Xj to two non-negative variables by substituting y - Xj for every Xj. This increases the dimensionality of the problem by only one
(introduce one y variable) regardless of how many variables are unrestricted.
If any Xj variable is restricted to be non-positive, substitute - Xj for every Xj. This reduces the complexity of the problem.
Solve the converted problem, and then substitute these changes back to get the values for the original variables and optimal value.
Numerical Examples
Maximize -X1
subject to:
X1 + X2 ³ 0,
X1 + 3X2 £ 3.
The converted problem is:
Maximize -y + X1
subject to:
-X1 - X2 + 2y ³ 0,
-X1 - 3X2 + 4y £ 3,
X1 ³ 0,
X2 ³ 0,
and y ³ 0.
The optimal solution for the original variables is: X1 = 3/2 - 3 = -3/2, X2 = 3/2 - 0 = 3/2, with optimal value of 3/2.
Solving Unrestricted Variables by LINDO
Suppose you wish to solve the following LP model:
Maximize X1
Subject To:
X1 + X2 > 0
2X1 + X2 < 2
X1 > 0
X2 < 0
The LINDI input is:
Maximize X1
X1 + X2 > 0
2X1 + X2 < 2
X1 > 0
X2 < 0
free X1
free X2
Solution is (X1 = 2, X2 = -2) with optimal value of 2.
For details on the solution algorithms, visit the Web site Artificial-Free Solution Algorithms, example N0. 7 therein.
Integer LP Programs by LINDO
Suppose in the Carpenterâ s Problem ever table needs four chairs; then the LP formulation is:
Max 5X1+3X2
2X1+X2 £ 40
X1 + 2X2 £50
4X1 - X2 = 0
X1 ³0
X2 ³0
GIN X1
GIN X2
GIN stands for general integer variable.
The optimal solution is (X1 = 5, X2 = 20) with optimal value of 85.
Special case of binary variables (X= 0 or 1) is also permitted in LINDO, the command to make the variable X a binary variable is INT X1.
Computer Implementations with the WinQSB Package
Use the LP/ILP module in your WinQSB package for two purposes: to solve large problems, and to perform numerical experimentation for understanding concepts we have covered in the LP and ILP
Variable Type: Select the variable type from the "Problem Specification" screen (the first screen you see when introducing a new problem); for linear programming use the default "Continuous"
Input Data Format: Select the input data format from the "Problem Specification" screen. Usually, it is preferred to use the Matrix format to input the data. In the Normal format the model
appears typed in. This format may be found more convenient when solving a large problem with many variables. You can go back and forth between the formats, selecting the "Switch to theâ ¦" from
the Format menu.
Variable/Constraint Identification: It is a good idea to rename variables and constraints to help identify the context they represent. Changing the names of variables and constraints is done in
the Edit menu.
Best Fit: Using the best fit from the Format menu lets each column have its own width.
Solving for the Optimal Solution (if it exists): Select Solve the problem from the Solve and analyze menu, or use the "solve" icon at the top of the screen. The run returns a "Combined Report"
that gives the solution and additional output results (reduced costs, ranges of optimality, slack/surplus, ranges of feasibility, and shadow prices).
Solving by the Graphic Method: Select the Graphic method from the Solve and Analyze menu (can only be used for a two-variable problem.) You can also click the graph icon at the top the screen.
You can re-scale the X-Y ranges after the problem has been solved and the graph is shown. Choose the Option menu and select the new ranges from the drop down list.
Alternate Optimal Solutions (if they exist): After solving the problem, if you are notified that "Alternate solution exists!!", you can see all the extreme point optimal solutions by choosing the
Results menu and then select Obtain alternate optimal. Also visit Multiple Solutions section on this Web site for some warnings.
Use the "Help" file in WinQSB package to learn how to work with it.
For entering problem into the QSB software; for a constraint such as X1 + X2 ³ 50, the coefficient is 1 and should be entered that way in the software. For any variables that are not used in
that particular constraint (for example if there was X3 in the problem but it was not part of the above constraint), just leave the cell blank for that constraint.
You can change the direction of a constraint easily by clicking on £ (or ³) cell.
To construct the dual of a given problem click on Format, then select Switch to the Dual Form.
If you are not careful you may have difficulty with entering LP problem in WinQSB. For example, in a given problem a few of your constraints may have variable on the right hand side (RHS). You
cannot enter variable name on the RHS cell otherwise you keep getting an infeasibility response. Only numbers can be entered on the RHS. For example, for the constraint X2 + X4 £ .5X5 one must
write it first in the form of X2 + X4 - .5X5 £ 0, then using any LP package including your QSB.
Unfortunately, in some browsers the Graphical Methods of WinQSB may not be available. However, one may, e.g., use the following JavaScript instead.
Managerial Interpretation of the WinQSB Combined Report
The LP/ILP module in WinQSB, like any other popular Linear programming software packages solves large linear models. Most of the software packages use the modified Algebraic Method called the
Simplex algorithm. The input to any package includes:
1. The objective function criterion (Max or Min).
2. The type of each constraint.
3. The actual coefficients for the problem.
The combine report is a solution report consists of both the solution to the primal (original) problem and it Dual.
The typical output generated from linear programming software includes:
1. The optimal values of the objective function.
2. The optimal values of decision variables. That is, optimal solution.
3. Reduced cost for objective function value.
4. Range of optimality for objective function coefficients. Each cost coefficient parameter can change within this range without affecting the current optimal solution.
5. The amount of slack or surplus on each constraint depending on whether the constraint is a resource or a production constraint.
6. Shadow (or dual) prices for the RHS constraints. We must be careful when applying these numbers. They are only good for "small" changes in the amounts of resources (i.e., within the RHS
sensitivity ranges).
7. Ranges of feasibility for right-hand side values. Each RHS coefficient parameter can change within this range without affecting the shadow price for that RHS.
The following are detailed descriptions and the meaning of each box in the WinQSB output beginning in the upper left-hand corner, proceeding one row at a time. The first box contains the decision
variables. This symbol (often denoted by X1, X2, etc.) represents the object being produced. The next box entitled "solution value" represents the optimal value for the decision variables, that
is, the number of units to be produced when the optimal solution is used. The next box entitled "unit costs" represents the profit per unit and is the cost coefficient of the objective function
The next box "total contribution", is the dollar amount that will be contributed to the profit of the project, when the total number of units in the optimal solution is followed. This will
produce the optimal value. The next bow is the "Reduced Cost", which is really the increase in profit that would need to happen if one were to begin producing that item, in other words the
product, which is currently is not produce becomes profitable to produce.
The next box over is the "allowable minimum" and "allowable maximum", which shows the allowable change in the cost coefficients of that particular item that can happen and still the current the
optimal solution remains optimal. However, the optimal value may change if any cost coefficient is changed but the optimal solution will stay the same if the change is within this range. Remember
that these results are valid for one-change-at-a-time only and may not be valid for simultaneous changes in cost coefficients.
The next line is the optimal value, i.e., and the value of objective function evaluated at optimal solution strategy. This line shows the maximum (or minimum) value that can be derived under the
given the optimal strategy.
The next line down contains the constraints; often C1, C2, etc. denote the constraints. Starting on the left-hand side the first box contains the symbol C1 that represents the first constraint.
The next box is the constraint value. That is, the left-hand-side (LHS) of each C1 evaluated ate the optimal solution. The next box over is the "direction box", which is either greater than or
equal to / less than or equal to, which are the direction of the each constraint. The next box is the right hand side value, which states the value that is on the right hand side of each
The next box is the difference between RHS and LHS numerical values called the slack or surplus box. If it is slack, it will have a less than or equal to sign associated with it, which means
there is leftover of resources/raw material. If there is a surplus it will have a greater than or equal to sign associated with it, which means that there are over production. Next box over is
the shadow price. If any slack or surplus is not zero then its shadow price is zero, however the opposite statement may not be correct. A shadow price is the additional dollar amount that will be
earned if the right hand side constraint is increased by one unit while remaining within the sensitivity limits for that RHS.
The next two boxes show the minimum and maximum allowable for the right hand side constraints. The first box (minimum box) shows the minimum value that the RHS constraint can be moved to and
still have the same current shadow price. The second box shows the maximum number that the constraint can be moved to and still have the same current shadow price. Recall that the shadow prices
are the solution to the dual problem. Therefore, the allowable change in the RHS suggest how far each RHS can go-up or down while maintaining the same solution to the dual problem. In both cases
the optimal solution to the primal problem and the optimal value may change.
Shadow Price Might Have Wrong Sign
Some LP software packages do not obey strict duality for both maximization and minimization. Therefore one has to take that into account adjust the signs accordingly. This can be done by changing
the RHS by â smallâ amount and finding the new optimal value, then using the definition of shadow price as the rate of change in the optimal value with respect to the change in the RHS.
For example, considering the following LP with a unique optimum solution:
Minimize 18X[1] + 10X[2]
Subject to:
12X[1] + 10X[2] ³ 120000
10X[1] + 15X[2] £ 150000
X[1], X[2] ³ 0
Running this problem by LINDO, the final report gives the shadow prices U[1] = -2.125, and U[2] = 0.75, while the correct ones are U[1] = 2.125, and U[1] = - 0.75.
Another example:
Min 3X1 - 5x2
2x1 + x2 £ 40
x1 + 2x2 £ 50
X[1], X[2] ³ 0
Soling the problem by lindo, we get (X1 = 0, X2 = 25), with optimal value of -125, and shadow prices (0. 2.5). The correct shadow prices are (0. -2.5).
Further Readings:
Arsham H., Foundation of Linear Programming: A Managerial Perspective From Solving System of Inequalities to Software Implementation, International Journal of Strategic Decision Sciences, 3(3),
40-60, 2012.
Solving LP Problems by Excel
In solving an LP problem by the Solver module in the Excel, it is assumed that you have a good working knowledge and familiarity with the Excel. The following is the steps for solving LP
1. Before you begin using Solver, you should first enter the problem's parameters.
2. Click Tools, then click Solver.
3. In the Set Target Cell box, enter the cell you want to maximize, minimize or set to a specific value (Point to cell E18.) $E$18 should now appear in the box (Note: cell references should
always be absolute).
4. Click the Max option button to indicate that you want Solver to maximize the Total Profit.
5. In the By Changing Cells box, enter the cell or range of cells that Excel can change to arrive at the solution (Highlight cells C15:D15).
6. Click the Subject to the Constraints box, then click the Add... button to add the constraints: For exanple E17<=40 and E19<=50 and C15:D15>=C14:D14.
7. Click the OK button.
8. Click the Solve button.
9. Click the OK button.
Details: One can also us Excel to solve linear programming problems using the Solver tool. Before we can use the solver tool, we need to set up our spreadsheet. One needs to include three
different things for solver to run:
1. Cells that contain values of the decision variables.
2. A cell which will contain the value of the objective function.
3. Cells which contain the value of all of the functional constraints in the problem.
Then we will construct a cell that contains the setup for the sample problem. Once the cells have been setup, select solver under the Tools menu option (if Solver does not show up, go to addins
but make sure that the box marker solver has been selected. In the popup window you see the box labeled Solver Parameters. In the box labeled Set Target Cell: select the cell which contains the
value of the objective function. If your problem is a maximization problem, be sure the button label Max has been selected, otherewise select Min. This tells Excel that it is going to maximizes
the value in the Target Cell. In the box labeled "By Changing Cells" select the cells which contain the values of the decision variables. These are the cells which Excel is going to change to
maximize the Target Cell.
The only thing left to do is to setup the constraints for the problem. Click the Add button to include a new constraint.
Let us begin by including the non-negativity constraints. Under Cell Reference select a cell which contains a value of a decision variable. Next, select the option ">=" then in the constraint box
type 0. Select OK, you will now see the first non-negativity constraint included in the box labeled "Subject to the Constraints". Repeat this process for all non-negativity constraints.
Let us turn to the functional constraints. Click the add button again. Now Under the cell reference, select a cell which contains the value of one of the function constraints. Select the
appropriate sign and type in the desired value. Click Ok. Repeat this for all of the functional constraints.
Once you have finished inputting the constraints, simply hit the solve button. The next thing you should see is another pop-up which tells you that Solver has found a solution. Simply hit OK. You
should also notice that optimal solution will be reported in the cells which store the value of the decision rules, and the maximized value will be in the cell which contains the value of the
objective function.
Computer Implementations of Network Models: WinQSB Package
1. Prepare a network model for the problem. (Note: you don't have to have a formal model for the data entry. You may modify it along with the process.)
2. Select the command New Problem to start a new problem. The program will display a form to specify the problem. Click the problem type, objection function criterion, and the matrix form for
the data entry. Also enter the number of nodes and the problem name. Press OK button when specification is done. A spreadsheet will appear for entering the network connection.
3. Enter the arc or connection parameter/coefficient on the spreadsheet (matrix). Here are some tips:
4. If there is no connection between two nodes, you may leave the corresponding cell empty or enter "M" for infinite cost.
5. Use Tab or arrow keys to navigate in the spreadsheet.
6. You may click or double click a data cell to select it. Double clicking the light blue entry area above the spreadsheet will high-light the data entry.
7. Click the vertical or horizontal scroll bar, if it is shown, to scroll the spreadsheet.
8. (Optional) Use the commands from Edit Menu to change the problem name, node names, problem type, objective function criterion, and to add or delete nodes. You may also change the flow bounds
from Edit Menu if the problem is a network flow problem.
9. (Optional) Use the commands from Format Menu to change the numeric format, font, color, alignment, row heights, and column widths. You may also switch to the graphic model from the Format
10. (Optional, but important) After the problem is entered, choose the command Save Problem As to save the problem.
NonLinear Programming Implementations with the WinQSB
This program, NonLinear Programming (NLP), solves nonlinear objective functions with or without constraints. Constraints may also be nonlinear. Specific capabilities include:
1. Solve the single and multiple variable unconstrained problems by line search method
2. Solve the constrained problems by penalty function method
3. Allow to analyze an assigned solution
4. Analyze constraint violation for the constrained problems
5. Perform the constraint function analysis with graph and table
6. Perform the objective function analysis with graph and table
7. Enter the objective function and/or constraints in algebraic functions
8. Enter the problem in spreadsheet format
To specify an NLP problem, here is the procedure:
1. Enter the problem title, which will be part of the heading for the later windows.
2. Enter the number of variables.
3. Enter the number of constraints. If you enter 0 constraint, the problem is an unconstrained problem.
4. Click or choose the objective criterion of either maximization or minimization.
5. If the specification is complete, press the OK button for entering the problem model. Otherwise, press the Cancel button. The Help button is for this help message.
Decision Analysis Implementation with the WinQSB Package
The Da.exe "Decision Analysis" module in your WinQSB package is used for two distinct purposes: to solve large problems, and to perform numerical experimentation. Numerical experimentation
including what-if analysis of the payoff matrix and the subjective probability assignments to the states of nature.
The following functions are available in the Da.exe module:
Bayesian Analysis: Select this option from the Problem Specification screen to input prior probabilities and conditional probabilities (probability of an indicator value given a state of nature).
Then, press the 'solve' icon to obtain the posterior probabilities.
Decision Trees: You must draw the decision tree first to number all nodes, including the terminal nodes. These numbers become the node IDs, when building the decision tree within the program.
When you are ready to enter the data, select the option 'Decision Tree Analysis' from the 'Problem Specification' screen.
For each node, you will indicate the number of nodes immediately connected to it (type <node number>,â ¦, <node number>).
Mistakes may be corrected by directly typing the changes into the proper cells.
Business Forecasting Implementation with the WinQSB
To enter a forecasting problem, here is the general procedure:
1. For a time series problem, prepare the historical data; for a linear regression problem, prepare the data for multiple factors.
2. Select the command New Problem to specify the problem. Choose the appropriate problem type and enter the scope of the data. See Problem Specification.
3. Enter the historical data (time series) or factor data (regression) on the spreadsheet. If it is a regression problem, you may want to change the factor or variable name before entering the
data. Use the command Variable Name from the Edit Menu to change the variable names.
4. (Optional) Use the commands from Format to change the numeric format, font, color, alignment, row heights, and column widths.
5. (Optional, but important) After the problem is entered, choose the command Save Problem As to save the problem.
To perform forecasting for a time series data, here is the general procedure:
1. If the problem is not entered, use the procedure How to Enter a Problem to enter the problem.
2. For a general good practice, you may want to save the problem by choosing the command Save Problem As before solving it.
3. Select the command Perform Forecasting. The program will bring up a form for setting up the forecasting. See Perform Forecasting for detail.
4. After the forecasting is done, the result will be shown. You may choose the command Show Forecasting in Graph from the Results Menu to display graphical result.
To show time series forecasting in graph, here is the general procedure:
1. Use the procedure How to Perform Time Series Forecasting to perform forecasting. Note that you may specify to retain the previous forecasting result for comparison.
2. After the forecasting is done, the result will be shown. Choose the command Show Forecasting in Graph from the Results Menu to display graphical result. Note that you may change the graph
range by using the command Change Range.
To perform a linear regression, here is the general procedure:
1. If the problem is not entered, use the procedure How to Enter a Problem to enter the problem.
2. For a general good practice, you may want to save the problem by choosing the command Save Problem As before solve it.
3. Select the command Perform Linear Regression. The program will bring up a form for setting up the regression. See Perform Linear Regression for selecting dependent and independent variables.
4. After the regression is done, the summarized result will be shown. You may choose the commands from the Results Menu to display other related result.
To perform estimation or prediction in linear regression, here is the general procedure:
1. Use the procedure How to Perform Linear Regression to perform regression.
2. After the regression is done, the result will be shown. Choose the command Perform Estimation and Prediction. The program will bring up a form for specifying the significance level and
entering the values for independent variables. See Perform Estimation and Prediction for detail.
To show a regression line in linear regression, here is the general procedure:
1. Use the procedure How to Perform Linear Regression to perform regression.
2. After the regression is done, the result will be shown. Choose the command Show Regression Line from the Results Menu. The program will bring up a form for specifying the x-axis independent
variable and entering the values for other independent variables. See Show Regression Line for detail. Note that after the line is shown, you may change the graph range by using the command
Change Range.
Create a Graph/Chart From A Spreadsheet Data:
1. Select an area of data from the current spreadsheet. See Select Area for how to select a data area.
2. Select the command Chart/Graph. The program creates a 3D column chart for the selected data.
3. Using the Gallery Menu, you can change to different types of charts.
4. Using the Data Menu, you can change to titles and data for the chart.
Create a Graph/Chart From Other Data:
1. Select the command Chart/Graph. The program creates a 3D column chart with random data.
2. Using the Gallery Menu, you can change to different types of charts.
3. Using the Data Menu, you can change to titles and data for the chart.
How to Import/Export Data from WinQSB to Other Applications
1. For input data select the copy the data from Word.Doc, block the data area in QSB and then paste therein.
2. Save solution (or problem) as file.llp in WinQSB. Open file.llp, in directory, using Notepad. Copy the text and paste it into Excel. Select the area and add grid in "Print Preview". Again,
select the area and paste it into Word.
3. Use "Print Screen" button on the keyboard. Paste it into Word.
II) Guideline from WinQSB's help menu
How to Export Data to Other Applications
You may export the LP-ILP problem data to other applications. The following two methods can achieve the task.
Through Clipboard
1. Click the left-top cell of the LP-ILP problem (which selects the whole problem).
2. Choose the command Copy or its icon to copy the whole problem to the Clipboard.
3. Switch to the target application and use its Paste command to paste the problem to the target application.
Through File
1. Choose the command Save Problem As or its icon to save the problem to a file.
2. Switch to the target application and open the saved data file with the text format.
III) Export NET (i.e. transportation) problems
You may export the NET problem data to other applications. The following two methods can achieve the task.
Through Clipboard
1. Click the left-top cell of the NET problem (which selects the whole problem).
2. Choose the command Copy or its icon to copy the whole problem to the Clipboard.
3. Switch to the target application and use its Paste command to paste the problem to the target application.
Through File
1. Choose the command Save Problem As or its icon to save the problem to a file.
2. Switch to the target application and open the saved data file with the text format.
More on Decision Analysis
Run the program module for "Decision Analysis" of WinQSB. Select "New problem" from the pull down menu of "File" in the menu bar.
Change the name of decisions and state of the nature to fit this problem. You can do so by selecting appropriate options ("State of the Nature Name", and "Decision Alternative Name") of the pull
down menu of "Edit" in the menu bar. Then enter the problem data. Save the problem.
Select the "Solve and Analysis" option in the menu bar. A window will appear, accept all the default selections, and then click on OK. The software will solve the problem and provides a summary
of all decisions made using different criterion.
You can select three more options from the pull down menu of "Results" in the menu bar. These selections will provide you with details of payoff analysis using different criteria (Display 4),
regret table calculations, and a decision tree graph. Notice that if you select decision tree, a window (Decision Tree Setup) will appear that provides you options for how you would like the tree
to look like (Display 6). Make sure you play around with it for different configurations, then you can print it out.
Draw the decision tree to match the requirements for WinQSB. Notice the triangles are used to indicate an end point.
Run the program module for "Decision Analysis" of WinQSB. Select "New problem" from the pull down menu of "File" in the menu bar.
Change the name of nodes to fit this problem. You can do so by selecting each cell and typing in. Then enter the problem data. Save the problem.
Select the "Solve and Analysis" option in the menu bar. The software will solve the problem and provides a detailed solution.
You can select one more options, decision tree graph, from the pull down menu of "Results" in the menu bar. Notice that if you select decision tree, a window (Decision Tree Setup) will appear
that provides you options for how you would like the tree to look like. Make sure you play around with it for different configurations.
Queuing Analysis
To perform analysis of a queuing system, select the "Queuing Analysis" option of the WinQSB. You may follow the steps below to input data for each queuing system. Here, entering the cost
information is not discussed. As output you can obtain a table that summarizes the system performance and another table that lists the probability of having different number of people in the
Select "New Problem" from the pull down menu of "File" in menu bar. Select "M/M System". Enter information on number of server (1), service rate, and arrival rate.
Select "New Problem" from the pull down menu of "File" in menu bar. Select radio button for "General Queuing System" then click on "OK". Type in 1 for "Number of servers". Double click on
"service time distribution". Click on "Constant" and click on "OK". Type in a value for average service time (not service rate) for "Constant value". Similarly, select constant for "interarrival
time distribution" and enter a value.
Similar to M/M/1, you just type in the value of "queue capacity", C-1, to replace for "M".
Similar to M/M/1, you just type in the value for number of server (S).
Similar to M/M/1, you will type in the value of "R" for "Number of servers", the value of "K-R" for "Queue capacity", and the value of "K" for "Customer population" (to replace for "M").
Microsoft Excel Add-Ins
Forecasting with regression requires the Excel add-in called "Analysis ToolPak," and linear programming requires the Excel add-in called "Solver." How you check to see if these are activated on
your computer, and how to activate them if they are not active, varies with Excel version. Here are instructions for the most common versions. If Excel will not let you activate Data Analysis and
Solver, you must use a different computer.
Excel 2002/2003:
Start Excel, then click Tools and look for Data Analysis and for Solver. If both are there, press Esc (escape) and continue with the respective assignment. Otherwise click Tools, Add-Ins, and
check the boxes for Analysis ToolPak and for Solver, then click OK. Click Tools again, and both tools should be there.
Excel 2007:
Start Excel 2007 and click the Data tab at the top. Look to see if Data Analysis and Solver show in the Analysis section at the far right. If both are there, continue with the respective
assignment. Otherwise, do the following steps exactly as indicated:
-click the â Office Buttonâ at top left
-click the Excel Options button near the bottom of the resulting window
-click the Add-ins button on the left of the next screen
-near the bottom at Manage Excel Add-ins, click Go
-check the boxes for Analysis ToolPak and Solver Add-in if they are not already checked, then click OK
-click the Data tab as above and verify that the add-ins show.
Excel 2010:
Start Excel 2010 and click the Data tab at the top. Look to see if Data Analysis and Solver show in the Analysis section at the far right. If both are there, continue with the respective
assignment. Otherwise, do the following steps exactly as indicated:
-click the File tab at top left
-click the Options button near the bottom of the left side
-click the Add-ins button near the bottom left of the next screen
-near the bottom at Manage Excel Add-ins, click Go
-check the boxes for Analysis ToolPak and Solver Add-in if they are not already checked, then click OK
-click the Data tab as above and verify that the add-ins show.
Solving Linear Programs by Excel
Some of these examples can be modified for other types problems
The Copyright Statement: The fair use, according to the 1996 Fair Use Guidelines for Educational Multimedia, of materials presented on this Web site is permitted for non-commercial and classroom
purposes only.
This site may be mirrored intact (including these notices), on any server with public access, and linked to other Web pages. All files are available at http://home.ubalt.edu/ntsbarsh/
Business-stat for mirroring.
Kindly e-mail me your comments, suggestions, and concerns. Thank you.
This site was launched on 2/25/1994, and its intellectual materials have been thoroughly revised on a yearly basis. The current version is the 8^th Edition. All external links are checked once a
Back to:
EOF: Ã 1994-2015.
|
{"url":"http://home.ubalt.edu/ntsbarsh/opre640c/partX.htm","timestamp":"2014-04-18T13:07:32Z","content_type":null,"content_length":"64918","record_id":"<urn:uuid:292f3614-c742-4042-aa1f-f1f2e9152cd9>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00281-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Mplus Discussion >> LCA and covariates
Anonymous posted on Tuesday, June 14, 2005 - 8:37 am
I am new to MPLUS and LCA. I am interested in performing LCA with 4 subscale scores and 2 single items scores in a sample of 445 participants. I want like to include seven categorical covariates and
4 continuous covariates. I have tested multiple solutions (a 1 class though a 6 class solution), and I have a couple of questions regarding the results.
Here is some information that you might need
1)The output contained a warning that one or more multinomial logit parameters were fixed to avoid singularity of the information matrix…but that it terminated normally. Does this mean the model did
not run properly?
2)In the categorical latent variable section (relationship between the covariates and teach class) the estimates were huge (ex. 1169.266), the SEs were low (ex 1.033) and the Estimates/SE were huge
for each covariate. In three of the groups all the covariates were significant
3)In the 4th groups all the SE were 0 and the Estimates/SE were 0.
I am wondering if the program corrected the problem so this is a valid (although odd) solution.
I am also wondering if there is a limit of the number of covariates.
BMuthen posted on Tuesday, June 14, 2005 - 9:29 am
1. No, it means that for some in some classes there is no variablity for some of the covariates so the regression coefficient is not defined.
2. This sounds like the same problem as in 1.
3. This sounds like the same problem as in 1 also. Although there is no absolute limit to the number of covariates, it is problematic when they have no variances in some classes.
Sanjoy posted on Wednesday, June 22, 2005 - 8:01 pm
Dear Professor....This is what I want to do
Y1 and Y2 are the main dependent variables (both of them are binary 0/1). Y1 and Y2 are very much correlated as well.
X’s are the covariates. Total 14 Covaraites
Our key interest is in capturing the variation across class in the regression of Y1, Y2 on X.
I want to fit LCA with covariates
In order to make a BIC comparison between different classes, I have tried to fit three different classes (class 3, class 4 and class 5). Classes 3 and 5 works well, however while trying to fit 4
class it’s not working.
MPlus output telling …. “the standard errors of the model parameter estimates may not be trustworthy for some parameters due to a non-positive definite first-order derivative product matrix. this may
be due to the starting values but may also be an indication of model nonidentification. the condition number is -0.255d-11. problem involving parameter 45.”
Also there was something more …
“one or more multinomial logit parameters were fixed to avoid singularity of the information matrix. the singularity is most likely because the model is not identified, or because of empty cells in
the joint distribution of the categorical latent variables and any independent variables. the following parameters were fixed:
51 30 31 28 29 47 53 49”
Following are the excerpts from the command syntax
CLASSES = C(4);
MISSING ARE .;
LOGHIGH = +15;
LOGLOW = -15;
UCELLSIZE = 0.01;
LOGCRITERION = 0.0000001;
ITERATIONS = 10000;
CONVERGENCE = 0.000001;
C#1 C#2 C#3 ON
A1 A4 A5 A8
B41 B44
F5Employ F6A F6B
Mine is version 3.12 …
Q1. Could you suggest me some remedy please!
thanks and regards
Linda K. Muthen posted on Thursday, June 23, 2005 - 3:54 am
This is a support question. Please send it to support. It is data-model dependent and cannot therefore be answered without seeing the full output and the data.
Sanjoy posted on Thursday, June 23, 2005 - 4:30 am
Oh ok ...that's great madam, I'm going to send you right now, thanks and regards
Anonymous posted on Thursday, June 23, 2005 - 5:42 am
Hello again,
I posted the 1st message of this thread. I am becoming more familiar with MPlus and like the program. Just to remind you I would like to use LCA to derive the best model for 4 subscale scores and 2
single items scores. I have a sample of 445 participants. I have changed the covariates (now 6 continuous covariates). I have tested multiple solutions (a 1 class though a 6 class solution).
Since all the measured variables correlate I am wondering if I relax the default of zero covariances across all models or is it solution specific? Also in example 7.22 you suggest starting values. I
am wondering what strategy you would recommend for selecting these values.
Sanjoy posted on Thursday, June 23, 2005 - 9:26 am
Dear Madam ....since morning I have tried to send you the files three times, but for some reasons it failed to reach you ...I got email notice like "Sorry, unable to deliver your message to
support@statmodel.com for
the following reason:
552 Quota violation for support@statmodel.com
thanks and regards
BMuthen posted on Friday, June 24, 2005 - 1:57 am
The correlations among the measured variables are accounted for by having several latent classes. The fact that the within class correlation is zero does not mean that the correlations among the
measured variables is zero.
Usually start values are guided by theory. If you don't have a good feel for what they should be, I suggest using the default.
Linda K. Muthen posted on Friday, June 24, 2005 - 1:58 am
Try again tomorrow.
Sanjoy posted on Friday, June 24, 2005 - 4:08 pm
Dear Madam ... failing to send those files once again, I send them to Maija ...she said she will redirect them to you ...thanks and regards, to all of you
Linda K. Muthen posted on Saturday, June 25, 2005 - 4:30 am
I think the files have been received several times by now. This will be looked at after July 1.
Sanjoy posted on Saturday, June 25, 2005 - 2:33 pm
Oh ! is that so ... my gmail account was acting weird, thanks for your response!
Raheem Paxton posted on Friday, August 17, 2007 - 12:54 pm
I was wondering if there was a difference in output if you specified covariates in the USEV statment but never regressed the covariates on the class structure (no model statement) versus regressing
them on the classes (e.g., Model: c#1 - c#3 on A B C).
Thanks in advance
Linda K. Muthen posted on Friday, August 17, 2007 - 1:52 pm
All variables on the USEVARIABLES statement are used in model estimation. If they are not used as covariates, they will be used as latent class indicators in a latent class analysis.
Anjali Gupta posted on Monday, September 21, 2009 - 11:06 am
I'm unsure how to model a LCA with 1 class and covariates. If an example exists, please let me know.
Thank you,
Linda K. Muthen posted on Monday, September 21, 2009 - 4:25 pm
The multinomial logistic regression of a categorical latent variable on a set of covariates requires a minimum of two classes.
Jerry Cochran posted on Thursday, March 17, 2011 - 11:00 am
I have a question about LCA and co-variates.
I am fitting a LCA model and am following the procedure for determining classes from Nylund et al. 2007.
My question is at what point do I take entropy into account--the model with co-variates or the model without co-variates?
Or, is entropy even the best metric for quality of classification? Is using the classification table based on posterior probabilities a superior method for quality of classification?
Thank you for your time.
Bengt O. Muthen posted on Thursday, March 17, 2011 - 4:43 pm
You want to use for entropy or classification table the final model you settle on.
Entropy is a single-number summary, so the classification table gives you more information. For instance, you may easily tell some important classes apart, but perhaps not all of them.
Back to top
|
{"url":"http://www.statmodel.com/cgi-bin/discus/discus.cgi?pg=prev&topic=13&page=705","timestamp":"2014-04-18T11:11:53Z","content_type":null,"content_length":"41061","record_id":"<urn:uuid:073ebbc8-c957-4896-9894-c2c90a5538b0>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00287-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Divergent Help
July 12th 2008, 08:00 AM #1
Junior Member
Jun 2008
Divergent Help
I need someone to help me with the following:
$A-B=N$, where A, B, and N diverge, but A does not equal N, and B does not equal N.
Is it possible to prove that $N/B$ diverges/ converges?
Please provide a clearer description of your question. The only plausible interpretation I can put on this would make it a series or sequence question wich would be belong in either the calculus
or discrete forums and not in number theory.
July 12th 2008, 10:17 AM #2
Grand Panjandrum
Nov 2005
|
{"url":"http://mathhelpforum.com/number-theory/43524-divergent-help.html","timestamp":"2014-04-18T07:02:29Z","content_type":null,"content_length":"33570","record_id":"<urn:uuid:babf62f2-c887-45d8-8fe6-5bd4921da04d>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00597-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Operations Research Applications and Algorithms 4th Edition Chapter 18.1 Solutions | Chegg.com
Similarly, if I have to keep 6 matches for the opponent’s pick, the opponent should have 11 matches at the previous pick. Since the opponent picks at most 4 matches I can keep 6 matches for next
pick. That means if I can force my opponent to play when 6, 11, 16, 21, 26, 31 or 36 matches remain, I am sure of victory. Thus initially I need to pick 4 matches on my first turn. Then I will pick 5
– (number of matches picked up by opponent) to make sure that at the end only one match is left for the opponent to pick and ultimately the opponent loses.
|
{"url":"http://www.chegg.com/homework-help/operations-research-applications-and-algorithms-4th-edition-chapter-18.1-solutions-9780534380588","timestamp":"2014-04-16T23:37:33Z","content_type":null,"content_length":"34090","record_id":"<urn:uuid:a71630bc-2426-41c4-834a-95f8675e1557>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00439-ip-10-147-4-33.ec2.internal.warc.gz"}
|
I'm new taking my 1st class, and I am having trobble with my homework.. help please!
09-24-2011 #1
I'm new taking my 1st class, and I am having trobble with my homework.. help please!
Wednesday, September 21st, 2011 1:02 PM
Homework 2: The program asks user for a positve integer greater than 0 and less than 10. Program prints out:
1. whether number is even or odd
2. cube of number
3. square root of number
4. Sum of the digits
5. factorial
If an unsuitable number is entered, program exits only printing that an invalid number was entered.*/
#include <stdio.h>
#include <math.h> /* for sqrt*/
/* ask user to input a number between 1 and 10.
program will check number and terminate if input invalid
Otherwise the program will prnt out if the number is odd or even
value cubed, sqare root of the number, sum of the digits and factoral*/
int main()
int number, i, sumx, productx, cubed, sqrt, factoral;
printf(" enter a postive number between 1 and 10:");
scanf("%d", &number);
/* if ((number > 0 ) && (number < 11)).....*/
{ number/2 = even
printf ("the number %d is even\n");
printf ("the number %d is odd\n"); /* 1 */
{cubed = (number*number*number)
printf (" the number cubed is %d\n"); /*2*/
{sqrt = (sqrt(number))
printf (" the number square root is %d\n"); /*3*/
for (i=1;i<=number;i=i+1)
sumx =sumx+ i;
printf("the sum of the digits is %d\n", sumx); /* 4 */
for (i=1; i<= number; i=i+1)
for (i=1;i<=number;i=i*1)
factoral =factoral* i;
printf("factoral of this is %d\n", factoral); /* 5 */
for (i=1; i<= number; i=i*1)
return 0;
What is your question? Does the program work as expected or do you have some problem with it, if so what kind of problem?
it doesn't work... I can get only the sum of the numbers to work if I take all the rest out. I need the program to print 5 answers for each part of the assignment. the user enters if the user
enters a number 1 though 10 then the program should print out answers to each of these questions...
1. whether number is even or odd
2. cube of number
3. square root of number
4. Sum of the digits
5. factorial
So lets say the user types in 2 I need it to output
2 is odd, the cube is 8 , square root is 1, sum of the digits is 3 and the factorial is 2
also dunno if this is a factor but I"m using dev C
Start by commenting out everything but the first part (odd/even), then add them in one after one until they are all working. Looking at the first part:
/* if ((number > 0 ) && (number < 11)).....*/
{ number/2 = even
printf ("the number %d is even\n");
printf ("the number %d is odd\n"); /* 1 */
That is not even going to compile. The easiest way to determine if a number is uneven is to check if it's rightmost bit is set IMO.
what does it mean "it's rightmost bit is set IMO"? I don't think we learned that........
If you have not used bitwise operators then you haven't. But actually you could leave that bit since the first requirement (checking the range of the number) is commented out. So, solve that
first. I would look for numbers < 0 and > 10 instead, if 'number' pass that test, print an error message and return 0; which will end the program.
I think I get what you mean... but I'm not sure how to do it correctly.... this is my attempt.....
int main()
int number, i, sumx, productx, cubed, sqrt, factoral;
printf(" enter a postive number between 1 and 10:");
scanf("%d", &number);
if (number > 0);
printf (" number is not between 1 and 10\n");
if (number < 11);
printf (" number is not between 1 and 10\n");
/* if ((number > 0 ) && (number < 11)).....*/
In an integer value the least significant (right most) bit is always 1 for odd numbers and 0 for even numbers... it's a side effect of computers working in binary.
if ((number & 1) > 0)
printf("%d is odd",number);
printf(%d is even", number);
You can also use
if ((number % 2) > 0)
printf("%d is odd",number);
printf(%d is even", number);
Subsonics gave you excellent advice about working through it one small bit at a time. That's how we "real" programmers do things... write a given function or section of code and test... work on
the next, test again... until it's all working.
The computer does things in little tiny steps, one at a time... and so should you.
Last edited by CommonTater; 09-24-2011 at 03:54 PM.
I think I get what you mean... but I'm not sure how to do it correctly.... this is my attempt.....
int main()
int number, i, sumx, productx, cubed, sqrt, factoral;
printf(" enter a postive number between 1 and 10:");
scanf("%d", &number);
if (number > 0);
printf (" number is not between 1 and 10\n");
if (number < 11);
printf (" number is not between 1 and 10\n");
/* if ((number > 0 ) && (number < 11)).....*/
if ((number < 1) || (number > 10))
printf("I specifically said from 1 to 10\n");
// rest of program here
If the number is less than 1 OR the number is greater than 10 the program exits.
> printf ("the number %d is even\n");
Same mistake as in another thread - you're not supplying a parameter to match the %d
Mind you, you're using a gcc compiler, so perhaps you can find where in dev-c++ you set the compiler flags.
Add -W -Wall -ansi -pedantic and then see more error messages to fix.
If you dance barefoot on the broken glass of undefined behaviour, you've got to expect the occasional cut.
If at first you don't succeed, try writing your phone number on the exam paper.
I support http://www.ukip.org/ as the first necessary step to a free Europe.
I think I get what you mean... but I'm not sure how to do it correctly.... this is my attempt.....
int main()
int number, i, sumx, productx, cubed, sqrt, factoral;
printf(" enter a postive number between 1 and 10:");
scanf("%d", &number);
if (number > 0);
printf (" number is not between 1 and 10\n");
if (number < 11);
printf (" number is not between 1 and 10\n");
/* if ((number > 0 ) && (number < 11)).....*/
You have the answer right at the end there, but you need to change it a bit. 'number' can not be both less than 1 and larger than 10, so use OR instead of AND, that is || so:
if( (number < 1) || (number > 10) ) {
printf("The number is out of range. (1-10)\n");
return 0;
Last edited by Subsonics; 09-24-2011 at 03:57 PM. Reason: 1-10
Ok tried fixing it and still getting error.... We haven't used || before so I don't really know how to use it but ya this is what I did and got an error on the return line it says....
#include <iostream>
using namespace std;
int main()
int number;
printf(" enter a positive number between 1 and 10:");
if( (number < 1) || (number > 10) ) {
printf("The number is out of range. (1-10)\n");
return 0; }
You're going to get an error alright... you're using C++ headers and C doesn't have the first clue what "namespace" is.
Plus your brackets are mismatched ...
You need to adopt a good indentation style. Watch:
#include <iostream> // this is C++
using namespace std; // so is this
int main() // this should really be (void) instead of just ()
int number;
printf(" enter a positive number between 1 and 10:");
if( (number < 1) || (number > 10) ) {
printf("The number is out of range. (1-10)\n");
return 0; }
// where is the matching }
Count the pairs of braces.
Hope is the first step on the road to disappointment.
09-24-2011 #2
Registered User
Join Date
Jan 2009
09-24-2011 #3
09-24-2011 #4
09-24-2011 #5
Registered User
Join Date
Jan 2009
09-24-2011 #6
09-24-2011 #7
Registered User
Join Date
Jan 2009
09-24-2011 #8
09-24-2011 #9
Join Date
Aug 2010
Ontario Canada
09-24-2011 #10
Join Date
Aug 2010
Ontario Canada
09-24-2011 #11
09-24-2011 #12
Registered User
Join Date
Jan 2009
09-24-2011 #13
09-24-2011 #14
Join Date
Aug 2010
Ontario Canada
09-24-2011 #15
|
{"url":"http://cboard.cprogramming.com/c-programming/141355-i%27m-new-taking-my-1st-class-i-am-having-trobble-my-homework-help-please.html","timestamp":"2014-04-17T11:47:48Z","content_type":null,"content_length":"112372","record_id":"<urn:uuid:fce67e2e-2afc-40d1-ae35-085ba2988d16>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00047-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Syntax and Semantics of Programming Languages
Marcelo Fiore
Computer Laboratory
This mini-course is about the mathematical structure of programming languages regarded as syntactic objects equipped with a computational semantics.
In Part I, I will start by recalling the basics of the algebraic approach to understanding abstract syntax and follow by presenting an extension that incorporates binding operators. With this
background, in Part II, I will consider a unified framework for giving structured, compositional semantics to process calculi. Our running examples will be CCS, value-passing CCS, and the
The aim of the mini-course is to offer tools for thinking about syntactic and semantics aspects of programming languages in a principled manner.
Part I.
• Signatures. Algebraic syntax. Substitution. Initial algebra semantics. Inductive types. Structural recursion.
• Binding signatures. Algebraic syntax with variable binding. Capture-avoiding substitution. Initial algebra semantics.
Part II.
• Coalgebraic transition systems. Bisimilarity. Final coalgebra semantics. Structural operational semantics. Compositionality, full abstraction, congruence. Examples: CCS, value-passing CCS,
Part I.
• M.Fiore, G.Plotkin, and D.Turi. Abstract syntax and variable binding. In 14th Logic in Computer Science Conference, pages 193-202. IEEE, Computer Society Press, 1999. (See http://www.cl.cam.ac.uk
• M.Gabbay and A.Pitts. A new approach to abstract syntax involving binders. In 14th Logic in Computer Science Conference, pages 214-224. IEEE Computer Society Press, 1999. (See http://
Part II
• D.Turi and G.Plotkin. Towards a mathematical operational semantics. In 12th Logic in Computer Science Conference, pages 280-291. IEEE, Computer Society Press, 1997. (See http://www.dcs.ed.ac.uk/
• M.Fiore, E.Moggi, and D.Sangiorgi. A fully-abstract model for the pi-calculus . In 11th Logic in Computer Science Conf. (LICS'96), pages 43-54. IEEE, Computer Society Press, 1996. (See http://
• M.Fiore and D.Turi. Semantics of Name and Value Passing. To appear in 16th Logic in Computer Science Conference. IEEE, Computer Society Press, 2001. (See http://www.cl.cam.ac.uk/~mpf23/papers/
|
{"url":"http://www.cl.cam.ac.uk/~mpf23/Mini-courses/2000-01/Fiore.html","timestamp":"2014-04-18T08:13:45Z","content_type":null,"content_length":"3876","record_id":"<urn:uuid:bc9b433f-be31-49f7-847d-55c2183c7c13>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00238-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Pig Data and Bayesian Inference on Multinomial Probabilities
John C. Kern
Duquesne University
Journal of Statistics Education Volume 14, Number 3 (2006), www.amstat.org/publications/jse/v14n3/datasets.kern.html
Copyright © 2006 by John C. Kern II, all rights reserved. This text may be freely shared among individuals, but it may not be republished in any medium without express written consent from the
authors and advance notification of the editor.
Key Words:
Bayesian inference on multinomial probabilities is conducted based on data collected from the game Pass the Pigs®. Prior information on these probabilities is readily available from the instruction
manual, and is easily incorporated in a Dirichlet prior. Posterior analysis of the scoring probabilities quantifies the discrepancy between empirical and prior estimates, and yields posterior
predictive simulations used to compare competing extreme strategies.
1. Introduction
The twenty-five year-old game Pass the Pigs®, created by David Moffat and currently marketed by Winning Moves© Games, requires a player to roll a pair of rubber, pig-shaped dice. The player earns -
or as the case may be, loses - points for him or herself based on the configuration of the rolled pigs. Due to the unusual shape of these dice, however, it is difficult to intuit the probability of
any particular configuration. Indeed, exact probabilities for each configuration are unknown.
In this research, we use the data collected from 6000 rolls in combination with a multinomial - Dirichlet model to make Bayesian inference on the configuration probabilities. Our analysis is intended
for students of Bayesian inference, usually advanced undergraduate mathematics/statistics majors or first-year statistics graduate students - as well as their instructors. Aside from providing an
entertaining multinomial-Dirichlet application, this analysis is of particular pedagogical interest because specification of prior parameters comes so naturally: The point values for each
configuration as defined on the game’s packaging, scorepad, and instructions are used to directly determine the parameter values of the Dirichlet prior distribution. Before providing the details of
this model and prior specification, it is necessary to clarify both the rules of the game and the data collection process.
1.1 The Rules
In a game of Pass the Pigs®, two or more people compete against one another to be the first to earn 100 points. The game progresses on a turn-by-turn basis through a fixed player ordering, whereby
any points a player earns on a turn are added to their points earned on all previous turns. The advantageous first turn is randomly awarded. A player’s turn - which requires use of the pigs - is over
when they “pass the pigs” to the next player.
Any turn begins with the rolling of both pig-shaped dice. The configuration of the pigs in this roll, or any other, must fall under exactly one of the following three categories:
• A positive-scoring roll.
• A zero-scoring roll.
• A roll in which the pigs are at rest and in physical contact with each other, regardless of configuration.
If the initial roll on a turn is positive-scoring, the player may choose to immediately roll the pair of pigs again. Such a choice remains available to the player provided the previous roll in that
turn was positive scoring. In this way, the points a player earns on their turn is the sum of the point values of an unbroken string of positive-scoring rolls. The end of a player’s turn is
determined by the first occurrence of the following three events:
End-Turn Event I: The roll is zero-scoring. In this case the player loses all points accumulated on that turn and must pass the pigs to the next person.
End-Turn Event II: The roll is positive-scoring and the player chooses to pass the pigs to the next person. In this case the player retains all points accumulated on that turn.
End-Turn Event III: The roll finds the pigs in physical contact with each other. In this case the player loses all points accumulated on that turn as well as the points accumulated on all previous
turns. The pigs are then passed to the next person.
It is worthwhile to note that this game does allow a player to incorporate strategy, but only through End-Turn Event II. For example, a player may choose to roll the pigs only once per turn. This is
the most conservative strategy in the sense that points earned on a turn are never at risk of being lost to a zero-scoring roll. Conversely, the extreme risk strategy views each turn as an
all-or-nothing opportunity. Those adopting this strategy will continue to roll until either a non-positive-scoring roll is obtained or at least 100 points are accumulated. Each turn taken under this
strategy will end with the player earning either zero points or victory. Analysis of both extreme strategies is given in Section 2.3. Before presenting the data from these 6000 rolls (and the
collection method), we now detail the configuration-to-point-value mapping.
1.2 Scoring
The pair of pigs that come in a new package (available on-line and in most toy stores for roughly 10 U.S. dollars) are virtually indistinguishable from each other. Each pig is molded in the same
“trotting” position, reminiscent of the pose you might expect to see in a snapshot of a walking pig. When a single pig is rolled on a smooth, level, unobstructed surface, it will invariably come to
rest in one of the six positions listed in Table 1. Table 2 provides pictorial representation of the positions described in Table 1. The names given to the positions come directly from the game,
except for the Dot Up and Dot Down labels. These descriptors were chosen for the simple reason that the fleshy-pink colored pigs are marked by a noticeable black dot on the right side of their
bodies. Thus, for example, when a pig is resting on its left side, the black dot is in the “up” position. Throughout this paper, we will use the position names and numbers interchangeably.
Table 1. Position possibilities for the roll of a single pig.
│ Position │ Name │ Description │
│ 1 │ Dot Up │ Pig lies on its left side │
│ 2 │ Dot Down │ Pig lies on its right side │
│ 3 │ Trotter │ Pig stands on all fours │
│ 4 │ Razorback │ Pig lies on its spine, with feet skyward │
│ 5 │ Snouter │ Pig balances on front two legs and snout │
│ 6 │ Leaning Jowler │ Pig balances on front left-leg, snout, and left-ear │
Table 2. Pictorial representations of the six single-pig positions described in Table 1.
│ Position │
│ 1 │ 2 │ 3 │ 4 │ 5 │ 6 │
│(Dot Up)│(Dot Down)│(Trotter)│(Razorback)│(Snouter)│(Leaning Jowler) │
Although we have just identified the positions assumed by the roll of a single pig, we remind the reader that a player will always roll both pigs. The points awarded to (or taken from) a player are
therefore based on the combined positions of the rolled pigs. If, for example, one pig lands Dot Up, and the other lands Trotter, then the player earns 5 points. Shown in Table 3 are the point values
awarded for all of the thirty-six possible position combinations, as specified by the instructions. This table assumes the pigs have (arbitrarily) been assigned labels of “Pig 1” and “Pig 2,” and
that once rolled, the pigs are not touching each other. Notice that higher point values are given darker background shading.
Table 3. Scoring table for all possible positive-scoring and zero-scoring configurations.
│ │ Pig 1 Position │
│ Pig 2 Position │ 1 │ 2 │ 3 │ 4 │ 5 │ 6 │
│ │(Dot Up)│(Dot Down)│(Trotter)│(Razorback)│(Snouter)│(Leaning Jowler)│
│1 (Dot Up) │ 1 │ 0 │ 5 │ 5 │ 10 │ 15 │
│2 (Dot Down) │ 0 │ 1 │ 5 │ 5 │ 10 │ 15 │
│3 (Trotter) │ 5 │ 5 │ 20 │ 10 │ 15 │ 20 │
│4 (Razorback) │ 5 │ 5 │ 10 │ 20 │ 15 │ 20 │
│5 (Snouter) │ 10 │ 10 │ 15 │ 15 │ 40 │ 25 │
│6 (Leaning Jowler) │ 15 │ 15 │ 20 │ 20 │ 25 │ 60 │
From Table 3 we see that only two of the thirty-six position combinations are zero-scoring; any roll that finds the pigs lying on opposite sides results in End-Turn Event I. Aside from the two
configurations (both pigs lying on the same side) worth 1 point, all other configurations are positive-scoring and worth some multiple of 5 points. Note that a positive-scoring roll must yield a
point value from the set {1, 5, 10, 15, 20, 25, 40, 60}. We will refer to Table 3 often, especially in Section 2 when determining a prior distribution for a Bayesian multinomial data model. Before
discussing this model, we finish this introduction by describing the data and the method by which it was collected.
1.3 The Data
Data collected from 6000 rolls of a pair of pigs was generated by two people. Both people rolled their own, brand new pair of pigs 3000 times, and recorded the position of the pigs after each roll.
To better enhance our roll-of-the-pigs understanding, one pig from both pairs was randomly selected and marked with a small black dot (made gently with a permanent marker) on its snout. The other was
left unmarked. In this way, the roller would record the position number (from Table 1) of the marked - or black - pig, as well as the position number of the unmarked - or pink - pig. Table 3 would
then be used to determine the score of the roll.
Due to variability in rolling technique across people, we decided to standardize the rolling technique by using a trap-door style rolling apparatus. This apparatus was constructed in such a way as to
impart on the pigs realistic rolling movement. It consisted of nothing more than a four-inch square sheet of sturdy cardboard, well-creased to divide its area into two equal size rectangles. This
sheet was then placed on a level, eight-inch tall wooden platform, such that the crease was parallel to an edge of the platform. Rolling the pigs was accomplished by placing the pigs on one half of
the crease-divided cardboard (in the trotting position, 0.25 inches apart, facing away from the crease and toward the parallel platform edge), and using the other half of the cardboard as a handle to
push-slide the cardboard toward the parallel platform edge - making sure to always keep the crease and platform edge parallel. When the cardboard is moved far enough for the crease to overlap the
edge of the platform, the pushing-sliding stops, and the weight of the pigs cause their half of the creased cardboard to drop in trap-door fashion. Even with no pigs on the cardboard, the crease was
such that the cardboard weight itself would cause the drop. The other half of the cardboard is anchored securely under the fingers of the roller; hence only the pigs tumble to the table below. In
this way, the pigs are not simply dropped to the table. Rather, they are dropped with the forward momentum gained from the pushing-sliding of the creased cardboard. Rolls that saw either pig touch a
platform support were ignored.
Variation in the rolling technique is introduced from a variety of sources. A source of variation we intentionally impose is that of platform height: One person rolled the pair of pigs 3000 times
from the aforementioned eight-inch tall platform, while the other pair were rolled 3000 times from a similar five-inch tall platform. Natural sources of variation not imposed by the author include:
• Any dissimilarity between the two pairs of pigs.
• Dissimilarity between the two rolling surfaces. (The rolls from eight inches were onto a Formica surface; the rolls from five inches were onto a hardwood surface.)
• Dissimilarity in the speeds at which the pig-carrying cardboard was pushed, both within roller and between rollers.
• Any pig rubber wear-and-tear associated with 3000 rolls.
This analysis treats these sources of variation - imposed or natural - as negligible.
Shown in Table 4 are the number of times each of the thirty-six possible position combinations were observed in the 6000 rolls. When comparing this data with the scores from Table 3, we see
combinations that occur more often are generally associated with lower point values. Note that the frequencies in this table sum to 5977; exactly 23 of the 6000 rolls resulted in the two pigs
touching each other (corresponding to End-Turn Event III). If on a given roll we treat the positions of the black and pink pigs as discrete random variables, then Table 4 can be viewed as their
empirical joint distribution by dividing each cell by 6000.
Table 4. Raw frequencies for the black-pink pig positions, based on 6000 rolls.
│ │ Pink Pig Position │
│Black Pig Position │ 1 │ 2 │ 3 │ 4 │ 5 │ 6 │
│ │(Dot Up)│(Dot Down)│(Trotter)│(Razorback)│(Snouter)│(Leaning Jowler)│
│1 (Dot Up) │ 573 │ 656 │ 139 │ 360 │ 56 │ 12 │
│2 (Dot Down) │ 623 │ 731 │ 185 │ 449 │ 58 │ 17 │
│3 (Trotter) │ 155 │ 180 │ 45 │ 149 │ 17 │ 5 │
│4 (Razorback) │ 396 │ 473 │ 124 │ 308 │ 45 │ 8 │
│5 (Snouter) │ 54 │ 67 │ 13 │ 47 │ 2 │ 1 │
│6 (Leaning Jowler) │ 10 │ 10 │ 0 │ 7 │ 1 │ 1 │
This data is used in the following section to estimate the probabilities of observing each of the eight possible positive scores, a zero-scoring roll, and a roll in which the pigs are touching each
2. Multinomial - Dirichlet Inference
We now turn our attention to estimating, in Bayesian fashion, the probability that a single roll of the pigs will yield k points, for k = 0, 1, 5, 10, 15, 20, 25, 40, and 60. These nine possible
scoring outcomes exclude only End-Turn Event III, wherein the pigs are touching each other. To this outcome we assign an artificial point value of k = -1. In this way, we can now let k points, where
and k is restricted to the ten integer set S defined by
S ={-1, 0, 1, 5, 10, 15, 20, 25, 40, 60}.
Thus, for example,
Bayesian analyses distinguish themselves from their classical counterparts by incorporating into parameter inference information supplied by the researcher before data have been observed. This
information is presented in the form of a probability distribution - called the prior distribution - on the parameter(s) of interest. In our application, the parameters of interest are the scoring
The method by which the provided scores are used to make a priori statements about
Assumption 1: Positive roll scores and their corresponding roll probabilities are inversely related.
Thus, a 60-point roll is less likely than any other positive-scoring roll (as it has the highest point value); a 1-point roll is more likely than any other positive-scoring roll (as it has the lowest
point value); the 5-point roll is more likely than the 10-point roll, but less likely than the 1-point roll, and so on. Our second assumption about how much more likely a k-point roll is, relative to
a 1-point roll for k > 0:
Assumption 2: The probability of observing a 1-point roll is k times greater than the probability of observing a k-point roll, for k > 0.
The yet unmentioned zero-scoring and End-Turn Event III rolls are accounted for in our final assumption:
Assumption 3: 1-point rolls and zero-scoring rolls have the same probability, while a 1-point roll is c times more likely (c > 0) than a k = -1 point roll.
That w > 0. As for the relationship between c equal to a number near 60 in an effort to equate the chances that the most detrimental event (score of k = -1) and the most beneficial event (score of k
= 60) occur.
From these assumptions, we formally recognize the relationship between
Using these relationships, we express the sum in (1) in terms of only
Solving (5) for
Using this value of (2) and (4) gives the following prior point estimates for the remaining eight probabilities:
We remind the reader that these point estimates come directly from the scores supplied by the game as well as our assumptions (2), (3), and (4) about the relationships among the scoring probabilities
(6), or (7). We choose the functional form of the prior distribution (i.e. the family of prior distributions) to be Dirichlet; the motivation and density for this distribution will be presented upon
explicit presentation of the multinomial data model.
2.1 Multinomial Likelihood
Let X[[k]] represent the number of k-point rolls observed among the n = 6000, where k is again restricted to the set S. Given X[[-1]], X[[0]], ..., X[[60]]} is then multinomial, provided we make the
mild assumptions that rolls are independent and the scoring probabilities
for a vector (n[[-1]], n[[0]], n[[1]], ..., n[[60]]) of nonnegative integers satisfying
When we view the right hand side of (8) as a function of n[[-1]], ..., n[[60]]) as known, we obtain the multinomial likelihood function for L(
The proportionality symbol is used to remind the reader that any function of (9) can be labeled “the” likelihood function, and will yield the same inference on
Standard classical inference finds the values
as shown in various texts (see Berry and Lindgren 1996, or Lange 1999). Thus, the maximum likelihood estimates for the k-point rolls observed among the 6000. These proportions, as obtained from the
data in Table 4, are:
Note that these estimates are based completely on the likelihood function for L(L(
Thus the conditional distribution of (10) can be obtained from the posterior distribution by finding the marginal expected value of each
2.2 Dirichlet Prior
We choose a = a[[-1]], a[[0]], ... , a[[60]]. The corresponding density is
makes the task of specifying a straightforward. Specifically, we set these marginal expected values equal to their corresponding prior point estimates in (6) or (7) by choosing c = 50 - thereby
assuming the a priori probability of k = -1 to fall between that of k = 40 and k = 60 - and letting
for some m > 0. Note that the choice of m > 0 does not influence the marginal expectation of any m does, however, influence the marginal variance of every m, we can express greater prior uncertainty
in a m, and greater prior certainty in a m.
It is now clear that one of the factors motivating our Dirichlet prior choice is our ability to use the available prior point estimates for a, up to a strength-of-prior-belief parameter m. Inference
on m. We will also refer to m as a hyperparameter, as is common for parameters of a prior distribution.
That the Dirichlet prior for (11) our likelihood from (9) and our prior density from (12) gives the following posterior distribution on
Comparing this posterior density with the prior density in (12), we see b = {b[[-1]], b[[0]], ..., b[[60]]}, where b[[k]] = (n[[k]] + a[[k]]).
While a Dirichlet prior on O’Hagan, 1994). For example, the correlation between a particular (m > 0. Other prior distributions on (15), we are now ready to make Bayesian inference on these ten
scoring probabilities.
2.3 Posterior Analysis
The marginal posterior expectation E[(15):
Note from (14) that each a[[k]] value (and hence A) is proportional to the hyperparameter m. Thus, as this strength-of-prior-belief parameter m goes to zero, the posterior point estimate of (16)
converges to E[a[[k]]/A.
Influence of m on Posterior Point Estimates
Recognizing that for fixed m
allows us to partition the unit interval into ten segments whose lengths represent the posterior expectation of Figure 1 partition the unit interval - presented along the vertical axis - into ten
segments so that the vertical space between two curves represents the posterior expected value of a m. As (10), and represent posterior estimates that ignore prior information. As (6) and (7), and
represent posterior estimates that ignore the data collected from the 6000 rolls. For values of m greater than 15, the curves have practically converged to horizontal lines whose separations are
equal to the prior point estimates.
Figure 1: Partition of the (0, 1) vertical axis based on the posterior expectation of m.
By comparing the curve separations where m is near zero with those as
• Values of m less than 2.2 estimate m greater than 2.2.
• Prior estimates of
• The empirical estimate of
• The empirical estimate of
Figure 2 shows a rescaled (i.e. magnified) version of Figure 1; only the curves presented in the [0.9, 1.0] subset of the Figure 1 unit interval are shown in Figure 2. From this magnified Figure we
notice the following:
• Prior estimates of
• The empirical estimate of
• The empirical estimate of
Figure 2: Partition of (0.9,1.0) vertical axis as magnified from Figure 1. The partition is a function of the hyperparameter m.
We conclude from these two Figures that the scores assigned by the game - in combination with our assumptions (2), (3),and (4) - do not reflect well the relative frequencies we observed in 6000
Posterior Predictive Distribution and Extreme Strategies
1. Sample (15). One way to easily accomplish such a simulation is to generate ten independent realizations from gamma densities (Gelman, Carlin, Stern, and Rubin 1995). For each w[[k]] ~ gamma(b
[[k]]). A draw from the Dirichlet posterior is then obtained by setting
2. Use this (8). This is accomplished by partitioning the unit interval according to the
Repeating steps 1 and 2 will yield a distribution of predicted scores that incorporates uncertainty in m. In the following analysis we use m = 1, as it reflects our belief that data from the 6000
rolls are more influential in estimating
We now use 10^5 simulated values of Section 1.1:
Extreme-Conservative: Player rolls the pigs only once per turn.
Extreme-Risk: Player continues rolling until either 100 points are accumulated, or a non-positive score is obtained.
Shown in Figure 3 are histograms representing the distribution of the number of turns necessary to obtain 100 points under the extreme-conservative strategy.
Figure 3 (left) Figure 3 (right)
Figure 3: Empirical distribution (left) and posterior predictive distribution (right) of the number of turns necessary to reach 100 points, assuming the conservative, one-roll-per-turn strategy.
The histogram on the left is based only on the 6000 rolls; the histogram on the right is based on 10^5 posterior predictive realizations of Table 5.
Table 5. Five number summary for the minimum number of extreme-conservative strategy turns necessary to reach 100 points, as based on the data and 10^5 posterior predictive realizations.
│ │ Min │ Q[[1]] │ Q[[2]] │ Q[[3]] │ Max │
│ Empirical │ 10 │ 18.75 │ 22 │ 26.25 │ 61 │
│ Predictive Posterior │ 5 │ 20 │ 24 │ 29 │ 103 │
Notice in this table the impact of our prior distribution. By choosing m = 1, we allow the prior scores to have a slight influence on our posterior for Table 5 are greater for the distribution based
on posterior predictive realizations.
The argument that our posterior with m = 1 estimates the probability of a penalty roll to be greater than our empirical estimate Figure 4 shows the distribution of the number of points obtained
before the first penalty roll as observed under the extreme-risk strategy.
Figure 4 (left) Figure 4 (right)
Figure 4: Empirical distribution (left) and posterior predictive distribution (right) of the number of points accumulated on a single turn before the first penalty roll is observed.
The histogram on the left is based only on the 6000 rolls; the histogram on the right is based on 10^5 posterior predictive realizations of Table 6 that indeed, fewer points are accumulated before
the first penalty roll when using posterior predictive realizations. Furthermore, the empirical proportion of turns resulting in 100 or more points before the first penalty roll is 0.022, while our
posterior predictive realizations estimate this proportion to be 0.011.
Table 6. Five number summary for the number of points accumulated before the first penalty roll using the extreme-risk strategy, as based on the data and 10^5 posterior predictive realizations.
│ │ Min │ Q[[1]] │ Q[[2]] │ Q[[3]] │ Max │
│ Empirical │ 0 │ 1.25 │ 12 │ 31 │ 215 │
│ Predictive Posterior │ 0 │ 0 │ 10 │ 25 │ 199 │
3. Discussion
The game Pass the Pigs® provides an opportunity for students to conduct Bayesian inference on multinomial probabilities using data collected from an entertaining random phenomena. Furthermore, this
game provides students an example where prior information on the parameters of interest clearly exists, and can be easily expressed through a (Dirichlet) prior density. This is in contrast to
standard introductory examples of Bayesian inference where prior distributions are specified either without explanation or intuition, or with motivation to which students have a difficult time
relating (such as “expert opinion suggests...”). Here, students can use the supplied scores to make a priori statements about the multinomial probabilities. What follows are our conclusions about the
ten scoring probabilities as based on a multinomial likelihood using data from a rolling apparatus, a Dirichlet prior with c = 50, and resulting Dirichlet posterior with m = 1.
The Dirichlet posterior for the ten scoring probabilities found in (15) combines information from 6000 rolls with the provided scores, and reveals that these scores do not reflect well what happens
in practice. For example, there are instances where a higher scoring roll is estimated to have a substantially higher probability than a roll that scores lower: a 5-point roll is more likely than a
1-point roll, and a 20 point roll is more likely than a 15-point roll. This does not agree with the basic intuition that a roll’s score and corresponding probability should be inversely related. In
this same spirit, the 60-point roll is so rare that, relative to the other roll scores, it is quite undervalued. This same argument can be made for the 40- and 25-point rolls.
Our Dirichlet posterior estimates the chance of a positive scoring roll to be 74.5%. Thus the probability that your next roll is a penalty roll is approximately 1/4. Using our Dirichlet posterior to
simulate posterior predictive realizations of the next, unobserved roll score allows us to investigate the effectiveness of various strategies. The extreme-conservative strategy will yield 100 points
every 24 turns, on average, while those adopting the extreme-risk strategy are expected to attain 100 points only once per 100 turns.
3.1 A Test for Symmetry
Certainly we are not limited to Bayesian methods when analyzing the data from our 6000 rolls. There exist several pertinent questions that can be directly addressed with a classical hypothesis test.
For example, we might ask: Is there a significant difference in the symmetry of the rolls, based on our arbitrary black/pink assignment? If not, then the observed frequencies displayed in Table 4
should be roughly symmetric about the main diagonal. A test for contingency table symmetry (Bowker, 1948) gives a non-significant p-value of 0.47; hence we do not have evidence to reject the
hypothesis of symmetry (i.e., that the pink/black assignment matters) based on the data from this Table.
3.2 Using Expected Gain to Define Strategy
The game Pig is a non-commercial analogue to Pass the Pigs® requiring a player to roll a single, fair, six-sided die. Rolling a 1 corresponds to our End-Turn Event I; a roll of any other value earns
the player that many points. Aside from these differences play proceeds as in Pass the Pigs®. There is no End-Turn Event III in Pig. Literature analyzing effective Pig strategies (see Neller and
Presser 2004, and Shi 2000) need not confront unknown score probabilities, as they are all 1/6. It is possible, however, to use the maximum likelihood estimates in (10) as the score probabilities for
Pass the Pigs® and conduct strategy analyses (non-Bayesian) similar to those in Neller and Presser 2004, and Shi 2000.
In the spirit of Shi 2000, we present here a brief, strategy-defining expected value calculation for Pass the Pigs®. Let X be the random variable representing the gain in score on a particular
player’s next roll. Note that this gain can be negative. Possible values of X and their corresponding probabilities are given in Table 7, where S represents the player’s score before starting the
current turn, and T the score gained, so far, by the player on the current turn.
Table 7. Distribution of the gain in score X on the next roll, as based on maximum likelihood probability estimates.
│ x │ -(S + T) │ -T │ 1 │ 5 │ 10 │ 15 │ 20 │ 25 │ 40 │ 60 │
│ P(X = x) │ 23/6000 │ 1279/6000 │ 1304/6000 │ 2337/6000 │ 508/6000 │ 171/6000 │ 372/6000 │ 2/6000 │ 2/6000 │ 1/6000 │
We can now represent the expected gain in score - with just one additional toss of the pigs on the current turn - as the expected value of the random variable X. Using EX to denote this expectation,
straightforward calculation gives
This expected value suggests a simple strategy: choose to roll again only when the expected score of the next roll is positive. Since (17) is positive if and only if
T <
this strategy dictates rolling again until T S = 0). Notice that the cutoff for T does not change much with S, as End-Turn Event III is so rare. For example, starting a turn with 79 points (S = 79)
dictates rolling again until T S S) is less than S + T) EX > 0 to occur simultaneously.
3.3 Further Analyses of Interest
The dataset accompanying this investigation provides an opportunity to conduct many more analyses than can be presented here. What is the influence of roll-height on the distribution of roll scores?
Indeed, all analyses conducted thus far can be applied separately to the 3000 rolls from eight inches, and then to those from five inches. Tests for symmetry would be especially relevant, as a true
difference between black and pink pigs may be masked in the combined data presented in Table 4 by the arbitrary black pig assignment for the five-inch rolls and the arbitrary black pig assignment for
the eight-inch rolls.
Does the starting position of the pigs have an effect on score distribution? The first 1500 rolls from both platforms saw both pigs in a “face-forward” initial position. The second 1500 rolls from
the eight-inch platform had one pig in the “face-forward” initial position, and the other in the “face-backward” initial position. The second 1500 rolls from the five-inch platform saw both pigs in
the “face-backward” initial position.
As our final suggestion for further analysis, we point out that sampling from the posterior predictive distribution provides an endless stream of “next roll” scores that can be used to ascertain the
effectiveness of any strategy-even those conditional upon the scores of other players. This is perhaps the most engaging analysis for students, as the personal strategy of each student can be
combined in a program that simulates an entire game. Several game simulations reveal the student with the most successful strategy.
4. Getting the Data
The file pig.dat.txt is a text file containing 6000 rows. Each row corresponds to a roll of the two pigs. The file pig.txt is a documentation file describing the variables.
The author would like to acknowledge the Phillip H. and Betty L. Wimmer Family Foundation for contributing their generous support to this project. The author is grateful to Rachel Riberich for her
contribution of 3000 pig rolls, and to the editor and referees for their comments and suggestions.
Berry, D. A., Lindgren, B. W., (1996), Statistics: Theory and Methods, 2^nd ed., Belmont, CA: Duxbury Press.
Bowker, A.H., (1948), “A test for symmetry in contingency tables,” Journal of the American Statistical Association, 43(244), 572 - 574.
Gelman, A. B., Carlin, J. S., Stern, H. S., and Rubin, D. B. (1995), Bayesian Data Analysis, London; New York: Chapman and Hall.
Lange, K. (1999), Numerical Analysis for Statisticians, New York: Springer-Verlag.
Neller, T., and Presser, C. (2004), “Optimal play of the dice game Pig,” The UMAP Journal, 25(1), 25 - 47.
O’Hagan, A. (1994), Kendall’s Advanced Theory of Statistics: Bayesian Inference, New York; Toronto: Halsted Press.
Shi, Y. (2000), “The game PIG: Making decisions based on mathematical thinking,” Teaching Mathematics and its Applications, 19(1), 30 - 34.
John C. Kern II
Department of Mathematics and Computer Science
Duquesne University
Pittsburg, PA 15282
Volume 14 (2006) | Archive | Index | Data Archive | Information Service | Editorial Board | Guidelines for Authors | Guidelines for Data Contributors | Home Page | Contact JSE | ASA Publications
|
{"url":"http://www.amstat.org/publications/jse/v14n3/datasets.kern.html","timestamp":"2014-04-18T15:39:14Z","content_type":null,"content_length":"64481","record_id":"<urn:uuid:6354d490-ea95-4d2e-bea4-8dc8b9988c47>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00211-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Simplifying Expression
1. December 2nd 2012, 08:34 AM #1
2. December 2nd 2012, 09:09 AM #2
Re: Simplifying Expression
Yes, positive exponents go up top and negative exponents go on the bottom.
Typically when I start a problem like this I put all exponents as positive. For example
I put this as
Then I simplify.
Similar Math Help Forum Discussions
Search Tags
|
{"url":"http://mathhelpforum.com/algebra/208892-simplifying-expression.html","timestamp":"2014-04-16T11:53:15Z","content_type":null,"content_length":"34497","record_id":"<urn:uuid:35313b69-6e42-46e2-be40-25e04e61a072>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00659-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Math Forum Discussions
Math Forum
Ask Dr. Math
Internet Newsletter
Teacher Exchange
Search All of the Math Forum:
Views expressed in these public forums are not endorsed by Drexel University or The Math Forum.
Topic: correlation function
Replies: 3 Last Post: Nov 19, 2012 5:04 PM
Messages: [ Previous | Next ]
Jure correlation function
Posted: Nov 18, 2012 4:06 AM
Posts: 10
Registered: 9/25/ Hello,
12 I'm having hard time calculating correlation (autocorrelation) function of
two lists (list). I'm trying two different ways of calculating it. One way
is to use fourier transform and second way is to use Mathematica's function
ListCorrelate. I get different results but have no idea why. Here's my code:
korelacija1 = ListCorrelate[data, data, {1, 1}];
korelacija11 = Abs[InverseFourier[Abs[Fourier[data]]^2]];
All elements of "data" are real. I have two Abs in second line because for some reason InverseFourier returns small imaginary parts - I know it shouldn't. It's probably only
numerical error.
Thanks for help.
|
{"url":"http://mathforum.org/kb/thread.jspa?threadID=2415784&messageID=7924958","timestamp":"2014-04-17T05:20:35Z","content_type":null,"content_length":"19862","record_id":"<urn:uuid:51651ad0-f940-4c8c-bc17-0813568e6205>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00196-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Ranking and Unranking Permutations in Linear Time
Wendy Myrvold, Department of Computer Science, University of Victoria, Canada.
Frank Ruskey, Department of Computer Science, University of Victoria, Canada.
A ranking function for the permutations on n symbols assigns a unique integer in the range [0, n!-1] to each of the n! permutations. The corresponding unranking function is the inverse: given an
integer between 0 and n!-1, the value of the function is the permutation having this rank. We present simple ranking and unranking algorithms for permutations that can be computed using O(n)
arithmetic operations.
• The postscript file is 127,281 bytes, the dvi file is 19,732 bytes.
• Please send me a note if you download a copy -- Thanks!
• Appears in Information Processing Letters, 79 (2001) 281-284.
• This message by George Russell appeared on the newsgroup sci.math.research in July 2001. It contains essentially the same algorithm developed in our paper, implemented in FORTRAN. He reports
(private communication) that the algorithm was developed in 1993, but was never published.
• The paper is referenced in Knuth volume 4, prefascicle 2B, Generating All Permutations.
• Typo: On page 282 of the IPL version, in the second line the big-O expression should be O( (n log n)/(log log n) ).
Selected papers that refer to this paper:
• Gustavo Rodrigues Galvao and Zanoni Dias, Computing rearrangement distance of every permutation in the symmetric group, Proceedings of the 2011 ACM Symposium on Applied Computing (SAC '11),
(2011) 106-107.
• Stefan Edelkamp, Damian Sulewski, and Cengizhan Yucel, Perfect Hashing for State Exploration on the GPU, Proceedings of the 20th Internation Conference on Automated Planning and Scheduling (ICAPS
2010), 57-64.
• Daniel J. Ford, Encodings of cladograms and labeled trees Electronic Journal of Combinatorics, 17 (2010) 38 pages.
• Sukriti Bhattacharya and Agostino Cortesi, A Distortion Free Watermark Framework for Relational Databases, 4th International Conference on Software and Data Technologies (ICSOFT 2009) 229-234.
• Jose Antonio Pascual, Jose Antonio Lozano, and Jose Miguel Alonso, Parallelization of the Quadratic Assignment Problem on the Cell, XX Jornadas de Paralelismo. 16-18 Septiembre, A Coruń.
• Ting Kuo, A New Method for Generating Permutations in Lexicographic Order, Jounal of Science and Engineering Technology, 5 (2009) 21-29.
• Richard E. Korf, Linear-time disk-based implicit graph search, Journal of the ACM (JACM), 55 (2008) ???-???.
• Blai Bonet, Efficient Algorithms to Rank and Unrank Permutations in Lexicographic Order, Workshop on Search in Artificial Intelligence and Robotics at AAAI 2008.
• Xiapu Luo, Edmond W. W. Chan, and Rocky K. C. Chang, Crafting Web Counters into Covert Channels, Proc. 22nd IFIP TC-11 International Information Security Conference (SEC’07), May 2007.
• Ariel Felner, Richard E. Korf, Ram Meshulam and Robert C. Holte, Compressed Pattern Databases, Journal of Artificial Intelligence Research (JAIR), to appear, 2007.
• Mark C. Wilson, Random and Exhaustive Generation of Permutations and Cycles, arXiv, math.CO/0702753.
• Antti Valmari, What the small Rubik’s cube taught me about data structures, information theory, and randomisation, International Journal on Software Tools for Technology Transfer (STTT),
Springer, 8 (2006) 180-194.
• Stefan Edelkamp and Tilman Mehler, Incremental Hashing in State Space Search, 18th Workshop "New Results in Planning, Scheduling and Design" (PUK2004), Ulm, 24. September, 2004 (part of the 27th
German Conference on Artificial Intelligence).
• Roberto Grossi and Iwona Bialynicka-Birula, Rank-Sensitive Data Structures, String Processing and Information Retrieval (SPIRE 2005), Buenos Aires, November 2-4, 2005.
• Richard E. Korf and Peter Schultze, Large-Scale Parallel Breadth-First Search, National Conference on Artificial Intelligence, AAAI-05, 1380-1385, July 2005.
• Online Encyclopedia of Integer Sequences, Sequences A060117 and A060125 (submitted by Antti Karttunen).
Back to list of publications.
|
{"url":"http://webhome.cs.uvic.ca/~ruskey/Publications/RankPerm/RankPerm.html","timestamp":"2014-04-16T13:02:30Z","content_type":null,"content_length":"5387","record_id":"<urn:uuid:762a290e-2be1-4431-8d3f-f0ffb481698e>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00392-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Anglers’ Fishing Problem
Karpowicz, Anna and Szajowski, Krzysztof (2010): Anglers’ Fishing Problem. Published in: Annals of the International Society of Dynamic Games , Vol. 12, No. Advances in Dynamic Games (August 2012):
pp. 327-349.
Download (302Kb) | Preview
The model considered here will be formulated in relation to the “fishing problem,” even if other applications of it are much more obvious. The angler goes fishing, using various techniques, and has
at most two fishing rods. He buys a fishing pass for a fixed time. The fish are caught using different methods according to renewal processes. The fish’s value and the interarrival times are given by
the sequences of independent, identically distributed random variables with known distribution functions. This forms the marked renewal–reward process. The angler’s measure of satisfaction is given
by the difference between the utility function, depending on the value of the fish caught, and the cost function connected with the time of fishing. In this way, the angler’s relative opinion about
the methods of fishing is modeled. The angler’s aim is to derive as much satisfaction as possible, and additionally he must leave the lake by a fixed time. Therefore, his goal is to find two optimal
stopping times to maximize his satisfaction. At the first moment, he changes his technique, e.g., by discarding one rod and using the other one exclusively. Next, he decides when he should end his
outing. These stopping times must be shorter than the fixed time of fishing. Dynamic programming methods are used to find these two optimal stopping times and to specify the expected satisfaction of
the angler at these times.
Item Type: MPRA Paper
Original Anglers’ Fishing Problem
Language: English
Keywords: Stopping time Optimal stopping Dynamic programming Semi-Markov process Marked renewal process Renewal–reward process Infinitesimal generator Fishing problem Bilateral approach Stopping
C - Mathematical and Quantitative Methods > C6 - Mathematical Methods; Programming Models; Mathematical and Simulation Modeling > C61 - Optimization Techniques; Programming Models;
Subjects: Dynamic Analysis
C - Mathematical and Quantitative Methods > C7 - Game Theory and Bargaining Theory > C72 - Noncooperative Games
C - Mathematical and Quantitative Methods > C7 - Game Theory and Bargaining Theory > C73 - Stochastic and Dynamic Games; Evolutionary Games; Repeated Games
Item ID: 41800
Depositing Krzysztof Szajowski
Date 08. Oct 2012 13:28
Last 20. Feb 2013 08:12
Boshuizen, F., Gouweleeuw, J.: General optimal stopping theorems for semi-Markov processes. Adv. Appl. Probab. 4, 825–846 (1993)
Boshuizen, F.A.: A general framework for optimal stopping problems associated with multivariate point processes, and applications. Sequential Anal. 13(4), 351–365 (1994)
Brémaud, P.: Point Processes and Queues. Martingale Dynamics. Springer, Berlin (1981)
Davis, M.H.A.: Markov Models and Optimization. Chapman & Hall, New York (1993)
Ferenstein, E., Pasternak-Winiarski, A.: Optimal stopping of a risk process with disruption and interest rates. In: Brèton, M., Szajowski, K. (eds.) Advances in Dynamic Games:
Differential and Stochastic Games: Theory, Application and Numerical Methods, Annals of the International Society of Dynamic Games, vol. 11, 18 pp. Birkhäuser, Boston (2010)
Ferenstein, E., Sierociński, A.: Optimal stopping of a risk process. Appl. Math. 24(3), 335–342 (1997)
Ferguson, T.: A Poisson fishing model. In: Pollard, D., Torgersen, E., Yang, G. (eds.) Festschrift for Lucien Le Cam: Research Papers in Probability and Statistics. Springer, Berlin
Haggstrom, G.: Optimal sequential procedures when more then one stop is required. Ann. Math. Stat. 38, 1618–1626 (1967)
Jacobsen, M.: Point process theory and applications. Marked point and piecewise deterministic processes. In: Prob. and its Applications, vol. 7. Birkhäuser, Boston (2006)
Jensen, U.: An optimal stopping problem in risk theory. Scand. Actuarial J. 2, 149–159 (1997) CrossRef
Jensen, U., Hsu, G.: Optimal stopping by means of point process observations with applications in reliability. Math. Oper. Res. 18(3), 645–657 (1993) CrossRef
Karpowicz, A.: Double optimal stopping in the fishing problem. J. Appl. Prob. 46(2), 415–428 (2009). DOI 10.1239/jap/1245676097
Karpowicz, A., Szajowski, K.: Double optimal stopping of a risk process. GSSR Stochast. Int. J. Prob. Stoch. Process. 79, 155–167 (2007) CrossRef
Kramer, M., Starr, N.: Optimal stopping in a size dependent search. Sequential Anal. 9, 59–80 (1990)
Muciek, B.K., Szajowski, K.: Optimal stopping of a risk process when claims are covered immediately. In: Mathematical Economics, Toru Maruyama (ed.) vol. 1557, pp. 132–139. Research
Institute for Mathematical Sciences, Kyoto University, Kyoto 606-8502 Japan Kôkyûroku (2007)
Nikolaev, M.: Obobshchennye posledovatelnye procedury. Litovskiui Mat. Sb. 19, 35–44 (1979)
Rolski, T., Schmidli, H., Schimdt, V., Teugels, J.: Stochastic Processes for Insurance and Finance. Wiley, Chichester (1998)
Shiryaev, A.: Optimal Stopping Rules. Springer, Berlin (1978)
Starr, N.: Optimal and adaptive stopping based on capture times. J. Appl. Prob. 11, 294–301 (1974)
Starr, N., Wardrop, R., Woodroofe, M.: Estimating a mean from delayed observations. Z. f ür Wahr. 35, 103–113 (1976)
Starr, N., Woodroofe, M.: Gone fishin’: optimal stopping based on catch times. Report No. 33, Department of Statistics, University of Michigan, Ann Arbor, MI (1974)
Szajowski, K.: Optimal stopping of a 2-vector risk process. In: Stability in Probability, Jolanta K. Misiewicz (ed.), Banach Center Publ. 90, 179–191. Institute of Mathematics, Polish
Academy of Science, Warsaw (2010), doi:10.4064/bc90-0-12
URI: http://mpra.ub.uni-muenchen.de/id/eprint/41800
|
{"url":"http://mpra.ub.uni-muenchen.de/41800/","timestamp":"2014-04-18T15:55:01Z","content_type":null,"content_length":"27302","record_id":"<urn:uuid:670355dd-039d-4d01-b48e-0dee7b0870af>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00158-ip-10-147-4-33.ec2.internal.warc.gz"}
|
This Article
Bibliographic References
Add to:
An Empirical Study of the Simulation of Various Models used for Images
May 1994 (vol. 16 no. 5)
pp. 507-513
ASCII Text x
A.J. Gray, J.W. Kay, D.M. Titterington, "An Empirical Study of the Simulation of Various Models used for Images," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 16, no. 5, pp.
507-513, May, 1994.
BibTex x
@article{ 10.1109/34.291447,
author = {A.J. Gray and J.W. Kay and D.M. Titterington},
title = {An Empirical Study of the Simulation of Various Models used for Images},
journal ={IEEE Transactions on Pattern Analysis and Machine Intelligence},
volume = {16},
number = {5},
issn = {0162-8828},
year = {1994},
pages = {507-513},
doi = {http://doi.ieeecomputersociety.org/10.1109/34.291447},
publisher = {IEEE Computer Society},
address = {Los Alamitos, CA, USA},
RefWorks Procite/RefMan/Endnote x
TY - JOUR
JO - IEEE Transactions on Pattern Analysis and Machine Intelligence
TI - An Empirical Study of the Simulation of Various Models used for Images
IS - 5
SN - 0162-8828
EPD - 507-513
A1 - A.J. Gray,
A1 - J.W. Kay,
A1 - D.M. Titterington,
PY - 1994
KW - image reconstruction; Bayes methods; Markov processes; Markov random fields; Bayesian image restoration methods; moderate-to-large scale clustering; iterative algorithms; Markov mesh models;
strong directional effects; parameter estimation
VL - 16
JA - IEEE Transactions on Pattern Analysis and Machine Intelligence
ER -
Markov random fields are typically used as priors in Bayesian image restoration methods to represent spatial information in the image. Commonly used Markov random fields are not in fact capable of
representing the moderate-to-large scale clustering present in naturally occurring images and can also be time consuming to simulate, requiring iterative algorithms which can take hundreds of
thousands of sweeps of the image to converge. Markov mesh models, a causal subclass of Markov random fields, are, however, readily simulated. We describe an empirical study of simulated realizations
from various models used in the literature, and we introduce some new mesh-type models. We conclude, however, that while large-scale clustering may be represented by such models, strong directional
effects are also present for all but very limited parameterizations. It is emphasized that the results do not detract from the use of Markov random fields as representers of local spatial properties,
which is their main purpose in the implementation of Bayesian statistical approaches to image analysis. Brief allusion is made to the issue of parameter estimation.
[1] K. Abend, T. J. Harley, and L. N. Kanal, "Classification of binary random patterns,"IEEE Trans. Inform. Theory, vol. IT-4, pp. 538-544, 1965.
[2] N. Ahuja and A. Rosenfeld, "Mosaic models for textures,"IEEE Trans. Pattern Anal. Machine Intell., vol. PAMI-3, pp. 1-11, 1981.
[3] N. Ahuja and B. J. Schachter, "Image models,"Computing Surveys. vol. 13, pp. 374-397, 1981.
[4] M. S. Bartlett,The Statistical Analysis of Spatial Pattern. London: Chapman and Hall, 1975.
[5] R. J. Baxter, "Partition function of the eight-vertex lattice model,"Ann. Phys., vol. 70, 193-228, 1970.
[6] J. Besag, "Spatial interaction and the statistical analysis of lattice systems (with Discussion),"J. Roy. Statist. Soc. B, vol. 36, pp. 192-236, 1974.
[7] J. Besag, "Statistical analysis of non-lattice data,"The Statistician, vol. 24, pp. 179-195, 1975.
[8] J. Besag, "On the statistical analysis of dirty pictures (with discussion),"J. Roy. Statist. Soc. B, vol. 48, pp. 259-302, 1986.
[9] J. Besag and P. J. Green, "Spatial statistics and Bayesian computation,"J. Roy. Statist. Soc. B, vol. 55, pp. 25-37, 1993.
[10] C. C. Chen, "Markov random field models in image analysis," Ph.D. thesis, Michigan State Univ., East Lansing, MI, 1980.
[11] G. R. Cross and A. K. Jain, "Markov random field texture models,"IEEE Trans. Pattern. Anal. Machine Intell., vol. PAMI-5, pp. 25-39, 1983.
[12] H. Derin and H. Elliott, "Modeling and segmentation of noisy and textured images using Gibbs random fields,"IEEE Trans. Pattern Anal. Machine Intell., vol. PAMI-9, pp. 39-55, Jan. 1987.
[13] P. A. Devijver, "Image segmentation using causal Markov random field models," inProc. 4th Int. Conf. Pattern Recognit., 1988, pp. 131-143.
[14] P. A. Devijver, "Real-time modelling of image sequences based on hidden Markov mesh random field models," Manuscript M-307, Phillips Res. Lab., Brussels, Belgium, 1989.
[15] R. C. Dubes and A. K. Jain, "Random field models in image analysis,"J. Appl. Statist., vol. 16, pp. 131-164, 1989.
[16] P. A. Flinn, "Monte-Carlo calculation of phase separation in a two-dimensional Ising system,"J. Statist. Phys., vol. 10, pp. 89-97, 1974.
[17] S. Geman and D. Geman, "Stochastic relaxation, Gibbs distributions and the Bayesian restoration of images,"IEEE Trans. Pattern. Anal. Machine Intell., vol. PAMI-6, pp. 721-741, 1984.
[18] A. J. Gray, "Simulating posterior Gibbs distributions: a comparison of the Swendsen-Wang and Gibbs sampler methods,"Statist. Computing, to appear, 1994.
[19] P. J. Green and X.-L. Han, "Metropolis methods, gaussian proposals and antithetic variables," inLecture Notes in Statistics, vol. 74, P. Baroneet al., Eds. Berlin: Springer, 1992, pp. 142-164.
[20] D. M. Greig, B. T. Porteous, and A. H. Seheult, "Exact maximum aposterioriestimation for binary images,"J. Roy. Statist. Soc. B, vol. 51, pp. 271-279, 1989.
[21] E. Ising, "Beitrag zur theorie des ferromagnetismus,"Zeitschrift Physik, vol. 31, p. 253, 1925.
[22] L. N. Kanal, "Markov mesh models,"Computer Graphics and Image Processing, vol. 12, pp. 371-375, 1980.
[23] N. Metropolis, A. W. Rosenbluth, M. N. Rosenbluth, A. H. Teller, and E. Teller, "Equations of state calculations by fast computing machines,"J. Chem. Phys., vol. 21, pp. 1087-1092, 1953.
[24] G. F. Newell and E. W. Montroll, "On the theory of the Ising model of ferromagnetism,"Revs. Modern Phys., vol. 25, pp. 353-389, 1953.
[25] L. Onsager, "Crystal Statistics I: A two-dimensional model with an order-disorder transition,"Phys. Rev., vol. 65, pp. 117-149, 1944.
[26] D. K. Pickard, "Statistical inference on lattices," Unpublished Ph.D. thesis, Australian Nat. Univ., 1977.
[27] D. K. Pickard, "A curious binary lattice process,"J. Appl. Probab., vol. 14, pp. 717-731, 1977.
[28] D. K. Pickard, "Unilateral Ising models,"Suppl. Adv. Appl. Probab., vol. 10, pp. 58-64, 1978.
[29] D. K. Pickard, "Unilateral Markov fields,"Adv. Appl. Probab., vol. 12, pp. 655-671, 1980.
[30] D. K. Pickard, "Inference for discrete Markov fields: The simplest non-trivial case,"J. Amer. Statist. Assoc., vol. 82, pp. 90-96, 1982.
[31] A. Possolo, "Estimation of binary Markov random fields," Tech. Rep. 77, Dept. of Statistics, GN-22, Univ. of Washington, Seattle, WA, 1986.
[32] R. B. Potts, "Some generalised order-disorder transformations," inProc. Cambridge Phil. Soc., vol. 48, 1952, pp. 106-109.
[33] W. Qian and D. M. Titterington, "Pixel labeling for three-dimensional scenes based on Markov mesh models,"Signal Processing, vol. 22, pp. 313-328, 1991.
[34] W. Qian and D. M. Titterington, "Multidimensional Markov chain models for image textures,"J. Roy. Statist. Soc. B, vol. 53, pp. 661-674, 1991.
[35] W. Qian and D. M. Titterington, "Estimation of parameters in hidden Markov models,"Phil. Trans. Roy. Soc. Lond. A, vol. 337, pp. 407-428, 1991.
[36] B. D. Ripley and M. D. Kirkland, "Iterative simulation methods,"J. Comput. Appl. Math., vol. 31, pp. 165-172, 1990.
[37] B. Schachter and N. Ahuja, "Random pattern generation processes,"Computer Graphics Image Processing, vol. 10, pp. 95-114, 1979.
[38] R. H. Swendsen and J.-S. Wang, "Nonuniversal critical dynamics in Monte Carlo simulations,"Phys. Rev. Lett., vol. 58, pp. 86-88, 1987.
[39] A. M. W. Verhagen, "A three-parameter isotropic distribution of atoms and the hard-core square lattice gas,"J. Chem. Phys., vol. 67, pp. 5060-5065, 1977.
[40] C. N. Yang, "The spontaneous magnetization of a two-dimensional Ising model,"Phys. Rev., vol. 85, pp. 808-816, 1952.
Index Terms:
image reconstruction; Bayes methods; Markov processes; Markov random fields; Bayesian image restoration methods; moderate-to-large scale clustering; iterative algorithms; Markov mesh models; strong
directional effects; parameter estimation
A.J. Gray, J.W. Kay, D.M. Titterington, "An Empirical Study of the Simulation of Various Models used for Images," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 16, no. 5, pp.
507-513, May 1994, doi:10.1109/34.291447
Usage of this product signifies your acceptance of the
Terms of Use
|
{"url":"http://www.computer.org/csdl/trans/tp/1994/05/i0507-abs.html","timestamp":"2014-04-19T05:07:59Z","content_type":null,"content_length":"58122","record_id":"<urn:uuid:8c91fae4-e020-42e2-a347-bbaf42186049>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00264-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Science Guys
The Science Guys
July 2000
I am 12 years old and weigh about 100 pounds. How many helium balloons would it take to lift me?
About 250 B.C. King Hieron of Greece asked Archimedes to determine whether his crown was made of pure gold or an alloy. The task had to be performed without destroying the crown. Legend has it that
Archimedes figured out a solution while taking a bath. He observed that his arms did not appear to weigh as much when they were in the water. This gave him the idea for what is now known as
Archimedes Principle. Legend further states that Archimedes was so excited at his discovery, he leaped from his bath and ran naked down the street shouting "Eureka" which is Greek for "I found it."
Archimedes’ principle gives us the answer to our question today. The principle states, "any body in a fluid is buoyed up by a force equal to the weight of the fluid displaced." In the case of a
person in water, they are ’buoyed up’ with a force equal to the weight of the water their body displaces or pushes out of the way. In the case of a balloon, the fluid is the surrounding air. Most
people do not think of air as having weight but it does, specifically 0.0807 pounds per cubic foot.
When a less dense (lighter) fluid is placed in a more dense (heavier) fluid, the lighter one floats on top. This lighter fluid is ’held up’ by a buoyant force. That buoyant force is equal to the
weight of the heavier displaced fluid. Since the displaced fluid is heavier, the buoyant force is greater that the weight of the lighter fluid so the lighter fluid floats on the heavier fluid.
Helium is less dense than air. Helium has 0.0114 pounds per cubic foot. For a one cubic foot helium filled balloon , gravity pulls the down on the helium with a force of 0.0114 pounds while the air
pushes up with a force equal to the weight of the air the helium displaced, or 0.0807 pounds. The difference in the up and down force is 0.069 pounds. Therefore each cubic foot of helium could lift
0.069 pounds. In order to lift 100 pounds (which would include the weight of your load, the balloon, and the helium) you would need 1449 cubic feet of helium. This would require a balloon with about
a 15.5 foot diameter. If instead you used small spherical (one foot diameter) balloons (which holds about 0.526 cubic feet of gas), it would take over 2754 of them to lift the 100 pounds.
Therefore a handful of small balloons will not lift you off the ground. It is possible however to lift a person as evidenced by the following story that appeared in the newspaper in 1997. A
California man wanted to recline in a lawn chair about 30 feet above his back yard, buoyed up by weather balloons. When he wanted to return to the ground he planned to pop balloons one by one with a
pellet gun.
The poor fellow attached 45 large weather balloons to a lawn chair and inflated them with helium. He climbed into the lawn chair and released the anchor rope. The buoyant force rocketed him into the
blue and he came to equilibrium at about 11,000 feet! Too afraid to pop a balloon, he eventually strayed into the air approach corridor of Los Angeles International Airport where he was spotted by a
passing jetliner. After being rescued by a helicopter, he was arrested for violating the air space of LA International! If only he had taken the time to apply physics to the situation, perhaps he
could have avoided the ordeal.
|
{"url":"http://www.uu.edu/dept/physics/scienceguys/2000July.cfm","timestamp":"2014-04-18T15:07:43Z","content_type":null,"content_length":"7308","record_id":"<urn:uuid:778f8ef9-1f22-4520-814c-3a1807f5d49e>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00040-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Next: Preliminaries Up: Research Note: New Polynomial Previous: Research Note: New Polynomial
Abduction consists in searching for a plausible explanation for a given observation. For instance, if explanation, here knowledge base (here query (hypotheses among which the explanations have to be
chosen, and by a preference criterion among them.
The problem of abduction proved its practical interest in many domains. For instance, it has been used to formalize text interpretation [HSAM93], system [CM98,SW01] or medical diagnosis [BATJ89,
Section 6]. It is also closely related to configuration problems [AFM02], to the ATMS/CMS [RK87], to default reasoning [SL90] and even to induction [Goe97].
We are interested here in the complexity of propositional logic-based abduction, i.e., we assume both the knowledge base and the query are represented by propositional formulas. Even in this
framework, many different formalizations have been proposed in the literature, mainly differing about the definition of an hypothesis and that of a best explanation [EG95]. We assume here that the
hypotheses are the conjunctions of literals formed upon a distinguished subset of the variables involved, and that a best explanation is one no proper subconjunction of which is an explanation (
subset-minimality criterion).
Our purpose is to exhibit new polynomial classes of abduction problems. We give a general algorithm for finding a best explanation in the framework defined above, independently from the syntactic
form of the formulas representing the knowledge base and the query. Then we explore the syntactic forms that allow a polynomial running time for this algorithm. We find new polynomial classes of
abduction problems, among which the one restricting the knowledge base to be given as a Horn DNF and the query as a positive CNF, and the one restricting the knowledge base to be given as an affine
formula and the query as a disjunction of linear equations. Our algorithm also unifies several previous such results.
The note is organized as follows. We first recall the useful notions of propositional logic (Section 2), formalize the problem (Section 3) and briefly survey previous work about the complexity of
abduction (Section 4). Then we give our algorithm (Section 5) and explore polynomial classes for it (Section 6). Finally, we discuss our results and perspectives (Section 7). For lack of space we
cannot detail proofs, but a longer version of this work, containing detailed proofs and examples, is available [Zan03].
Next: Preliminaries Up: Research Note: New Polynomial Previous: Research Note: New Polynomial Bruno Zanuttini 2003-06-30
|
{"url":"http://www.cs.cmu.edu/afs/cs/project/jair/pub/volume19/zanuttini03a-html/node1.html","timestamp":"2014-04-19T12:27:27Z","content_type":null,"content_length":"6969","record_id":"<urn:uuid:8f30e27d-0cb5-4609-8759-aa69c88d4bb5>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00103-ip-10-147-4-33.ec2.internal.warc.gz"}
|
188 helpers are online right now
75% of questions are answered within 5 minutes.
is replying to Can someone tell me what button the professor is hitting...
• Teamwork 19 Teammate
• Problem Solving 19 Hero
• Engagement 19 Mad Hatter
• You have blocked this person.
• ✔ You're a fan Checking fan status...
Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy.
This is the testimonial you wrote.
You haven't written a testimonial for Owlfred.
|
{"url":"http://openstudy.com/users/waleed_imtiaz/answered","timestamp":"2014-04-18T16:47:43Z","content_type":null,"content_length":"112334","record_id":"<urn:uuid:3f423d19-f74c-41ff-bf3c-30c28533b6df>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00202-ip-10-147-4-33.ec2.internal.warc.gz"}
|
place of decimals
place of decimals (plural places of decimals)
1. (arithmetic, dated, chiefly UK, usually plural) A decimal place.
□ 1841, Charles Hutton, Thomas Stephens Davies, A Course of Mathematics in Two Volumes, Composed for the Use of the Royal Military Academy, page 56:
It will always be better to calculate one place of decimals more than are required by the question.
□ 1980, Michael H. Stone, The Borderline Syndromes: Constitution, Personality, and Adaptation, page 101:
Find the positive root of the equation e^x – 3x = 0, correct to 3 places of decimals.
□ 1995, G. N. Watson, A Treatise on the Theory of Bessel Functions, page 655:
In consequence of the need of Tables of J[n] (x) with fairly large values of n and x for Astronomical purposes, Hansen constructed a Table of J[0] (x) and J[i] (x) to six places of
decimals with a range from x = 0 to x = 10.0 with interval 0.1.
□ 2001, Keith Pledger, Edexcel GCSE Mathematics, page 103:
To round a decimal correct to one place of decimals (1 d.p.) you look at the second place of decimals.
□ 2005, Veerarajan & Ramachandran, Numerical Methods: With Programs In C, page 3-54:
Use Newton Raphson method to find the values of (i) VT2 and (ii) — , correct to four places of decimals.
□ 2006, Bansi Lal, Topics in Integral Calculus, page 272:
In the application of Simpson's rule, when the number of places of decimals to which the answer is required, is not mentioned, it is usual to calculate the answer correct to three places
of decimals.
position of digit to the right of the decimal point
— see decimal place
Last modified on 4 December 2013, at 16:37
|
{"url":"http://en.m.wiktionary.org/wiki/place_of_decimals","timestamp":"2014-04-19T22:22:34Z","content_type":null,"content_length":"16735","record_id":"<urn:uuid:76ba306d-27b1-4684-8e4d-9d1f2a16880d>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00556-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Computer Vision (CSE 455), Winter 2003
Project 1: Image Scissors
Assigned: Thursday, Jan 09, 2003
Due: Wednesday, Jan 22, 2003 (by 11:59pm)
In this project, you will create a tool that allows a user to cut an object out of one image and paste it into another. The tool helps the user trace the object by providing a "live wire" that
automatically snaps to and wraps around the object of interest. You will then use your tool to create a composite image. The class will vote on the best composites.
Forrest Gump shaking hands with J.F.K.
You will be given a working skeleton program, which provides the user interface elements and data structures that you'll need for this program. This skeleton is described below. We have provided a
sample solution executable and test images. Try this out to see how your program should run.
This program is based on the paper Intelligent Scissors for Image Composition, by Eric Mortensen and William Barrett, published in the proceedings of SIGGRAPH 1995. The way it works is that the user
first clicks on a "seed point" which can be any pixel in the image. The program then computes a path from the seed point to the mouse cursor that hugs the contours of the image as closely as
possible. This path, called the "live wire", is computed by converting the image into a graph where the pixels correspond to nodes. Each node is connected by links to its 8 immediate neighbors. Note
that we use the term "link" instead of "edge" of a graph to avoid confusion with edges in the image. Each link has a cost relating to the derivative of the image across that link. The path is
computed by finding the minimum cost path in the graph, from the seed point to the mouse position. The path will tend to follow edges in the image instead of crossing them, since the latter is more
expensive. The path is represented as a sequence of links in the graph.
Next, we describe the details of the cost function and the algorithm for computing the minimum cost path. The cost function we'll use is a bit different than what's described in the paper, but
closely matches what was discussed in lecture.
As described in the lecture notes, the image is represented as a graph. Each pixel (i,j) is represented as a node in the graph, and is connected to its
8 neighbors in the image by graph links (labeled from 0 to 7), as shown in the following figure.
Cost Function
To simplify the explanation, let's first assume that the image is grayscale instead of color (each pixel has only a scalar intensity, instead of a RGB triple) as a start. The same approach is easily
generalized to color images.
● Computing cost for grayscale images
Among the 8 links, two are horizontal (links 0 and 4), two are vertical (links 2 and 6), and the rest are diagonal. The magnitude of the intensity derivative across the diagonal links, e.g. link1, is
approximated as:
The magnitude of the intensity derivative across the horizontal links, e.g. link 0, is approximated as:
D(link 0)=|(img(i,j-1) + img(i+1,j-1))/2 - (img(i,j+1) + img(i+1,j+1))/2|/2
Similarly, the magnitude of the intensity derivative across the vertical links, e.g. ln2, is approximated as:
We compute the cost for each link, cost(link), by the following equation:
where maxD is the maximum magnitude of derivatives across links over in the image, e.g., maxD = max{D(link) | forall link in the image}, length(link) is the length of the link. For example, length
(link 0) = 1, length(link 1) = sqrt(2) and length(link 2) = 1.
If a link lies along an edge in an image, we expect that the intensity derivative across that link is large and accordingly, the cost of link is small.
As in the grayscale case, each pixel has eight links. We first compute the magnitude of the intensity derivative across a link, in each color channel independently, denoted as
( DR(link),DG(link),DB(link) ).
Then the magnitude of the color derivative across link is defined as
D(link) = sqrt( (DR(link)*DR(link)+DG(link)*DG(link)+DB(link)*DB(link))/3 ).
Then we compute the cost for link link in the same way as we do for a gray scale image:
Notice that cost(link 0) for pixel (i,j) is the same as cost(link 4) for pixel (i+1,j). Similar symmetry property also applies to vertical and diagonal links.
For debugging perpose, you may want to scale down each link cost by a factor of 1.5 or 2 so that they can be converted to byte format without clamping to[0,255]
● Using cross correlation to compute link intensity derivatives
You are *required* to implement the D(link) formulas above using 3x3 cross correlation. Each of the eight link directions will require using a different cross correlation kernel. You will need to
figure out for yourself what the proper entries in each of the eight kernels will be.
Computing the Minimum Cost Path
The pseudo code for the shortest path algorithm in the paper is a variant of Dijkstra's shortest path algorithm, which is described in any of the classic algorithm books (including text books used in
data structures courses like 326). You could also refer to any of the classic algorithm text books (e.g.,
Introduction to Algorithms by Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, and Cliff Stein, published by MIT Press). Here is some pseudo code which is equivalent to the algorithm in the
SIGGRAPH paper, but we feel is easier to understand.
procedure LiveWireDP
input: seed, graph
output: a minimum path tree in the input graph with each node pointing to its predecessor along the minimum cost path to that node from the seed. Each node will also be assigned a total cost,
corresponding to the cost of the the minimum cost path from that node to the seed.
comment: each node will experience three states: INITIAL, ACTIVE, EXPANDED sequentially. the algorithm terminates when all
nodes are EXPANDED. All nodes in graph are initialized as INITIAL. When the algorithm runs, all ACTIVE nodes are kept in a priority queue, pq, ordered by the current total cost from the node to the
initialize the priority queue pq to be empty;
initialize each node to the INITIAL state;
set the total cost of seed to be zero;
insert seed into pq;
while pq is not empty
extract the node q with the minimum total cost in pq;
mark q as EXPANDED;
for each neighbor node r of q
if r has not been EXPANDED
if r is still INITIAL
insert r in pq with the sum of the total cost of q and link cost from q to r as its total cost;
mark r as ACTIVE;
else if r is ACTIVE, e.g., in already in the pq
if the sum of the total cost of q and link cost between q and r is less than the total cost of r
update the total cost of r in pq;
We provide the priority queue functions that you will need in the skeleton code (implemented as a binary heap). These are: ExtractMin, Instert, and Update.
Skeleton Code
You can download the skeleton files. The code is compiled in Visual C++ 7.0, and organized by the workspace file iScissor.sln. All the source files (.h/.cpp) are in the subdirectory src. There are 28
files total, but many are just user interface files that are automatically generated by fluid, the FLTK user interface building tool. Here is a description of what's in there:
● ImageLib/*.cpp/h are files taking care of image file I/O. The only image format we support in this tool is targa (.tga) format, which is also readable in Photoshop and most other image viewing
and editing tools. One reason we like tga is that it's possible to store a transparency (alpha) channel with the RGB image data, in 32 bit mode. You shouldn't have to change these files.
● IScissorPanelUI.h/cpp, BrushConfigUI.h/cpp, FltDesignUI.h/cpp, ImgFilterUI.h/cpp, and HelpPageUI.h/cpp define four types of windows used in the tool. They are generated by fluid through
BrushConfigUI.fl, FltDesignUI.fl, ImgFilterUI.fl, and HelpPageUI.fl. You don't have to worry too much about them if you don't want to change the structures of the windows. If you do decide to
change the UI, you may prefer using fluid to do so.
● imgflt.h is a header file used in most of the files.
● FltAux.h defines a few auxiliary structures/routines for the application.
● ImgFltMain.cpp is the main file where you want to start reading.
● correlation.h/cpp are the files where your image filtering code will go.
● ImgView.h/cpp are files where most of the UI requests are implemented. In ImgView.cpp, you want to finish handle() (for extra credit only) so that it applies the brush filter when the left mouse
button is pressed and moved.
● PriorityQueue.h, which defines several template classes for dynamic array, binary heap, and doubly linked list. They are useful for representing contours and searching the minimum path tree.
ImgView contains most of the data structures and handles interface messages. You will work with iScissor.cpp most often.
Data structures
The main data structure that you will use in Project 1 is the Pixel Node.
Pixel Node
Use the following Node structure when computing the minimum path tree.
struct Node{
double linkCost[8];
int state;
double totalCost;
Node *prevNode;
int column, row;
//other unrelated fields;}
linkCost contains the costs of each link, as described above.
state is used to tag the node as being INITIAL, ACTIVE, or EXPANDED during the min-cost tree computation.
totalCost is the minimum total cost from this node to the seed node.
prevNode points to its predecessor along the minimum cost path from the seed to that node.
column and row remember the position of the node in original image so that its neighboring nodes can be located.
For visualization purposes, we provide code to convert a pixel node array into an image that displays the computed cost values in the user interface. This image buffer, called Cost Graph, has the
structure shown below. Cost Graph has 3W columns and 3H rows and is obtained by expanding the original image by a factor of 3 in both the horizontal and vertical directions. For each 3 by 3 unit, the
RGB(i,j) color is saved at the center and the eight link costs, as described in the Cost Function section, are saved in the 8 corresponding neighbor pixels. The link costs shown are the average of
the costs over the RGB channels, as described above (NOT the per-channel costs). The Cost Graph may be viewed as an RGB image in the interface, (dark = low cost, light = high cost).
││Cost(link3)│Cost(link2) │Cost(link1)│││Cost(link3)│Cost(link2)│Cost(link1)│││Cost(link3)│Cost(link2) │Cost(link1)││
││Cost(link4)│RGB(i-1,j-1)│Cost(link0)│││Cost(link4)│RGB(i,j-1) │Cost(link0)│││Cost(link4)│RGB(i+1,j-1)│Cost(link0)││
││Cost(link5)│Cost(link6) │Cost(link7)│││Cost(link5)│Cost(link6)│Cost(link7)│││Cost(link5)│Cost(link6) │Cost(link7)││
│┌───────────┬───────────┬───────────┐ │┌───────────┬───────────┬───────────┐│┌───────────┬───────────┬───────────┐ │
││Cost(link3)│Cost(link2)│Cost(link1)│ ││Cost(link3)│Cost(link2)│Cost(link1)│││Cost(link3)│Cost(link2)│Cost(link1)│ │
│├───────────┼───────────┼───────────┤ │├───────────┼───────────┼───────────┤│├───────────┼───────────┼───────────┤ │
││Cost(link4)│RGB(i-1,j) │Cost(link0)│ ││Cost(link4)│ RGB(i,j) │Cost(link0)│││Cost(link4)│RGB(i+1,j) │Cost(link0)│ │
│├───────────┼───────────┼───────────┤ │├───────────┼───────────┼───────────┤│├───────────┼───────────┼───────────┤ │
││Cost(link5)│Cost(link6)│Cost(link7)│ ││Cost(link5)│Cost(link6)│Cost(link7)│││Cost(link5)│Cost(link6)│Cost(link7)│ │
│└───────────┴───────────┴───────────┘ │└───────────┴───────────┴───────────┘│└───────────┴───────────┴───────────┘ │
││Cost(link3)│Cost(link2) │Cost(link1)│││Cost(link3)│Cost(link2)│Cost(link1)│││Cost(link3)│Cost(link2) │Cost(link1)││
││Cost(link4)│RGB(i-1,j+1)│Cost(link0)│││Cost(link4)│RGB(i,j+1) │Cost(link0)│││Cost(link4)│RGB(i+1,j+1)│Cost(link0)││
││Cost(link5)│Cost(link6) │Cost(link7)│││Cost(link5)│Cost(link6)│Cost(link7)│││Cost(link5)│Cost(link6) │Cost(link7)││
│ Pixel layout in Cost Graph (3W*3H) │
User Interface
File-->Save Contour, save image with contour marked;
File-->Save Mask, save compositing mask for PhotoShop;
Tool-->Scissor, open a panel to choose what to draw in the window
Work Mode:
Image Only: show original image without contour superimposed on it;
Image with Contour: show original image with contours superimposed on it;
Debug Mode:
Pixel Node: Draw a cost graph with original image pixel colors at the center of each 3by3 window, and black everywhere else;
Cost Graph: Draw a cost graph with both pixel colors and link costs, where you can see whether your cost computation is reasonable or not, e.g., low cost (dark intensity) for links along image edges.
Path Tree: show minimum path tree in the cost graph for the current seed; You can use the counter widget to simulate how the tree is computed by specifying the number of expanded nodes. The tree
consists of links with yellow color. The back track direction (towards the seed) goes from light yellow to dark yellow.
Min Path: show the minimum path between the current seed and the mouse position;
Ctrl+"+", zoom in;
Ctrl+"-", zoom out;
Ctrl+Left click first seed;
Left click, following seeds;
Enter, finish the current contour;
Ctrl+Enter, finish the current contour as closed;
Backspace, when scissoring, delete the last seed; otherwise, delete selected contour.
Select a contour by moving onto it. Selected contour is red, un-selected ones are green.
To Do
All the required work can be done in iScissor.cpp, iScissor.h, and correlation.cpp.
1. implement InitNodeBuf, which takes as input an image of size W by H and an allocated node buffer of the same dimensions, and initializes the column, row, and linkCost fields of each node in the
buffer. InitNodeBuf MUST make calls to pixel_filter.You will also need to modify the eight cross correlation kernels defined in iScissor.h.
2. implement pixel_filter, which takes as input an image of size W by H, a filter kernel, and a pixel position at which to compute the cross correlation.
3. implement image_filter, which applies a filter to an entire image. You may do this by making calls to pixel_filter if you wish.
4. implement LiveWireDP, which takes a node buffer and a seed position as input and computes the minimum path tree from the seed node.
5. implement MinimumPath, which takes as input a node buffer and a node position and returns a list of nodes along the minimum cost path from the input node to the seed node in the buffer (the seed
has a NULL predecessor).
The Artifact
For this assignment, you will turn in a final image (the artifact) which is a composite created using your program. Your composite can be derived from as many different images as you'd like. Make it
interesting in some way--be it humorous, thought provoking, or artistic! You should use your own scissoring tool to cut the objects out and save them to matte files, but then can use Photoshop or any
other image editing program to process the resulting mattes (move, rotate, adjust colors, warp, etc.) and combine them into your composite. Instructions on how to do this in Photoshop are provided
here. You should still turn in an artifact even if you don't get the program working fully, using the scissoring tool in the sample solution or in Photoshop.
Besides the artifact, please also turn in a web-page that includes all your original images, all the masks you made, and a description of the process, including anything special you did. The web-page
should be placed in the project1/artifact directory along with all the images in JPEG format. If you are unfamiliar with HTML you can use any web-page editor such as FrontPage, Word, or Visual Studio
7.0 to make your web-page.
The class will vote on the best composites.
Bells and Whistles
Here is a list of suggestions for extending the program for extra credit. You are encouraged to come up with your own extensions. We're always interested in seeing new, unanticipated ways to use this
One problem with the live wire is that it prefers shorter paths so will tend to cut through large object rather than wrap around them. One way to fix this is specify a specific region in which the
path must stay. As long as this region contains the object boundary but excludes most of the interior, the path will be forced to follow the boundary. One way of specifying such a region is to use a
thick (e.g., 50 pixel wide) paint brush. Implement this feature. Note: we already provide support for brushing a region using a selection buffer.
Modify the interface and program to allow blurring the image by different amounts before computing link costs. Describe your observations on how this changes the results.
Try different costs functions, for example the method described in Intelligent Scissors for Image Composition, and modify the user interface to allow the user to select different functions. Describe
your observations on how this changes the results.
The only point that doesn't snap to edges is the seed. Implement a seed snapping feature, where the seed is automatically moved to the closest edge.
Implement path cooling, as described in Intelligent Scissors for Image Composition.
Implement dynamic training, as described in Intelligent Scissors for Image Composition.
Add other interesting editing operations--see here for some inspiring examples (credit depends on what you implement!)
Implement a live wire with sub-pixel precision. You can find the position of an edge to sub-pixel precision by fitting a curve (e.g., a parabola) to the gradient magnitude values across an edge, and
finding the maximum. Another way (more complex but potentially better) of doing this is given in a follow on to Mortensen's scissoring paper. It is probably easiest to first compute the standard
(pixel-precision) live wire and then use one of these curve fitting techniques to refine it.
Last modified on September 30, 2003
|
{"url":"http://courses.cs.washington.edu/courses/cse455/03wi/projects/project1/web/project1.htm","timestamp":"2014-04-18T13:07:03Z","content_type":null,"content_length":"50004","record_id":"<urn:uuid:8b5b3399-7d11-4e8c-862f-20a9ccb7e097>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00313-ip-10-147-4-33.ec2.internal.warc.gz"}
|
PROPHET StatGuide: Do your data violate one-way ANOVA assumptions?
A lack of independence within a sample is often caused by the existence of an implicit factor in the data. For example, values collected over time may be serially correlated (here time is the
implicit factor). If the data are in a particular order, consider the possibility of dependence. (If the row order of the data reflect the order in which the data were collected, an index plot of
the data [data value plotted against row number] can reveal patterns in the plot that could suggest possible time effects.)
Whether the samples are independent of each other is generally determined by the structure of the experiment from which they arise. Obviously correlated samples, such as a set of observations
over time on the same subjects, are not independent, and such data would be more appropriately tested by a one-way blocked ANOVA or a repeated measures ANOVA. If you are unsure whether your
samples are independent, you may wish to consult a statistician or someone who is knowledgeable about the data collection scheme you are using.
Values may not be identically distributed because of the presence of outliers. Outliers are anomalous values in the data. Outliers tend to increase the estimate of sample variance, thus
decreasing the calculated F statistic for the ANOVA and lowering the chance of rejecting the null hypothesis. They may be due to recording errors, which may be correctable, or they may be due to
the sample not being entirely from the same population. Apparent outliers may also be due to the values being from the same, but nonnormal, population. The boxplot and normal probability plot
(normal Q-Q plot) may suggest the presence of outliers in the data.
The F statistic is based on the sample means and the sample variances, each of which is sensitive to outliers. (In other words, neither the sample mean nor the sample variance is resistant to
outliers, and thus, neither is the F statistic.) In particular, a large outlier can inflate the overall variance, decreasing the F statistic and thus perhaps eliminating a significant difference.
A nonparametric test may be a more powerful test in such a situation. If you find outliers in your data that are not due to correctable errors, you may wish to consult a statistician as to how to
The values in a sample may indeed be from the same population, but not from a normal one. Signs of nonnormality are skewness (lack of symmetry) or light-tailedness or heavy-tailedness. The
boxplot, histogram, and normal probability plot (normal Q-Q plot), along with the normality test, can provide information on the normality of the population distribution. However, if there are
only a small number of data points, nonnormality can be hard to detect. If there are a great many data points, the normality test may detect statistically significant but trivial departures from
normality that will have no real effect on the F statistic.
For data sampled from a normal distribution, normal probability plots should approximate straight lines, and boxplots should be symmetric (median and mean together, in the middle of the box) with
no outliers.
The one-way ANOVA's F test will not be much affected even if the population distributions are skewed, but the F test can be sensitive to population skewness if the sample sizes are seriously
unbalanced. If the sample sizes are not unbalanced, the F test will not be seriously affected by light-tailedness or heavy-tailedness, unless the sample sizes are small (less than 5), or the
departure from normality is extreme (kurtosis less than -1 or greater than 2).
Robust statistical tests operate well across a wide variety of distributions. A test can be robust for validity, meaning that it provides P values close to the true ones in the presence of
(slight) departures from its assumptions. It may also be robust for efficiency, meaning that it maintains its statistical power (the probability that a true violation of the null hypothesis will
be detected by the test) in the presence of those departures. The one-way ANOVA's F test is robust for validity against nonnormality, but it may not be the most powerful test available for a
given nonnormal distribution, although it is the most powerful test available when its test assumptions are met. In the case of nonnormality, a nonparametric test or employing a transformation
may result in a more powerful test.
The inequality of the population variances can be assessed by examination of the relative size of the sample variances, either informally (including graphically), or by a robust variance test
such as Levene's test. (Bartlett's test is even more sensitive to nonnormality than the one-way ANOVA's F test, and thus should not be used for such testing.) The effect of inequality of
variances is mitigated when the sample sizes are equal: The F test is fairly robust against inequality of variances if the sample sizes are equal, although the chance increases of incorrectly
reporting a significant difference in the means when none exists. This chance of incorrectly rejecting the null hypothesis is greater when the population variances are very different from each
other, particularly if there is one sample variance very much larger than the others.
The effect of inequality of the variances is most severe when the sample sizes are unequal. If the larger samples are associated with the populations with the larger variances, then the F
statistic will tend to be smaller than it should be, reducing the chance that the test will correctly identify a significant difference between the means (i.e., making the test conservative). On
the other hand, if the smaller samples are associated with the populations with the larger variances, then the F statistic will tend to be greater than it should be, increasing the risk of
incorrectly reporting a significant difference in the means when none exists. This chance of incorrectly rejecting the null hypothesis in the case of unbalanced sample sizes can be substantial
even when the population variances are not very different from each other.
Although the effect of unbalanced sample sizes and unequal population variances increases for smaller sample sizes, it does not decrease substantially if the sample sizes are increased without
changing the lack of balance in the sample sizes. For this reason, and because equal sample sizes mitigate the effect of unequal population variances, the best course is to keep the sample sizes
as equal as possible.
If both nonnormality and unequal variances are present, employing a transformation may be preferable. A nonparametric test like the Kruskal-Wallis test still assumes that the population variances
are comparable.
The plot of each sample's values against its mean (or its sample ID) will consist of vertical "stacks" of data points, one stack for each unique sample mean value. If the assumptions for the
samples' population distributions are correct, the stacks should be about the same length. Outliers may appear as anomalous points in the graph.
A fan pattern like the profile of a megaphone, with a noticeable flare either to the right or to the left as shown in the picture (one or more of the "stacks" of data points is much longer than
the others), suggests that the variance in the values increases in the direction the fan pattern widens (usually as the sample mean increases), and this in turn suggests that a transformation may
be needed.
Side-by-side boxplots of the samples can also reveal lack of homogeneity of variances if some boxplots are much longer than others, and reveal suspected outliers.
If one or more the sample sizes is small, it may be difficult to detect assumption violations. With small samples, violation assumptions such as nonnormality or inequality of variances are
difficult to detect even when they are present. Also, with small sample size(s) the one-way ANOVA's F test offers less protection against violation of assumptions.
Even if none of the test assumptions are violated, a one-way ANOVA with small sample sizes may not have sufficient power to detect any significant difference among the samples, even if the means
are in fact different. The power depends on the error variance, the selected significance (alpha-) level of the test, and the sample size. Power decreases as the variance increases, decreases as
the significance level is decreased (i.e., as the test is made more stringent), and increases as the sample size increases. With very small samples, even samples from populations with very
different means may not produce a significant one-way ANOVA F test statistic unless the sample variance is small. If a statistical significance test with small sample sizes produces a
surprisingly non-significant P value, then a lack of power may be the reason. The best time to avoid such problems is in the design stage of an experiment, when appropriate minimum sample sizes
can be determined, perhaps in consultation with a statistician, before data collection begins.
The one-way ANOVA test is not too sensitive to inequality of variances if the sample sizes are equal. If the sample sizes are not approximately equal, and especially if the larger sample
variances are associated with the smaller sample sizes, then the calculated F statistic may be dominated by the sample variances for the larger samples, so that the test is less likely to
correctly identify significant differences in the means if the larger samples are associated with the larger population variances, and more likely to report nonexistent differences in the means
if the smaller samples are associated with the larger population variances. Unbalanced sample sizes also increase any effect due to nonnormality, and require adjustments to be made in calculating
multiple comparisons tests.
In general, the multiple comparisons tests will be robust in those situations when the one-way ANOVA's F test is robust, and will be subject to the same potential problems with unequal variances,
particularly when the sample sizes are unequal. As with the one-way ANOVA itself, the best protection against the effects of possible assumption violations is to employ equal sample sizes.
Unequal variances may make individual comparisons of means inaccurate, because the multiple comparison techniques rely on a pooled estimate for the variance, based on the assumption that the
sample variances are equal.
Ideally, the sample sizes will be equal for all-pairwise multiple comparison tests. When they are not, an adjustment must be made to the calculations. The Tukey-Kramer adjustment (based on the
harmonic mean of each pair's sample sizes), which Prophet uses, may be conservative (that is, it may be less likely to flag means as different than the nominal significance level would suggest),
but in general performs well. An alternative procedure is to use the harmonic mean of all the sample sizes for all the pairwise comparisons. This has the disadvantage that the actual significance
level of the test is more often different from the nominal significance level than is the case with the Tukey-Kramer adjustment; worse, the actual significance level of the test may be greater
than the nominal significance level, meaning that the test is more likely to incorrectly flag a mean difference as significant.
|
{"url":"http://www.basic.northwestern.edu/statguidefiles/oneway_anova_ass_viol.html","timestamp":"2014-04-21T14:55:26Z","content_type":null,"content_length":"19633","record_id":"<urn:uuid:3c2a84fb-c3dc-4118-a7bf-8bec272d53b0>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00657-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Barrier Inhomogeneity and Electrical Properties of InN Nanodots/Si Heterojunction Diodes
Journal of Nanomaterials
Volume 2011 (2011), Article ID 189731, 7 pages
Research Article
Barrier Inhomogeneity and Electrical Properties of InN Nanodots/Si Heterojunction Diodes
^1Materials Research Centre, Indian Institute of Science, Bangalore 560012, India
^2Central Research Laboratory, Bharat Electronics, Bangalore 560013, India
Received 25 July 2011; Accepted 28 August 2011
Academic Editor: Zhi Li Xiao
Copyright © 2011 Mahesh Kumar et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any
medium, provided the original work is properly cited.
The electrical transport behavior of indium nitride nanodot-silicon (InN ND-Si) heterostructure Schottky diodes is reported here, which have been fabricated by plasma-assisted molecular beam epitaxy.
InN ND structures were grown on a 20nm InN buffer layer on Si substrates. These dots were found to be single crystalline and grown along [0 0 0 1] direction. Temperature-dependent current
density-voltage plots () reveal that the ideality factor () and Schottky barrier height (SBH) () are temperature dependent. The incorrect values of the Richardson constant () produced suggest an
inhomogeneous barrier. Descriptions of the experimental results were explained by using two models. First one is barrier height inhomogeneities (BHIs) model, in which considering an effective area of
the inhomogeneous contact provided a procedure for a correct determination of . The Richardson constant is extracted ~110A cm^−2 K^−2 using the BHI model and that is in very good agreement with the
theoretical value of 112A cm^−2 K^−2. The second model uses Gaussian statistics and by this, mean barrier height and were found to be 0.69eV and 113A cm^−2 K^−2, respectively.
1. Introduction
Group III nitrides represent a material class with promising electronic and optical properties [1]. Among these, InN possesses the lowest effective mass, the highest mobility, narrow band gap of
0.7–0.9eV and the highest saturation velocity [2, 3], which make this an attractive material for applications in solar cells, terahertz emitters, and detectors [4–6]. Good quality InN layers are
difficult to grow because of the low dissociation temperature of InN and the lack of an appropriate substrate [7, 8]. The above constraints lead to the formation of dislocations and strain in the
grown epitaxial layers resulting in the degradation of the device performance. Grandal et al. [9] reported that defect- and strain-free InN nanocolumns of very high crystal quality can be grown by
molecular beam epitaxy (MBE) with and without buffer layer on silicon substrates. Since silicon is the most sought semiconductor material, it is very important to understand the transport mechanism
of InN nanostructure-based devices and their behavior at different temperatures prior to their adoption in the fabrication of optoelectronic devices. In the present study, InN nanodot (ND) structures
were grown on Si substrates using an InN buffer layer by plasma-assisted MBE.
The interfaces of the semiconductor heterostructures are important part of semiconductor electronic and optoelectronic devices. One of the most interesting properties of a semiconductor
heterostructure interface is its Schottky barrier height (SBH) (), which is a measure of the mismatch of the energy levels for the majority carriers across the interface. Temperature-dependent
ideality factor () and SBH () and incorrect values of the Richardson constant () suggest an inhomogeneous barrier. Two techniques exist to modify the classic Richardson plot to extract the barrier
height, taking into account SBH lowering due to an inhomogeneous contact. The first involves extracting the barrier height relevant to analyses, a value referred to as the effective barrier height .
For this technique, Tung [10] developed a barrier height inhomogeneities (BHIs) model of such inhomogeneities on the electron transport across the metal semiconductor interface, taking into account
the possible presence of a distribution of nanometer-sized “patches” with lower barrier height embedded in a uniform high barrier background. Gammon et al. [11] has been reported that same BHI model
can be applied to inhomogeneous heterojunction Schottky contacts. The second technique uses Gaussian statistics to modify the classic Richardson plot to extract the barrier height, taking into
account SBH lowering due to an inhomogeneous contact.
In this article, we have studied the electrical transport behavior of InN ND/Si heterostructure Schottky diodes that have been fabricated by plasma-assisted MBE. These electrical measurements showed
a dependence on the temperature of the ideality factor () and the SBH () that can be explained by BHI and Gaussian statistics model. The BHI model provided a method for a correct determination of the
Richardson constant. Hence, the observed underestimation of the value could be attributed to an effective area, involved in the current transport, which may be significantly lower than the geometric
area of the diode.
2. Experimental Procedure
The growth system used in this study was a plasma-assisted MBE system equipped with a radio frequency (RF) plasma source. The base pressure in the system was below 1 × 10^−10 mbar. The -Si (111) (~2
× 10^16cm^−3) wafers were first chemically cleaned followed by dipping in 5% HF to remove the surface oxide and then thermally cleaned at 900°C for one hour in ultrahigh vacuum. The deposition of
InN consists of a two-step growth method. The initial low temperature buffer layer of thickness ~20nm was deposited at 400°C. Further, the substrate temperature was raised to 500°C to fabricate the
NDs. The duration of NDs growth was kept for 60min. The general set of growth conditions includes indium beam equivalent pressure (BEP), nitrogen flow rate, and rf-plasma power, which were kept at
2.1 × 10^−7mbar, 0.5sccm, and 350W, respectively. The structural evaluation of the as-grown NDs was carried out by the high-resolution X-ray diffraction (HRXRD), field emission scanning electron
microscopy (FESEM), and transmission electron microscopy (TEM). The aluminum contacts were fabricated by thermal evaporation. The adequate Ohmic nature of the contacts to InN and Si was verified. The
device transport characteristics were studied at various temperatures using the probe station attached with the KEITHLEY-236 source measure unit.
3. Results and Discussion
Figure 1 shows XRD pattern of the InN NDs grown on Si (111) substrate. From the figure it can be seen that except the substrate peaks, only (0002) InN diffracted peak at is present, indicating the
InN NDs to be highly oriented along the [0001] direction of the wurtzite structures of InN. Figure 2(a) shows a typical FESEM image of InN NDs and illustrates that the as-grown NDs are vertically
aligned and uniformly grown over the entire substrate. Figures 2(b)–2(d) represent typical TEM micrographs, high-resolution TEM (HRTEM) images, and selected area electron diffraction (SAED) of InN
NDs, respectively. The HRTEM shows one of the corner edges of an ND. The interplanar spacing, as observed from the fringe pattern of the HRTEM image, is 0.305nm, which corresponds to the (1000)
lattice spacing of InN. The SAED pattern shows clearly visible bright spots which represent that each ND is single crystalline.
The current density-voltage-temperature () characterization of the Schottky diodes was performed in order to determine the significant parameters ruling the current transport across the InN ND/Si
contact, namely, the ideality factor () and the SBH (). Figure 3(a) shows a schematic diagram of the device and measurement method. Figure 3(b) shows the semilogarithmic plot of the curves of the InN
ND/Si diodes, which were acquired in the temperature range of 120–450K. An excellent rectifying behavior was observed at lower temperatures, but at high temperatures, a deterioration was observed in
the rectifying nature, which may be due to thermally generated carrier tunneling. It is very clear from the curve that at fixed bias, the forward current increases with increasing temperature. This
indicates that the current is induced by the thermionic emission (TE). The values of SBH () and the ideality factor () for the junction were calculated as a function of measuring temperature by
fitting a line in the linear region of the forward curves using the TE equation [12, 13]: where is the saturation current density expressed by
Here, is the area of the diode, is the effective Richardson constant (112Acm^−2K^−2 for -type Si) [14], is the Boltzmann’s constant, is the electron charge, and is the measure temperature.
Figure 4 shows the ideality factor () and the SBH () of InN ND/Si Schottky diodes as function of temperature, which have been extracted from the forward bias characteristics. It may be clearly seen
that these parameters are both temperature dependent. In particular, by decreasing temperatures the ideality factor deviates from the unity and SBH () decreases from the value of 0.68eV at 450K to
0.48eV at 120K. The deviation from the ideality observed at lower temperatures suggests the presence of an inhomogeneous barrier. Further, to justify the application of the BHI model to our data,
the temperature dependence of the ideality factor was reported in a plot of versus and is shown in Figure 5, in which the straight line shows an ideal behavior of a Schottky contact (). The
experimental data could be fitted by a straight line, which is parallel to that of the ideal Schottky contact behavior and the ideality factor () can be expressed in the form , with K in present
case, as determined by the fit parameters. This behavior is referred as the “ anomaly” [15]. Such parametric dependence is typical of a real Schottky contact with a distribution of barrier
inhomogeneities [16].
Plotting the SBH () against their respective ideality factors as shown in Figure 6 displays the linear correlation between the two. Extrapolating a linear fit of the data to reveals the average
barrier height [17]. By the extrapolation of SBH () at of the experimental data reported in Figure 6, a value of eV was determined. Another technique used to extract the barrier height is via a
Richardson plot. The values of the saturation current density () were determined from the extrapolation at of the linear fit of the curves in the range 280–450K, that is, where the deviation from
the ideality is small. The conventional Richardson’s plot of ln() versus was obtained and is shown in Figure 7. From the linear fit to the plot, the Richardson constant () and effective Schottky
barrier height () were calculated to be ~16Acm^-2K^−2 and 0.57eV, respectively. The value of obtained from the conventional Richardson plot is more than a factor of seven lower with respect to
the theoretical value of 112Acm^−2K^−2 for -type Si. Also, the value of effective barrier height is less than the average barrier height , suggesting the formation of an inhomogeneous SBH at the
The expression of the current flowing through the InN ND/Si Schottky diodes can be obtained by adding the current through the low barrier patches to the current which passes through the surrounding
homogeneous regions. However, it is often reasonably assumed that the characteristic is dominated by the current flow through the low SBH patches. Considering the TE equations ((1) and (2)), the area
can be replaced with the product , which represents the total area of a contact made up of a low SBH. Individually, is the number of patches in the area and is the area of a single patch of a low
SBH. Equation (1) can be rewritten substituting in the product where the SBH modeling parameter replaces [.] The effective area of the low SBH patch can be expressed as where is the band bending at
the InN ND/Si interface and .
The experimental curves were fitted by using (3) with the number of patches as free parameter. A value of × 10^7 gave a good fit of the experimental data in the investigated range of temperatures.
The product represents the total effective area contributing to the current transport. Although is independent of temperature, as it gives the number of patches in the contact, the product exhibits
temperature dependence. Figure 8 shows that selecting the correct balance of the variables , , and ideality factor () provides a very good approximation to the linear fits of the experimental data.
The values of used were those extracted from each individual plot, as was illustrated in Figure 4. The values of and were arrived at by using a modified Richardson plot. The product is temperature
dependent, in order to eliminate the temperature dependence on the SBH within a Richardson plot, the total area dominated by the low SBH patches () is taken into consideration within the Richardson
plot is (3) rearranged to Figure 9 shows the modified Richardson’s plot. From the slope of the straight line fitting the data, the value of was obtained ~0.57eV, while from the intercept, a value of
the Richardson constant of Acm^-2K^−2 was determined and this is very close to the theoretical value of 112Acm^-2K^−2. The product of was found 8.27 × 10^−5cm^2 at room temperature,
represented 11% of the total area.
The other technique used the Gaussian statistics [18–20] to modify the classic Richardson plot to extract the barrier height, taking into account SBH lowering due to an inhomogeneous contact. This
technique uses Gaussian statistics to relate experimental values of SBH extracted from analysis , back to the mean SBH . The amount of patches () that will have SBH values falling between and the
value of SBH measured from the individual curves have a Gaussian distribution given by where is the standard deviation of the distribution and is the total number of patches in the area . Solving (1
), (2), and (6), the total forward current can be given by [11] The TE equation for current over the barrier is Combining (7) and (8) and rearranging this allows the values of SBH measured from the
analysis to be plotted against the inverse thermal energy, to extract and . This is shown in Figure 10 where and were found to be 0.0593eV and 0.70eV, respectively. Verification of this value can
be carried out using a Richardson plot, and rearranging (7) and (9): Figure 11 shows the resulting Richardson plot where was found to be 0.69eV and was 113Acm^−2K^−2.
4. Conclusions
We have demonstrated the electrical transport behavior of -InN ND/Si heterostructure Schottky diodes that have been fabricated by plasma-assisted MBE. Single-crystalline wurtzite structures of InN
NDs are verified by the X-ray diffraction and transmission electron microscopy. Temperature-dependent current-voltage plots () reveal that the ideality factor () and Schottky barrier height (SBH) ()
are temperature dependent and that incorrect values of the Richardson constant () are being produced, suggesting an inhomogeneous barrier. Descriptions of the experimental results were explained by
using BHI and Gaussian statistics models. The Richardson constant was extracted to be ~110Acm^−2K^−2 using the BHI model and that is in very good agreement with the theoretical value of 112Acm^
−2K^−2. The second model uses Gaussian statistics and by this, mean barrier height and were found to be 0.69eV and 113Acm^−2K^−2, respectively.
1. S. Nakamura, “The roles of structural imperfections in InGaN-based blue light- emitting diodes and laser diodes,” Science, vol. 281, no. 5379, pp. 956–961, 1998. View at Scopus
2. J. Wu, W. Walukiewicz, K. M. Yu et al., “Unusual properties of the fundamental band gap of InN,” Applied Physics Letters, vol. 80, no. 21, p. 3967, 2002. View at Publisher · View at Google
Scholar · View at Scopus
3. M. Kumar, B. Roul, T. N. Bhat et al., “Kinetics of self-assembled InN quantum dots grown on Si (111) by plasma-assisted MBE,” Journal of Nanoparticle Research, vol. 13, p. 1281, 2011. View at
Publisher · View at Google Scholar · View at Scopus
4. H. C. Yang, P. F. Kuo, T. Y. Lin, et al., “Mechanism of luminescence in InGaN/GaN multiple quantum wells,” Applied Physics Letters, vol. 76, no. 25, p. 3712, 2000.
5. E. Bellotti, B. K. Doshi, K. F. Brennan, J. D. Albrecht, and P. P. Ruden, “Ensemble Monte Carlo study of electron transport in wurtzite InN,” Journal of Applied Physics, vol. 85, no. 2, pp.
916–923, 1999. View at Scopus
6. C. Y. Chang, G. C. Chi, W. M. Wang et al., “Transport properties of InN nanowires,” Applied Physics Letters, vol. 87, no. 9, Article ID 093112, 2005. View at Publisher · View at Google Scholar
7. Q. Guo, O. Kato, and A. Yoshida, “Thermal stability of indium nitride single crystal films,” Journal of Applied Physics, vol. 73, no. 11, pp. 7969–7971, 1993. View at Publisher · View at Google
Scholar · View at Scopus
8. K. Wang and R. R. Reeber, “Thermal expansion and elastic properties of InN,” Applied Physics Letters, vol. 79, no. 11, pp. 1602–1604, 2001. View at Publisher · View at Google Scholar · View at
9. J. Grandal, M. A. Sánchez-García, E. Calleja, E. Luna, and A. Trampert, “Accommodation mechanism of InN nanocolumns grown on Si(111) substrates by molecular beam epitaxy,” Applied Physics Letters
, vol. 91, no. 2, Article ID 021902, 2007. View at Publisher · View at Google Scholar
10. R. T. Tung, “Electron transport at metal-semiconductor interfaces: general theory,” Physical Review B, vol. 45, no. 23, pp. 13509–13523, 1992. View at Publisher · View at Google Scholar · View at
11. P. M. Gammon, A. Pérez-Tomás, V. A. Shah et al., “Analysis of inhomogeneous Ge/SiC heterojunction diodes,” Journal of Applied Physics, vol. 106, no. 9, Article ID 093708, 2009. View at Publisher
· View at Google Scholar
12. L. Wang, M. I. Nathan, T. H. Lim, M. A. Khan, and Q. Chen, “High barrier height GaN Schottky diodes: Pt/GaN and Pd/GaN,” Applied Physics Letters, vol. 68, no. 9, pp. 1267–1269, 1996. View at
Publisher · View at Google Scholar · View at Scopus
13. H. Morkoc, Handbook of Nitride Semiconductors and Devices, Wiley-VCH, New York, NY, USA, 2008.
14. C. Hayzelden and J. L. Batstone, “Silicide formation and silicide-mediated crystallization of nickel-implanted amorphous silicon thin films,” Journal of Applied Physics, vol. 73, no. 12, pp.
8279–8289, 1993. View at Publisher · View at Google Scholar · View at Scopus
15. F. Roccaforte, F. La Via, V. Raineri, R. Pierobon, and E. Zanoni, “Richardson's constant in inhomogeneous silicon carbide Schottky contacts,” Journal of Applied Physics, vol. 93, no. 11, pp.
9137–9144, 2003. View at Publisher · View at Google Scholar · View at Scopus
16. B. Abay, G. Cankaya, H. S. Guder, H. Efeoglu, and Y. K. Yogurtcu, “Barrier characteristics of Cd/p-GaTe Schottky diodes based on I–V–T measurements,” Semiconductor Science and Technology, vol.
18, no. 2, pp. 75–81, 2003.
17. R. F. Schmitsdorf, T. U. Kampen, and W. Mönch, “Explanation of the linear correlation between barrier heights and ideality factors of real metal-semiconductor contacts by laterally nonuniform
Schottky barriers,” Journal of Vacuum Science and Technology B, vol. 15, no. 4, pp. 1221–1226, 1997.
18. Y. P. Song, R. L. Van Meirhaeghe, W. H. Laflère, and F. Cardon, “On the difference in apparent barrier height as obtained from capacitance-voltage and current-voltage-temperature measurements on
Al/p-InP Schottky barriers,” Solid State Electronics, vol. 29, no. 6, pp. 633–638, 1986. View at Scopus
19. J. H. Werner and H. H. Güttler, “Barrier inhomogeneities at Schottky contacts,” Journal of Applied Physics, vol. 69, no. 3, pp. 1522–1533, 1991. View at Publisher · View at Google Scholar · View
at Scopus
20. F. E. Jones, B. P. Wood, J. A. Myers, C. Daniels-Hafer, and M. C. Lonergan, “Current transport and the role of barrier inhomogeneities at the high barrier n-InP | poly(pyrrole) interface,”
Journal of Applied Physics, vol. 86, no. 11, pp. 6431–6441, 1999. View at Scopus
|
{"url":"http://www.hindawi.com/journals/jnm/2011/189731/","timestamp":"2014-04-19T19:14:10Z","content_type":null,"content_length":"179432","record_id":"<urn:uuid:91657cd5-2213-4952-8d4a-577d7bfe0855>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00124-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Finance Wizard - Over 140 calculators - Android Forums
Financial calculators for JB+. Over 140 calculators currently with more on the way. Includes usage tracking and time of last use. Also share, copy, and favorites
Annuity - Future Value
Annuity - Payment (FV)
Annuity Due - Future Value
Annuity Due Payment (FV)
Average Collection Period
Auto loan
Equated Monthly Installment
Fixed Deposit
Immediate Annuity
Cost of Equity
Annuity - Present Value Interest Factor
Actual Cash Value (ACV)
Sharpe ratio
Discount Factor
Gross Rent Multiplier
Annuity - Future Value Interest Factor
Simple savings calculator
Macaulay Bond Duration
Macaulay Modified Bond Duration
Salary Per Day, Hour, Min and Sec
Capitalization Rate
Credit Card - Minimum Payment
Nominal Interest Rate
Currency Converter
Dupont analysis
Dividend discount model
Internal Rate of Return (IRR)
Credit card equation
Put-call Parity
Jensen's Alpha
Bond Value
Bond Equivalent Yield
Capital Gains Yield
Compound Interest
Capital Asset Pricing Model (CAPM)
Net Present Value
Weighted Average Cost Of Capital (WACC)
Continuous Compounding
Current Ratio
Days in Inventory
Debt Ratio
Debt to Equity Ratio (D/E)
Debt to Income Ratio
Dividend Payout Ratio
Dividend Yield (Stock)
Discount Calculator
Dividends Per Share
Doubling Time
Doubling Time - Continuous Compounding
Earnings Per Share
Equity Multiplier
Estimated Earnings
Future Value
Future Value Continuous Compounding
Future Value Factor
Growing Annuity - Future Value
Growing Annuity Payment - PV
Growing Annuity - Present Value
Growing Perpetuity - Present Value
Interest Coverage Ratio
Inventory Turnover Ratio
Loan - Balloon Balance
Loan - Payment
Loan - Remaining Balance
Loan to Deposit Ratio
Loan to Value Ratio
Mortgage Calculator
Net Asset Value
Net Profit Margin
Net Working Capital
Payback Period
Preferred Stock
Present Value
Present Value - Continuous Compounding
Price to Book Value
Price Earnings Ratio
Price to Sales Ratio
Quick Ratio
Rate Of Inflation
Receivables Turnover Ratio
Retention Ratio
Return on Assets (ROA)
Return on Equity (ROE)
Return on Investment (ROI)
Risk Premium
Rule of 72
Sales Tax
Simple Interest
Stock - PV with Constant Growth
Tax Equivalent Yield
Total Stock Return
Tip Calculator
Zero Coupon Bond Value
Zero Coupon Bond Yield
Percent calculator
Percent increase
Percent decrease
Percent change
Gross margin
Annual Percentage Yield (APY)
Contribution margin
Diluted Earnings Per Share
Equivalent Annual Annuity
Free Cash Flow to Equity (FCFE)
Free Cash Flow to Firm (FCFF)
Real Rate of Return
Stock - PV with Zero Growth
Certificate of Deposit
Total Inventory Cost
Economic Order Quantity (EOQ)
Effective Annual Rate
Annual Salary to Hourly Wage
Hourly Wage to Salary
Break-Even Point
Compound annual growth rate
Leverage ratio
Market value added
Market to book ratio
Operating Profit Margin
External Funding Needed (EFN)
Operating Cycle
Annuity - FV w/ Continuous Compounding
Asset to Sales Ratio
Asset Turnover Ratio
Number of Periods PV & FV
Present Value Factor
Cash Ratio
Inventory Period
Number of payments
Annuity - Present value
Annuity - payment (PV)
Annuity - Payment Factor (PV)
Annuity Due Payment (PV)
https://play.google.com/store/apps/details?id=com.fluffydelusions.app.financewizard&f eature=search_result#?t=W251bGwsMSwxLDEsImNvbS5mbH VmZnlkZWx1c2lvbnMuYXBwLmZpbmFuY2V3aXphcmQiXQ
|
{"url":"http://androidforums.com/alpha-beta-testing/740970-finance-wizard-over-140-calculators.html","timestamp":"2014-04-18T03:56:07Z","content_type":null,"content_length":"73025","record_id":"<urn:uuid:c042b36e-e9b8-456b-9e54-68a6d7cea778>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00047-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Got Homework?
Connect with other students for help. It's a free community.
• across
MIT Grad Student
Online now
• laura*
Helped 1,000 students
Online now
• Hero
College Math Guru
Online now
Here's the question you clicked on:
Two mathematically similar frustums have heights of 20cm and 30cm. The surface area of the smaller frustum is 450cm^2. Calculate the surface area of the larger frustum.
• one year ago
• one year ago
Best Response
You've already chosen the best response.
I understand that you have to multiply the smaller frustum's height by 1.5 to get the larger frustum's height, so do you just have to multiply the smaller frustum's area by 1.5 as well?
Best Response
You've already chosen the best response.
If the two solids are /similar/ then their surface area will be in a ratio that is the square of the ratio of any of their linear dimensions (e.g. height ratios). Similarly, their volumes will be
in a ratio that is the cube of their ratio of linear dimensions
Best Response
You've already chosen the best response.
So you would do 450*1.5^2?
Best Response
You've already chosen the best response.
yes, but be careful with what you are squaring, it should be:\[450\times(1.5)^2\]
Best Response
You've already chosen the best response.
Yup that's what I meant :) Gracias!
Best Response
You've already chosen the best response.
yw :)
Your question is ready. Sign up for free to start getting answers.
is replying to Can someone tell me what button the professor is hitting...
• Teamwork 19 Teammate
• Problem Solving 19 Hero
• Engagement 19 Mad Hatter
• You have blocked this person.
• ✔ You're a fan Checking fan status...
Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy.
This is the testimonial you wrote.
You haven't written a testimonial for Owlfred.
|
{"url":"http://openstudy.com/updates/50804441e4b0b8b0cacd93e4","timestamp":"2014-04-18T10:56:07Z","content_type":null,"content_length":"39891","record_id":"<urn:uuid:574dfe54-0e53-48c1-8808-bc213657f35e>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00508-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Shortcut fusion for pipes
Rewrite rules are a powerful tool that you can use to optimize Haskell code without breaking backwards compatibility. This post will illustrate how I use rewrite rules to implement a form of shortcut
fusion for my pipes stream programming library. I compare pipes performance before and after adding shortcut fusion and I also compare the peformance of pipes-4.0.2 vs. conduit-1.0.10 and
This post also includes a small introduction to Haskell's rewrite rule system for those who are interested in optimizing their own libraries.
Edit: This post originally used the term "stream fusion", but Duncan Coutts informed me that the more appropriate term for this is probably "short-cut fusion".
Rule syntax
The following rewrite rule from pipes demonstrates the syntax for defining optimization rules.
{-# RULES "for p yield" forall p . for p yield = p #-}
-- ^^^^^^^^^^^^^ ^ ^^^^^^^^^^^^^^^
-- Label | Substitution rule
-- |
-- `p` can be anything -+
All rewrite rules are substitution rules, meaning that they instruct the compiler to replace anything that matches the left-hand side of the rule with the right-hand side. The above rewrite rule says
to always replace for p yield with p, no matter what p is, as long as everything type-checks before and after substitution.
Rewrite rules are typically used to substitute code with equivalent code of greater efficiency. In the above example, for p yield is a for loop that re-yields every element of p. The re-yielded
stream is behaviorally indistinguishable from the original stream, p, because all that for p yield does is replace every yield in p with yet another yield. However, while both sides of the equation
behave identically their efficiency is not the same; the left-hand side is less efficient because for loops are not free.
Rewrite rules are not checked for correctness. The only thing the compiler does is verify that the left-hand side and right-hand side of the equation both type-check. The programmer who creates the
rewrite rule is responsible for proving that the substitution preserves the original behavior of the program.
In fact, rewrite rules can be used to rewrite terms from other Haskell libraries without limitation. For this reason, modules with rewrite rules are automatically marked unsafe by Safe Haskell and
they must be explicitly marked TrustWorthy to be used by code marked Safe.
You can verify a rewrite rule is safe using equational reasoning. If you can reach the right-hand side of the equation from the left-hand side using valid code substitutions, then you can prove (with
some caveats) that the two sides of the equation are functionally identical.
pipes includes a complete set of proofs for its rewrite rules for this purpose. For example, the above rewrite rule is proven here, where (//>) is an infix synonym for for and respond is a synonym
for yield:
for = (//>)
yield = respond
This means that equational reasoning is useful for more than just proving program correctness. You can also use derived equations to optimize your program , assuming that you already know which side
of the equation is more efficient.
Rewrite rules have a significant limitation: we cannot possibly anticipate every possible expression that downstream users might build using our libraries. So how can we optimize as much code as
possible without an excessive proliferation of rewrite rules?
I anecdotally observe that equations inspired from category theory prove to be highly versatile optimizations that fit within a small number of rules. These equations include:
• Category laws
• Functor laws
• Natural transformation laws (i.e. free theorems)
The first half of the shortcut fusion optimization consists of three monad laws, which are a special case of category laws. For those new to Haskell, the three monad laws are:
-- Left Identity
return x >>= f = f x
-- Right Identity
m >>= return = m
-- Associativity
(m >>= f) >>= g = m >>= (\x -> f x >>= g)
If you take these three laws and replace (>>=)/return with for/yield (and rename m to p, for 'p'ipe), you get the following "for loop laws":
-- Looping over a yield simplifies to function application
for (yield x) f = f x
-- Re-yielding every element returns the original stream
for p yield = p
-- You can transform two passes over a stream into a single pass
for (for p f) g = for p (\x -> for (f x) g)
This analogy to the monad laws is precise because for and yield are actually (>>=) and return for the ListT monad when you newtype them appropriately, and they really form a Monad in the Haskell
sense of the word.
What's amazing is that these monad laws also double as shortcut fusion optimizations when we convert them to rewrite rules. We already encountered the second law as our first rewrite rule, but the
other two laws are useful rewrite rules, too:
{-# RULES
"for (yield x) f" forall x f .
for (yield x) f = f x
; "for (for p f) g" forall p f g .
for (for p f) g = for p (\x -> for (f x) g)
Note that the RULES pragma lets you group multiple rules together, as long as you separate them by semicolons. Also, there is no requirement that the rule label must match the left-hand side of the
equation, but I use this convention since I'm bad at naming rewrite rules. This labeling convention also helps when diagnosing which rules fired (see below) without having to consult the original
rule definitions.
Free theorems
These three rewrite rules alone do not suffice to optimize most pipes code. The reason why is that most idiomatic pipes code is not written in terms of for loops. For example, consider the map
function from Pipes.Prelude:
map :: Monad m => (a -> b) -> Pipe a b m r
map f = for cat (\x -> yield (f x))
The idiomatic way to transform a pipe's output is to compose the map pipe downstream:
p >-> map f
We can't optimize this using our shortcut fusion rewrite rules unless we rewrite the above code to the equivalent for loop:
for p (\y -> yield (f y))
In other words, we require the following theorem:
p >-> map f = for p (\y -> yield (f y))
This is actually a special case of the following "free theorem":
-- Exercise: Derive the previous equation from this one
p1 >-> for p2 (\y -> yield (f y))
= for (p1 >-> p2) (\y -> yield (f y))
A free theorem is an equation that you can prove solely from studying the types of all terms involved. I will omit the proof of this free theorem for now, but I will discuss how to derive free
theorems in detail in a follow-up post. For now, just assume that the above equations are correct, as codified by the following rewrite rule:
{-# RULES
"p >-> map f" . forall p f .
p >-> map f = for p (\y -> yield (f y))
With this rewrite rule the compiler can begin to implement simple map fusion. To see why, we'll compose two map pipes and then pretend that we are the compiler, applying rewrite rules at every
opportunity. Every time we apply a rewrite rule we will refer to the rule by its corresponding string label:
map f >-> map g
-- "p >-> map f" rule fired
= for (map f) (\y -> yield (g y))
-- Definition of `map`
= for (for cat (\x -> yield (f x))) (\y -> yield (g y))
-- "for (for p f) g" rule fired
= for cat (\x -> for (yield (f x)) (\y -> yield (g y)))
-- "for (yield x) f" rule fired
= for cat (\x -> yield (g (f x)))
This is identical to a single map pass, which we can prove by equational reasoning:
for cat (\x -> yield (g (f x)))
-- Definition of `(.)`, in reverse
= for cat (\x -> yield ((g . f) x))
-- Definition of `map`, in reverse
= map (g . f)
So those rewrite rules sufficed to fuse the two map passes into a single pass. You don't have to take my word for it, though. For example, let's say that we want to prove that these rewrite rules
fire for the following sample program, which increments, doubles, and then discards every number from 1 to 100000000:
-- map-fusion.hs
import Pipes
import qualified Pipes.Prelude as P
main = runEffect $
for (each [1..10^8] >-> P.map (+1) >-> P.map (*2)) discard
The -ddump-rule-firings flag will output every rewrite rule that fires during compilation, identifying each rule with the string label accompanying the rule:
$ ghc -O2 -ddump-rule-firings map-fusion.hs
[1 of 1] Compiling Main ( test.hs, test.o )
Rule fired: p >-> map f
Rule fired: for (for p f) g
Rule fired: for (yield x) f
I've highlighted the rule firings that correspond to map fusion, although there are many other rewrite rules that fire (including more shortcut fusion rule firings).
Shortcut fusion
We don't have to limit ourselves to just fusing maps. Many pipes in Pipes.Prelude has an associated free theorem that rewrites pipe composition into an equivalent for loop. After these rewrites, the
"for loop laws" go to town on the pipeline and fuse it into a single pass.
For example, the filter pipe has a rewrite rule similar to map:
{-# RULES
"p >-> filter pred" forall p pred .
p >-> filter pred =
for p (\y -> when (pred) (yield y))
So if we combine map and filter in a pipeline, they will also fuse into a single pass:
p >-> map f >-> filter pred
-- "p >-> map f" rule fires
for p (\x -> yield (f x)) >-> filter pred
-- "p >-> filter pred" rule fires
for (for p (\x -> yield (f x))) (\y -> when (pred y) (yield y))
-- "for (for p f) g" rule fires
for p (\x -> for (yield (f x)) (\y -> when (pred y) (yield y)))
-- for (yield x) f" rule fires
for p (\x -> let y = f x in when (pred y) (yield y))
This is the kind of single pass loop we might have written by hand if we were pipes experts, but thanks to rewrite rules we can write high-level, composable code and let the library automatically
rewrite it into efficient and tight loops.
Note that not all pipes are fusible in this way. For example the take pipe cannot be fused in this way because it cannot be rewritten in terms of a for loop.
These rewrite rules make fusible pipe stages essentially free. To illustrate this I've set up a criterion benchmark testing running time as a function of the number of map stages in a pipeline:
import Criterion.Main
import Data.Functor.Identity (runIdentity)
import Pipes
import qualified Pipes.Prelude as P
n :: Int
n = 10^6
main = defaultMain
[ bench' "1 stage " $ \n ->
each [1..n]
>-> P.map (+1)
, bench' "2 stages" $ \n ->
each [1..n]
>-> P.map (+1)
>-> P.map (+1)
, bench' "3 stages" $ \n ->
each [1..n]
>-> P.map (+1)
>-> P.map (+1)
>-> P.map (+1)
, bench' "4 stages" $ \n ->
each [1..n]
>-> P.map (+1)
>-> P.map (+1)
>-> P.map (+1)
>-> P.map (+1)
bench' label f = bench label $
whnf (\n -> runIdentity $ runEffect $ for (f n) discard)
(10^5 :: Int)
Before shortcut fusion (i.e. pipes-4.0.0), the running time scales linearly with the number of map stages:
warming up
estimating clock resolution...
mean is 24.53411 ns (20480001 iterations)
found 80923 outliers among 20479999 samples (0.4%)
32461 (0.2%) high severe
estimating cost of a clock call...
mean is 23.89897 ns (1 iterations)
benchmarking 1 stage
mean: 4.480548 ms, lb 4.477734 ms, ub 4.485978 ms, ci 0.950
std dev: 19.42991 us, lb 12.11399 us, ub 35.90046 us, ci 0.950
benchmarking 2 stages
mean: 6.304547 ms, lb 6.301067 ms, ub 6.310991 ms, ci 0.950
std dev: 23.60979 us, lb 14.01610 us, ub 37.63093 us, ci 0.950
benchmarking 3 stages
mean: 10.60818 ms, lb 10.59948 ms, ub 10.62583 ms, ci 0.950
std dev: 61.05200 us, lb 34.79662 us, ub 102.5613 us, ci 0.950
benchmarking 4 stages
mean: 13.74065 ms, lb 13.73252 ms, ub 13.76065 ms, ci 0.950
std dev: 61.13291 us, lb 29.60977 us, ub 123.3071 us, ci 0.950
Stream fusion (added in pipes-4.0.1) makes additional map stages essentially free:
warming up
estimating clock resolution...
mean is 24.99854 ns (20480001 iterations)
found 1864216 outliers among 20479999 samples (9.1%)
515889 (2.5%) high mild
1348320 (6.6%) high severe
estimating cost of a clock call...
mean is 23.54777 ns (1 iterations)
benchmarking 1 stage
mean: 2.427082 ms, lb 2.425264 ms, ub 2.430500 ms, ci 0.950
std dev: 12.43505 us, lb 7.564554 us, ub 20.11641 us, ci 0.950
benchmarking 2 stages
mean: 2.374217 ms, lb 2.373302 ms, ub 2.375435 ms, ci 0.950
std dev: 5.394149 us, lb 4.270983 us, ub 8.407879 us, ci 0.950
benchmarking 3 stages
mean: 2.438948 ms, lb 2.436673 ms, ub 2.443006 ms, ci 0.950
std dev: 15.11984 us, lb 9.602960 us, ub 23.05668 us, ci 0.950
benchmarking 4 stages
mean: 2.372556 ms, lb 2.371644 ms, ub 2.373949 ms, ci 0.950
std dev: 5.684231 us, lb 3.955916 us, ub 9.040744 us, ci 0.950
In fact, once you have just two stages in your pipeline, pipes greatly outperforms conduit and breaks roughly even with io-streams. To show this I've written up a benchmark comparing pipes
performance against these libraries for both pure loops and loops that are slightly IO-bound (by writing to /dev/null):
import Criterion.Main
import Data.Functor.Identity (runIdentity)
import qualified System.IO as IO
import Data.Conduit
import qualified Data.Conduit.List as C
import Pipes
import qualified Pipes.Prelude as P
import qualified System.IO.Streams as S
criterion :: Int -> IO ()
criterion n = IO.withFile "/dev/null" IO.WriteMode $ \h ->
[ bgroup "pure"
[ bench "pipes" $ whnf (runIdentity . pipes) n
, bench "conduit" $ whnf (runIdentity . conduit) n
, bench "io-streams" $ nfIO (iostreams n)
, bgroup "io"
[ bench "pipes" $ nfIO (pipesIO h n)
, bench "conduit" $ nfIO (conduitIO h n)
, bench "iostreams" $ nfIO (iostreamsIO h n)
pipes :: Monad m => Int -> m ()
pipes n = runEffect $
for (each [1..n] >-> P.map (+1) >-> P.filter even) discard
conduit :: Monad m => Int -> m ()
conduit n =
C.enumFromTo 1 n $= C.map (+1) $= C.filter even $$ C.sinkNull
iostreams :: Int -> IO ()
iostreams n = do
is0 <- S.fromList [1..n]
is1 <- S.map (+1) is0
is2 <- S.filter even is1
S.skipToEof is2
pipesIO :: IO.Handle -> Int -> IO ()
pipesIO h n = runEffect $
each [1..n]
>-> P.map (+1)
>-> P.filter even
>-> P.map show
>-> P.toHandle h
conduitIO :: IO.Handle -> Int -> IO ()
conduitIO h n =
C.enumFromTo 1 n
$= C.map (+1)
$= C.filter even
$= C.map show
$$ C.mapM_ (IO.hPutStrLn h)
iostreamsIO :: IO.Handle -> Int -> IO ()
iostreamsIO h n = do
is0 <- S.fromList [1..n]
is1 <- S.map (+1) is0
is2 <- S.filter even is1
is3 <- S.map show is2
os <- S.makeOutputStream $ \ma -> case ma of
Just str -> IO.hPutStrLn h str
_ -> return ()
S.connect is3 os
main = criterion (10^6)
The benchmarks place pipes neck-and-neck with io-streams on pure loops and 10% slower on slightly IO-bound code. Both libraries perform faster than conduit:
warming up
estimating clock resolution...
mean is 24.50726 ns (20480001 iterations)
found 117040 outliers among 20479999 samples (0.6%)
45158 (0.2%) high severe
estimating cost of a clock call...
mean is 23.89208 ns (1 iterations)
benchmarking pure/pipes
mean: 24.04860 ms, lb 24.02136 ms, ub 24.10872 ms, ci 0.950
std dev: 197.3707 us, lb 91.05894 us, ub 335.2267 us, ci 0.950
benchmarking pure/conduit
mean: 172.8454 ms, lb 172.6317 ms, ub 173.1824 ms, ci 0.950
std dev: 1.361239 ms, lb 952.1500 us, ub 1.976641 ms, ci 0.950
benchmarking pure/io-streams
mean: 24.16426 ms, lb 24.12789 ms, ub 24.22919 ms, ci 0.950
std dev: 242.5173 us, lb 153.9087 us, ub 362.4092 us, ci 0.950
benchmarking io/pipes
mean: 267.7021 ms, lb 267.1789 ms, ub 268.4542 ms, ci 0.950
std dev: 3.189998 ms, lb 2.370387 ms, ub 4.392541 ms, ci 0.950
benchmarking io/conduit
mean: 310.3034 ms, lb 309.8225 ms, ub 310.9444 ms, ci 0.950
std dev: 2.827841 ms, lb 2.194127 ms, ub 3.655390 ms, ci 0.950
benchmarking io/iostreams
mean: 239.6211 ms, lb 239.2072 ms, ub 240.2354 ms, ci 0.950
std dev: 2.564995 ms, lb 1.879984 ms, ub 3.442018 ms, ci 0.950
I hypothesize that pipes performs slightly slower on IO compared to io-streams because of the cost of calling lift, whereas iostreams operates directly within the IO monad at all times.
These benchmarks should be taken with a grain of salt. All three libraries are most frequently used in strongly IO-bound scenarios, where the overhead of each library is pretty much negligible.
However, this still illustrates how big of an impact shortcut fusion can have on pure code paths.
pipes is a stream programming library with a strong emphasis on theory and the library's contract with the user is a set of laws inspired by category theory. My original motivation behind proving
these laws was to fulfill the contract, but I only later realized that the for loop laws doubled as fortuitous shortcut fusion optimizations. This is a recurring motif in Haskell: thinking
mathematically pays large dividends.
For this reason I like to think of Haskell as applied category theory: I find that many topics I learn from category theory directly improve my Haskell code. This post shows one example of this
phenomenon, where shortcut fusion naturally falls out of the monad laws for ListT.
|
{"url":"http://www.haskellforall.com/2014/01/stream-fusion-for-pipes.html","timestamp":"2014-04-16T07:22:23Z","content_type":null,"content_length":"83341","record_id":"<urn:uuid:6eb6dfcb-3fd6-435a-b47e-0b1ee6b19136>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00524-ip-10-147-4-33.ec2.internal.warc.gz"}
|
FM Question on Bonds/Basic Price Formula (self.actuary)
submitted ago by MiaEllaStudent
sorry, this has been archived and can no longer be voted on
You purchase a ten-year $1000 bond with semiannual coupons for $982. The bond had a $1100 redemption payment at maturity, a nominal coupon rate of 7% for the first five years, and a nominal coupon
rate of q% for the final five years. You calculated that your annual effective yield for the ten-year period was 7.35%. Find q.
The answer is: q = 5.21616%.
---I have gotten as far as labeling all the values in the problem and needing to use the Basic Price Formula, but where I'm stuck is that I think I need to use geometric progression, but I'm not
quite sure how to set it up. Thank you for any help!
all 9 comments
[–]thderrickHealth1 point2 points3 points ago
sorry, this has been archived and can no longer be voted on
In your calculator (* indicates computed value)
Step 1:
First find the value of the bond after five years:
N is 5 years * 2 coupons.
The interest rate is the the effective interest for 6 months.
Present value is given.
PMT is the coupon rate divided by 2 * the face value of the bond.
Compute Future Value
I/Y=(1.0735^.5-1)*100 = 3.6098
PMT= 1000*3.5% = 35
FV*= 987.29
Step 2:
Replace the PV with the (-1) * future value
Plug in 1100 for the future value.
Compute the payment (this is the coupon rate*the face value)
I/Y=(1.0735^.5-1)*100 = 3.6098
PMT*= 26.0708
FV= 1100
[–]thderrickHealth0 points1 point2 points ago
sorry, this has been archived and can no longer be voted on
[–]MiaEllaStudent[S] 0 points1 point2 points ago
sorry, this has been archived and can no longer be voted on
[–]ProspectorJoe0 points1 point2 points ago
sorry, this has been archived and can no longer be voted on
For the first one: Compute the future value at 5 years with the 70 semiannual coupons (twice per year at 35). Use the half year effective interest rate in this calculation. (I/Y at 3.6098% for N =
10) Use the computer future value as your new present value (PV) for the remaining five years. Set the future value of the maturity into your calculator at -1100 (FV) and then compute the coupon.
(PMT). It will be -26.0808. Multiply by 2. Divide by 100. Hope this helps.
[–]MiaEllaStudent[S] 0 points1 point2 points ago
sorry, this has been archived and can no longer be voted on
[–]ProspectorJoe0 points1 point2 points ago
sorry, this has been archived and can no longer be voted on
[–]aurenzLife Insurance0 points1 point2 points ago
sorry, this has been archived and can no longer be voted on
For the first question write out the time diagram with coupons of $35 each 6 months for the first 5 years, then q/2 for the next 5 years. Since it tells you your annual effective return is 7.35%, you
want a basic relationship of 982(1+.0735)^ 10=FV(coupons and redemption). This is because you put $982 into this investment, and get out all of the coupons and the redemption value 10 years later.
Make sure since you have semi-annual coupons you convert the 7.35% annual effective into i(upper 2)/2. This will give you the coupon amount Q per year, which is 52.162, so you divide it by the bond
amount $1000 to get the fractional coupon payment, and convert to a percent to get 5.2162%.
[–]MiaEllaStudent[S] 0 points1 point2 points ago
sorry, this has been archived and can no longer be voted on
[–]therealsylvosProperty / Casualty0 points1 point2 points ago
sorry, this has been archived and can no longer be voted on
You should be able to draw a time diagram for basically every single question on Interest rate theory.
So here you have a 10 year bond, but it pays coupons semi-annually. So immediately draw a time diagram from 0 to 20. Put -982 at 0, The first 10 coupons are $35 so put that at each time period. You
also have a redemption amount of 1100 so put that at time 20. From here you have all the information you need to solve the problem.
When its all out in front of you the method of attack should become clearer.
|
{"url":"http://www.reddit.com/r/actuary/comments/189grs/fm_question_on_bondsbasic_price_formula/","timestamp":"2014-04-16T07:42:24Z","content_type":null,"content_length":"74789","record_id":"<urn:uuid:1185a9eb-dff4-4e8e-ade7-362906bcad27>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00220-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Figure 10: (a–c) pdf and (d–f) cdf of the log of shadow fading (after removing the distance-dependent path loss) and the normal distribution match for all the data gathered in the basement of ECE
bulding. The three columns show the impact of the averaging window size on the match: (a, d) window size of , (b, e) window size of , and (c, f) window size of , with m denoting the wavelength of the
transmitted signal.
|
{"url":"http://www.hindawi.com/journals/jr/2011/340372/fig10/","timestamp":"2014-04-19T10:23:18Z","content_type":null,"content_length":"4078","record_id":"<urn:uuid:971ac793-054f-4227-aa65-83ca76cfa281>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00639-ip-10-147-4-33.ec2.internal.warc.gz"}
|
08. Tuned Circuits
To navigate this multi-page article:
Use the drop-down lists and arrow icons located at the top and bottom of each page.
Click here to download the
This page shows how Sage manages complex numbers in the analysis of electronic concepts like impedance. Since we're surrounded by electronics in all directions, I thought this would be a useful
and educational example of what Sage can do.
Complex numbers aren't all that complex if they're pictured as two-dimensional vectors. Just think of a two-component Cartesian vector — its elements are at right angles to each other and its
properties are easy to understand.
For a Cartesian vector, we might name the components x and y and think of them as the base and height of a right triangle (Figure 1). It's important to emphasize that the three notation systems
in Figure 1 are equivalent and interchangeable — it's a matter of deciding which notation is appropriate to a particular problem.
In this article we'll be analyzing electronic RLC (resistor/inductor/capacitor) circuits, which possess a property called "impedance". Analyzing impedance naturally calls for complex numbers,
because the currents flowing through an RLC circuit behave differently in each of the three components:
□ For the resistor, voltage and current are synchronized in time, which means the resistor dissipates power (voltage times current) as heat.
□ For the inductor, measured current represents the time integral of voltage (current lags voltage by 90°).
□ Because an inductor's voltage and current are out of phase by 90°, an ideal inductor doesn't dissipate any power but stores energy as a magnetic field.
□ For the capacitor, measured voltage represents the time integral of current (voltage lags current by 90°).
□ Because a capacitor's voltage and current are out of phase by 90°, an ideal capacitor doesn't dissipate any power but stores energy as an electrical field.
For an RLC circuit and depending on the connection details, the circuit is appropriately described using equations involving complex numbers. Such a circuit is normally analyzed with respect to
time-varying applied voltages — a typical example would be an RLC circuit driven by a signal generator.
Before we begin our topic, let's look at how Sage manages complex numbers. A mathematician would create a complex number like this:
z = x+yi
Where i^2 = -1. Without providing all the details, this means that y is orthogonal (at right angles) to x. That in turn means we can treat x and y as a Cartesian vector (Figure 1) and derive some
useful values.
Let's take the next steps in Sage. But first, as with my other articles in this series, let's put this initialization cell in a Sage worksheet (copy the content from this page or click here to
download the complete Sage worksheet):
# special equation rendering
def render(x,name = "temp.png",size = "normal"):
if(type(x) != type("")): x = latex(x)
latex.eval("\\" + size + " $" + x + "$",{},"",name)
var('r_l r_c X_l X_c f a b c x y z zl zc r l c L C omega')
radians = pi/180.0
degrees = 180.0/pi
# define arg(x) in degrees
def argd(x):
return N(arg(x) * degrees)
# complex plot function
def plt(q,a,b,typ = abs,col = 'blue'):
return plot(lambda x: float(typ(q(x))),(x,a,b),rgbcolor=col)
omega = 2*pi*f
# series rlc circuit definition
zs(r,l,c,f) = r + (i*omega*l) - i/(omega*c)
# parallel rlc circuit definition
# from http://hyperphysics.phy-astr.gsu.edu/Hbase/electric/rlcpar.html#c2
zp(r_l,r_c,l,c,f) = ((r_l + i*omega*l)*(r_c - i/(omega*c))) / ((r_l+r_c)+i*(omega*l-1/(omega*c)))
Now we can move on to our example:
var('x y')
f(x,y) = x+y*i
Function f(x,y) constructs a complex number from the arguments x and y.
The Sage function "abs(x)" produces the "absolute value" of a number, which means different things depending on context. For a scalar (a one-component number, not a vector), it will do this:
Meaning abs(x) will produce an unsigned version of x. But for a complex number:
In this case, "abs(x)" produces the magnitude of the provided complex argument (or the hypotenuse of a right triangle with x and y as base and height, see Figure 1). It turns out that (3,4,5) is
the first Pythagorean triple, which means 3^2 + 4^2 = 5^2.
Another useful complex number function is "arg(x)", which provides the angle component of its complex argument in radians, equal to tan^-1(y/x) (this is θ in Figure 1):
Or, expressed in degrees (using a custom function in the initialization cell above):
Intuitively, since 4 > 3, the angle should be more than 45 degrees.
Series RLC Circuit
Figure 2: Series RLC Circuit
In electronics the term "reactance" refers to the behavior listed above for inductors and capacitors in which they store energy instead of dissipating it. The most important thing to understand
about reactance is that voltage and current are orthogonal (at right angles), which explains why ideal inductors and capacitors don't dissipate any power (real-world reactors have some internal
resistance, so they dissipate some power).
Remember that reactance is a dynamic property that depends primarily on the rate of change in current in the circuit containing the reactors. Consequently reactance is typically expressed with
respect to frequency. Here are reactance equations for inductors and capacitors:
Capacitive reactance:
Inductive reactance:
And f = frequency in Hertz. Above we mentioned impedance, which is the combination of resistance and reactance. Because reactances and resistances are orthogonal, we need to express this idea
using complex numbers. Here is how we compute impedance (z) for a series RLC circuit:
Note the presence of "i" (i^2 = -1) in both the inductive and capacitive reactance subexpressions. This serves to keep the reactances at right angles to the resistance.
For such a circuit there is a resonant frequency where (in a series circuit as shown in Figure 2) the inductive and capacitive reactances become equal (and cancel out), as a result of which the
current peaks and the phase angle between resistance and reactance becomes zero. We can plot this resonance point:
# series rlc circuit plot
# plotting range: a = start, b = end (Hertz)
a = 500
b = 2500
# r = .1 ohms, l = 100 microhenries, c = 100 microfarads
r = .1
l = 1e-4
c = 1e-4
q(f) = zs(r,l,c,f)
p1 = plt(1/q,a,b,abs,'#800000')
p2 = plt(q,a,b,arg,'#008000')
show(p1+p2,figsize=(4,3),axes_labels=('freq Hz','admittance $\mho$'))
First, I am using a roundabout way to plot these results. This is required because of a bug in Sage 4.1.2 that prevents plotting complex numbers without some special precautions. I am calling a
special function "plt()" that appears above in the worksheet initialization block.
In this graph the red trace represents admittance (with units of mhos or Siemens), because I decided to plot 1/q(f) rather than q(f) (with units of impedance), in order to produce a definite
The green trace represents complex phase angle in radians, and according to the series RLC equation, at resonance it should become zero. In the graph this resonance appears to be located at about
1600 Hz. Here's the theoretical location of resonance (in Hertz):
Let's compute this value using Sage and our defined l and c values:
Now let's see if our series RLC function "zs(r,l,c,f)" agrees with the theoretical prediction:
find_root(lambda f: float(arg(zs(r,l,c,f))),500,2500)
For this example, the numerical root finder "find_root()" was able to find a zero using arg(x), which, as shown in the above graph, crosses zero at resonance. It should be pointed out that the
"lambda f: float(arg())" construction is a temporary work-around for a bug in the current (4.1.2) version of Sage.
Another point about the above graph is that the red peak rises to about 10 mhos (units of admittance), which is consistent with our circuit's resistance value of .1 ohm and the fact that our
"signal generator" is producing 1 volt.
I ask my readers to think about the graph above. The current flowing in the circuit is proportional to the hypotenuse of a right triangle with the resistance representing the base, and both
reactances representing the height. This means the steepness of the resonance curve should depend very much on the resistance. Let's find out by changing the resistance in the above code block to
.01 Ω (r = 0.01):
We see a dramatic change in the steepness of the slopes, the height of the resonance peak and the abrupt phase change at resonance (green trace). Now let's change the resistance to 0.5 Ω:
We see the opposite effect — a broadening of the resonance curve, a decrease in the amplitude of the resonance peak and a more gradual phase change near resonance. I want to emphasize to my
readers that these results are perfectly consistent with physical measurement of actual circuits.
About these graphs, it would be nice to have two independent graph scales, one for the amplitude trace and another for the phase angle, so the green trace doesn't disappear when high admittance
values are being plotted, but it seem Sage hasn't acquired that feature yet.
One more observation about circuits possessing impedance — it is almost never the case than one can determine the power being dissipated in the circuit by measuring voltage and current. The
reason is that the voltage and current are nearly always out of phase with each other, and the actual power is roughly proportional to the voltage times the current, times the cosine of the phase
angle between voltage and current.
Parallel RLC Circuit
Figure 3: Parallel RLC Circuit
Let's move on to the parallel case (see Figure 3). This circuit is a bit more complex, having two resistances (r[l] and r[c]) to compute, and consistent with real-world parallel resonant circuits
where both resistances must be taken into account. Many of the previously described reactance equations apply here, but computing impedance is somewhat more complex:
In this connection remember again that:
And f = frequency in Hertz. Let's begin by plotting an example of this system:
# parallel rlc circuit plot
# plotting range: a = start, b = end (Hertz)
a = 500
b = 2500
# r_l = .1 ohms, r_c = .1 ohms
# l = 100 microhenries, c = 100 microfarads
r_l = .1
r_c = .1
l = 1e-4
c = 1e-4
q(f) = zp(r_l,r_c,l,c,f)
p1 = plt(q,a,b,abs,'#800000')
p2 = plt(q,a,b,arg,'#008000')
show(p1+p2,figsize=(4,3),axes_labels=('freq Hz','impedance $\Omega$'))
First, notice about this plot that the vertical axis is impedance, not admittance as with the series examples above. Notice that the phase versus frequency relationship is reversed compared to
the series case, and that at resonance the impedance is 5Ω. The reason for the relatively low impedance is that currents within a parallel resonant circuit flow back and forth between the
inductor and capacitor (the inductor stores energy as a magnetic field and the capacitor stores energy as an electrical field), but because of the circuit layout (Figure 3), this flow is resisted
by both r[l] and r[c]. Try setting r[l] to 0Ω in the above Sage plotting code and see what happens:
The impedance is now 10Ω at resonance. Think about this — at the circuit's resonant frequency we have increased the circuit's overall impedance by decreasing one of the resistances. Remember
that, in this idealized circuit, the resistors are the only devices able to dissipate power. Isn't it ironic that we can increase impedance by decreasing a resistance?
Again, the reason for this is that an ideal parallel RLC circuit with no resistances should dissipate no power and should therefore have infinite impedance at resonance (and zero impedance
everywhere else). Let's test this idea — set both r[l] and r[c] to .0001Ω and plot the result:
Well, that's clearly not right, and I suspect it's because the plot routine isn't providing enough resolution near resonance. So let's narrow the plot range — in the plotting code listed above,
set variable a to 1580 and b to 1600, for this result:
That's more like it. Remember this about parallel-resonant circuits — the impedance at resonance is determined by the distributed resistances in the circuit, and if they could be eliminated, the
impedance at resonance would be infinite, but for an infinitely narrow bandwidth. Real-world circuits tend to have a finite bandwidth and a finite impedance, as shown by the first few graphs in
this section.
A careful examination of Figure 3 above will reveal that, at f << f[0] (f[0] = resonant frequency), resistor r[l] should the the primary factor, and at f >> f[0], resistor r[c] should be the
primary factor. Let's find out — let's give r[l] and r[c] distinctive values and produce numerical results for f much less than, and much greater than, f[0]. Remember that the parallel circuit
function "zp()" is defined as:
z = zp(r[l],r[c],l,c,f)
□ z = impedance, Ω
□ r[l] = resistance on inductor leg, Ω
□ r[c] = resistance on capacitor leg, Ω
□ l = inductance, Henries
□ c = capacitance, Farads
□ f = frequency, Hertz
Result for r[l] = 7, r[c] = 13, f << f[0]:
Result for r[l] = 7, r[c] = 13, f >> f[0]:
The reason for this result is that, at frequencies much less than f[0], the capacitive reactance (X[c]) becomes very high and the inductive reactance (X[l]) becomes very low, which causes r[l] to
predominate, but at frequencies much greater than f[0], it's the opposite. It would be nice to see this result on a log scale ... hold on, I think I can do that using a text printout. For r[l] =
0.25 and r[c] = 0.75:
for n in range(0,7):
print "f = 10^%d, z = %5.2f" % (n,N(abs(zp(.25,.75,1e-4,1e-4,10^n))))
f = 10^0, z = 0.25
f = 10^1, z = 0.25
f = 10^2, z = 0.26
f = 10^3, z = 0.86
f = 10^4, z = 0.78
f = 10^5, z = 0.75
f = 10^6, z = 0.75
One can see the resonance peak even at this crude resolution. Hmm — maybe we can plot this relationship after all, even though we don't have an explicit log x scale:
# parallel rlc circuit log/linear plot
# plotting range: a = start, b = end, units f*10^n
a = 0
b = 7
r_l = .25
r_c = .75
l = 1e-4
c = 1e-4
q(n) = zp(r_l,r_c,l,c,10^n)
p1 = plt(q,a,b,abs,'#800000')
show(p1,figsize=(4,3),axes_labels=('$f = 10^n Hz$','impedance $\Omega$'),ymin=0)
This plot clearly shows how r[l] and r[c] predominate in domains far from resonance. I chose resistances of 0.25Ω and 0.75Ω to get a positive peak at resonance, but if the resistances are > 1Ω,
we get a very different result. Set r[l] = 1.25, r[c] = 1.75 in the above code block for this plot:
Think about that result — the impedance at resonance is less than the resistance value in either leg. Now make one more plot — set both r[l] and r[c] to 1Ω. I won't reveal the result — it defies
expectation, but a clue can be seen in the above two plots.
This kind of virtual circuit modeling is a very fruitful way to learn about electronic circuits in a short time, especially when compared to building and testing actual circuits. It's also a
great deal less expensive. I must say, even though I've designed circuits for years including some aboard the NASA Space Shuttle, I didn't know about the outcome for two resistances of 1Ω in a
parallel-resonant circuit. Even though I never realized it before, I can picture the reason (it has to do with internal versus external currents). And if I hadn't built a model, I would never
have known about it.
I've been using computer algebra systems for years, and what impresses me most about Sage is the speed with which one acquires useful results, as well as the fact that one can easily export a
decent graphic of an equation or a plot (for me personally, that's a revelation).
Those familiar with the conventions of electronic calculation may object to my having used i as the complex operator (i^2 = -1) — in electrical work, i normally stands for current and this
conflicts with its other meaning. For those who prefer to use j instead of i, Sage makes this very simple:
I want to close with a discussion of the symbolic representation for admittance, the reciprocal of impedance, expressed in the unit "mho" (that's ohm spelled backwards, if you missed it) or
"Siemens" in SI units. First, there isn't a defined HMTL entity for this symbol, so when writing HTML one must use either a graphic image or an explicit Unicode value: ℧ = ℧ (this might
fail on some browsers or systems without Unicode font support).
When presented with "\mho", TeX-capable systems like Sage produce this:
"\mho" ->
In contrast with the symbol for "ohm" (uppercase Greek Omega):
"\Omega" ->
But isn't it obvious that:
Just as:
I expect eventual mass confusion.
|
{"url":"http://arachnoid.com/sage/tuned_circuits.html","timestamp":"2014-04-16T14:14:32Z","content_type":null,"content_length":"35658","record_id":"<urn:uuid:240132d1-4811-4207-8cdd-546e38bcf30e>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00157-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Matches for:
This book is based on a capstone course that the author taught to upper division undergraduate students with the goal to explain and visualize the connections between different areas of mathematics
and the way different subject matters flow from one another. In teaching his readers a variety of problem solving techniques as well, the author succeeds in enhancing the readers' hands-on knowledge
of mathematics and provides glimpses into the world of research and discovery. The connections between different techniques and areas of mathematics are emphasized throughout and constitute one of
the most important lessons this book attempts to impart. This book is interesting and accessible to anyone with a basic knowledge of high school mathematics and a curiosity about research
The author is a professor at the University of Missouri and has maintained a keen interest in teaching at different levels since his undergraduate days at the University of Chicago. He has run
numerous summer programs in mathematics for local high school students and undergraduate students at his university. The author gets much of his research inspiration from his teaching activities and
looks forward to exploring this wonderful and rewarding symbiosis for years to come.
Request an examination or desk copy.
Undergraduate students interested in analysis, combinatorics, number theory, and geometry.
"...a tremendous asset and an endless source of inspiration..."
-- EMS Newsletter
|
{"url":"http://ams.org/bookstore?fn=20&arg1=tb-gi&ikey=STML-39","timestamp":"2014-04-16T04:39:09Z","content_type":null,"content_length":"16276","record_id":"<urn:uuid:bc7a0f71-9b3f-4f26-a8c0-cbfb93bc81b0>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00094-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Combining Vectors
September 4th 2007, 09:09 AM #1
Apr 2007
Combining Vectors
Hello, was wondering if someone could help me with a little vector maths problem please.
What I need to find is the angle between two vectors that are derived from/relative to a third vector.
So what I having working/understood is that theta = cos-1(“vector a” * “vector b”) where “vector a” and “vector b” are 3d vectors at 90 deg to each other, x,y,z.
Basically the cos of the dot product of the two vectors…
What I now wish to do is to offset “vector a” and “vector b” by another vector, “vector c” and I have tried the following with mixed results.
(1) “vector a” = “vector a” - “vector c”, “vector b” = “vector b” - “vector c”
this returns an answer that is obviously incorrect.
(2) “vector a”= sqrt( “vector a” * “vector c”), “vector b” = sqrt(“vector b” *“vector c”), this returns an answer that seems to be correct as I can swap around the x,y,z values any of the vectors
and the result seem to be consistent.
However I have no way of really proving the result, so can you help please?
Many thanks in advance IMK
Hello, was wondering if someone could help me with a little vector maths problem please.
What I need to find is the angle between two vectors that are derived from/relative to a third vector.
So what I having working/understood is that theta = cos-1(“vector a” * “vector b”) where “vector a” and “vector b” are 3d vectors at 90 deg to each other, x,y,z.
Basically the cos of the dot product of the two vectors…
What I now wish to do is to offset “vector a” and “vector b” by another vector, “vector c” and I have tried the following with mixed results.
(1) “vector a” = “vector a” - “vector c”, “vector b” = “vector b” - “vector c”
this returns an answer that is obviously incorrect.
(2) “vector a”= sqrt( “vector a” * “vector c”), “vector b” = sqrt(“vector b” *“vector c”), this returns an answer that seems to be correct as I can swap around the x,y,z values any of the vectors
and the result seem to be consistent.
However I have no way of really proving the result, so can you help please?
Many thanks in advance IMK
You have two vectors, a and b. You wish to off-set them by a third vector c, and find the angle between the new vectors?
Graph what you are doing. The vector a - c is not parallel to the vector a. Similarly b - c is not parallel to the vector b. What you want is to translate the coordinate axes by some amount. But
when you do that you find that you have the same components for vectors a and b. (Remember that the coordinate form for a vector is the coordinates of the head minus the coordinates of the tail.)
September 4th 2007, 11:03 AM #2
|
{"url":"http://mathhelpforum.com/geometry/18482-combining-vectors.html","timestamp":"2014-04-20T04:40:15Z","content_type":null,"content_length":"36693","record_id":"<urn:uuid:71334484-e946-442d-9a13-edc963b93f27>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00344-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Vector adding issues...
02-02-2008 #1
Beginner in C++
Join Date
Dec 2007
Vector adding issues...
A few topics down I mentioned I was making a program that computes the average of an unspecified amount of numbers. Here's what I have so far:
#include <iostream>
#include <string>
#include <stdexcept>
#include <vector>
using namespace std;
int main()
cout << "Enter the numbers you want to find the average of, followed by an end-of-file: ";
double Num;
// First vector. This holds the numbers the user has input.
vector<double>& Avg1 = Average1;
while (cin >> Num)
// Second vector. This holds the amount of numbers in the first vector
typedef vector<double>::size_type Average2;
Average2 size = Avg1.size();
// Throw an error if there aren't any numers if there aren't any values stored
if (size == 0)
throw domain_error("Average of an empty vector");
// Compute the average
double Average3 = { ... I need to do this part ... } / size;
cout << "The average of the numbers is: " << Average3;
return 0;
The problem I'm facing is adding the values together in the first vector. Say, for example, I had three numbers in there, and could only hold three. I would add them together like this:
double Average3 = (Avg1[0] + [Avg1[1] + Avg1[size - 1]) / size;
If my indexes are wrong, please tell me because I've never used them before. Right, I obviously can't do the above in my program because I could be adding 2 numbers together, or 5 etc.
Simply put, how do I add together the elements in Average1 without knowing what they are, and how many there are? If that's not possible, I'll try another approach.
I welcome all criticism towards it, but please provide a solution.
The programmer’s wife tells him: “Run to the store and pick up a loaf of bread. If they have eggs, get a dozen.”
The programmer comes home with 12 loaves of bread.
Originally Posted by brewbuck:
Reimplementing a large system in another language to get a 25% performance boost is nonsense. It would be cheaper to just get a computer which is 25% faster.
I've never used Iterators, and don't yet know how to use them (only up to chapter 4 in Accelerated C++) Can you post the solution relevent to my program with an explanation please? I hate to ask,
but I don't have my book on the C++ Standard library handy, and won't untill tomorrow night and I'd like to make changes to it during the course of tomorrow.
Last edited by Caduceus; 02-02-2008 at 01:42 PM.
I can but I will not
If you need to do this without iterators, use the size() member function and do a loop.
double total = 0;
for (size_type i = 0; i != your_vector.size(); ++i) {
total += your_vector[i];
The programmer’s wife tells him: “Run to the store and pick up a loaf of bread. If they have eggs, get a dozen.”
The programmer comes home with 12 loaves of bread.
Originally Posted by brewbuck:
Reimplementing a large system in another language to get a 25% performance boost is nonsense. It would be cheaper to just get a computer which is 25% faster.
Thanks for the solution. This isn't actually homework, I was watching a beginner's video on youtube that made an average program with three numbers, and I just wondered how to do it with an
unknown amount of numbers.
02-02-2008 #2
02-02-2008 #3
Beginner in C++
Join Date
Dec 2007
02-02-2008 #4
02-02-2008 #5
Beginner in C++
Join Date
Dec 2007
|
{"url":"http://cboard.cprogramming.com/cplusplus-programming/98586-vector-adding-issues.html","timestamp":"2014-04-17T21:48:56Z","content_type":null,"content_length":"56818","record_id":"<urn:uuid:db8befb8-e414-44fc-8bb4-c914ff64d660>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00644-ip-10-147-4-33.ec2.internal.warc.gz"}
|
How far Thai Binh vn and Quang Binh vn
Khoảng cách đoạn đường từ Thai Binh, Vietnam đến Quang Binh, Vietnam xa bao nhiêu km? Xem bản đồ chỉ đường đi từ Thai Binh đến Quang Binh.
(What is the distance from Thai Binh, Vietnam to Quang Binh, Vietnam in kilometers? Watch the map to know the way how to drive from Thai Binh to Quang Binh, if any)
Khoảng cách đường chim bay là:
(distance as the crow flies or flight distance between the two places)
206 dặm, tương đương 332 km
(206 miles or 332 kilometers)
Lưu ý: khoảng cách đường bộ xa hơn.
Bear in mind that driving distance may be farther
How far is Quang Binh, Vietnam from Thai Binh, Vietnam? How many miles and how many kilometers is it between Thai Binh and Quang Binh? Get the distance below, and look at the map. If we can drive
from Thai Binh to Quang Binh, the map will show you how to drive from Thai Binh to Quang Binh.
FLIGHT DISTANCE (as the crow flies)
206 miles or 332 kilometers
From Thai Binh, Vietnam to Quang Binh, Vietnam
Thai Binh, Vietnam:
Latitude: 20°26'43.00"N
Longtitude: 106°20'30.86"E
Quang Binh, Vietnam:
Latitude: 17°27'45.93"N
Longtitude: 106°15'7.97"E
Same questions for the above travel distance:
* How far is it from Thai Binh, Vietnam to Quang Binh, Vietnam?
* What is the distance between Thai Binh, Vietnam and Quang Binh, Vietnam?
* How many miles / kilometers is it to travel from Thai Binh, Vietnam to Quang Binh, Vietnam?
* How many miles / kilometers is the travel distance between Thai Binh, Vietnam and Quang Binh, Vietnam?
Are you traveling to Quang Binh? We wish you a nice trip and a lot of fun there!
How far Nghe An vn and Cam Ranh vnHow far Vinh vn and Bac Lieu vnHow far Vinh vn and Dien Bien vnHow far Vinh vn and Dien Bien Phu vnHow far Vinh vn and Hoa Binh vnHow far Vinh vn and Quang Tri vnHow
far Vinh vn and Dong Ha vnHow far Vinh vn and Lao Cai vnHow far Vinh vn and Yen Bai vnHow far Vinh vn and Ninh Thuan vnHow far Vinh vn and Phan Rang - Thap Cham vnHow far Vinh vn and Dong Hoi vnHow
far Vinh vn and Quang Binh vnHow far Vinh vn and Son La vnHow far Vinh vn and Mong Cai vnHow far Vinh vn and Tuyen Quang vnHow far Vinh vn and Ha Tinh vnHow far Vinh vn and Tam Ky vnHow far Vinh vn
and Ha Nam vnHow far Vinh vn and Phu Ly vnHow far Vinh vn and Hung Yen vnHow far Vinh vn and Quang Nam vnHow far Vinh vn and Hoi An vnHow far Vinh vn and Vinh Phuc vnHow far Vinh vn and Vinh Yen vn
How far Vinh vn and Bac Giang vnHow far Vinh vn and Ninh Binh vnHow far Vinh vn and Tra Vinh vnHow far Vinh vn and Quang Ngai vnHow far Vinh vn and Kon Tum vnHow far Vinh vn and Ben Tre vnHow far
Vinh vn and Vinh Long vnHow far Vinh vn and Lang Son vnHow far Vinh vn and Dong Thap vnHow far Vinh vn and Cao Lanh vnHow far Vinh vn and Bac Ninh vnHow far Vinh vn and Bao Loc vn
|
{"url":"http://fotoget.blogspot.com/2011/05/how-far-thai-binh-vn-and-quang-binh-vn.html","timestamp":"2014-04-21T10:34:38Z","content_type":null,"content_length":"71576","record_id":"<urn:uuid:6f997ac4-7f17-4066-8c07-859d6ba28af4>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00493-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Grade 5 » Operations & Algebraic Thinking
Standards in this domain:
Write and interpret numerical expressions.
Use parentheses, brackets, or braces in numerical expressions, and evaluate expressions with these symbols.
Write simple expressions that record calculations with numbers, and interpret numerical expressions without evaluating them.
For example, express the calculation "add 8 and 7, then multiply by 2" as 2 × (8 + 7). Recognize that 3 × (18932 + 921) is three times as large as 18932 + 921, without having to calculate the
indicated sum or product
Analyze patterns and relationships.
Generate two numerical patterns using two given rules. Identify apparent relationships between corresponding terms. Form ordered pairs consisting of corresponding terms from the two patterns, and
graph the ordered pairs on a coordinate plane.
For example, given the rule "Add 3" and the starting number 0, and given the rule "Add 6" and the starting number 0, generate terms in the resulting sequences, and observe that the terms in one
sequence are twice the corresponding terms in the other sequence. Explain informally why this is so
|
{"url":"http://www.corestandards.org/Math/Content/5/OA/","timestamp":"2014-04-18T08:30:29Z","content_type":null,"content_length":"40073","record_id":"<urn:uuid:c5c837d7-2b5d-4c9c-98fc-3110f85ccb06>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00040-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Davesh Maulik
Davesh Maulik's Homepage
I am an associate professor of mathematics at Columbia University.
Mailing address:
Department of Mathematics
Columbia University
2990 Broadway
New York, NY 10027
E-mail address: dmaulik[at]math[dot]university-name[dot]edu,
Office: 606 Mathematics Hall
Telephone: (212)-854-3003
I help organize the Columbia Algebraic Geometry Seminar, Fridays, 3.30-4.30 pm, room 207 Mathematics Hall.
Upcoming conferences:
Useful Links:
Arxiv and math.AG
Ravi's conference page
Stacks project
Sloane's Encyclopedia of Integer Sequences
|
{"url":"http://www.math.columbia.edu/~dmaulik/","timestamp":"2014-04-17T21:22:32Z","content_type":null,"content_length":"1755","record_id":"<urn:uuid:071d4b86-185c-4684-bfb1-ba4358f02d38>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00605-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Fluid Mechanics Pdf
Fluid Mechanics Pdf DOC
Sponsored High Speed Downloads
The 1000 Island Fluids Meeting is a technical meeting and retreat for researchers working in the area of fluid mechanics from Ontario, ... extended abstract for their paper in electronic PDF format
written using the abstract format found on the web page for the meeting given below.
Fluid Mechanics for Chemical Engineers, Third Edition. Errors and Typos known as of 6-07. The common expression is "To err is human, to forgive divine".
Click Here for a PDF Course Outline. With lab. Experimental techniques and measurements for materials, soil properties, fluid mechanics, and environmental engineering. Laboratory reports.
Prerequisite: CEE 241. Corequisites: CEE 320, 357, 370; ENGL 351.
8th World Conference on Experimental Heat Transfer, Fluid Mechanics, and ThermodynamicsJune 16-20, 2013, Lisbon, Portugal . 8. th World Conference on Experimental Heat
Solid Converter PDF. ME 2204 FLUID MECHANICS AND MACHINERY. Two Marks Questions & Answers. UNIT I : INTRODUCTION. 1. Define density or mass density. Density of a fluid is defined as the ratio of the
mass of a fluid to its volume. Density, ρ = mass/volume (Kg/m3)
Apply fundament concepts learned previously (or concurrently) in Mathematics, Statics, Mechanics of Deformable Bodies, and Fluid Mechanics to the solution of fundamental Civil Engineering soil
mechanics analysis/design problems. 2.
CWR 3201: ENGINEERING FLUID MECHANICS . Three (3) Semester Credit Hours. Business Administration – Room 203; 7:30-9:20 a.m. T/Th. GENERAL INFORMATION – SUMMER 2006
Topics involving both fluid mechanics and solid mechanics. Acoustics Chaos in fluid and solid mechanics Continuum mechanics Fluid-structure interaction Mechanics of foams and cellular materials
Multiscale phenomena in mechanics.
The Extended Summary has to be prepared in PDF format and is limited ... Convective phenomena Drops and bubbles Environmental fluid dynamics Experimental methods in fluid mechanics Flow control Flow
in porous media Flow instability and transition Flow in thin ...
Fluid mechanics & thermal science. Material science and engineering . Mechatronics. Nano/micro-scale systems. Solid mechanics and materials processing. System Dynamics. Organized By Interdisciplinary
Areas: Biomedical and Rehabilitation Science and Engineering
MECHANICAL & AEROSPACE ENGINEERING. ME 111 – Fluid Mechanics. 3 Units; Spring 2008. Lecture: TTh 4:00- 5:15 PM, ENG 341 (Section 2) Course # 21179
ME 363 - Fluid Mechanics. Due Friday, April 4, 2008 Spring Semester 2008. 1] Work the Fluent U-tube example at . http://homepages.cae.wisc.edu/~ssanders/me_363/homework_problem_statements/archive/
Fluid Mechanics. ME 461 40 June, 2011 Two Hours Final Examination Paper. ANSWER ALL QUESTIONS [each question = 5 marks] 1- A two-dimensional velocity field is described in terms of Cartesian
components, u = 2xy2, v = 2x2y. Write the equation of the streamline passing through the ...
Fluid Mechanics – EML 3701. 3 credit hours. 2. ... www.fau.edu/regulations/chapter4/4.001_Code_of_Academic_Integrity.pdf. 15. Required texts/reading. Fundamentals of Fluid Mechanics, 7th edition by
Munson, Okiishi, Huebsch and Rothmayer. Wiley & Sons, Inc.
Computational fluid mechanics: Detailed numerical computations can be carried out for fluid mechanics, particle motion, reaction kinetics, heat and mass transport, etc. While research papers have
been published illustrating this, ...
is variously called the “dynamic pressure” or “reduced pressure”, but is a very common concept in fluid mechanics. It obviously simplifies the Navier-Stokes equations and makes their solutions easier
to understand.
The structure of the tutorials is to first reproduce the fundamentals learned in a Fluid Mechanics and Thermo Dynamics courses. One of the first scenarios learned in fluid mechanics is the flow
through a cylindrical pipe.
Solid Converter PDF. Experiment. Hydrostatic Force. on a Submerged Surface. ... Fundamentals of Fluid Mechanics, 4th ed., 2002, Wiley and Sons, New York. Appendix: What About Buoyancy? Is the
buoyancy force being neglected in the analysis of the experimental data?
ME 363 - Fluid Mechanics. Due Friday, April 4, 2008 Spring Semester 2008. 1] Work the Fluent U-tube example at . http://homepages.cae.wisc.edu/~ssanders/me_363/homework_problem_statements/archive/
This paper was developed in connection with heat transfer and fluid mechanics research experiments. ... According to a handwritten note from Abernethy included in the PDF file, this standard was
approved by unanimous Committee vote in 1987, and by world vote in 1988, (17 for, ...
Available: http://csss.enc.org/media/scisafe.pdf>. Cramer, M.S. (2004) Fluid Mechanics. NY: Cambridge Univ. Press. Retrieved July 12, 2006, from http://www.fluidmech.net. Drakos, N. (1997). Computer
Based Learning Unit. Physics 1501 - Modern Technology.
... ASME J. Offshore Mechanics & Arctic Engg., Vol. 131, Article 021602, 2009. FACH, K., BERTRAM, V.: ... Fluid Dynamics. Tel.: +49 40 36149-1552. Fax: +49 40 36149-7760. E-Mail: [email protected]
Title: Dienstleistungsgruppe Hydromechanik
Computational Fluid Dynamics (CFD) Computational Fluid Dynamics (CFD) allows engineers to numerically solve very complex analyses that describe and predict the flow of air, ... The finite-volume
method is popular in fluid mechanics (aerodynamics, hydraulics) because it: ...
Journal of Fluid Mechanics: 72, 401-416. [ 075homogeneousTurbulence.pdf] 76. K. H19. ... [ 152stablefractalsums.pdf ] • Unpublished generalization: Renata CIOCZEK-GEORGES & M. Stable Fractal Sums of
Pulses: the General Case, April 27 1995.
Proceedings of the EUROMECH 529 - Cardiovascular Fluid Mechanics will be available to registered participants. 3. SUBMISSION . Papers submitted and published will only be accepted in Adobe PDF format
with embedded fonts. Aim to have a file size no larger than 4 megabytes.
Structural Mechanics and Structural Optimization - _____ Fluid Mechanics, Aerodynamics and Propulsion - _____ Thermal Science and Energy Systems - _____ Education. Degree(s) University City Country
Major Year GPA. Graduate Record Exam:
In general, mechanics is subdivided into three groups: rigid body mechanics, deformable-body mechanics and fluid mechanics. Consider the two areas of rigid body mechanics: Statics: deals with the
equilibrium of bodies, ... http://www.fsea.org/pdf/questions/BR1QA.pdf. Last accessed on: 5/27/2005. 4.
... (one or two sentence description or a.pdf file reference that is ... (add/remove) Analytical Mechanics I. Surveying. Environmental Engineering. Analytical Mechanics II. ... Hydraulics.
Architectural Engineering Systems. Intro to Analysis and Design. Fluid Mechanics. Urban & Environmental ...
Durst, Franz. 2008. Fluid Mechanics “An intoduction to the theory of fluid flows. Berlin Heidelberg ... Wolverine Tube Heat Transfer Data book. Wolverine Tube, Inc. URL : http://www.wlv.com/products/
databook/ch2_2.pdf. 16 februari 2010. Holman, J.P. 1997. Perpindahan Kalor, Edisi keenam ...
... Mechanics Laboratory 1 ME 370 – Thermodynamics 3 CE 335/L – Structures I and Laboratory 4 CE 408/L – Surveying with GPS Applications and Laboratory 2 ME 390 – Fluid Mechanics 3 MSE 304 ...
Course Units Course Units AM 317 – Mechanics Lab. 1 AM 410 – Mechanical Vibrations 3 CE 335/L – Structures I and Lab. 4 CE 439 – Structural Steel Design 3 ME 370 - Thermodynamics 3 ME 390 – Fluid
Mechanics 3 MSE 304 ...
fluid mechanics - i m 301: mathematics - iii ph 322: thermodynamics metallurgy & materials engineering mme 301: introduction to metallurgy & metals mme 302: princples of extractive metallurgy mme
303: functional materials cse 321:
REVIEW OF COMPUTATIONAL FLUID MECHANICS! Finite Difference and Finite Volume Methods ! Introduction to Gambit and ... Rate-Dependent Models! PDF Models. AEROSOLS ! Review of Drag, Lift, Virtual Mass
and Basset Forces, BBO Equation! Review of Nonspherical Particles! Review of Brownian Motions ...
Analytic models for engineering processes and systems in fluid mechanics, heat transfer, solid mechanics, ... http://www2.sjsu.edu/senate/S04-12.pdf . Plagiarism is defined as, the use of another
person’s original (not common-knowledge) ...
The manual is written in word 2000 – and is converted to a pdf file. This manual will be maintained as the NeqSim program develops and is extended ... We begin by opening the fluid-mechanics toolbar
– by selecting view – toolbars – fluid mechanic toolbar on the main menu.
Journal of Fluid Mechanics: 72, 401-416. 76 [ 076stochasticModels.pdf] K. H19. M 1975w. ... 181 [ asikainen_paper.pdf ] J. ASIKAINEN, Amnon AHARONY, M, Erik RAUSCH, & Juha-Pekka HOVI 2003. Fractal
geometry of critical Potts clusters.
... Integrated Manufacturing and Automation 05MTP22 Design of Heat Exchangers 05MTP151 Computational heat transfer& fluid flow 05MPT145 Production system & control 05MPY11 Theory of Metal Cutting
05MTP11 Advanced Fluid Mechanics 05MES241 Alternate Fuel for IC Engines 05MMD152 Optimum ...
MEEN 4313 THERMAL SYSTEMS DESIGN. COURSE OVERVIEW. Course. Description: ME 4313 - Thermal Systems Design. 3 Credit Hours. This course covers analysis, modeling and design of thermal systems involving
applications of thermodynamics, fluid mechanics, heat transfer, and engineering economics.
View TOC [PDF] by H&P Magazine . Basic Fluid Power 363pp. Pippenger, John; Dudley Pease. 1987. Control Strategies for Dynamic Systems Lumkes, John Jr. 2001 . Control of Fluid Power 2nd Edition 499pp.
... Schaum’s 2500 Outline of Fluid Mechanics and Hydraulics, Giles, Randall, Liu, Cheng.
Traditional fluid mechanics suggests that dolphins must possess incredibly powerful muscles in order to swim as well as ... http://www.lanl.gov/PScache/physics/pdf/9907 /9907041.pdf ) Title: Drag
Reduction and the Effects of Internal Waves on Bottle-Nosed Dolphins Author: honors Last modified by:
Laptop wrap: Fluid mechanics contains an introductory presentation on some basic principles of fluid mechanics. ... Instructions are provided for students to save this online fact sheet as a PDF.
Fluid Mechanics: The study of the properties and behavior of matter in fluid (gas or liquid) state. ... Modified from: http://www2.ohlone.edu/people/fbarish/ho2.pdf. Title: Physics and its Branches
Author: Kevin Riffel Last modified by: Saskatoon Catholic Schools
ONLY use File, Save As, PDF (requires full version of Acrobat) File, Print to PDF is not accessible. Scanned text is not accessible PDF. ... Example: CATEA at Georgia Tech, Model of Accessible Course
Design, Intro to Fluid Mechanics Course. http://www.catea.gatech.edu/grade/mecheng/mehome.htm.
NEW TRENDS IN FLUID MECHANICS RESEARCH Proceeding of the Sixth International Conference on Fluid Mechanics, June 30-July 3, 2011, Guangzhou, China
http://www.staff.ul.ie/obriens/papers/matheng89.pdf. Particle removal by surface tension forces, (1989), ... (Flow in a thin film, Dutch fluid mechanics and heat flow contact group). Modellen voor
schoonmaken in de industrie, Nederlands warmte en stromingsleer contaktgroep, Eindhoven 1989.
ME 3663- Fluid Mechanics. ME 3813- Mechanics of Solids. ME 3823- Machine Element Design. ME 4293- Thermodynamics II. ME 4313- Heat Transfer. ME 4812- Senior Design I. ME 4813- Senior Design II. Check
the courses in the following list that you can serve as a Laboratory Assistant:
Fundamental concepts of fluid mechanics required for the . solutions of air pollution problems, water resource problems and transportation problems. ... http://www.eng.utoledo.edu/civil/heydinger/
soil%20mechanics/smsyl.pdf. Course Objectives: ...
The engineering science of Hydraulics is an area of fluid mechanics in ... To help students understand the application of the fundamental principles of fluid to ... Fundamentals of Engineering (FE)
Supplied-Reference Handbook, 5th Edition (© 2001), available for free as a pdf from ...
The exam is offered in five (5) subject areas: fluid mechanics, heat transfer, solid mechanics, dynamics and vibrations, and mathematics. A student has passed the qualifying exam when they have
passed two areas.
Fluid Mechanics, Volume 1. Wiley (1977). Joseph, D.D. Fluid Dynamics of Viscoelastic Liquids. Springer (1990). Macosko, C.W. Rheology: Principles, Measurements, and Applications. Wiley-VCH (1994).
Boger, D.V. and Walters, K. Rheological Phenomena in Focus, Elsevier Science Publishers (1993).
|
{"url":"http://ebookily.org/doc/fluid-mechanics-pdf","timestamp":"2014-04-23T15:07:28Z","content_type":null,"content_length":"42732","record_id":"<urn:uuid:cbf591b0-59fd-45a9-98c3-8117ec48250e>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00594-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Need help with factor problem
I'm making a program that prompts a user for an int value. The value input by the user must be a multiple of 16. If it is not, I must return an error message. How would I go about doing this? I'm new
to C++.
Any help is appreciated! Thanks!
What have you written so far? Be sure to put the code [code]between code tags[/code] so it formats properly.
Last edited on
This is almost a complete program. You need to put this code into a main function and also add the includes and such.
On line 5, you cannot cin into endl ;)
On line 8, look up the modulus operator - I am sure you will quickly figure out how to use it.
On line 14, you may want a space at the beginning of the string so there is a space between the number and the text. I guess this space somehow ended up at the end of line 10.
I would like to add onto FatalSleep's post and explain what the computer is doing when you use the modulus operator.
Let's use the example of 2 % 7.
First, the computer calculates 7/2.
Then, the computer sees if there are any units (ones, numbers. e.g: the number 2 has 2 units) left over.
If there are, the computer returns the number of units left over, mathematically called a remainder, if not, the computer returns 0.
Topic archived. No new replies allowed.
|
{"url":"http://www.cplusplus.com/forum/general/111508/","timestamp":"2014-04-17T22:10:45Z","content_type":null,"content_length":"12378","record_id":"<urn:uuid:ecc36e61-7769-40e7-a915-dd1ba49a68a9>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00561-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Mesquite, TX Algebra 2 Tutor
Find a Mesquite, TX Algebra 2 Tutor
...Please contact me, and let's get started. I have a very specific course for the GED. After taking a GRE course for graduate school, I started designing my own course for the GED, SAT, and ACT.
29 Subjects: including algebra 2, reading, Spanish, GRE
...My area of expertise is Biology and Chemistry, but I can easily teach any of the other math or science subjects. I also taught AP English Literature for three years as an undergraduate at MIT,
so I am very comfortable with teaching English as well. So whether you want to emphasize a specific su...
30 Subjects: including algebra 2, reading, chemistry, English
...I never give up on a student. I love to help anybody achieve a personal goal.I hold a Masters Degree in Education with emphasis on instruction in math and science for grades 4th through 8th. I
have taken courses in pre-algebra, algebra I and II, Matrix Algebra, Trigonometry, pre-calculus, Calculus I and II, Geometry and Analytical Geometry, Differential Equations.
11 Subjects: including algebra 2, geometry, algebra 1, precalculus
...I teach the best strategies and tips for learning Persian/ Farsi. The lessons include grammar, writing, and proper pronunciation. I teach beginner, intermediate, and advanced level Farsi.
26 Subjects: including algebra 2, geometry, ESL/ESOL, algebra 1
...We can see it change. We can feel what it is made of. I find that this makes geology more meaningful to many students, and any good teacher will tell you that we love teaching material that
students find personally meaningful.
15 Subjects: including algebra 2, chemistry, geometry, algebra 1
|
{"url":"http://www.purplemath.com/Mesquite_TX_Algebra_2_tutors.php","timestamp":"2014-04-19T07:23:02Z","content_type":null,"content_length":"23865","record_id":"<urn:uuid:74159bd2-88fc-45e6-b447-238f695c1c7d>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00618-ip-10-147-4-33.ec2.internal.warc.gz"}
|
storablevector-0.2.1: Fast, packed, strict storable arrays with a list interface like ByteString Source code Contents Index
A module containing semi-public StorableVector internals. This exposes the StorableVector representation and low level construction functions. Modules which extend the StorableVector system will need
to use this module while ideally most users will be able to make do with the public interface modules.
The Vector type and representation
A space-efficient representation of a vector, supporting many efficient operations.
Instances of Eq, Ord, Read, Show, Data, Typeable
SV !(ForeignPtr a) !Int !Int
Unchecked access
unsafeHead :: Storable a => Vector a -> a Source
A variety of head for non-empty Vectors. unsafeHead omits the check for the empty case, so there is an obligation on the programmer to provide a proof that the Vector is non-empty.
unsafeTail :: Storable a => Vector a -> Vector a Source
A variety of tail for non-empty Vectors. unsafeTail omits the check for the empty case. As with unsafeHead, the programmer must provide a separate proof that the Vector is non-empty.
unsafeLast :: Storable a => Vector a -> a Source
A variety of last for non-empty Vectors. unsafeLast omits the check for the empty case, so there is an obligation on the programmer to provide a proof that the Vector is non-empty.
unsafeInit :: Storable a => Vector a -> Vector a Source
A variety of init for non-empty Vectors. unsafeInit omits the check for the empty case. As with unsafeLast, the programmer must provide a separate proof that the Vector is non-empty.
unsafeIndex :: Storable a => Vector a -> Int -> a Source
Unsafe Vector index (subscript) operator, starting from 0, returning a single element. This omits the bounds check, which means there is an accompanying obligation on the programmer to ensure the
bounds are checked in some other way.
unsafeTake :: Storable a => Int -> Vector a -> Vector a Source
A variety of take which omits the checks on n so there is an obligation on the programmer to provide a proof that 0 <= n <= length xs.
unsafeDrop :: Storable a => Int -> Vector a -> Vector a Source
A variety of drop which omits the checks on n so there is an obligation on the programmer to provide a proof that 0 <= n <= length xs.
Low level introduction and elimination
create :: Storable a => Int -> (Ptr a -> IO ()) -> IO (Vector a) Source
Wrapper of mallocForeignPtrArray.
createAndTrim :: Storable a => Int -> (Ptr a -> IO Int) -> IO (Vector a) Source
Given the maximum size needed and a function to make the contents of a Vector, createAndTrim makes the Vector. The generating function is required to return the actual final size (<= the maximum
size), and the resulting byte array is realloced to this size.
createAndTrim is the main mechanism for creating custom, efficient Vector functions, using Haskell or C functions to fill the space.
createAndTrim' :: Storable a => Int -> (Ptr a -> IO (Int, Int, b)) -> IO (Vector a, b) Source
unsafeCreate :: Storable a => Int -> (Ptr a -> IO ()) -> Vector a Source
A way of creating Vectors outside the IO monad. The Int argument gives the final size of the Vector. Unlike createAndTrim the Vector is not reallocated if the final size is less than the estimated
fromForeignPtr :: ForeignPtr a -> Int -> Vector a Source
O(1) Build a Vector from a ForeignPtr
toForeignPtr :: Vector a -> (ForeignPtr a, Int, Int) Source
O(1) Deconstruct a ForeignPtr from a Vector
inlinePerformIO :: IO a -> a Source
Just like unsafePerformIO, but we inline it. Big performance gains as it exposes lots of things to further inlining. Very unsafe. In particular, you should do no memory allocation inside an
inlinePerformIO block. On Hugs this is just unsafePerformIO.
Produced by Haddock version 2.4.2
|
{"url":"http://hackage.haskell.org/package/storablevector-0.2.1/docs/Data-StorableVector-Base.html","timestamp":"2014-04-18T19:54:55Z","content_type":null,"content_length":"29387","record_id":"<urn:uuid:7aaac12d-7e64-4157-b00c-2ab2b1d31580>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00336-ip-10-147-4-33.ec2.internal.warc.gz"}
|
On 021-Avoiding Ascent Sequences
[Ascent sequences were introduced by Bousquet-Mélou, Claesson, Dukes and Kitaev in their study of $(\bf{2+2})$-free posets. An ascent sequence of length $n$ is a nonnegative integer sequence $x=x_{1}
x_{2}\ldots x_{n}$ such that $x_{1}=0$ and $x_{i}\leq {\rm asc}(x_{1}x_{2}\ldots x_{i-1})+1$ for all $1<i\leq n$, where ${\rm asc}(x_{1}x_{2}\ldots x_{i-1})$ is the number of ascents in the sequence
$x_{1}x_{2}\ldots x_{i-1}$. We let $\mathcal{A}_n$ stand for the set of such sequences and use $\mathcal{A}_n(p)$ for the subset of sequences avoiding a pattern $p$. Similarly, we let $S_{n}(\tau)$
be the set of $\tau$-avoiding permutations in the symmetric group $S_{n}$. Duncan and Steingrímsson have shown that the ascent statistic has the same distribution over $\mathcal{A}_n(021)$ as over
$S_n(132)$. Furthermore, they conjectured that the pair $({\rm asc}, {\rm rmin})$ is equidistributed over $\mathcal{A}_n(021)$ and $S_n(132)$ where ${\rm rmin}$ is the right-to-left minima
statistic. We prove this conjecture by constructing a bistatistic-preserving bijection.]
Full Text:
|
{"url":"http://www.combinatorics.org/ojs/index.php/eljc/article/view/v20i1p76","timestamp":"2014-04-19T13:17:38Z","content_type":null,"content_length":"15914","record_id":"<urn:uuid:44317651-39f9-43d7-b0e4-7f7122c8263a>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00651-ip-10-147-4-33.ec2.internal.warc.gz"}
|
s Flashcard
• Shuffle
Toggle On
Toggle Off
• Alphabetize
Toggle On
Toggle Off
• Front First
Toggle On
Toggle Off
• Both Sides
Toggle On
Toggle Off
• Read
Toggle On
Toggle Off
How to study your flashcards.
Right/Left arrow keys: Navigate between flashcards.right arrow keyleft arrow key
Up/Down arrow keys: Flip the card between the front and back.down keyup key
H key: Show hint (3rd side).h key
A key: Read text to speech.a key
10 Cards in this Set
is formed by the intersection of two lines and is generally measured in degrees
a closed plane where all its points are an equal distance from the center
a line is considered perpendicular when it crosses another line at a 90 degree angle
Perpendicular Line
a line is considered parallel to another line if both lines are in the same plane, and they are the same distance apart over their entire length
Parallel Line
a straight line that connects two points to each other without extending past those points
Line Segment
a certain place on a plane
an object that is straight and infinitely long and thin
a transformation that turns a figure around a certain point
a transformation that flips a figure across a line
a transformation that slides each point of a figure the same distance in the same direction
|
{"url":"http://www.cram.com/flashcards/geometry-terms-2360178","timestamp":"2014-04-20T04:55:40Z","content_type":null,"content_length":"60861","record_id":"<urn:uuid:f66b5999-c07b-465e-8382-f3df3934c514>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00649-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Ratios (page 1 of 7)
Sections: Ratios, Proportions, Checking proportionality, Solving proportions
Proportions are built from ratios. A "ratio" is just a comparison between two different things. For instance, someone can look at a group of people, count noses, and refer to the "ratio of men to
women" in the group. Suppose there are thirty-five people, fifteen of whom are men. Then the ratio of men to women is 15 to 20.
Notice that, in the expression "the ratio of men to women", "men" came first. This order is very important, and must be respected: whichever word came first, its number must come first. If the
expression had been "the ratio of women to men", then the numbers would have been "20 to 15".
Expressing the ratio of men to women as "15 to 20" is expressing the ratio in words. There are two other notations for this "15 to 20" ratio:
odds notation: 15 : 20
fractional notation: ^15/[20]
You should be able to recognize all three notations; you will probably be expected to know them for your test.
Given a pair of numbers, you should be able to write down the ratios. For example:
• There are 16 ducks and 9 geese in a certain park. Express the ratio of ducks to geese in all three formats.
• Consider the above park. Express the ratio of geese to ducks in all three formats.
The numbers were the same in each of the above exercises, but the order in which they were listed differed, varying according to the order in which the elements of the ratio were expressed. In
ratios, order is very important.
Let's return to the 15 men and 20 women in our original group. I had expressed the ratio as a fraction, namely, ^15/[20]. This fraction reduces to ^3/[4]. This means that you can also express the
ratio of men to women as ^3/[4], 3 : 4, or "3 to 4".
This points out something important about ratios: the numbers used in the ratio might not be the absolute measured values. The ratio "15 to 20" refers to the absolute numbers of men and women,
respectively, in the group of thirty-five people. The simplified or reduced ratio "3 to 4" tells you only that, for every three men, there are four women. The simplified ratio also tells you that, in
any representative set of seven people (3 + 4 = 7) from this group, three will be men. In other words, the men comprise ^3/[7] of the people in the group. These relationships and reasoning are what
you use to solve many word problems:
• In a certain class, the ratio of passing grades to failing grades is 7 to 5. How many of the 36 students failed the course? Copyright © Elizabeth Stapel 2001-2011 All Rights Reserved
The ratio, "7 to 5" (or 7 : 5 or ^7/[5]), tells me that, of every 7 + 5 = 12 students, five failed. That is, ^5/[12] of the class flunked. Then ( ^5/[12] )(36) = 15 students failed.
• In the park mentioned above, the ratio of ducks to geese is 16 to 9. How many of the 300 birds are geese?
The ratio tells me that, of every 16 + 9 = 25 birds, 9 are geese. That is, ^9/[25] of the birds are geese. Then there are ( ^9/[25] )(300) = 108 geese.
Generally, ratio problems will just be a matter of stating ratios or simplifying them. For instance:
• Express the ratio in simplest form: $10 to $45
This exercise wants me to write the ratio as a reduced fraction:
.^10/[45] = ^2/[9].
This reduced fraction is the ratio's expression in simplest fractional form. Note that the units (the "dollar" signs) "canceled" on the fraction, since the units, "$", were the same on both values.
When both values in a ratio have the same unit, there should generally be no unit on the reduced form.
• Express the ratio in simplest form: 240 miles to 8 gallons
When I simplify, I get (240 miles) / (8 gallons) = (30 miles) / (1 gallon), or, in more common language, 30 miles per gallon.
In contrast to the answer to the previous exercise, this exercise's answer did need to have units on it, since the units on the two parts of the ratio, the "miles" and the "gallons", do not "cancel"
with each other.
Conversion factors are simplified ratios, so they might be covered around the same time that you're studying ratios and proportions. For instance, suppose you are asked how many feet long an American
football field is. You know that its length is 100 yards. You would then use the relationship of 3 feet to 1 yard, and multiply by 3 to get 300 feet. For more on this topic, look at the "Cancelling /
Converting Units" lesson.
Ratios are the comparison of one thing to another (miles to gallons, feet to yards, ducks to geese, et cetera). But their true usefulness comes in the setting up and solving of proportions....
Top | 1 | 2 | 3 | 4 | 5 | 6 | 7 | Return to Index Next >>
Cite this article as: Stapel, Elizabeth. "Ratios." Purplemath. Available from
http://www.purplemath.com/modules/ratio.htm. Accessed
|
{"url":"http://www.purplemath.com/modules/ratio.htm","timestamp":"2014-04-18T04:28:13Z","content_type":null,"content_length":"37308","record_id":"<urn:uuid:038ebc63-ce25-4bc4-81d0-042dc2eabc33>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00102-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Math Placement Test
Paine College uses the COMPASS test, prepared by ACT, as the math placement exam.
What is the test like?
• The questions may cover basic skills and move to more advanced questions that require students to apply math to given situations and to analyze math scenarios in areas ranging from basic
numerical skills (using integers, fractions, decimals, exponents, square roots, scientific notation, ratios and proportions, percentages, and averages) through pre-algebra, algebra, geometry,
and beyond (including substituting values in an algebraic equation, setting up equations, factoring polynomials, solving equations, using exponents and radicals, using rational expressions,
solving equations with two variables, and more).
• The computer will proceed through the different levels in response to the student’s performance on the previous questions. This means that if the student has many correct answers the skills
will advance, but if a student seems to be having difficulty the questions will adjust to his or her ability level.
• Students will be able to use the online calculator during the test. There will be a button on the screen to access it. Students will not be able to use their own calculators, cell phones, or
other calculating devices during the test.
• Students will be able to use scrap paper, which will be provided by the proctor. Students should bring a pencil or pen, however, because one will not be provided.
When will students find out the results?
When the student meets with his or her advisor to confirm course selection, the advisor will share the results of this testing. These results will help the student and the advisor determine the best
math course placement. Students whose scores on this test show that there is some weakness in math will be advised to seek additional help and support, including but not limited to specialized
tutoring and placement in specific courses that will help the student develop those math skills.
How can students prepare for the math placement test?
• Students should spend some time reviewing their math skills before taking the placement test in order to refresh their skills. This review can include going over problems from previous classes,
working out problems in a math review book, or looking for problems on the Internet. Many math websites are available. One that we find to be helpful is: http://www.purplemath.com/
• To see some sample exercises similar to those found on the COMPASS math test, click on the following links: http://www.act.org/compass/sample/math.html, http://www.act.org/compass/sample/
• Students should also become familiar with using the online calculator (available in Windows) so that they can use it comfortably if needed during the testing.
What do students need to know about the testing session?
The COMPASS math test is untimed, so students will be able to work at their own speed. This means that students should be prepared to take as much time as they need to complete the test. It will be
impossible to tell them exactly when they will be finished. As a general guide, students should plan to spend about an hour per test, although the actual time may vary from student to student.
|
{"url":"http://www.paine.edu/admissions/apply/testing/math.aspx","timestamp":"2014-04-19T22:05:26Z","content_type":null,"content_length":"52292","record_id":"<urn:uuid:a96d69e8-729a-48d1-b7a8-063f4fdee190>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00075-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Probabilities With Renaming
Suppose the outcome of each of a series of trials can be one of the events E[i] (i=1 to n) with the respective probabilities Pr{E[i]}. Prior to the first trial the events are renamed by applying the
inverse of a particular permutation "M" to the indices. Thus, if event E[i] is renamed E[j], then M[j] = i. Since there are n indices there are n! possible permutations, which we can denote by M[i]
(i = 1 to n!). Each of these has a known probability, denoted by Pr{M[i]}.
Now suppose the first k trials yield the events with the permuted indices g[1], g[2],..., g[k]. For example, if the outcome of the third trial is E[7] then g[3] = 7. (Here the “7” is the renamed
index, so this is not necessarily the original E[7] whose probability is given.) Given the values of all Pr{E[i]} for i = 1 to n (representing the unpermuted indices), and the probabilities Pr{M[i]},
i = 1 to n! of the possible permutations, and given the results g[i] (i=1 to k) of the first k trials, what are the probabilities that g[k+1] = s for any given index s?
If we knew the indices were not permuted at all, we would immediately know the probabilities for each of the events, but since the indices are permuted, we must infer something about the
probabilities of the permuted events from the k outcomes already observed.
The answer is the ratio of {the sum of the probabilities of all the possible sequences of events (including choice of permutation) with the given values of g[i], i = 1 to k+1} divided by {the sum of
the probabilities of all possible sequences with the given values of g[i], i = 1 to k}. Symbolically this can be written in the form
For example, consider the case of n = 2 with Pr{E[1]} = 1/3 and Pr{E[2]} = 2/3. Let M[1] denote the identity permutation and M[2] the reversal, and suppose we are given that Pr{M[1]} = Pr{M[2]} = 1/
2. Also we have k = 1 with g[1] = 1. The probability that g[2] equals 1 is
Now suppose Pr{A} = 1/3, Pr{B} = 2/3, and we are told that the events are being renamed, with a 50% chance that they will be kept that same, i.e., that p(A') = 1/3 and p(B') = 2/3, and a 50% chance
that they will be switched, resulting in p(A') = 2/3 and p(B') = 1/3. After the die is rolled we are told that event A' occurred. What is the probability that on the next roll, A' occurs again? More
generally, what if Pr{A} = p, Pr{B} = q = 1-p, Pr{A'=A} = r, and Pr{A'=B} = s = 1-r ? The answer is that A' occurs again with probability
Thus if we have p = 1/3, q = 2/3, r = s = 1/2, the result is 5/9.
Return to MathPages Main Menu
|
{"url":"http://mathpages.com/home/kmath033/kmath033.htm","timestamp":"2014-04-21T02:06:05Z","content_type":null,"content_length":"7647","record_id":"<urn:uuid:0e9c5b11-ee71-4b78-8117-bb5a1c44d33f>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00379-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Product Categories
Top Picks
Give your brain a work out with 16 different activities!
Chart, analyze, & animate data for publications, presentations, & web.
Featured Software
Let your kids learn math Funny! Efficiently!
The most comprehensive prep for GMAT math available.
The alternative to a hand-held calculator in the classroom.
Excel Math to multiple cells with formulas, adding, subtracting, multi
|
{"url":"http://mathking.com/","timestamp":"2014-04-16T10:10:56Z","content_type":null,"content_length":"22474","record_id":"<urn:uuid:f50d32f6-ad81-4f97-b98b-986f84cbf32a>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00476-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Fast and Robust Fixed-Point Algorithms for Independent Component Analysis
Results 1 - 10 of 389
- Neural Computing Surveys , 2001
"... A common problem encountered in such disciplines as statistics, data analysis, signal processing, and neural network research, is nding a suitable representation of multivariate data. For
computational and conceptual simplicity, such a representation is often sought as a linear transformation of the ..."
Cited by 1492 (93 self)
Add to MetaCart
A common problem encountered in such disciplines as statistics, data analysis, signal processing, and neural network research, is nding a suitable representation of multivariate data. For
computational and conceptual simplicity, such a representation is often sought as a linear transformation of the original data. Well-known linear transformation methods include, for example,
principal component analysis, factor analysis, and projection pursuit. A recently developed linear transformation method is independent component analysis (ICA), in which the desired representation
is the one that minimizes the statistical dependence of the components of the representation. Such a representation seems to capture the essential structure of the data in many applications. In this
paper, we survey the existing theory and methods for ICA. 1
- NEURAL NETWORKS , 2000
"... ..."
- Neural Computation , 1997
"... Abstract. Independent Subspace Analysis (ISA; Hyvarinen & Hoyer, 2000) is an extension of ICA. In ISA, the components are divided into subspaces and components in different subspaces are assumed
independent, whereas components in the same subspace have dependencies.In this paper we describe a fixed- ..."
Cited by 429 (19 self)
Add to MetaCart
Abstract. Independent Subspace Analysis (ISA; Hyvarinen & Hoyer, 2000) is an extension of ICA. In ISA, the components are divided into subspaces and components in different subspaces are assumed
independent, whereas components in the same subspace have dependencies.In this paper we describe a fixed-point algorithm for ISA estimation, formulated in analogy to FastICA. In particular we give a
proof of the quadratic convergence of the algorithm, and present simulations that confirm the fast convergence, but also show that the method is prone to convergence to local minima. 1
, 2000
"... Introduction In blind source separation an N-channel sensor signal x(t) arises from M unknown scalar source signals s i (t), linearly mixed together by an unknown N M matrix A, and possibly
corrupted by additive noise (t) x(t) = As(t) + (t) (1.1) We wish to estimate the mixing matrix A and the M- ..."
Cited by 193 (32 self)
Add to MetaCart
Introduction In blind source separation an N-channel sensor signal x(t) arises from M unknown scalar source signals s i (t), linearly mixed together by an unknown N M matrix A, and possibly corrupted
by additive noise (t) x(t) = As(t) + (t) (1.1) We wish to estimate the mixing matrix A and the M-dimensional source signal s(t). Many natural signals can be sparsely represented in a proper signal
dictionary s i (t) = K X k=1 C ik ' k (t) (1.2) The scalar functions ' k
, 2002
"... this paper, we assume that we have n observations, each being a realization of the p- dimensional random variable x = (x 1 , . . . , x p ) with mean E(x) = = ( 1 , . . . , p ) and covariance
matrix E{(x )(x = # pp . We denote such an observation matrix by X = i,j : 1 p, 1 ..."
Cited by 87 (0 self)
Add to MetaCart
this paper, we assume that we have n observations, each being a realization of the p- dimensional random variable x = (x 1 , . . . , x p ) with mean E(x) = = ( 1 , . . . , p ) and covariance matrix E
{(x )(x = # pp . We denote such an observation matrix by X = i,j : 1 p, 1 n}. If i and # i = # (i,i) denote the mean and the standard deviation of the ith random variable, respectively, then we will
often standardize the observations x i,j by (x i,j i )/ # i , where i = x i = 1/n j=1 x i,j , and # i = 1/n j=1 (x i,j x i )
, 2000
"... Separation of complex valued signals is a frequently arising problem in signal processing. For example, separation of convolutively mixed source signals involves computations on complex valued
signals. In this article it is assumed that the original, complex valued source signals are mutually statis ..."
Cited by 84 (1 self)
Add to MetaCart
Separation of complex valued signals is a frequently arising problem in signal processing. For example, separation of convolutively mixed source signals involves computations on complex valued
signals. In this article it is assumed that the original, complex valued source signals are mutually statistically independent, and the problem is solved by the independent component analysis (ICA)
model. ICA is a statistical method for transforming an observed multidimensional random vector into components that are mutually as independent as possible. In this article, a fast xed-point type
algorithm that is capable of separating complex valued, linearly mixed source signals is presented and its computational efficiency is shown by simulations. Also, the local consistency of the
estimator given by the algorithm is proved.
- IEEE Trans. On Audio, Speech and Lang. Processing , 2007
"... Abstract—An unsupervised learning algorithm for the separation of sound sources in one-channel music signals is presented. The algorithm is based on factorizing the magnitude spectrogram of an
input signal into a sum of components, each of which has a fixed magnitude spectrum and a time-varying gain ..."
Cited by 81 (10 self)
Add to MetaCart
Abstract—An unsupervised learning algorithm for the separation of sound sources in one-channel music signals is presented. The algorithm is based on factorizing the magnitude spectrogram of an input
signal into a sum of components, each of which has a fixed magnitude spectrum and a time-varying gain. Each sound source, in turn, is modeled as a sum of one or more components. The parameters of the
components are estimated by minimizing the reconstruction error between the input spectrogram and the model, while restricting the component spectrograms to be nonnegative and favoring components
whose gains are slowly varying and sparse. Temporal continuity is favored by using a cost term which is the sum of squared differences between the gains in adjacent frames, and sparseness is favored
by penalizing nonzero gains. The proposed iterative estimation algorithm is initialized with random values, and the gains and the spectra are then alternatively updated using multiplicative update
rules until the values converge. Simulation experiments were carried out using generated mixtures of pitched musical instrument samples and drum sounds. The performance of the proposed method was
compared with independent subspace analysis and basic nonnegative matrix factorization, which are based on the same linear model. According to these simulations, the proposed method enables a better
separation quality than the previous algorithms. Especially, the temporal continuity criterion improved the detection of pitched musical sounds. The sparseness criterion did not produce significant
improvements. Index Terms—Acoustic signal analysis, audio source separation, blind source separation, music, nonnegative matrix factorization, sparse coding, unsupervised learning. I.
- J. Machine Learning Research , 2006
"... In recent years, several methods have been proposed for the discovery of causal structure from non-experimental data. Such methods make various assumptions on the data generating process to
facilitate its identification from purely observational data. Continuing this line of research, we show how to ..."
Cited by 54 (23 self)
Add to MetaCart
In recent years, several methods have been proposed for the discovery of causal structure from non-experimental data. Such methods make various assumptions on the data generating process to
facilitate its identification from purely observational data. Continuing this line of research, we show how to discover the complete causal structure of continuous-valued data, under the assumptions
that (a) the data generating process is linear, (b) there are no unobserved confounders, and (c) disturbance variables have non-Gaussian distributions of non-zero variances. The solution relies on
the use of the statistical method known as independent component analysis, and does not require any pre-specified time-ordering of the variables. We provide a complete Matlab package for performing
this LiNGAM analysis (short for Linear Non-Gaussian Acyclic Model), and demonstrate the effectiveness of the method using artificially generated data and real-world data.
- IEEE Trans. Pattern Anal. Mach. Intell , 2005
"... Using a recently proposed geometric representation of planar shapes, we present algorithmic tools for: (i) hierarchical clustering of imaged objects according to the shapes of their boundaries,
(ii) learning of probability models for clustered shapes, and (iii) testing of observed shapes under co ..."
Cited by 51 (6 self)
Add to MetaCart
Using a recently proposed geometric representation of planar shapes, we present algorithmic tools for: (i) hierarchical clustering of imaged objects according to the shapes of their boundaries, (ii)
learning of probability models for clustered shapes, and (iii) testing of observed shapes under competing probability models. Clustering at any level of hierarchy is performed using a mimimum
dispersion criterion and a Markov search process. Statistical means of clusters provide shapes to be clustered at the next higher level, thus building a hierarchy of shapes.
, 2002
"... An important approach in visual neuroscience considers how the function of the early visual system relates to the statistics of its natural input. Previous studies have shown how many basic
properties of the primary visual cortex, such as the receptive fields of simple and complex cells and the sp ..."
Cited by 49 (10 self)
Add to MetaCart
An important approach in visual neuroscience considers how the function of the early visual system relates to the statistics of its natural input. Previous studies have shown how many basic
properties of the primary visual cortex, such as the receptive fields of simple and complex cells and the spatial organization (topography) of the cells, can be understood as efficient coding of
natural images. Here we extend the framework by considering how the responses of complex cells could be sparsely represented by a higher-order neural layer. This leads to contour coding and
end-stopped receptive fields. In addition, contour integration could be interpreted as top-down inference in the presented model.
|
{"url":"http://citeseerx.ist.psu.edu/showciting?doi=10.1.1.50.4731","timestamp":"2014-04-16T06:24:02Z","content_type":null,"content_length":"37453","record_id":"<urn:uuid:4fade767-d7c3-4ee3-a1f3-59f2a837a6b6>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00178-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Guest Post: Higgs ? 126 GeV, Said The Four Colour Theorem
The following text has been offered as a followup of the Higgs observation by the LHC experiments, which finds a signal at a mass compatible with the pre-discovery predictions made some time ago by
Vladimir Khachatryan - ones which I published in this blog. - T.D.
Considerations following the Higgs boson discovery - Ashay Dharwadker
We are pleased to know that our theoretical
predictionof the Higgs Boson mass
of 126 GeV, also announced in a
guest post
by Vladimir Khachatryan on this blog, is indeed in very good agreement with the recent announcements by the
experiments at CERN. Even as rumours were afoot about the impending discovery of the Higgs Boson at the LHC, physicist Marni Sheppeard wrote a series of posts about our theory (cf.
through the looking glass
so,a condensate Higgs
"So the observed Higgs Boson mass simply agrees with Dharwadker and Khachatryan's condensate formula
m[H] = ( m[W ]- + m[Z ]0+ m[W ]+ ) / 2
where a pair of Higgs Bosons form a Cooper pair and undergo Bose condensation attaining the lowest energy state possible. Anyway, a Standard Model Higgs was always essentially a condensate. But if we
can elaborate further onthe structure of this condensate, perhaps with our zoo of mirror particles, then in what sense does the Higgs exist? It exists because it reproduces the SM cross section
correctly, as observed at the LHC. That's what matters. After all these decades, the Standard Model finally finds its home."
Let us briefly summarize our Grand Unified Theory based upon the proofof the Four Color Theorem . We show that the mathematical proof of the four color theorem directly implies the existence of the
standard model, together with quantum gravity in its physical interpretation. Conversely,the experimentally observable standard model and quantum gravity show that Nature applies the mathematical
proof of the four colour theorem at the most fundamental level. We preserve all the established working theories of physics: Planck's Quantum Mechanics, the Schrödinger wave equation, Einstein's
Special and General Relativity, Maxwell's Electromagnetism, Feynman's Quantum Electrodynamics (QED), the Weinberg-Salam-Ward Electroweak model and Glashow-Iliopoulos-Maiani's Quantum Chromodynamics
(QCD). We build upon these theories, unifying all of them with Einstein's law of gravity. Quantum gravity is a direct and unavoidable consequence of the theory. The main construction of the Steiner
system in the proof of the four colour theorem already defines the gravitational fields of all the particles of the standard model.
Our first goal is to construct all the particles constituting the classic standard model, in exact agreement with Veltman and 't Hooft's description. In this description, the standard model is
already renormalized, so we have no quarrel with physicists who favor the perturbative approach. We are able to predict the exact mass of the Higgs Boson and the CP violation and mixing angle of weak
interactions, aka the Weinberg angle. We are also able to calculate the values of the Cabibbo angle and CKM matrix for strong interactions. Our second goal is to construct the gauge groups and
explicitly calculate the gauge coupling constants of the force fields. We show how the gauge groups are embedded in a sequence along the cosmological timeline in the grand unification:
SU(5) → SU(3) → SU(2) → U(1)
Then, we calculate the mass ratios of the particles of the standard model. Thus, the mathematical proof of the four color theorem shows that the grand unification of the standard model with quantum
gravity is complete, and rules out the possibility of finding any other kinds of particles.
We also show that the theory can be obtained entirely in terms of the Poincarégroup of isometries of space-time . We define the space and time chiralities of all spin 1/2 Fermions in agreement with
Dirac's relativistic wave equation. All the particles of the standard model now correspond to irreducible representations of the Poincaré group according to Wigner's classification. We construct the
Steiner system of Fermions and show how the Mathieu group acts as the group of symmetries of the fundamental building blocks of matter.
Finally, we show how to calculate Einstein's Cosmological Constant using the topological properties of the gauge in the Grand Unified Theory. We calculate the exact percentages of ordinary baryonic
matter, dark matter and dark energy in the universe. These values are in perfect agreement with the seven-year Wilkinson Microwave Anisotropy Probe(WMAP) observations. Thus dark matter, dark energy
and the cosmological constant are intrinsic properties of the gauge in the Grand Unified Theory.
Risperdal is off patent now I hear. I have to imagine prescriptions are affordable even in India....
Anonymous (not verified) | 07/20/12 | 12:48 PM
They were not the only ones to get the correct value:
Daniel Rocha | 07/20/12 | 13:01 PM
Come on, this was submitted on 11 Dec 2011!
Anonymous (not verified) | 07/20/12 | 13:34 PM
Whatever, it was said in this blog long before that. Anyone can make a prediction. Bookies got it right a year ago.
Proving it was a lot more work than slapping up a paper on arXiv.
Hank Campbell | 07/20/12 | 13:56 PM
The bookies didn't get it right - they said the highest probability was below 130 gev. Whoopdy-do.
Obviously this four-color-theorem theory is nonsense. But can someone please explain why its nonsense? Considering that they did, in fact, predict 126 gev in late 2010, I think they've earned a
careful refutation by someone who knows what they're talking about.
Anonymous (not verified) | 07/20/12 | 14:20 PM
http://snarxivblog.blogspot.com/2012/01/dharwadker-and-khachatryans-pred... is a start.
P.S. Tommaso could you please insert some spaces in Ashay Dharwadker's post, so all his words don't run together?
Mitchell Porter (not verified) | 07/20/12 | 20:23 PM
Another peregrine thing was an equation from de Vries, using a pair of quadratic equations, to predict the mass of W from Z; this was using the positive solution; when looking at the negative
solutions, they were (times i) 123 GeV and 178 GeV, somehow above and below the top and the higgs. I think I reported this point back in 2009 too.
Alejandro Rivero (not verified) | 07/20/12 | 15:04 PM
Hmm, no, my report was from 2006 http://arxiv.org/abs/hep-ph/0606171
Input mass: 91.1874GeV
Output masses: 80.3717, 176.154 i and 122.384 i GeVs. So as I said, a little "over the top" and a little under the Higgs. I was fascinated because Hans de Vries only was aiming to secure the Weinberg
angle, ie the proportion between W and Z; he never told about the negative solutions .
Alejandro Rivero (not verified) | 07/20/12 | 15:11 PM
Ashay Dharwadker also claimed a polynomial-time algorithm for maximum clique and a number of other hard combinatorial problems (which implies P=NP, of course). I sent him a simple counterexample with
explanations of his elementary errors back in 2009, and now I see he self-published a book still claiming the same! He is probably not even a crackpot, but a scam artist.
Stas Busygin (not verified) | 07/20/12 | 19:46 PM
When you design the configuration space to which your calorimeters are designed for, then you hope you capture all the probabilities of the experiment? So you then can say it fits all the parameters
up to that point. If you design the space then you are inviting what you have known all along? :)
PLato (not verified) | 07/20/12 | 22:55 PM
Is this "Four Colour Theorum" the one about coloring a map only using four colors?
Tara Li (not verified) | 07/21/12 | 10:20 AM
I don't like numerology at all, but I must admit that the mass formula for the Higgs looks quite cool. Random coincidence? Maybe...
If the four-color "theory" is based on SU(5), I would like to know how they deal with neutrinos, and whether they can predict their masses (and mixing parameters as well, if at all possible).
Anonymous (not verified) | 07/21/12 | 11:10 AM
So, kea is now "explaining" the higgs after years of ranting against its existence?
unit (not verified) | 07/22/12 | 04:31 AM
I am a bit sceptical and actually reminded of:
BEDEMIR: And that, my liege, is how we know the Earth to be banana-shaped.
ARTHUR: This new learning amazes me, Sir Bedemir. Explain again how sheeps' bladders may be employed to prevent earthquakes.
BEDEMIR: Oh, certainly, sir.
Yatima (not verified) | 07/22/12 | 05:08 AM
What is the trials factor on algebraic relations?
Andre David (not verified) | 07/22/12 | 05:54 AM
The referred article not only gives the proper mass (or better said the relations between masses). It also explains generations.
See: http://www.dharwadker.org/standard_model/
In this respect also the following preprint is interesting: www.dharwadker.org/space_time/preprint.pdf (This is also referred above)
This article reasons more around the Dirac equation than the Schroedinger equation. The notion of Schroedinger disk is still used.
The paper about the cosmological constant adds some interesting corrections. Only the first of the twenty four Schroedinger disks describe visible matter. The other discs concern dark matter.
If you think, think twice
Hans van Leunen | 07/22/12 | 16:51 PM
Barry Adams | 07/23/12 | 07:54 AM
In the papers about the Grand Unified Theory of the four colors seem to relate to charges. The colors are painting so called Schroedinger discs. The particle frame contains 24 of such disks. Colored
fields in these disks relate to particles. Borders of the Schroedinger disks relate to photons, gluons or gravitons.
The center represents the Higgs.
Here follow three generations of the fermions:
If you think, think twice
Hans van Leunen | 07/23/12 | 08:22 AM
A non-computer proof of the four color theorem would constitute a huge breakthrough in graph theory and combinatorics. It should be noted that the claimed four-color proof on which the author's grand
unified theory supposedly rests has never been subjected to peer review and would be rejected by any competent referee. One can find detailed discussion of the flaws in the proof - in actuality, the
absence of even an attempt at a proof - on many web sites, including the comments to the earlier post about the model on this blog.
Will Orrick (not verified) | 07/23/12 | 21:32 PM
The article www.dharwadker.org/space_time/preprint.pdf does not rely on the proof of the four color theorem. Instead it uses the Poincare group. I think that the author should have used the Einstein
group (introduced by Mendel Sachs) instead. (Still the article uses the particle frame and the Schroedinger disks).
See the paper “SYMMETRY IN ELECTRODYNAMICS” of M. Sachs.
If you think, think twice
Hans van Leunen | 07/24/12 | 04:44 AM
|
{"url":"http://www.science20.com/quantum_diaries_survivor/guest_post_higgs_126_gev_said_four_colour_theorem-92289","timestamp":"2014-04-20T08:14:24Z","content_type":null,"content_length":"62264","record_id":"<urn:uuid:d4bd1c1b-dc5b-4e9c-8b35-fac9d5cf9dba>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00178-ip-10-147-4-33.ec2.internal.warc.gz"}
|
How to Get Close to the Median Shape
"... We consider the problem of approximating a set P of n points in R d by a j-dimensional subspace under the ℓp measure, in which we wish to minimize the sum of ℓp distances from each point of P to
this subspace. More generally, the Fq(ℓp)-subspace approximation problem asks for a j-subspace that minim ..."
Cited by 12 (7 self)
Add to MetaCart
We consider the problem of approximating a set P of n points in R d by a j-dimensional subspace under the ℓp measure, in which we wish to minimize the sum of ℓp distances from each point of P to this
subspace. More generally, the Fq(ℓp)-subspace approximation problem asks for a j-subspace that minimizes the sum of qth powers of ℓp-distances to this subspace, up to a multiplicative factor of (1 +
ɛ). We develop techniques for subspace approximation, regression, and matrix approximation that can be used to deal with massive data sets in high dimensional spaces. In particular, we develop
coresets and sketches, i.e. small space representations that approximate the input point set P with respect to the subspace approximation problem. Our results are: • A dimensionality reduction method
that can be applied to Fq(ℓp)-clustering and shape fitting problems, such as those in [8, 15]. • The first strong coreset for F1(ℓ2)-subspace approximation in high-dimensional spaces, i.e. of size
polynomial in the dimension of the space. This coreset approximates the distances to any j-subspace (not just the optimal one). • A (1 + ɛ)-approximation algorithm for the j-dimensional F1(ℓ2)
-subspace approximation problem with running time nd(j/ɛ) O(1) + (n + d)2 poly(j/ɛ). • A streaming algorithm that maintains a coreset for the F1(ℓ2)-subspace approximation problem and uses a space
log n
- In proceedings of FSTTCS , 2006
"... The problem received the title of `Buridan's sheep. ' The biological code was taken from a young merino sheep, by the Casparo-Karpov method, at a moment when the sheep was between two feeding
troughs full of mixed fodder. This code, along with additional data about sheep in general, was fed into COD ..."
Cited by 8 (1 self)
Add to MetaCart
The problem received the title of `Buridan's sheep. ' The biological code was taken from a young merino sheep, by the Casparo-Karpov method, at a moment when the sheep was between two feeding troughs
full of mixed fodder. This code, along with additional data about sheep in general, was fed into CODD. The machine was required: a) to predict which trough the merino would choose, and b) to give the
psychophysiological basis for this choice. The mystery of the hind leg, Arkady and Boris Strugatsky Given a set P of n points on the real line and a (potentially in nite) family of functions, we
investigate the problem of nding a small (weighted) subset S ⊆ P, such that for any f ∈ F, we have that f(P) is a (1 ± ε)-approximation to f(S). Here, f(Q) = ∑ q∈Q w(q)f(q) denotes the weighted
discrete integral of f over the point set Q, where w(q) is the weight assigned to the point q. We study this problem, and provide tight bounds on the size S for several families of functions. As an
application, we present some coreset constructions for clustering. 1
, 2007
"... We re-examine relative ε-approximations, previously studied in [Pol86, Hau92, LLS01, CKMS06], and their relation to certain geometric problems. We give a simple constructive proof of their
existence in general range spaces with finite VC-dimension, and of a sharp bound on their size, close to the be ..."
Cited by 5 (2 self)
Add to MetaCart
We re-examine relative ε-approximations, previously studied in [Pol86, Hau92, LLS01, CKMS06], and their relation to certain geometric problems. We give a simple constructive proof of their existence
in general range spaces with finite VC-dimension, and of a sharp bound on their size, close to the best known one. We then give a construction of smaller-size relative ε-approximations for range
spaces that involve points and halfspaces in two and higher dimensions. The planar construction is based on a new structure—spanning trees with small relative crossing number, which we believe to be
of independent interest. We also consider applications of the new structures for approximate range counting and related problems.
, 2009
"... We present a new optimization technique that yields the first FPTAS for several geometric problems. These problems reduce to optimizing a sum of non-negative, constant descriptioncomplexity
algebraic functions. We first give an FPTAS for optimizing such a sum of algebraic functions, and then we appl ..."
Cited by 4 (0 self)
Add to MetaCart
We present a new optimization technique that yields the first FPTAS for several geometric problems. These problems reduce to optimizing a sum of non-negative, constant descriptioncomplexity algebraic
functions. We first give an FPTAS for optimizing such a sum of algebraic functions, and then we apply it to several geometric optimization problems. We obtain the first FPTAS for two fundamental
geometric shape matching problems in fixed dimension: maximizing the volume of overlap of two polyhedra under rigid motions, and minimizing their symmetric difference. We obtain the first FPTAS for
other problems in fixed dimension, such as computing an optimal ray in a weighted subdivision, finding the largest axially symmetric subset of a polyhedron, and computing minimum-area hulls. 1
"... We consider the problem of projective clustering in Euclidean spaces of non-fixed dimension. Here, we are given a set P of n points in R m and integers j ≥ 1, k ≥ 0, and the goal is to find j
k-subspaces so that the sum of the distances of each point in P to the nearest subspace is minimized. Observ ..."
Cited by 3 (0 self)
Add to MetaCart
We consider the problem of projective clustering in Euclidean spaces of non-fixed dimension. Here, we are given a set P of n points in R m and integers j ≥ 1, k ≥ 0, and the goal is to find j
k-subspaces so that the sum of the distances of each point in P to the nearest subspace is minimized. Observe that this is a shape fitting problem where we wish to find the best fit in the L1 sense.
Here we will treat the number j of subspaces we want to fit and the dimension k of each of them as constants. We consider instances of projective clustering where the point coordinates are integers
of magnitude polynomial in m and n. Our main result is a randomized algorithm that for any ε> 0 runs in time O(mn polylog(mn)) and outputs a solution that with high probability is within (1 + ε) of
the optimal solution. To obtain this result, we show that the fixed dimensional version of the above projective clustering problem has a small coreset. We do that by observing that in a fairly
general sense, shape fitting problems that have small coresets in the L ∞ setting also have small coresets in the L1 setting, and then exploiting an existing construction for the L∞ setting. This
observation seems to be quite useful for other shape fitting problems as well, as we demonstrate by constructing the first “regular ” coreset for the circle fitting problem in the plane. 1
"... We give a randomized bi-criteria algorithm for the problem of finding a k-dimensional subspace that minimizes the Lp-error for given points, i.e., p-th root of the sum of p-th powers of
distances to given points, for any p ≥ 1. Our algorithm runs in time Õ ( mn · k 3 (k/ɛ) p+1) and produces a subset ..."
Add to MetaCart
We give a randomized bi-criteria algorithm for the problem of finding a k-dimensional subspace that minimizes the Lp-error for given points, i.e., p-th root of the sum of p-th powers of distances to
given points, for any p ≥ 1. Our algorithm runs in time Õ ( mn · k 3 (k/ɛ) p+1) and produces a subset of size Õ ( k 2 (k/ɛ) p+1) from the given points such that, with high probability, the span of
these points gives a (1 + ɛ)-approximation to the optimal k-dimensional subspace. We also show a dimension reduction type of result for this problem where we can efficiently find a subset of size Õ (
k p+3 + (k/ɛ) p+2) such that, with high probability, their span contains a k-dimensional subspace that gives (1 + ɛ)-approximation to the optimum. We prove similar results for the corresponding
projective clustering problem where we need to find multiple k-dimensional subspaces. 1
, 2007
"... The linear least trimmed squares (LTS) estimator is a statistical technique for estimating the line (or generally hyperplane) of fit for a set of points. It was proposed by Rousseeuw as a robust
alternative to the classical least squares estimator. Given a set of n points in R d, in classical least ..."
Add to MetaCart
The linear least trimmed squares (LTS) estimator is a statistical technique for estimating the line (or generally hyperplane) of fit for a set of points. It was proposed by Rousseeuw as a robust
alternative to the classical least squares estimator. Given a set of n points in R d, in classical least squares the objective is to find a linear model (that is, nonvertical hyperplane) that
minimizes the sum of squared residuals. In LTS the objective is to minimize the sum of the smallest 50 % squared residuals. LTS is a robust estimator with a 50%-breakdown point, which means that the
estimator is insensitive to corruption due to outliers, provided that the outliers constitute less than 50 % of the set. LTS is closely related to the well known LMS estimator, in which the objective
is to minimize the median squared residual, and LTA, in which the objective is to minimize the sum of the smallest 50 % absolute residuals. LTS has the advantage of being statistically more efficient
than LMS. Unfortunately, the computational complexity of LTS is less well understood than LMS. In this paper we present new algorithms, both exact and approximate, for computing the LTS estimator. We
also present hardness results for exact and approximate LTS and LTA.
"... data into tiny data: ..."
|
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=5104537","timestamp":"2014-04-24T06:48:35Z","content_type":null,"content_length":"33230","record_id":"<urn:uuid:027efce4-0f09-494d-b1ca-ffea3a75b751>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00186-ip-10-147-4-33.ec2.internal.warc.gz"}
|
algebraic inequalities & graphing on a number line
algebraic inequalities & graphing on a number line
Solve each.
a. -3x = - 6
b. -3x > - 6
Equation VS. Inequality
Though the two statements above are similar, they are also different.
The first statement, -3x = - 6, is an equation, a statement of equality, a statement with the verb "is" or "equals". It says, "The product of -3 and a number is -6."
The second statement, -3x > - 6, is an inequality. It says, "The product of -3 and a number is greater than -6."
To learn how to solve equations see solve linear equation w/paper & pencil. After you have learned about solving equations, read this page.
Consider each of the following inequalities written in symbols and in words.
-3x > - 6, "The product of -3 and a number is greater than -6."
-3x < - 6, "The product of -3 and a number is less than -6."
-3x > - 6, "The product of -3 and a number is greater than or equal to -6."
-3x < - 6, "The product of -3 and a number is less than or equal to -6."
To solve the equation -3x = -6, one divides both sides by -3 and gets x = 2.
To solve the inequations above, one divides both sides by -3 and "FLIPS THE RELATION SYMBOL" and gets the following results.
x < 2, "The number is less than 2."
x > 2, "The number is greater than 2."
x < 2, "The number is less than or equal to 2."
x > 2, "The number is greater than or equal to 2."
When you multiply or divide an inequality by a negative number, you must "flip" the sign to make the statement true. This is the ONLY difference between solving an equation and solving an inequality.
The text box below will explain why this is necessary.
The symbols [ and ] are used to indicate that the endpoint is included. The symbols mean the same as the filled-in circle.
The symobls ( and ) are used to indicate the endpoint is not included. The symbols mean the same as the hollow circle.
Inequalities are expressed in other ways. Each of these says the same thing.
a.) The set of all numbers such that the number is greater than 2.
b.) {x | x > 2}   
Vocabulary & Stuff
set    integers number line    multiples    is greater than graph paper
|
{"url":"http://www.mathnstuff.com/math/algebra/ainequ.htm","timestamp":"2014-04-19T23:05:58Z","content_type":null,"content_length":"6410","record_id":"<urn:uuid:1a07fdb1-00b9-47be-9c88-b29f294747b0>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00598-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Julie opened a lemonade stand and sold lemonade in two diff
Question Stats:
63%36% (00:47)based on 46 sessions
tim415 wrote:
Julie opened a lemonade stand and sold lemonade in two different sizes, a 52-cent (12oz) and a 58-cent (16oz) size. How many 52-cent (12oz) lemonade drinks did Julie sell?
(1) Julie sold a total of 9 lemonades
(2) The total value of the lemonade drinks Julie sold was $4.92
Cost of 12 oz drink =52 cents, lets assumte total number sold are N
Cost of 16 oz drink =58 cents, lets assumte total number sold are M
Statement 1: N+M =9
So it could be that N=1, M=8 or that N=2, M=7 etc. Clearly not sufficient.
Statement 2: N*0.52 + M*0.58 = 4.92
Or to simplify it: N*52 + M*58 = 492
N*26 + M*29 = 246
M*29 = 246-N*26
This is true only for one value of M and N, when M=4 and N=5. (Assuming number of drinks to be only integers and hoping Julie's stand is not a unique stand that sells 0.732, 0.981 drinks
Hence sufficient to answer.
Ans B it is.
Lets Kudos!!!
Black Friday Debrief
|
{"url":"http://gmatclub.com/forum/julie-opened-a-lemonade-stand-and-sold-lemonade-in-two-diff-141870.html","timestamp":"2014-04-16T04:11:33Z","content_type":null,"content_length":"205134","record_id":"<urn:uuid:29fc9496-1ae1-4d4f-866b-7136144ea468>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00005-ip-10-147-4-33.ec2.internal.warc.gz"}
|
trig question (correct me) [part 1]
June 26th 2009, 07:13 AM #1
Super Member
Jun 2009
here's my function :
let $\mathbb{E} = [-\pi ,+\pi]$
$x \in \mathbb{E}$$;$$f(x) = \sqrt{\frac{1+cos(x)}{1-cos(x)}}$
and here's my questions :
Prove that $(\forall x \in \mathbb{D})$$f(x) = \left |\tan (\frac{\pi}{2}-\frac{x}{2}) \right |$
Study the sign of $\tan (\frac{\pi}{2}-\frac{x}{2})$ in $]0,+\pi]$
i proceeded like this :
for the first one,we have $\forall x\in \mathbb{R},1+cos(x) = 2cos^{2}(\frac{x}{2})$ and $1-cos(x) = 2sin^{2}(\frac{x}{2})$
$\Rightarrow \sqrt{\frac{1+cos(x)}{1-cos(x)}} =$$\sqrt{\frac{2cos^{2} (\frac{x}{2})}{2sin^{2}(\frac{x}{2})}}=$
$| \frac{cos(\frac{x}{2})}{sin(\frac{x}{2})}|$$= \left |\frac{1}{\tan (\frac{x}{2})} \right |$$= \left |\tan (\frac{\pi}{2}-\frac{x}{2}) \right |$
for the second one,am i correct writing ; $\forall x \in \mathbb{D},f(x) = tan(\frac{\pi}{2}-\frac{x}{2})$ ( i already studied $f$ )
thanks for everyone
by the way i'm trying to improve myself for writing proofs so if there is any small mistake please tell me
well to me it seems that interval of D is not given and we are separetely suppose to check its sign in interval ]0,+pi]
(you could have wrote - where D is given by ]0,+pi] )
else seems correct to me
sorry about later,i forgot to mention the interval of $\mathbb{D}$.
$\mathbb{D} =$$[-\pi,0)\cup(0,+\pi]$
June 26th 2009, 08:49 AM #2
June 26th 2009, 09:05 AM #3
Super Member
Jun 2009
|
{"url":"http://mathhelpforum.com/algebra/93787-trig-question-correct-me-part-1-a.html","timestamp":"2014-04-19T10:28:22Z","content_type":null,"content_length":"40295","record_id":"<urn:uuid:02b25c92-6799-4094-bd73-80c328c96583>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00431-ip-10-147-4-33.ec2.internal.warc.gz"}
|
What does the Eigenvalue of a linear system actually tell you?
hi newclearwintr!
no, the mathematical model is fine …
suppose the equation is L = Iω (angular momentum = moment of inertia times angular velocity)
the I here is a tensor, not a number, and L is generally not parallel to ω
only if ω is parallel to a principal axis of the body, is L parallel to ω
ie the only modes with L parallel to ω are rotations about the principal axes (the eigenvectors)
for any other mode, even if angular momentum is constant (conserved), the angular velocity won't be … the rotation won't stay in the mode it started in!
|
{"url":"http://www.physicsforums.com/showthread.php?p=4240257","timestamp":"2014-04-17T07:24:33Z","content_type":null,"content_length":"39916","record_id":"<urn:uuid:481dc6bb-9d34-4a53-9624-05420f57906a>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00570-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Harvard Psychological Studies, Volume 1 Summary
Since r’ (the speed of the disc) is always positive, and s is always greater than p (cf. p. 173), and since the denominator is a square and therefore positive, it follows that
D_{[tau]}W > 0
or that W increases if r increases.
Furthermore, if W is a wide band, s is the wider sector. The rate of increase of W as r increases is
r’(s +- p)
D_{[tau]}W = -----------
(r’ +- r) squared
which is larger if s is larger (s and r being always positive). That is, as r increases, ’broad bands widen relatively more than narrow ones.’
3. Thirdly (p. 174, No. 3), “The width of The bands increases if the speed of the revolving disc decreases.” This speed is r’. That the observed fact is equally true of the geometrical bands is clear
from inspection, since in
rs — pr’
W = --------- ,
r’ +- r
as r’ decreases, the denominator of the right-hand member decreases while the numerator increases.
4. We now come to the transition-bands, where one color shades over into the other. It was observed (p. 174, No. 4) that, “These partake of the colors of both the sectors on the disc. The wider the
rod the wider the transition-bands.”
We have already seen (p. 180) that at intervals the pendulum conceals a portion of both the sectors, so that at those points the color of the band will be found not by deducting either color alone
from the fused color, but by deducting a small amount of both colors in definite proportions. The locus of the positions where both colors are to be thus deducted we have provisionally called (in the
geometrical section) ‘transition-bands.’ Just as for pure-color bands, this locus is a radial sector, and we have found its width to be (formula 6, p. 184)
W = --------- ,
r’ +- r
Now, are these bands of bi-color deduction identical with the transition-bands observed in the illusion? Since the total concealing capacity of the pendulum for any given speed is fixed, less of
either color can be deducted for a transition-band than is deducted of one color for a pure-color band. Therefore, a transition-band will never be so different from the original fusion-color as will
either ‘pure-color’ band; that is, compared with the pure color-bands, the transition-bands will ’partake of the colors of both the sectors on the disc.’ Since
W = --------- ,
r’ +- r
it is clear that an increase of p will give an increase of w; i.e., ‘the wider the rod, the wider the transition-bands.’
Since r is the rate of the rod and is always less than r’, the more rapidly the rod moves, the wider will be the transition-bands when rod and disc move in the same direction, that is, when
|
{"url":"http://www.bookrags.com/ebooks/16266/133.html","timestamp":"2014-04-19T05:16:12Z","content_type":null,"content_length":"34546","record_id":"<urn:uuid:f13f8942-5f21-4229-9c58-d09292f9f1bc>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00604-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Hello! Coming to you with an interesting problem related to set theory
June 13th 2012, 05:03 AM
Hello! Coming to you with an interesting problem related to set theory
my name is Alex, I am 35 years old, engineer and econometrician, from Germany. I have problem that I have discussed with several friends and colleagues. See the following description of the
Consider a country with 120.000 individuals. We want to create groupings (=cohorts) of individuals. These are the detailed characteristics and constraints:
1. There will be a multitude of groupings. In theory, and the solution should consider this, there will be an unlimited number of groupings. In reality, and it will be appreciated to have a
solution for this, there will be not more than 50 different groupings.
2. The groupings are created independently from each other, at least one group contains all individuals.
3. Each cohort of individuals will have at least 3 individuals. For each cohort we will know the total gross salary. The term “salary” is a placeholder for any sensitive, personal data that can
be subject of additive, subtractive, or multiplicative algebraic operations. This data fact is hereafter referred to as “data”.
4. Individuals in any cohort are not necessarily geographically close, i.e. an individual from the southernmost location of the country can be grouped with individuals from the northernmost
location of the country.
5. We know from each individual the geographic coordinates.
The problem for which a solution is needed: Create an algorithm that checks all possible combinations of involved structures and cohorts to prevent the identification of one individuals’ data,
given the multitude of structures and cohorts as described above.
The complexity of the problem arises (or at least seems to arise) from the combinations of existing groupings with potential (re-)combinations of others. The assumption is formulated that the
number of combinations between groupings increases exponentially as the number of groupings increases. To further illustrate the problem, consider in the easiest case these two different
□ Grouping 1 contains the individuals 1, 2, 3, and 4 and the data attached to this cohort is €17.150.
□ Grouping 2 contains the individuals 1, 2, and 3 and the data attached to this cohort is €8.250.
□ By subtracting the data of grouping 2 from the data of grouping 1 we disclose the data of individual 4.
Consider another example:
□ Grouping 1 contains the individuals 1, 2, 3, and 4 and the data attached to this cohort is €17.150.
□ Grouping 2 contains the individuals 5, 6, and 7 and the data attached to this cohort is €14.200.
□ Grouping 3 contains the individuals 1, 2, 3, 4, 5, 6, 7, 8 and the data attached to this cohort is €32.050.
□ By subtracting the sum of data of groupings 1 and 2 from the data of grouping 3 we disclose the data of individual 8.
Any help on theoretical approaches are welcome! Thanks in advance, greetings from Germany! Alex. (Bow)
|
{"url":"http://mathhelpforum.com/new-users/199972-hello-coming-you-interesting-problem-related-set-theory-print.html","timestamp":"2014-04-16T19:19:52Z","content_type":null,"content_length":"6370","record_id":"<urn:uuid:c9981c87-5d63-4df4-b1bf-82ea4f708d1e>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00110-ip-10-147-4-33.ec2.internal.warc.gz"}
|
North Carver Math Tutor
...After college I worked as a tutor in the California Public School System. I met with six students over the course of a school year on a fixed, weekly schedule. These students were male and
female, ranging in age from elementary to middle school.
49 Subjects: including calculus, elementary (k-6th), study skills, baseball
...Trying to teach a concept or technique without the right supports beneath is merely postponing the problem until later. So often, once the root difficulty is found (and fixed!), a plethora of
other problems and fears diminish or disappear. Once you understand it, science and math is fun and easy.
12 Subjects: including algebra 1, algebra 2, calculus, chemistry
...When I tutor high school students, I usually choose the library as a setting. This allows for few distractions and greater concentration. I prefer to tutor in 2 hour segments, as it ensures
greater understanding of a subject.
15 Subjects: including prealgebra, algebra 1, reading, writing
...I include a lot of positive reinforcement during my tutoring sessions, reassuring my students that they can learn and succeed. My positive approach allows my students to relax and start
focusing on the material to be learned. I have had excellent success in motivating students and creating real results in academic achievement.
13 Subjects: including algebra 2, American history, calculus, geometry
...I understand the challenges and difficulties involved with ADHD allowing me to identify and address areas of improvement that will considerably help with future success. Messing up over and
over in a single area may have hindered the desire to learn or lowered confidence in some students, with t...
36 Subjects: including algebra 1, algebra 2, chemistry, grammar
|
{"url":"http://www.purplemath.com/north_carver_ma_math_tutors.php","timestamp":"2014-04-16T13:26:24Z","content_type":null,"content_length":"23778","record_id":"<urn:uuid:139a3304-a049-4ec0-9a5f-8eacc0c92b7d>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00241-ip-10-147-4-33.ec2.internal.warc.gz"}
|
How much are reduced powers different?
up vote 3 down vote favorite
Given two infinite sets $X$ and $I$, and a filter ${\cal F}$ on $I$, one defines as usual the equivalence relation $\approx_{\cal F}$ on $X^I$ and obtains the reduced power $Y = X^I / \approx_{\cal
Question 1 : to what extent do such reduced powers differ when one filter on $I$ is changed to another filter on $I$ ?
Question 2 : consider question 1 in the case of different ultrafilters on $I$, thus in the case of ultrapowers.
Let me specify my question. Given two ultrapowers $X^I/F$ and $Y^J/G$ where $X,I,Y,J$ are arbitrary infinite sets, while $F,G$ are ultrafilters on $I$ and $J$< respectively, the questions is to what
extent :
1) the cardinals of those two ultrapowers can differ ?
2) those two ultrapowers are not isomorphic when $X$ and $Y$ are fields ?
A more precise, and at the same time, more general form of the question is as follows : Given a reduced power X^I / F, where X and I are arbitrary infinite sets, while F is an arbitrary filter on I,
to what extent does the reduced power X^I / F change, when X, I or F are changed ?
1 Are you interested in reduced products of bare naked sets or of structures such as groups etc? – Simon Thomas Jul 3 '10 at 14:58
Actually, I am interested in reduced powers and ultrapowers of the field \mathbb{R} of usual real numbers, but also more generally, of rings. – Elemer E Rosinger Jul 3 '10 at 16:26
add comment
4 Answers
active oldest votes
Easy differences arise if one allows principal ultrafilters, since the ultrapower of $X$ by a principal filter is canonically isomorphic to $X$, but other ultrapowers are not. Another
easy difference arises when $I$ is uncountable, since one filter might concentrate on a countable subset of $I$ and others might not, and this can dramatically affect the size of the
reduced power, making them different.
So the question is more interesting when one considers only non-principal filters and also only uniform filters, meaning that every small subset of $I$ is measure $0$.
up vote 10 In this case, under the Generalized Continuum Hypothesis, the ultrapower of any first order structure is saturated, and thus any two of them will be canonically isomorphic by a
down vote back-and-forth argument. Without the GCH, it is consistent with ZFC to have ultrafilters on the same set leading to nonisomorphic ultrapowers.
Also relevant is the Keisler-Shelah theorem, which asserts that two first order structures---such as two graphs, groups or rings---are elementarily equivalent (have all the same first
order truths) if and only if they have an isomorphic ultrapowers.
What´s "small"? – Mariano Suárez-Alvarez♦ Jul 3 '10 at 14:53
In this case, it means size less than the cardinality of $I$. For a filter to give measure $1$ to a strictly smaller set, means in a sense that you have the wrong index set. – Joel
David Hamkins Jul 3 '10 at 14:56
Thank you for the answer. As I commented above, I am interested only in non-principal ultrafilters and in filters which contain the Frechet filter. – Elemer E Rosinger Jul 3 '10 at
Elemer, if by the Frechet filter, you mean the filter of finite sets, this is not good enough, when $I$ is uncountable, since one filter might still concentrate on a countable set
while another does not. What you want is the filter of all co-small sets, making your filter uniform in the sense I mentioned. – Joel David Hamkins Jul 3 '10 at 16:32
1 In fact, the existence of non-isomorphic ultrapowers of the linearly ordered sets $\mathbb{N}$ or $\mathbb{R}$ over a countable index set is not just consistent with $\neg CH$. It is
actually equivalent to $\neg CH$. – Simon Thomas Jul 3 '10 at 16:50
show 5 more comments
Given that you are interested in ultrapowers of $\mathbb{R}$, you might like the following which appears in a joint paper with Kramer, Shelah and Tent.
Theorem: Up to isomorphism, the number of ultrapowers $\prod_{\mathcal{U}} \mathbb{R}$, where $\mathcal{U}$ is a nonprincipal ultrafilter over $\mathbb{N}$, is 1 if $CH$ holds and $2
^{2^{\aleph_{0}}}$ if $CH$ fails.
up vote 9 down
vote Here $CH$ is the Continuuum Hypothesis. (In the case when $CH$ fails, the relevant ultrapowers are already non-isomorphic merely as linearly ordered sets.) The relevant reference is:
L. Kramer, S. Shelah, K. Tent and S. Thomas Asymptotic cones of finitely presented groups, Advances in Mathematics 193 (2005), 142-173.
add comment
Since the question was about reduced powers, not just ultrapowers, it seems worthwhile to point out that, when $F$ is a filter on $I$ but not an ultrafilter, then the reduced power of a
field $k$ with respect to $F$ will not be a field, not even an integral domain. Proof: Since $F$ isn't an ultrafilter, $I$ can be partitioned into two pieces $A$ and $B$, neither of which
is in $F$. Then the characteristic functions of these pieces represent nonzero elements of the reduced power $k^I/F$ whose product is zero.
up vote 6 Thus, reduced powers of fields modulo non-ultra-filters differ very strongly from ultrapowers, since the latter are fields.
down vote
A similar argument shows, for example, that if $X$ is a linearly ordered set with at least two elements, then the reduced power $X^I/F$ is linearly ordered if and only if $F$ is an
add comment
The (latest version of the) question is probably more general than it should be. Since $X$ is allowed to change and since the filters could be principal ultrafilters, the answer is that
absolutely anything can happen. Use a principal ultrafilter $F$ on any set $I$, and vary $X$ at will. If the intention was to prohibit principal ultrafilters, then I can't say that
up vote absolutely anything can happen, but quite a lot can. For example, given any ultrapower of any $X$, one could change $X$ to a set bigger than that ultrapower; any ultrapower (or any reduced
1 down power) of the new $X$ would be bigger than the ultrapower you started with. So it's not clear that anything useful can be said at (or even near) the level of generality of the question.
add comment
Not the answer you're looking for? Browse other questions tagged ultrafilters or ask your own question.
|
{"url":"http://mathoverflow.net/questions/30408/how-much-are-reduced-powers-different/30411","timestamp":"2014-04-20T05:57:40Z","content_type":null,"content_length":"73512","record_id":"<urn:uuid:f2102624-ffe6-4638-8512-045714b8f56e>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00086-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Is there a standard notation for a "shift space" in functional analysis?
up vote 5 down vote favorite
I'm writing up some notes on the nLab about things like embedding spaces and infinite spheres and similar things (can't link to them yet as I haven't put them up yet). One aspect that crops up time
and time again is the contractibility of some big space, such as an infinite sphere, and this almost always boils down to some special property of whatever topological vector space is sitting in the
This special property is the existence of a "shift map" which acts pretty much like the obvious shift map on a sequence space. So I'm going to refer often to pairs $(V,S)$ where $V$ is a locally
convex topological vector space and $S \colon V \to V$ is a "shift map". A little more precisely, we want to have an isomorphism $V \cong V \oplus \mathbb{R}$, so that $S \colon V \to V$ is the
inclusion of the first factor, with certain properties, the main one being that $\bigcap S^k V = \{0\}$. The obvious notation is that $(V,S)$ is a shift space, and that $V$ is a shiftable space, but
if there's an already existent notation then I should use that.
So my question is that: is there a standard notation for any of these concepts? The map itself, the space that admits the map, and the pair.
A closely related concept that I'll also use a bit could be termed a split space. This would be a locally convex topological vector space $V$ with an isomorphism $V \cong V \oplus V$. So: same
question for that.
Edit: As Bill Johnson hasn't heard of these, I've written the relevant pages. It may be that I've included some detail there that I didn't put here. If any further information comes to light, I'll
edit the pages accordingly.
fa.functional-analysis at.algebraic-topology notation
1 I don't know of any standard notation for such things, Andrew. $V\oplus V$ is often called the square of $V$, so for the second one I would just say that $V$ is isomorphic to its square. Being
shiftable looks much stronger for separable Banach spaces than being isomorphic to hyperplanes. Is it really stronger? – Bill Johnson Nov 28 '11 at 22:56
Bill, I think that it is stronger because of the intersection property. If $X$ is isomorphic to a hyperplane in $X$, so $X \cong X \oplus \mathbb{R}$ then for any $Y$ we have the same property for
$X \oplus Y$. But that's not true for a "shiftable space". If you don't know of a standard notation, then I'd take that as fairly definitive so I recommend that you post it as an answer. If
someone comes along later and has an example of this in use then I can always edit the nLab page accordingly (it's a wiki, after all!). – Andrew Stacey Nov 29 '11 at 7:48
add comment
1 Answer
active oldest votes
I don't know of any standard notation for such things, Andrew. $V\oplus V$ is often called the square of $V$, so for the second one I would just say that $V$ is isomorphic to its
up vote 1 down vote
accepted Being shiftable looks much stronger for separable Banach spaces than being isomorphic to hyperplanes. Is it really stronger? Is $\ell_p \oplus \ell_r$ shiftable when $p\not= r$?
Regarding your last comment, on the nLab page I made a modification where I allowed for "shiftable of order k", so then l_p + l_r would be shiftable of order 2. For the
applications I have in mind, I need "enough" spare space and too much is fine. – Andrew Stacey Nov 29 '11 at 19:37
add comment
Not the answer you're looking for? Browse other questions tagged fa.functional-analysis at.algebraic-topology notation or ask your own question.
|
{"url":"http://mathoverflow.net/questions/82108/is-there-a-standard-notation-for-a-shift-space-in-functional-analysis?answertab=oldest","timestamp":"2014-04-16T20:13:01Z","content_type":null,"content_length":"56424","record_id":"<urn:uuid:1c776714-716e-47ac-a28e-5454538b59aa>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00088-ip-10-147-4-33.ec2.internal.warc.gz"}
|
29 Threads found on edaboard.com: Dft Calculation
Just for my own cultural background i enterprised to learn a little bit about the fourier series and how it lead to the dft .. I also wanted to undertsand how Fourier came up with the cornerstone
general theorem that all functions can be aproximated by a fourier series my suprise i found that fourier only generalised the Theoreme .which was alr
Mathematics and Physics :: 01.11.2004 12:28 :: eltonjohn :: Replies: 1 :: Views: 945
Hi all, Can anyone help me comment on the below code and explain how the function works.I am not sure how this function works and why is there a mid value in the calculation? Thanks a lot! regards
Digital Signal Processing :: 25.01.2006 03:04 :: scdoro :: Replies: 1 :: Views: 505
Categorize your size into Random Logic Area + MEM Area. You can get the area report from synthesis tool, usually it's based on um^2. (Based on the library you are targeting). The chip area will be
(RLA/cell utilization + MEM Area) * (1+dft overhead)
ASIC Design Methodologies and Tools (Digital) :: 07.05.2006 10:37 :: wangchao :: Replies: 6 :: Views: 1641
hi, friends: i know that dft is a approximation of the CFT or DTFT,. but i cannot have direct connection between some other meanings and dft process, such as every dft bin is bandpass filter, and dft
processing gain and so on. is there anybody who understanding dft well can explain it deeply. thanks!
Mathematics and Physics :: 14.09.2006 23:09 :: beargebinjie :: Replies: 3 :: Views: 3063
Hello, you can calculate the S21 parameter by taking either fast fourier transform (FFT) or descrete fourier transform(dft) of the E field at port two and the incident field at port one and use the
resulting frequency domain transform of the incident field to divide the transform of the E-field at port two. Regards
Electromagnetic Design and Simulation :: 20.07.2007 14:36 :: babinton :: Replies: 5 :: Views: 1102
Hello Why FFT is faster than dft? Regards, N.Muralidhara
Electronic Elementary Questions :: 06.02.2008 00:19 :: muralicrl :: Replies: 5 :: Views: 3145
What device are you talking about? Amplifier, ADC, DAC... in general, you just divide the output by the full-scale (normalize to the full-scale) before taking the dft so that it corresponds to
Analog IC Design and Layout :: 10.02.2011 17:06 :: JoannesPaulus :: Replies: 7 :: Views: 1426
Hello, Im student of 2 year of electronics from Poland. Once I was sitting and waiting on a corridor, and my tutor come to me and chat a while. Finally he gave me a task to think through. After few
hours of intense thinking I started to bang my head over the wall. Friend of mine recomended me that forum. Maybe you have some idea? ---------------
Electronic Elementary Questions :: 20.03.2011 17:05 :: Zaja :: Replies: 4 :: Views: 1560
I tried to read the lecture note or lecture video on youtube but it's hard to understand. there are a lot of mathematics but no concept what they are doing and how they work. Anyone please explain to
me the concept of it. What are dft/FFT doing and how they work? or suggest me the source that is easy to understand about this please. Thank you so mu
Digital Signal Processing :: 16.07.2011 01:31 :: mazdaspring :: Replies: 8 :: Views: 1713
RAKESH E.R, you may see dft Compiler documentation. Tcl is needed for dft insertion script development.
ASIC Design Methodologies and Tools (Digital) :: 17.10.2011 03:58 :: poluekt :: Replies: 8 :: Views: 667
Hey I am a Btech fresher.. just joined as dft engineer ... Can anyone provide a link to some good ebook with which I can get started.. Thanks
ASIC Design Methodologies and Tools (Digital) :: 02.11.2011 07:55 :: shivu90 :: Replies: 4 :: Views: 626
Hello all, I am not able to get the correct amplitude of input signal from dft. My sampling frequency is 128Hz and I am take 128 Samples and computing 128-point FFT. I also have a simulated sine wave
generator which I am using as input. When I vary the input frequency, FFT detects the frequency correctly till 64Hz (i.e. Fs/2) but it g
Digital Signal Processing :: 18.09.2012 12:52 :: karanbanthia :: Replies: 2 :: Views: 458
Spectre's pss is not needed. Simply do transient and dft(fft is not used in calculator). Then take the difference of the harmonics. But that is not the true story if you apply the phase shifter
signal to a mixer input. Then also the harmonis contribute to the phase performance. To get the real performance you have to setup a phase shifter plus a
Linux Software :: 08.12.2003 05:06 :: rfsystem :: Replies: 2 :: Views: 1516
Hi, Littlindian: I am sorry not to be able to respond to you earlier because I had been on vocation. Theoretically, you can get the t-domain near field and do a dft for the f-domain near field.
However, it wil involve too much data. For the current situation, we normally suggest users to save about 10-50 frames of t-domain near field. If you want t
Electromagnetic Design and Simulation :: 19.08.2005 11:23 :: jian :: Replies: 8 :: Views: 1080
ok i am assuming that you are totally new to dsp and as such those two answers that you have already received probably mean nothing to you. Anyway the only relation that there is between
autocorrelation and the fourier transfrom is actually the similarity between the dft (discrete fourier transform) and auto correlation i.e. if you are learning DSP
Digital Signal Processing :: 06.11.2005 13:52 :: anto2 :: Replies: 8 :: Views: 2577
The only transform which we may calculate using our PC or any other machine isdft All the other transforms are theoretical Because the Limits are from -Inf to +Inf Where as the limits in dft are real
Numbers so we calculate only dft, Practically. FFT is nothing but a Optimized technique for calculation of (...)
Digital Signal Processing :: 30.03.2007 04:38 :: scheik :: Replies: 1 :: Views: 452
Though the FFT calculation is discrete, tools can automatically plot by connecting the top with a smooth curve of the magnitude plot against frequency. So it depends sometimes on the tools setting.
To say SIMPLY FFT or dft is a sampled version of continusous spectrum. So I dont think the author faked or something like that. I hope I answered your q
Analog IC Design and Layout :: 07.06.2007 22:05 :: gunturikishore :: Replies: 1 :: Views: 616
what technique for fast and accurate noise analysis of full PLL can we use dft plot
RF, Microwave, Antennas and Optics :: 13.02.2008 06:01 :: dinesh agarwal :: Replies: 10 :: Views: 1341
Hello! The extensive explanation would be quite some pain, and on top of that, the explanation exists in books and also probably on wikipedia, but let's make it simple. Let's say that it simply comes
from the way the dft is simplified in order to reduce the calculations: Suppose you have a number N of samples x(k) (N being a power of 2, 0<
Digital Signal Processing :: 07.07.2009 21:30 :: doraemon :: Replies: 3 :: Views: 2203
Hi all, I am using an example from Hspice manual to study how to simulate a phase shift register. In the given example, Hspice is using G-element to model the phase shift register. According to the
manual, it is using circular convolution to calculate the output response of the G-element. The netlist from the manual is as following: .tran 0
Digital Signal Processing :: 09.02.2010 19:49 :: sfguy :: Replies: 0 :: Views: 499
I don't know spectrum() function in DFII 5.1.41. But it seems that there is no difference between spectrum() function and dft() function. By the way, new waveform viewer, ViVA provide function sets
for evaluation of ADC. The Designer's Guide Community Forum - snr of adc calcu
Analog IC Design and Layout :: 07.05.2011 10:04 :: pancho_hideboo :: Replies: 1 :: Views: 584
Hi all, I am trying to do post place and route simulation using Modelsim XE with ISE 11. My code is a Discrete fourier transform implementation which contains three 1024 bytes dual port memories, and
other logic for calculation of dft. When I run a post route simulation it takes 1 hour to simulate few hundreds of nano seconds, why is it happen
PLD, SPLD, GAL, CPLD, FPGA Design :: 25.05.2011 01:51 :: melexia :: Replies: 2 :: Views: 519
Hi Jeet_rio, Assume you have N samples at regular intervals T. If you obtain their dft, the separation in frequency between bins is 1/(NT). For example: suppose you have N=256 samples taken at
intervals of T=10 seconds. The total duration of the register is NT=2560 seconds. The interval between bins is the inverse of that, i.e. 1/2560 Hz. So
Digital Signal Processing :: 29.07.2011 23:25 :: zorro :: Replies: 7 :: Views: 1764
dtft -- given all points, determine a spectrum. dft -- given a set of points, determine the same number of points from the spectrum. fft -- same. the fft is just any fast calculation method. FFT is
often used interchangeably with "dft" as a term. There are two immediate implications above. The first is that the dft/FFT (...)
Digital Signal Processing :: 03.10.2011 01:40 :: permute :: Replies: 3 :: Views: 340
linear convolution and circular convolution can be easily calculated in matlab and other math packages similarly FFT and IFFT . convolution of two Sequences in time domain is Equal to multiplication
of their Fourier transform Generally in the case of filtering The input signal is transformed into the frequency domain using
Digital Signal Processing :: 04.12.2011 13:17 :: blooz :: Replies: 4 :: Views: 4893
As you might know already, in Cadence Spectre, we can plot the spectrum of a signal (say, output of an amplifier) using "dft" function of the Calculator.Don't mix up Simulation with Post Processing.
They are completely different phase. It is an issue of Cadence Post Processing Environmen
RF, Microwave, Antennas and Optics :: 06.03.2012 06:41 :: pancho_hideboo :: Replies: 5 :: Views: 783
Hi, U have to go through, to how to measure controllability and observability. As this measurement is totally depend on the Gate (AND,OR,NOT,NAND etc). There is some calculation for combinational as
well as sequential for measuring controllability and observability. Than is given in the book named : VLSI test principles
ASIC Design Methodologies and Tools (Digital) :: 06.08.2012 05:28 :: maulin sheth :: Replies: 7 :: Views: 519
Hi, I am trying to use the FFT library in mikroC to calculate the frequency of input signal. The code gets complied but I am getting wrong and random results. The calculated freq does not match the
input simulated sine wave. Following are the details of my code - Sampling freq = 1KHz Signed Fractional Data Output from ADC. Sampling done using
Digital Signal Processing :: 31.08.2012 07:03 :: karanbanthia :: Replies: 0 :: Views: 526
1) I dont think any way to see the controllability and observability for the whole design.But we have a option like set_pindata in the TetraMAX through which we can get the controllability and
observability. 2) Inputs of tetraMAX are correct. 3) Pl tell me the meaning of Simulation and Fault Simulation. 4) Have you a TetraMAX UG?thn u can get all t
ASIC Design Methodologies and Tools (Digital) :: 21.01.2013 04:38 :: maulin sheth :: Replies: 4 :: Views: 288
|
{"url":"http://search.edaboard.com/dft-calculation.html","timestamp":"2014-04-17T15:27:18Z","content_type":null,"content_length":"29840","record_id":"<urn:uuid:52d59dff-cdec-4aea-810b-7e35a805c66e>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00414-ip-10-147-4-33.ec2.internal.warc.gz"}
|
David G Schaeffer, James B. Duke Professor
Math @ Duke
Math HOME
David G Schaeffer, James B. Duke Professor
Office Location: 215 Physics
Office Phone: (919) 660-2814
Email Address:
Office Hours:
Tuesday and Thursday at 2
B.S., Physics, University of Illinois, 1963 Ph.D., Mathematics, MIT, 1968
Applied Math
Research Interests: Applied Mathematics, especially Partial Differential Equations
Granular flow
Although I worked in granular flow for 15 years, I largely stopped working in this area around 5 years ago. Part of my fascination with this field derived from the fact that typically
constitutive equations derived from engineering approximations lead to ill-posed PDE. However, I came to believe that the lack of well-posed governing equations was the major obstacle to progress
in the field, and I believe that finding appropriate constitutive relations is a task better suited for physicsts than mathematicians, so I reluctantly moved on.
One exception: a project analyzing periodic motion in a model for landslides as a Hopf bifurcation. This work is joint with Dick Iverson of the Cascades Volcanic Laboratory in Vancouver
Washington. This paper [1] was a fun paper for an old guy because we were able to solve the problem with techniques I learned early in my career--separation of variables and one complex variable.
Fluid mechanics
In my distant bifurcation-theory past I studied finite-length effects in Taylor vortices. Questions of this sort were first raised by Brooke Benjamin. My paper [2] shed some light on these
issues, but some puzzles remained. Over the past few years I have conducted a leisurely collaboration with Tom Mullin trying to tie up the loose ends of this problem. With the recent addition of
Tom Witelski to the project, it seems likely that we will soon complete it.
Mathematical problems in electrocardiology
About 10 years ago I began to study models for generation of cardiac rhythms. (Below I describe how I got interested in this area.) This work has been in collaboration with Wanda Krassowska
(BME), Dan Gauthier (Physics) and Salim Idress (Med School). Postdocs Lena Tolkacheva and Xiaopeng Zhou contributed greatly to the projects, as well as grad students John Cain and Shu Dai. The
first paper [3], with Colleen Mitchell was a simple cardiac model, similar in spirit and complexity to the FitzHugh-Nagumo model, but based on the heart rather than nerve fibers. Other references
[4--9] are given below.
A general theme of our group's work has been trying to understand the origin of alternans. This term refers to a response of the heart at rapid periodic pacing in which action potentials
alternate between short and long durations. This bifurcation is especially interesting in extended tissue because during propagation the short-long alternation can suffer phase reversals at
different locations, which is called discordant alternans. Alternans is considered a precursor to more serious arrythmias.
Let me describe one current project [9]. My student, Shu Dai, is analyzing a weakly nonlinear modulation equation modeling discordant alternans that was proposed by Echebarria and Karma. First we
show that, for certain parameter values, the system exhibits a degenerate (codimension 2) bifurcation in which Hopf and steady-state bifurcations occur simultaneously. Then we show, as expected
on grounds of genericity (see Guckenheimer and Holmes, Ch. 7) that chaotic solutions can appear. The appearance of chaos in this model is noteworthy because it contains only one space dimension;
by contrast the usual route to chaos in cardiac systems is believed to be through the breakup of spiral or scroll waves, which of course requires two or more dimensions.
Other biologogical problems
Showing less caution than appropriate for a person my age, I have recently begun to supervise a student, Kevin Gonzales, on a project modeling gene networks. Working with Paul Magwene (Biology),
we seek to understand the network through which yeast cells, if starved for nitrogen, choose between sporulation and pseudohyphal growth. (Whew!) This work is an outgrowth of my participation in
the recently funded Center for Systems Biology at Duke.
I have gotten addicted to applying bifurcation theory to differential equations describing biological systems. For example, my colleagues Harold and Anita Layton are tempting my with some
fascinating bifurcations exhibited by the kidney. Here is a whimsical catch phrase that describes my addiction: "Have bifurcation theory but won't travel". (Are you old enough--and sufficiently
tuned in to American popular culture--to understand the reference?)
Research growing out of teaching
Starting in 1996 I have sometimes taught a course that led to an expansion of my research. The process starts by my sending a memo to the science and engineering faculty at Duke, asking if they
would like the assistance of a group of math graduate students working on mathematical problems arising in their (the faculty member's) research. I choose one area from the responses, and I teach
a case-study course for math grad students focused on problems in that area. In broad terms, during the first half of the course I lecture on scientific and mathematical background for the area;
and during the second half student teams do independent research, with my collaboration, on the problems isolated earlier in the semester. I also give supplementary lectures during the second
half, and at the end of the semester each team lectures to the rest of the class on what it has discovered. This course was written up in the SIAM Review [11].
Topics and their proposers have been:
Lithotripsy L. Howle, P. Zhong (ME)
Population models in ecology W. Wilson (Zoology)
Electrophysiology of the heart I C. Henriquez (BME)
Electrophysiology of the heart II D. Gauthier (Physics).
Lithotripsy is an alternative to surgery for treating kidney stones--focused ultrasound pulses are used to break the stones into smaller pieces that can be passed naturally.
Multiple research publications, including a PhD. thesis, have come out of these courses, especially my work in electrophysiology.
I hope to offer this course in the future. Duke faculty: Do you have a problem area to propose?
□ [1] D.G. Schaeffer and R. Iverson, Steady and intermittent slipping in a model of landslide motion regulated by pore-pressure feedback, SIAM Applied Math 2008 (to appear)
□ [2] Schaeffer, David G., Qualitative analysis of a model for boundary effects in the Taylor problem, Math. Proc. Cambridge Philos. Soc., vol. 87, no. 2, pp. 307--337, 1980 [MR81c:35007]
□ [3] Colleen C. Mitchell, David G. Schaeffer, A two-current model for the dynamics of cardiac membrane, Bulletin Math Bio, vol. 65 (2003), pp. 767--793
□ [4] D.G. Schaeffer, J. Cain, E. Tolkacheva, D. Gauthier, Rate-dependent waveback velocity of cardiac action potentials in a done-dimensional cable, Phys Rev E, vol. 70 (2004), 061906
□ [5] D.G. Schaeffer, J. Cain, D. Gauthier,S. Kalb, W. Krassowska, R. Oliver, E. Tolkacheva, W. Ying, An ionically based mapping model with memory for cardiac restitution, Bull Math Bio, vol.
69 (2007), pp. 459--482
□ [6] D.G. Schaeffer, C. Berger, D. Gauthier, X. Zhao, Small-signal amplification of period-doubling bifurcations in smooth iterated mappings, Nonlinear Dynamics, vol. 48 (2007), pp. 381--389
□ [7] D.G. Schaeffer, X. Zhao, Alternate pacing of border-collision period-doubling bifurcations, Nonlinear Dynamics, vol. 50 (2007), pp. 733--742
□ [8] D.G. Schaeffer, M. Beck, C. Jones, and M. Wechselberger, Electrical waves in a one-dimensional model of cardiac tissue, SIAM Applied Dynamical Systems (Submitted, 2007)
□ [9] D.G. Schaeffer and Shu Dai, Spectrum of a linearized amplitude equation for alternans in a cardiac fiber, SIAM Analysis 2008 (to appear)
□ [10] D.G. Schaeffer, A. Catlla, T. Witelski, E. Monson, A. Lin, Annular patterns in reaction-diffusion systems and their implications for neural-glial interactions (Preprint, 2008)
□ [11] L. Howle, D. Schaeffer, M. Shearer, and P. Zhong, Lithotripsy, The treatment of kidney stones with shock waves, SIAM Review vol. 40 (1998), pp356--371
Current Ph.D. Students (Former Students)
□ John W. Cain
□ Aaron Ashish
□ Shu Dai
□ Kevin E. Gonzales
□ Matthew M Bowen
□ Michael Gordon
□ Lianjun An
□ Feng Wang
□ Risto Lehtinen
□ Maija Kuusela
□ Joseph Fehribach
□ E. Bruce Pitman
□ John Goodrich
Postdocs Mentored
□ Anne Catlla (2006 - 2008)
□ Xiaopeng Zhao (2005 - 2007)
□ Wenjun Ying (2005 - 2008)
□ Elena Tolkacheva (2004 - 2006)
□ J. Matthews (2000/09-2003/06)
Recent Publications (More Publications)
1. K. Gonzales, Omur Kayikci, D.G. Schaeffer, and P. Magwene, Modeling mutant phenotypes and oscillatory dynamics in the \emph{Saccharomyces cerevisiae} cAMP-PKA pathway, PLoS Computational
Biology (Submitted, Winter, 2010)
2. S. Payne, B. Li, H. Song, D.G. Schaeffer, and L. You, Self-organized pattern formation by a pseudo-Turing mechanism (Submitted, Winter, 2010)
3. S. Dai and D.G. Schaeffer, Bifurcation in a modulation equation for alternans in a cardiac fiber, ESAIM Mathematical modelling and numerical analysis, vol. 44 no. 6 (Winter, 2010)
4. Y. Farjoun, D.G. Schaeffer, The hanging thin rod: a singularly perturbed eigenvalue problem, SIAM Sppl. Math. (Submitted, July, 2010)
5. S. Dai and D.G. Schaeffer, Chaos in a one-dimensional model for cardiac dynamics, Chaos, vol. 20 no. 2 (June, 2010)
Recent Grant Support
□ EMSW21-RTG: Enhanced Training and Recruitment in Mathematical Biology, National Science Foundation, DMS-0943760, 2010/09-2014/08.
□ Duke Center for Systems Biology, 2007/07-2012/06.
ph: 919.660.2800 Duke University, Box 90320
fax: 919.660.2821 Durham, NC 27708-0320
|
{"url":"http://fds.duke.edu/db/aas/math/faculty/dgs","timestamp":"2014-04-18T10:46:12Z","content_type":null,"content_length":"19842","record_id":"<urn:uuid:f6444407-5d06-4871-b27e-584091603d79>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00233-ip-10-147-4-33.ec2.internal.warc.gz"}
|
even numbers
September 8th 2009, 08:53 PM
even numbers
An evenly even number is a number in the form of 2^m , where m is a positive integer.
Prove that it is impossible the sum of two evenly even numbers to be a perfect square.
September 8th 2009, 09:23 PM
Monkey D. Johnny
but 2^3+2^3=16 is a perfect square
September 9th 2009, 04:22 AM
Bruno J.
As is $2^5+2^2=6^2$. Not a very good conjecture!
By the way the real name for an "evenly even number" is power of two.
September 9th 2009, 06:26 AM
Monkey D. Johnny
In fact, I believe 2^m+2^n is a perfect square if and only if either:
i) both m and n are odd and m=n, or
ii)either m or n is even (say m is even) and n=m+3.
So, the example I gave was part (i) and the example Bruno J. gave was part (ii).
The "if" part is pretty straight forward.
So assume 2^m+2^n is a perfect square. And assume part i) doesn't hold. So we will show part ii) holds. Suppose, m $\leq n$.
Then 2^m+2^n=2^m(1+2^(n-m)). This implies m is even and 1+2^(n-m) is a perfect square. Write 1+2^(n-m)=k^2. Then 2^(n-m)=(k-1)(k+1). This implies k-1 is a power of 2. Write k-1=2^p. Then 2^(n-m)=
2^(2p)+2^(p+1). The right hand side of the last equality can be a power of 2 only if p=1. This implies 2^(n-m)=8 or n=m+3.
|
{"url":"http://mathhelpforum.com/number-theory/101283-even-numbers-print.html","timestamp":"2014-04-19T06:02:12Z","content_type":null,"content_length":"5157","record_id":"<urn:uuid:128fde73-3db9-4bf4-9689-de4200cba1cb>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00259-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Pi in Base 16 - special formula
Hmm, are you sure that it only works in base 16? It's quite easy to translate between power-of-two bases to generate, for example, a base-2 spigot algorithm instead.
However, since the number of decimal digits for [itex]\frac{1}{2^n}[/itex] is rougly proportional to [itex]n[/itex] it doesn't readily convert to base 10.
Yes, base translation is easy to do, but if you look at what's involved
you'd end up doing arithemtic on the full string during base conversion
so you'd gain nothing.
In other words, this algorithm would quickly compute the 10,000th hex
digit of Pi and suppose that digit was "5". The base conversion of
[tex] 5 \times 16^{(-10,000)} [/tex] would put you back in the position
of doing arithmetic on 10,000+ digit strings.
Edit: Yes, I'm quite sure it is a hexadecimal algorithm. That's one of the
few details that stuck with me for the last 20 years.
|
{"url":"http://www.physicsforums.com/showthread.php?t=83482","timestamp":"2014-04-19T09:47:52Z","content_type":null,"content_length":"61984","record_id":"<urn:uuid:c8f4c530-c291-44a2-a17e-edbc29f7f1f3>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00495-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Trying Fez again. I think I know what I dont know. Help? SPOILERS
I decided to pick up the game again and see what over 6 months of not caring about Fez did to my puzzle solving ability and I found 3 anti cubes. I posted that pic a long time ago asking if I was on
the right track to figuring out the numbers and didn't really get an answer of yes or no. I think it was in the telescope place that I found another order for the numbers that listed 5 different
"numbers" vertically. I've taken a lot of pictures around the world that I feel are relevant to solving a few things to get some progress. Here they are.
I feel a big part of what I'm missing is the numbers corresponding to the number of turns and direction on those long tetris like blocks. Some of those images point to that link. I'm assuming that's
what it means in any case. I haven't found a way to figure out the values of the numbers, but I'm hoping I've found all the relevant pieces I need to figure that out.
Do I have the pieces I need or am I still missing some crucial info?
• Log in to comment
|
{"url":"http://www.giantbomb.com/fez/3030-24768/forums/trying-fez-again-i-think-i-know-what-i-dont-know-h-573734/","timestamp":"2014-04-23T15:34:43Z","content_type":null,"content_length":"212887","record_id":"<urn:uuid:b8e241a3-a6a9-433b-b052-99940213073d>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00213-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Got Homework?
Connect with other students for help. It's a free community.
• across
MIT Grad Student
Online now
• laura*
Helped 1,000 students
Online now
• Hero
College Math Guru
Online now
Here's the question you clicked on:
explain to me fully what is meant by an inexact differential?
Best Response
You've already chosen the best response.
An infinitesimal which is not the differential of an actual function and which cannot be expressed as(SEE BELOW) the way an exact differential can. Inexact differentials are denoted with a bar
through the . The most common example of an inexact differential is the change in heat encountered in thermodynamics.
Best Response
You've already chosen the best response.
Best Response
You've already chosen the best response.
RATE IF U LIKE:)
Your question is ready. Sign up for free to start getting answers.
is replying to Can someone tell me what button the professor is hitting...
• Teamwork 19 Teammate
• Problem Solving 19 Hero
• Engagement 19 Mad Hatter
• You have blocked this person.
• ✔ You're a fan Checking fan status...
Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy.
This is the testimonial you wrote.
You haven't written a testimonial for Owlfred.
|
{"url":"http://openstudy.com/updates/4dd5019fd95c8b0bc82559c4","timestamp":"2014-04-21T04:49:33Z","content_type":null,"content_length":"33653","record_id":"<urn:uuid:1773f3f2-24d3-4f40-98d3-321154e3802e>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00222-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Got Homework?
Connect with other students for help. It's a free community.
• across
MIT Grad Student
Online now
• laura*
Helped 1,000 students
Online now
• Hero
College Math Guru
Online now
Here's the question you clicked on:
6. In rectangle KLMN, KM = 6x + 16, and LN = 49. Find the value of x.
• one year ago
• one year ago
Your question is ready. Sign up for free to start getting answers.
is replying to Can someone tell me what button the professor is hitting...
• Teamwork 19 Teammate
• Problem Solving 19 Hero
• Engagement 19 Mad Hatter
• You have blocked this person.
• ✔ You're a fan Checking fan status...
Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy.
This is the testimonial you wrote.
You haven't written a testimonial for Owlfred.
|
{"url":"http://openstudy.com/updates/50dd01d8e4b0d6c1d54318d5","timestamp":"2014-04-17T10:06:04Z","content_type":null,"content_length":"86973","record_id":"<urn:uuid:01bc4e1f-f5dd-43c5-9620-0d6756e86d53>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00544-ip-10-147-4-33.ec2.internal.warc.gz"}
|
KVR Forum: Forum Topic - When are double-precision samples needed?
30 posts since 22 Dec, 2010
by codehead; Sun Feb 19, 2012 1:58 am
aciddose wrote:actually, that is true.
Hard to know what you're referring to without a quote. I hope you're not referring to my last sentence (if so, you didn't get my point; if thermal noise is a problem—and it already dominates waaaayyy
before you get out to 64 bits—adding addition bits does nothing to help in that regard, so saying it might not be enough makes no sense).
556 posts since 7 Jan, 2009, from Gloucestershire
by DaveHoskins; Sun Feb 19, 2012 5:26 am
codehead wrote:
thevinn wrote:Under what circumstances is it important to represent and process samples double precision floating point representation (versus single precision)?
Are you going to record the samples? If so, with what? The noise floor certainly drops off as you get right up on absolute zero, but I'm just not sure how you're going to manage that recording
So, are you going to calculate the samples? OK, you can do that, but what are you going to play them back on?
Single precision floating point gives you 25-bit of precision (23-bit mantissa, one implied bit due to normalization, and a sign bit). That's 150.5 dB. If your output is 1 V p-p, the least
significant bit would be, what, less than a tenth of a microvolt? What do you think the background (thermal) noise of your gear is?
Sure, there are reasons to do double precision math (high-order IIR filters, where small errors in coefficients and calculations in the feedback can become large, for instance). There's always a
way around needing double precision, but the bottom line is that double precision is usually next to free (most processors already work in double precision), though it can be costly if you're
going to store everything in double-precision.
Plus, it sound really good to some people to hear, "The audio signal path is 64-bit from input to output, for maximum resolution and headroom", "...delivers the sonic purity and headroom of a
true 64-bit, floating-point digital audio path", "...'s 64-bit asynchronous audio engine with industry leading audio fidelity, 64-bit audio effects and synths, the forthcoming...which provides
native 64-bit ReWire integration, allow for an end-to-end 64-bit audio path". Yeah, I did a search and pulled those marketing quotes from real products. What happens when a software maker starts
touting extended precision 80-bit floating point (not nearly-free)? Will everyone rush to 80-bit?
Plus, I'm sure that the issue of a workstation's (and plug-ins') support of 64-bit (memory addressing) operating system adds a layer of confusion to some people.
BTW, this topic made me look up some references, such as an article explaining why 64-bit is a good idea. All of the arguments I've seen use flawed logic, showing a misunderstand of the math—such
as explaining the range of 64 bits as 2 to the 24th power, and saying it helps guard again overflow (hello, it's floating point...). Another said that 64-bit floating point might not even be
enough for a clean sound, because there's the problem of thermal noise (!!!).
I've already mentioned the power of suggestion, although I took the leap to 128 bits before you, so I guess my plug will sound better, as 'more is more', right?
The only time I've noticed a problem is with IIRs and oscillators that use 2D rotation techniques. But I do wonder what the overall problems are if, say, a daw is playing 128 tracks added up, that
all have effects added in. Wouldn't all those additions and multiplications have accumulated problems?
628 posts since 4 Mar, 2007
by jkleban; Sun Feb 19, 2012 5:57 am
Not sure if this applies but when you mathematically change NOTES of sampled instruments, lowering the pitch of said samples the smoothness of the samples would diminish for in essence the lowered
pitched is changing the playback sample rate... so the higher the bit rate of the sample the better the pitch shifted samples, no?
I do admit that I really don't know what I am talking about but the theory above does have some logic behind it.
The keeper of the Shrine.
The Lamb Laid Down on MIDI
556 posts since 7 Jan, 2009, from Gloucestershire
by DaveHoskins; Sun Feb 19, 2012 6:33 am
jkleban wrote:Not sure if this applies but when you mathematically change NOTES of sampled instruments, lowering the pitch of said samples the smoothness of the samples would diminish for in
essence the lowered pitched is changing the playback sample rate... so the higher the bit rate of the sample the better the pitch shifted samples, no?
I do admit that I really don't know what I am talking about but the theory above does have some logic behind it.
When a sample is played back at a slower rate, it is stepped though using interpolation. Look up 'audio interpolation' on the web for tons of info.
755 posts since 1 Apr, 2003
by JonHodgson; Sun Feb 19, 2012 7:43 am
DaveHoskins wrote:I've already mentioned the power of suggestion, although I took the leap to 128 bits before you, so I guess my plug will sound better, as 'more is more', right?
You're both behind the curve.
My plugins are going to be 127 bits from now on, because you have to use a prime number of bits if you want to avoid audible rasonances in the maths.
767 posts since 30 Nov, 2008
by thevinn; Sun Feb 19, 2012 8:15 am
Okay so just to be clear, the consensus is that storing sample arrays using double precision floating point representation provides no tangible benefit over single precision?
The corollary is that intermediate calculations should be done in double precision: for example, filter coefficients.
2156 posts since 4 Sep, 2006, from 127.0.0.1
by antto; Sun Feb 19, 2012 9:14 am
yup, for storage (of audio signals) 32bit float is enough
i even use 16bit (truncated mantissa) float in some situations, to save space..
It doesn't matter how it sounds..
..as long as it has BASS and it's LOUD!
1241 posts since 3 Dec, 2008
by andy-cytomic; Sun Feb 19, 2012 9:17 am
antto wrote:it might make sense in a modular host i guess
..if you're able to make feedback loops between the plugins..
No, note even then. It is only things like the intermediate internal results of additions, subtractions, and accumulations (and then further values that result from these as an input) that need
double precision in certain situations.
1241 posts since 3 Dec, 2008
by andy-cytomic; Sun Feb 19, 2012 9:24 am
thevinn wrote:Okay so just to be clear, the consensus is that storing sample arrays using double precision floating point representation provides no tangible benefit over single precision?
The corollary is that intermediate calculations should be done in double precision: for example, filter coefficients.
Yes to the first bit, but for the second bit you only need double precision for certain filter structures under certain circumstances. I recommend to never use a direct form 1 or 2 biquad ever,
period, but if you must then you really do need double precision for all coefficients and memory since the structure is so bad it adds significant quantization error to your signal, as well as the
cutoff being nowhere near where you want it. Check out the plots I've done here that compare different digital linear filter topologies: www.cytomic.com/technical-papers
324 posts since 2 Oct, 2002, from Finland, Europe
by Jesse J; Sun Feb 19, 2012 9:36 am
Do some advanced wavefolding and/or FM and you'll probably benefit from the added precision a lot.
1241 posts since 3 Dec, 2008
by andy-cytomic; Sun Feb 19, 2012 9:40 am
Thanks for you post Codehead, I agree with everything you have posted, but just want to make a clarification on the point below specifically to do with the word "usually":
codehead wrote: There's always a way around needing double precision, but the bottom line is that double precision is usually next to free (most processors already work in double precision),
though it can be costly if you're going to store everything in double-precision.
Vector instructions on Intel chips can process twice as many floats as doubles. This is similar on Texas Instruments floating point DSP chips, you can process twice as many floats as doubles, but it
is a very different architecture. If you are calculating scalar results on an intel chip, and memory / cache issues don't come into it, then yes a float op takes the same time as a double op, and you
just throw away the other 3 float ops you could have used.
In my experience you get around x2 to x3 speedup from writing vectorized float sse code that computes 4 floats at once as there is usually some data giggling and other overheads involved in using
30 posts since 22 Dec, 2010
by codehead; Sun Feb 19, 2012 10:55 am
DaveHoskins wrote:But I do wonder what the overall problems are if, say, a daw is playing 128 tracks added up, that all have effects added in. Wouldn't all those additions and multiplications
have accumulated problems?
Yes they do, but note that while the summed error from the digits at the least significant bits grows, so does the summed signal with the more significant non-error bits. That is, if you add two
similar values, you have to allow for the possibility that the error doubles, essentially moving to the left one bit. But you have to allow for the total result to grow one bit too. So in the
completely general case, you'd then divide the result by two, so that your signal doesn't grow, and you've found that now the error didn't grow either.
Extend that to 128 tracks—your error might grow 128x, but so does your signal, so you didn't change the signal to noise ratio.
30 posts since 22 Dec, 2010
by codehead; Sun Feb 19, 2012 11:04 am
andy_cytomic wrote:Vector instructions on Intel chips can process twice as many floats as doubles.
Excellent point—yes, I was considering normal CPUs (not DSPs), but I forgot about vector processing. I guess the bottom line is that if a processing unit is built to handle only one floating point
operation at a time, double will execute as fast as single; but if it's built to do multiple parallel operations, it's going to optimize real estate...
8807 posts since 7 Dec, 2004
by aciddose; Sun Feb 19, 2012 11:10 am
codehead wrote:
aciddose wrote:actually, that is true.
Hard to know what you're referring to without a quote. I hope you're not referring to my last sentence (if so, you didn't get my point; if thermal noise is a problem—and it already dominates
waaaayyy before you get out to 64 bits—adding addition bits does nothing to help in that regard, so saying it might not be enough makes no sense).
because of noise, you get dither. when you average dither the number of bits increases by one every time you filter away 6db of noise.
that means we can for example input a signal to a 1-bit ADC (a compare > 0) and mix it with high-frequency noise.
by filtering away this noise with a low-pass filter, every time we decrease the noise level by half we gain one bit.
let's say we use 40khz sample rate. we want at least 1khz frequency for our input.
to get 25-bits, we need a filter with a cutoff of 1khz we need a filter with at least 30db/o slope. easy to achieve.
also, yes float does have "virtually" 25 bits, but it isn't because of any phantom bit. it's because we take away half the range of precision from the exponent and apply that to the mantissa by
normalizing the mantissa to a half range. (0.5 - 1.0, not 0.0 to 1.0.)
so yes it's true to say that float has at least 25 bits accuracy in a normalized range like 0.0 - 1.0, but it isn't smart to say this accuracy comes from magic; which is what "normalization and
implied bit" tends to sound like. easier to just describe it as having 23 bits that apply to half as much range.
30 posts since 22 Dec, 2010
by codehead; Sun Feb 19, 2012 11:35 am
thevinn wrote:Okay so just to be clear, the consensus is that storing sample arrays using double precision floating point representation provides no tangible benefit over single precision?
The corollary is that intermediate calculations should be done in double precision: for example, filter coefficients.
Right. Now, to elaborate on filters a little: (First, don't read that as saying this is the only case you need that extra precision, by any means—I'm just address why you'd need the extra precision
in part of the math, but not necessarily the entire audio path.) First, it's the feedback path that's the issue. You're multiplying an output sample by a coefficient and feeding it back to get summed
with the input (which produces a new output, which gets multiplied by the coefficient again, which feeds back to the input...).
So, you have an audio sample and a coefficient. The audio sample by itself is suitable for playback in single precision. The coefficient may or may not be adequate at single precision—it depends on
the filter type, order, filter setting relative to the sample rate. For instance, a direct-form IIR requires more and more digits of precision as you more a pole to a frequency that a smaller
relative to the sample rate. Put another way, if you use, say, 8 bits to the right of the decimal, as you count down through all the possibly values for that coefficient, the corresponding pole
positions (where the feedback occurs in frequency) spread out more and more. So, you can get in situation where a pole isn't really where you specified it—it's been quantized to a less desirable
position, maybe even on the unit circle yielding stability problems. Higher order filters will have more problems, because the poles won't be in the correct spots relative to each other. And at
higher sample rates, the problems gets worse, because setting a filter to 100 Hz at 192k is much worse than setting to 100 Hz at 44.1k.
For the coefficients, you could just go with double precision—problem solved. But it's not the only thing you can do—there are other filter forms that are equivalent, but either have a more
homogenous quantization errors (the coefficient spacing is the same everywhere), or have the opposite sensitivity (higher density at the low end, worse at the high end—but that's a good tradeoff if
it's a low-end filter).
The other part is the math and the feedback. Part one is: When you multiply two numbers, you end up with more bits. If you multiply two floats, it should take a double to hold the result. The fact
that in modern hardware float * float = float means that the precision is getting truncated—that's error. The higher the IIR order, the more sensitive to that error. You can go all-double in your
calculation, or you can do some other tricks where you essentially noise-shape that error by saving and feeding it back in with another filter. Part two has to do with adding: if you add a very small
float to a large one, the small one disappears because there's aren't enough mantissa bits (again that can be fixed with a better filter form).
I'm writing a book here, sorry—the bottom line is that you can either twiddle your architecture to be less error-sesitive, or just up the precision with doubles. But, you can just keep that inside
the filter, and pass on the output sample as a float.
|
{"url":"http://www.kvraudio.com/forum/viewtopic.php?p=4834853","timestamp":"2014-04-20T00:47:21Z","content_type":null,"content_length":"99777","record_id":"<urn:uuid:c7bbbcc9-38da-4f9a-9719-2cf4155a64ad>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00022-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Re: st: GLLAMM and rescaling sampling weights
Notice: On March 31, it was announced that Statalist is moving from an email list to a forum. The old list will shut down at the end of May, and its replacement, statalist.org is already up and
[Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index]
Re: st: GLLAMM and rescaling sampling weights
From Susanna Makela <susanna.m.makela@gmail.com>
To statalist@hsphsun2.harvard.edu
Subject Re: st: GLLAMM and rescaling sampling weights
Date Fri, 24 Jun 2011 16:14:31 +0530
Stas, thanks so much for your response. I'm a little confused though -
you say the question arises when there are sampling weights for PSUs
and then separate sampling weights for individuals within PSUs - but I
only have sampling weights for women within states. Since, as you say,
I don't have these weights in DHS, do I not need to worry about
weighting at all? Or do the state-level women's weights somehow
combine these two?
I have also looked at the Pfefferman (1998) paper, but I didn't quite
get all of that either... I guess I'll see how things look with option
2 below for now. I was going to use -pwigls-, but that seems to be
only for use with two-level models, though rescaling the weights by
hand isn't hard.
Your response brings up another question: is it bad (as in bad
statistical practice/against statistical theory) to model states as a
random effect if they are also used for stratification? States aren't
really samples from a "population" of all "possible" states in the way
that PSUs in the DHS are a sample from all PSUs, so in that sense you
could argue against modeling them as random effects - but how does
stratification play into it? I thought it would be important to
account for states as another level since the PSUs are nested in
states, and including state-level variables without that structure
could lead to the atomistic fallacy (making incorrect inferences about
state-level variables based on individual-level data).
Thanks again for your help!
The advice on rescaling the weights is tangly at best. Pfeffermann et.
al. (1998, http://www.citeulike.org/user/ctacmo/article/711637) show
that the bias of the variance estimates depends on how you define your
weights. The question arises in the following situation: you have
sampling weights for PSUs, and you have separate sampling weights for
individuals within PSUs (which you don't have in DHS, so you can stop
reading here unless you are really curious about what Rabe-Hesketh and
Skrondal meant in that phrase). Then there's been some inconclusive
research into the following options:
1. leave the individual weights as they are (summing up to the size of the PSU)
2. rescale them so that they sum up to the sample size (# of
observations sampled from a PSU)
3. rescale them to the so-called "effective" weights so that their
sums of squares equals the sample size. (In estimation of the variance
components, it is the squares of the weights that matter.)
Some papers claim the second option gives the least biased estimates;
others, that the third one is the way to go. Other re-weighting
options can be entertained, too. See Pfeffermann et. al. (1998) for
in-depth analysis. There were also papers by Asparouhov (the primary
programmer at Mplus) and Stapleton (multilevel researcher at UMBC),
but I am less convinced by these as they review the existing cook-book
recipes rather than try to provide a theory-based advice (unlike the
Pfeffermann's paper which does derive explicit small sample bias
expressions which are more complicated that just the sums of squares
of weights, and involve response variables, too).
I would tend to think that states are used as stratification variables
in DHS, and modeling them as random effects sounds rather odd to me. I
would specify my model as children nested in mothers nested in
clusters, and would not go above clusters that in the random part of
the model (although of course if you have state-specific variables
that are needed in your analysis, you should include them in your
* For searches and help try:
* http://www.stata.com/help.cgi?search
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/
|
{"url":"http://www.stata.com/statalist/archive/2011-06/msg01077.html","timestamp":"2014-04-18T10:51:00Z","content_type":null,"content_length":"10681","record_id":"<urn:uuid:94daccd8-1a8a-45b4-b887-f0b080df16fb>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00433-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Symmetry, Separation, Super-integrability and Special
Special Issue on Symmetry, Separation, Super-integrability and Special Functions (S^4)
The Guest Editors for this special issue are
Ernie Kalnins (University of Waikato, Hamilton, New Zealand)
Niky Kamran (McGill University, Montreal, Canada)
Peter Olver (University of Minnesota, Minneapolis, USA)
Pavel Winternitz (Universite de Montreal, Canada)
On September 17–19, 2010, the University of Minnesota hosted a conference honoring Willard Miller, Jr., on the occasion of his retirement. Entitled “Symmetry, Separation, Super-integrability and
Special Functions” (S^4 for short), the talks covered Willard's wide-ranging research interests, and featured most of his close friends, collaborators, students, and colleagues. Willard's many
fundamental contributions and influence in these areas have been felt far and wide. Remarkably, while continuing to be incredibly active throughout his researcher career, which continues into
retirement, he also served in many administrative capacities at the University of Minnesota, including Department Head, Associate Director and Director of the I.M.A., Associate Dean, and acting Dean.
During the meeting, a scholarship fund for students in a combined Bachelors/Masters' program at Minnesota was established in Willard's honor. See the conference web site for full details: http://
The organizers decided it would be worth assembling a special issue of SIGMA devoted to the conference's mathematical themes, and agreed to serve as editors. The 21 papers gathered in this volume
represent contributions closely related to many of the themes of research to which Willard has made fundamental contributions over the years, including separation of variables, special functions,
integrable systems, and their associated geometric structures, all of which reflect the wide spectrum and the remarkable breadth of Willard's scientific interests. Thus, classical special functions,
q-special functions and harmonic analysis appear in the contributions of Tom Koornwinder, Luc Vinet and Alexei Zhedanov, Kurt Bernardo Wolf and Luis Edgar Vicent, Amalia Torre, Howard Cohl; symmetry
and separation of variables are the main theme of the papers by Antoni Sym and Adam Szereszewski, and Alberto Carignano, Lorenzo Fatibene, Ray McLenaghan and Giovanni Rastelli, Philip Broadbridge and
Peter Vassiliou; classical and quantum integrable and super-integrable models are the substance of the papers by Alexander Turbiner, Yannis Tanoudis and Costas Daskaloyannis, Angel Ballesteros,
Alberto Enciso, Francisco J. Herranz, Orlando Ragnisco and Danilo Riglioni, Claudia Chanu, Luca Degiovanni and Giovanni Rastelli, Sarah Post, Christiane Quesne, Ernie G. Kalnins, Jonathan M. Kress
and Willard Miller, Jr.; finally the papers by Michael G. Eastwood and A. Rod Gover, Charles P. Boyer, Andrei A. Malykh and Mikhail B. Sheftel, Stephen C. Anco, Sajid Ali and Thomas Wolf, Serge
Preston, Yuri Bozhkov and Peter J. Olver, are related to the more geometric aspects of Willard's interests, notably the role of symmetries and geometric structures in differential equations. The high
quality of all these papers reflects the high esteem in which Willard is held by his friends and colleagues throughout the world. We would like to thank all the authors for these very fine
Ernie Kalnins, Niky Kamran, Peter Olver, Pavel Winternitz
Papers in this Issue:
|
{"url":"http://www.emis.de/journals/SIGMA/S4.html","timestamp":"2014-04-20T21:48:59Z","content_type":null,"content_length":"12250","record_id":"<urn:uuid:5aa2e5b9-0b87-43f8-8210-8fb56042504f>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00061-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Imputation-Based Analysis of Association Studies: Candidate Regions and Quantitative Traits
We introduce a new framework for the analysis of association studies, designed to allow untyped variants to be more effectively and directly tested for association with a phenotype. The idea is to
combine knowledge on patterns of correlation among SNPs (e.g., from the International HapMap project or resequencing data in a candidate region of interest) with genotype data at tag SNPs collected
on a phenotyped study sample, to estimate (“impute”) unmeasured genotypes, and then assess association between the phenotype and these estimated genotypes. Compared with standard single-SNP tests,
this approach results in increased power to detect association, even in cases in which the causal variant is typed, with the greatest gain occurring when multiple causal variants are present. It also
provides more interpretable explanations for observed associations, including assessing, for each SNP, the strength of the evidence that it (rather than another correlated SNP) is causal. Although we
focus on association studies with quantitative phenotype and a relatively restricted region (e.g., a candidate gene), the framework is applicable and computationally practical for whole genome
association studies. Methods described here are implemented in a software package, Bim-Bam, available from the Stephens Lab website http://stephenslab.uchicago.edu/software.html.
Author Summary
Ongoing association studies are evaluating the influence of genetic variation on phenotypes of interest (hereditary traits and susceptibility to disease) in large patient samples. However, although
genotyping is relatively cheap, most association studies genotype only a small proportion of SNPs in the region of study, with many SNPs remaining untyped. Here, we present methods for assessing
whether these untyped SNPs are associated with the phenotype of interest. The methods exploit information on patterns of multi-marker correlation (“linkage disequilibrium”) from publically available
databases, such as the International HapMap project or the SeattleSNPs resequencing studies, to estimate (“impute”) patient genotypes at untyped SNPs, and assess the estimated genotypes for
association with phenotype. We show that, particularly for common causal variants, these methods are highly effective. Compared with standard methods, they provide both greater power to detect
associations between genetic variation and phenotypes, and also better explanations of detected associations, in many cases closely approximating results that would have been obtained by genotyping
all SNPs.
Citation: Servin B, Stephens M (2007) Imputation-Based Analysis of Association Studies: Candidate Regions and Quantitative Traits. PLoS Genet 3(7): e114. doi:10.1371/journal.pgen.0030114
Editor: David B. Allison, University of Alabama at Birmingham, United States of America
Received: December 14, 2006; Accepted: May 30, 2007; Published: July 27, 2007
Copyright: © 2007 Servin and Stephens. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and
reproduction in any medium, provided the original author and source are credited.
Funding: This work was supported by NIH grant RO1 HG02585–01 to MS.
Competing interests: The authors have declared that no competing interests exist.
Abbreviations: BF, Bayes factor; df, degree of freedom; IBD, identity by descent; LD, linkage disequilibrium; MCMC, Markov Chain Monte Carlo; QTN, quantitative trait nucleotide
Although the development of cheap high-throughput genotyping assays have made large-scale association studies a reality, most ongoing association studies genotype only a small proportion of SNPs in
the region of study (be that the whole genome, or a set of candidate regions). Because of correlation (linkage disequilibrium, LD) among nearby markers, many untyped SNPs in a region will be highly
correlated with one or more nearby typed SNPs. Thus, intuitively, testing typed SNPs for association with a phenotype will also have some power to pick up associations between the phenotype and
untyped SNPs. In practice, typical analyses involve testing each typed SNP individually, and in some cases combinations of typed SNPs jointly (e.g., haplotypes), for association with phenotype, and
hoping that these tests will indirectly pick up associations due to untyped SNPs. Here, we present a framework for more directly and effectively interrogating untyped variation.
In outline, our approach improves on standard analyses by exploiting available information on LD among untyped and typed SNPs. Partial information on this is generally available from the
International HapMap project [1]; in some cases more detailed information (e.g., resequencing data) may also be available, either through public databases (e.g., SeattleSNPs [2]), or through data
collected as a part of the association study design (e.g., [3]). Our approach combines this background knowledge of LD with genotypes collected at typed SNPs in the association study, to explicitly
predict (“impute”) genotypes in the study sample at untyped SNPs, and then tests for association between imputed genotypes and phenotype. We use statistical models for multi-marker LD to perform the
genotype imputation, with uncertainty, and a Bayesian regression approach to perform the test for association, allowing for potential errors in the imputed genotypes. Although we focus specifically
on methods for analyzing quantitative phenotypes in candidate gene studies, the same general framework can also be applied to discrete traits, and/or genome-wide scans.
These imputation-based methods can be viewed as a natural analysis complement to the “tag SNP” design strategy for association studies, which attempts to choose SNPs that are highly correlated with,
and hence good predictors of, untyped SNPs. We are simply directly exploiting this property, together with recently developed statistical models for multi-locus LD ([4,5]) to infer the untyped SNP
genotypes. Our approach is also somewhat analogous to multipoint approaches to linkage mapping (e.g., [6]), in which observed genotypes at multiple markers predict patterns of identity by descent
(IBD) at nearby positions without markers, and test for correlation between these patterns of IBD and observed phenotypes. In the association context, we are predicting identity by state rather than
IBD, and the methods of predicting identity by state versus IBD differ greatly, but the approaches share the idea of using multipoint information to predict single-point information, and, at least in
their simplest form, subsequently assessing correlation with phenotype at the single-point level. This strategy provides a clean and rigorous way to avoid the “curse of dimensionality” that can
plague haplotype-based analyses, without making ad hoc decisions such as pooling rare haplotypes into a single class.
Although our methods are developed in a Bayesian framework, they can also be used to compute p-values assessing significance of observed genotype–phenotype associations. Our approach should therefore
be of interest to practitioners whether or not they favor Bayesian procedures in general. It has two main advantages over more standard approaches. First, it provides greater power to detect
associations. Part of this increased power comes from incorporating extra information (knowledge on patterns of LD among typed and untyped SNPs), but, unexpectedly, we also found an increased power
of our Bayesian approach even when all SNPs were actually typed. Second, and perhaps more importantly, it provides more interpretable explanations for potential associations. Specifically, for each
SNP (typed and untyped), it provides a probability that it is causal. This contrasts with standard single-SNP tests, which provide a p-value for each SNP, but no clear way to decide which SNPs with
small p-values might be causal.
We focus on an association study design in which genotype data are available for a dense set of SNPs on a panel of individuals, and genotypes are available for a subset of these SNPs (which for
convenience we refer to as “tag SNPs”) on a cohort of individuals who have been phenotyped for a univariate quantitative trait. We assume the cohort to be a random sample from the population, and
consider application to other designs in the discussion.
Our strategy is to use patterns of LD in the panel, together with the tag SNP genotypes in the cohort, to explicitly predict the genotypes at all markers for members of the cohort, and then analyze
the data as if the cohort had been genotyped at all markers. There are thus two components to our approach: (i) predicting (“imputing”) cohort genotypes, and (ii) analyzing association between cohort
genotypes and phenotypes. For (i), we use existing models for population genetic variation across multiple markers [4,5], which perform well at estimating missing genotypes, and provide a
quantitative assessment of the uncertainty in these estimates [5]. For (ii), we introduce a new approach based on Bayesian regression, and describe how this approach can yield not only standard
Bayesian inference, but also p-values for testing the null hypothesis of no genotype–phenotype association. We chose to take a Bayesian approach partly because it provides a natural way to consider
uncertainty in estimated genotypes. However, the Bayesian approach has other advantages; in particular, it provides a measure of the strength of the evidence for an association (the Bayes factor, BF)
that is, in some respects, superior to conventional p-values. Furthermore, in our simulations, p-values from our Bayesian approach provide more powerful tests than standard tests, even if the cohort
is actually genotyped at all markers (including all causal variants).
Bayesian Regression Approach
We now provide further details of our Bayesian regression approach. The literature on Bayesian regression methods is too large to review here, but papers particularly relevant to our work include [7–
For simplicity, we focus on the situation where cohort genotypes are known at all SNPs (tag and non-tag). Extension to the situation, where the cohort is genotyped only at tag SNPs and other
genotypes are imputed using sampling-based algorithms such as PHASE [10,11] or fastPHASE [5], is relatively straightforward (see below).
Let G denote the cohort genotypes for all n individuals in the cohort, and y = (y[1], …, y[n]) denote the corresponding (univariate, quantitative) phenotypes. We model the phenotypes by a standard
linear regression:
where y[i] is the phenotype measurement for individual i, μ is the phenotype mean of individuals carrying the “reference” genotype, the x[ij]s are the elements of a design matrix X (which depends on
the genotype data; see below), the β[j]s are the corresponding regression coefficients, and ɛ[i] is a residual. We assume ɛ[i]s are independent and identically distributed ~N(0,1/τ), where τ denotes
the inverse of the variance, usually referred to as the precision (we choose this parameterization to simplify notation in later derivations). Thus and:
We assume a genetic model where the genetic effect is additive across SNPs (i.e., no interactions) and where the three possible genotypes at each SNP (major allele homozygote, heterozygote, and minor
allele homozygote) have effects 0, a + ak and 2a, respectively [12]. We achieve this by including two columns in the design matrix for each SNP, one column being the genotypes (coded as 0, 1, or 2
copies of the minor allele), and the other being indicators (0 or 1) for whether the genotype is heterozygous. The effect of SNP j is then determined by a pair of regression coefficients (β[j1], β[j]
[2]), which are, respectively, the SNP additive effect a[j] and dominance effect d[j] = a[j]k[j]. While there are other ways to code the correspondence between genotypes and the design matrix, we
chose this coding to aid specifying sensible priors (see below).
Priors for (β, μ, τ).
Prior specification is intrinsically subjective, and specifying priors that satisfy everyone is probably a hopeless goal. Our aim is to specify “useful” priors, which avoid some potential pitfalls
(discussed below), facilitate computation, and have some appealing properties, while leaving some room for context-specific subjective input. In particular, we describe two priors below, which we
refer to as prior D[1] and D[2], that were developed based on the following considerations: (i) inference should not depend on the units in which the phenotype is measured; (ii) even if the phenotype
is affected by SNPs in this region, the majority of SNPs will likely not be causal; (iii) for each causal variant there should be some allowance for deviations from additive effects (i.e., dominant/
recessive effects) without entirely discarding additivity as a helpful parsimonious assumption; and (iv) computations should be sufficiently rapid to make application to genome-wide studies practical
(this last consideration refers to prior D[2]).
Priors on the phenotype mean and variance.
The parameters μ and τ relate to the mean and variance of the phenotype, which depend on units of measurement. It seems desirable that estimates (and, more generally, posterior distributions) of
these parameters scale appropriately with the units of measurement, so, for example, multiplying all phenotypes by 1,000 should also multiply estimates of μ by 1,000. Motivated by this, for prior D
[1] we used Jeffreys' prior for these parameters:
This prior is well known to have the desired scaling properties in the simpler context where observed data are assumed to be N(μ, 1/τ) [13], and we conjecture that our prior D[1] also possesses these
desired scaling properties in the more complex context considered here, although we have not proven this.
For prior D[2] we used a slightly different prior, based on assuming a prior for (μ, τ) of the form
Specifically, our prior D[2] assumes the limiting form of this prior as κ,λ → 0 and . In Protocol S1 we show that the posterior distributions obtained using this limiting prior scale appropriately.
Both prior distributions above are “improper” (meaning that the densities do not integrate to a finite value). Great care is necessary before using improper priors, particularly where one intends to
compute BFs to compare models, as we do here. However, we believe results obtained using these priors are sensible. For prior D[2], as we show in Protocol S1 the posteriors are proper, and the BF
tends to a sensible limit. For prior D[1] we believe this to be true, although we have not proven it.
Prior on SNP effects.
For brevity, we refer to SNPs that affect phenotype as QTNs, for quantitative trait nucleotides. Our prior on the SNP effects has two components: a prior on which SNPs are QTNs and a prior on the QTN
effect sizes.
Prior on which SNPs are QTNs.
We assume that with some probability, p[0], none of the SNPs is a QTN; that is, the “null model” of no genotype–phenotype association holds. Otherwise, with probability (1 − p[0]), we assume there
are l QTNs, where l has some distribution p(l) on {1, 2, …, n[s]} where n[s] denotes the number of SNPs in the region. Given l, we assume all subsets of l SNPs are equally likely. Both p[0] and p(l)
can be context-dependent, and choice of appropriate values is discussed below.
Prior on effect sizes.
If SNP j is a QTN, then its effect is modeled by two parameters, a[j] and d[j] = a[j]k[j]. The parameter a[j] measures a deviation from the mean μ and will depend on the unit of measurement of the
phenotype. To reflect this, we scale the prior on a[j] by the phenotypic standard deviation within each genotype class, . Specifically, our prior on a[j] is , where σ[a] reflects the typical size of
a QTN effect compared with the phenotype standard deviation within each genotypic class. Choice of σ[a] may be context-dependent, and is discussed below.
The parameter d[j] = a[j]k[j] measures the dominance effect of a QTN. If k[j] = 0, then the QTN is additive: the heterozygote mean is exactly between the means of the two homozygotes. If k[j] = 1
(respectively, −1), allele 1 (respectively, 0) is dominant. The case corresponds to overdominance of allele 1 or allele 0. We investigate two different priors for the dominance effect:
Prior D[1]: We assume that k[j] is a priori independent of a[j], with . We chose σ[k] = 0.5, which gives , reflecting a belief that overdominance is relatively rare.
Prior D[2]: We assume that d[j] is a priori independent of a[j], with , where we took σ[d] = 0.5σ[a]. This prior on d[j] induces a prior on k[j] in which k[j] is not independent of a[j].
Prior D[1] has the attractive property that the prior probability of overdominance is independent of the QTN additive effect a[j]. However, the posterior distributions of a[j] and k[j] must be
estimated via a computationally intensive Markov Chain Monte Carlo (MCMC) scheme (see Protocol S2). (An alternative, which we have not yet pursued, would be to approximate BFs under prior D[1] by
numerical methods, such as Laplace Approximation; e.g., [14]). Prior D[2] is more convenient, as, when combined with the priors on μ and τ in Equation 4, posterior probabilities of interest can be
computed analytically (Protocol S1).
For both priors D[1] and D[2] we assume effect parameters for different SNPs are, a priori, independent (given the other parameters).
Choice of p[0], p(l), and σ[a].
The above priors include “hyperparameters,” p[0] and σ[a], and a distribution p(l) that must be specified. The hyperparameter p[0] gives the prior probability that the region contains no QTNs. While
choice of appropriate value is both subjective and context-specific, for candidate regions we suggest p[0] will typically fall in the range 10^−2 to 0.5. If data on multiple regions are available,
then it might be possible to estimate p[0] from the data, although we do not pursue this here. Instead, we mostly sidestep the issue of specifying p[0] by focusing on the BF (described below), which
allows readers of an analysis to use their own value for p[0] when interpreting results.
In specifying the prior, p(l), for the number of QTNs, we suggest concentrating most of the mass on models with relatively few QTNs. Indeed, here we focus mainly on the extreme case in which p(l) is
entirely concentrated on l = 1: that is, the “alternative” model is that the region contains a single QTN. Although rather restrictive, this seems a good starting point in practice, particularly
since our results show that it can perform well even if multiple QTNs are present. Nonetheless, there are advantages to considering models with multiple QTNs, and so we also consider a prior where p(
l) puts equal mass on l = 1, 2, 3, or 4. This prior suffices to illustrate the potential of our approach, although in practice it would probably be preferable to place decreasing probabilities on
larger numbers of QTNs (e.g., p(l = 2) < p(l = 1)). An alternative would be to sidestep specifying p(l) by computing BFs comparing, say, 4-QTN, 3-QTN, 2-QTN, and 1-QTN models versus the “null” model.
However, interpreting and acting on these BFs will inevitably correspond to implicit assumptions about the relative prior plausibility of these multi-QTN models.
Finally, specification of the standard deviation of the effect size, σ[a], involves subtle issues. Although it may seem tempting to use “large” σ[a] to reflect relative “ignorance” about effect sizes
[15], we believe this is inadvisable. Although large σ[a] yields a flat prior on effect sizes, this prior is far from uninformative, in that it places almost all its mass on large effect sizes. The
result would essentially allow only zero effects (i.e., the “null” model), or large effects (the “alternative” model). If in truth the causal SNPs have relatively small effect, which is probably
generally realistic, then (for realistic sample sizes) the null model would be strongly favored over the alternative, because the data would be more consistent with zero effects than with large
effects. Choice of σ[a] can thus strongly affect inference, particularly the BF, which we use to summarize evidence for the region containing any QTNs. Partly because of this, in practice we suggest
averaging results over several values for σ[a] (equivalent to placing a prior on σ[a]). It may also be helpful to examine sensitivity of results to σ[a]. For example, if the BF is small for all
values of σ[a], then there is no evidence for any QTN in the region; if the BF is large for some values and small for others, then the evidence depends on the extent to which you believe in large
versus small effects. However, for simplicity, all results in this paper were obtained using a fixed value of σ[a] = 0.5.
We focus on two key inferential problems: (i) detecting association between genotypes and phenotype, and (ii) explaining observed associations. In the model of Equation 1, these translate to
answering (i) are any β[j]s non-zero? and (ii) which β[j]s are non-zero and how big are they? We view the ability to address both questions within a single framework to be an advantage of our
Detecting association.
To measure the evidence for any association between genotypes and phenotypes, we use the BF, [16] given by
where H[0] denotes the null hypothesis that none of the SNPs is a QTN (a[j] = d[j] = 0 for all j), and H[1] denotes the complementary event (i.e., at least one SNP is a QTN). Computing the BF
involves integrating out unknown parameters, as described in Protocols S1 and S2. In interpreting a BF, it is helpful to bear in mind the formula “posterior odds = prior odds × BF,” so, for example,
if the prior odds are 1:1 (i.e., p[0] = 0.5, so association with genetic variation in the region is considered equally plausible, a priori, as no association) then a BF of 10 gives posterior odds of
10:1, or ~91% probability of an association.
In the special case where we allow at most one QTN, Equation 5 reduces to
where H[j] denotes the event that SNP j is the QTN. The jth term in this sum corresponds to the BF for H[j] versus the null model, and involves the genotype data at SNP j only. We refer to these
terms as the “single-SNP” BFs, so in this special case the overall BF is the mean of the single-SNP BFs. This natural way for combining information across (potentially correlated) SNPs is an
attractive property of BFs compared with single-SNP p-values. Furthermore, in terms of detecting a genotype–phenotype association it can work well even if multiple QTNs are present (see Results).
The Bayes/non-Bayes compromise.
From a Bayesian viewpoint, the BF provides the measure of the strength of evidence for genotype–phenotype association. That is, if one accepts our prior distributions and modeling assumptions, then
the BF is all that is necessary to decide whether a genuine association is present. However, given the potential for debate over prior distributions, and for deviations from modeling assumptions, it
is helpful to note that a p-value for testing H[0] can be obtained from a BF through permutation. Specifically, one can compute the BF for the observed data, and for artificial data sets created by
permuting observed phenotypes among cohort individuals, and obtain a p-value as the proportion of permuted data sets for which the BF exceeds the BF for the observed data. Being based on permutation,
the resulting p-value is valid irrespective of whether the model or priors are appropriate. This p-value also provides a helpful way to compare our approach with standard tests of association, and,
as we show below, tests based on BF appear to perform well in a wide variety of situations. Using BFs as test statistics to obtain p-value is referred to as the “Bayes/non-Bayes compromise” by Good [
Explaining and interpreting associations.
To “explain” observed associations we compute posterior distributions for SNP effects (a[j] + d[j] and 2a[j] for the heterozygote and minor-allele homozygote, respectively), with particular focus on
the posterior probability that each SNP is a QTN P(a[j] ≠ 0). Here, our Bayesian regression approach has an important qualitative advantage over standard multiple regression. Specifically, if a
genetic region contains multiple highly correlated SNPs, each highly correlated with the phenotype, then the correct conclusion would be any of these SNPs could be causal, without identifying which
one. This will be reflected in the posterior distribution of the effects: the overall probability that at least one SNP is a QTN will be high, but (at least in the simplest case where we assume at
most one QTN) this probability will be spread out over the multiple correlated SNPs. In contrast, if multiple highly correlated SNPs are included in a standard multiple regression it is possible that
no one of them will produce a significant p-value.
We also argue that the imputation-based approach brings us closer to being able to interpret estimated effects for each SNP as actual causal effects, rather than simply associations. Indeed, the key
to making the leap from association to causality is controlling for all potential confounding factors, and by imputing genotypes at nearby SNPs, the imputation-based approach controls for one
important set of confounding factors (the nearby SNPs), which would otherwise be ignored. Thus, while functional studies provide the ultimate route to convincingly demonstrating causal effects, our
approach may help target such studies on the most plausible candidate SNPs.
Imputing genotypes
In the tagSNP design, observed genotypes G[obs] consist of panel genotypes at all SNPs and cohort genotypes at tagSNPs only. To apply our methods in this situation, we use sampling-based algorithms
(PHASE [10,11], or fastPHASE [5]) to generate multiple imputations for the complete genotype data (all individuals at all SNPs) by sampling from . We then incorporate these imputations into our
inference: for prior D[1], this involves adding a step in the MCMC scheme to sample the imputed genotypes from their posterior distribution given all the data; for prior D[2] it involves simply
averaging relevant calculations over imputations. Details are given in Protocols S1 and S2.
Availability of Software
Methods described here are implemented in a software package, Bim-Bam (Bayesian IMputation-Based Association Mapping), available from the Stephens Lab website http://stephenslab.uchicago.edu/
“Power” and Comparisons with Other Approaches
We compared the power of our approach to other common approaches via simulation. We simulated genotype and phenotype data (with μ = 0 and τ = 1) for genetic regions of length 20 kb containing a
single QTN, and genetic regions of length 80 kb containing four QTNs, as follows:
(1) Using a coalescent-based simulation program, msHOT [18], simulate 600 haplotypes from a constant-sized random mating population, under an “infinite sites” mutation model, with (population-scaled)
mutation rate θ = 0.4/kb and “background” recombination rate ρ = 0.8/kb, and a recombination hotspot (width 1 kb; recombination rate 50ρ per kb) in the center of the region.
(2) Form genotypes for a “panel” of 100 individuals by randomly pairing 200 haplotypes, and a “cohort” of 200 individuals by randomly pairing the other 400 haplotypes.
(3) Select tag SNPs from the panel data using the approach of Carlson et al. [19] with an r^2 cutoff of 0.8. As in Carlson et al. [19], SNPs with panel minor allele frequency (MAF) <0.1 were not
(4) Select which SNPs are QTNs, and their effect sizes, and simulate phenotype data for each cohort individual according to Equation 1. We considered four scenarios: (A) a “common” (MAF>0.1) QTN,
with a range of effect sizes a = 0.2, 0.3, 0.4, 0.5 and “mild” dominance for the minor allele (d = 0.4a); (B) a common QTN, with a = 0.3 and “strong” dominance for the major allele (d = −a); (C) a
“rare” (MAF 0.01 − 0.05) QTN, with a = 1 and no dominance (d = 0); (D) four common, relatively uncorrelated, QTNs, each with a = 0.3 and d = 0.4a. In each situation, we randomly chose a QTN
satisfying the relevant MAF requirements (in the 600 sampled haplotypes), except under scenario (D) we first chose four tag SNP “bins” at random and then randomly chose a QTN satisfying the MAF
requirement in each bin, thereby ensuring the four QTNs were relatively uncorrelated. (While real data may contain multiple highly correlated QTNs, we did not explicitly consider this case, since
their effect would be similar to a single QTN.)
We compared power of tests based on the BF (under prior D[2], allowing at most one QTN, using Equation 6 with four other significance tests:
(1) Two tests based on p[min], the minimum p-value obtained from testing each SNP individually (via standard ANOVA-based methods) for association with the phenotype. These two tests differed in
whether the single SNP p-values were obtained using the 1 degree-of-freedom (df) “allelic” test, which assumes an additive model where the mean phenotype of heterozygotes lies midway between the two
homozygotes (equivalent to linear regression of phenotype on genotype), or the 2 df “genotype” test, which treats the mean of the heterozygotes as a free parameter.
(2) A test based on p[reg], the global p-value obtained from linear regression of phenotype on all SNP genotypes (using the standard F statistic, coding the genotypes as 0,1 and 2 at each SNP, and
assuming additivity across SNPs). See Chapman et al. [20] for example.
(3) A test based on BF[max], the maximum single-SNP BF. We included this test for comparison with the mean single-SNP BF (Equation 6), to examine whether averaging information across SNPs in Equation
6 improved power.
For each test, we analyzed each dataset in two ways: as if data had been collected using (i) a “resequencing design” (i.e., all individuals were completely resequenced, so genotype data are available
at all SNPs in all individuals); and (ii) a “tag SNP design” (i.e., in panel individuals genotype data are available at all SNPs, but in cohort individuals genotype data are available at tag SNPs
only). For the tag SNP design, we assumed haplotypic phase is known in the panel (as it is, mostly, for the HapMap data for example), but not in the cohort; however our approach can also deal with
unknown phase in the panel. For p[reg] and p[min], tests were performed on all SNPs for the resequencing design, and on tag SNPs only for the tag SNP design. For BF and BF[max], single-SNP BFs were
computed for all SNPs in both designs (averaging over imputed genotypes for non-tag SNPs in the tag SNP design). For p[reg], we computed a p-value assessing significance using the standard asymptotic
distribution for the F statistic; for the other tests we found p-values by permutation, using 200–500 random permutations of phenotypes assigned to cohort individuals. (The relatively small number of
permutations limits the size of the smallest possible p-value, causing discontinuities near the origin in Figure 1).
Figure 1. Power Comparisons
(A) single common variant, modest dominance; (B) single common variant, strong dominance for minor allele; (C) single rare variant, no dominance; (D) multiple common variants.
Each colored line shows power of test varying with significance threshold (type I error). Black: BF from our method (prior D[2]); Green: p[min] (allelic test); Red: p[min] (genotype test); Blue: p
[reg], multiple regression; Grey: BF[max]. Each column of figures shows results for data analyzed under the “resequencing design” (left) and the “tag SNP design” (right). Each row shows results for
the four different simulation scenarios.
Figure 1 shows power of each test versus type I error under both resequencing and tag SNP designs. For Scenario (A) (a single common QTN), the relative performances of methods were similar for all
four effect sizes examined (unpublished data), and so we pooled these results in the figure.
Comparing p[min] and p[reg], the single-SNP tests (p[min]) were more powerful when all variants (including the causal variant) were typed, or when the QTN was a common SNP and therefore “tagged” by a
tag SNP, while the regression-based approach (p[reg]) was more powerful when the QTN was a rare SNP not “tagged” by any tag SNP. Among the two single-SNP tests, the 1 df allelic test performed as
well as, or better than, the 2 df genotypic test, except in Scenario (B), where the major allele exhibits strong dominance. In particular, for Scenario (A), where the causal variant exhibits
dominance, the allelic test (which assumes no dominance), performed better than the genotypic test. This is presumably because, with the effect and sample sizes considered, the extra parameter
estimated in the genotypic test does not sufficiently improve model fit. Although relative performance of p[min] and p[reg] in the tag SNP design could depend on tag SNP selection scheme (and the one
we used, based on pairwise LD, would seem to favor p[min]), it seems reasonable to expect single-SNP tests to be effective at detecting “direct” associations between the phenotype and a causal
variant, or “near-direct” association between a SNP that tags a causal variant, and the regression-based approach to be better at detecting indirect associations between a phenotype and a variant not
“tagged” by a single SNP (the intuition, from Chapman et al. [20], is that such variants can be highly correlated with linear combinations of tag SNPs, and thus be detected by linear regression). In
principle, p[reg] could also effectively capture “direct” associations, but our empirical results suggest that it is less effective at this than the single SNP tests. (However, poor performance of p
[reg] under the resequencing design may be due in part to inadequacy of the asymptotic theory when large numbers of correlated covariates are used. This might be alleviated by assessing significance
of p[reg] by permutation.)
Turning now to our approach, except for Scenario (B) in the tag SNP design, the test based on the BF is as powerful or more powerful than the other tests. Thus, unlike p[reg] and p[min], the BF
performs well in detecting both “direct” and “indirect” associations: if the QTN is typed, the BF detects it using observed genotype data at that SNP; otherwise, it detects it using the imputed
genotype data at the QTN. In Scenario (B), where the major allele exhibits strong dominance, our approach suffered slightly in power compared with the genotypic test, presumably because our prior
places relatively low weight on strong dominance. However, the power loss was small compared with that of the allelic test. Thus our prior “allows” for dominance without suffering the full penalty
incurred by the extra parameter in the genotypic test when dominance is less strong (Scenario [A]).
In Scenario (D), which involved multiple QTNs, tests based on the BF clearly outperformed other tests considered, even though the BF was computed allowing at most one QTN. Our explanation is that the
BF, being the average of single-SNP BFs, has greater opportunity to capture the presence of multiple QTNs than does the minimum p-value. This explanation is supported by the fact that the maximum BF,
BF[max], performs less well than BF. To examine whether power might be further increased by explicitly allowing for multiple QTNs, we compared power for BFs computed using 1-QTN and 2-QTN models (in
the 2-QTN model p(l = 1) = p(l = 2) = 0.5). We found little difference in power, although BFs for the 2-QTN model tended to be larger than BFs for the 1-QTN model, so allowing for multiple QTNs may
help if the BF itself, rather than a p-value based on the BF, is used to measure the strength of evidence for association. In addition, considering multiple-QTN models should have advantages when
attempting to explain an association (see below).
A second, and perhaps more surprising, situation where the BF outperforms other methods is when all SNPs are typed and tested (i.e., Scenario (A), resequencing design). Here, in contrast to Scenario
(D), BF[max] performs similarly to the standard BF, suggesting that the power gain is due not to averaging, but to an intrinsic property of single-SNP BFs that makes them better measures of evidence
than single-SNP p-values. Our explanation is that the BF tends to be less influenced by less informative SNPs (e.g., those with very small MAF, of which there are many in the resequencing design),
whereas p-values tend to give equal weight to all SNPs, regardless of information content. Specifically, BFs for relatively uninformative SNPs will always lie close to 1, and should not greatly
influence either the maximum or the average of the single-SNP BFs (or, more precisely, will not greatly influence differences in these test statistics among permutations of phenotypes). In contrast,
p-values for each SNP are forced, by definition, to have a uniform distribution under H[0], and so p-values from a large number of uninformative SNPs unassociated with the phenotype could swamp any
signal generated by a single informative SNP associated with the phenotype. Although the resequencing design is currently uncommon, this observation suggests that it may generally be preferable to
rank SNPs according to their BFs, rather than by p-values (e.g., in genome scans). It also highlights a general (rarely considered, and perhaps underappreciated) drawback of p-values as a measure of
evidence: the strength of evidence of a given p-value depends on the informativeness of the test being performed, or, more specifically, on the distribution on the p-values under the alternative
hypothesis, which is generally not known. Thus, for example, a p-value of 10^−5 in a study involving few individuals may be less impressive than the same p-value in a larger study. In contrast, the
interpretation of a BF does not depend on study size or similar factors.
Resequencing versus Tag SNP Designs
An important feature of Figure 1 is that, for Scenarios (A), (B), and (D), where the causal SNPs are common, power is similar for the resequencing and tag SNP designs. Indeed, in these cases most
other aspects of inference are also similar. For example, Figure 2 shows that, under Scenarios (A) and (B), estimated effect sizes, BFs, and posterior probability that the actual causal variant is a
QTN, are typically similar for both designs. Thus under these scenarios, our imputation-based approach effectively recreates results that would have been obtained by resequencing all individuals.
Figure 2. Comparison of Results for Resequencing Design (x-axis) and Tag SNP Design (y-axis)
Panels show: (a) errors in the estimates (posterior means) of the heterozygote effect (a + d); (b) errors in the estimates (posterior means) of the main effect (a); and (c) posterior probability of
being a QTN (P((a, d) ≠ (0, 0))) assigned to the causal variant.
In contrast, when the causal variant is rare, there is a noticeable drop in power for the tag SNP design versus the resequencing design, and the BFs, posterior probabilities, and effect size
estimates under the two designs often differ substantially (unpublished data). This may seem slightly disappointing: one might have hoped that, even with tag SNPs chosen to capture common variants,
they might also capture some rare variants. Indeed, this can happen: in some simulated data sets the rare causal variant was clearly identified by our approach, presumably because it was highly
correlated with a particular haplotype background, and could thus be accurately predicted by tag SNPs. However, this occurred relatively rarely (just a few simulations out of 100).
We wondered whether a different tagging strategy, aimed at capturing rare variants, might improve performance when the causal variant is rare. The development of such strategies lies outside the
scope of this paper, but, to assess potential gains that might be achieved, we analyzed rare-variant simulations assuming that all SNPs except the causal variant were typed in the cohort. Power from
this approach (Figure 3) gives a conservative upper bound on what could be achieved using a more effective tagging design, without actually typing the causal variant. Although power was higher than
with the r^2-based tag SNP selection, it remained substantially lower than in the resequencing design, where the causal variant is typed.
Figure 3. Examination of Potential Effect of Different Tag SNP Strategies on Power, When the Causal Variant is Rare (0.01 < MAF < 0.05)
Solid line: Resequencing design; dashed line: tag SNP design, with tags selected using method from [19]; and dotted line: tag SNP design, with all SNPs except the causal SNP as tags.
We also wondered whether a different approach to impute missing genotypes (in the cohort at non-tag SNPs) might improve performance. For results above, we used the software fastPHASE [5] to impute
the genotypes, so we re-ran the analysis using a different imputation algorithm [10,11]. Results for these two approaches (Figure 4) show little difference in terms of power, consistent with previous
results [5] suggesting the two approaches have similar accuracy in imputing missing genotypes.
Figure 4. Power of the Multipoint Approach in the Rare Variant Scenario for Two Different Imputation Algorithms
In summary, imputation-based methods appear to increase power of the tag SNP design to detect rare variants, but nevertheless remain notably less powerful than BFs based on the complete resequencing
Comparison of Prior D[1] and D[2]
Priors D[1] and D[2] differ in their assumed correlation between the dominance effect (d = ak) and main effect a: in D[1] the prior probability of overdominance is independent of a, whereas under D
[2] overdominance is more likely for small a than for large a (Figure 5). In this respect, D[1] is perhaps more sensible than D[2]; however, D[2] is computationally much simpler. To examine the
effects of these priors on inference, we compared (i) the BF and (ii) the posterior probability assigned to the actual causal variant under each prior for the datasets from Scenarios (A) and (B).
Results agreed quite closely (Figure 6), suggesting prior D[2] provides a reasonable approximation to prior D[1] in the scenarios considered. This is important, since prior D[2] is computationally
practical for computing BFs for very large datasets (e.g., genome-wide association studies with hundreds of thousands of SNPs), for which sampling posterior distributions of parameters using an MCMC
scheme would be computationally daunting.
Figure 5. Scatter Plot of Samples from Prior Distribution of a (x-axis) and a + d (y-axis), for Priors D[1] (Black) and D[2] (Blue)
The solid yellow line corresponds to d = 0 (additivity). The dashed red lines are the limits above and below which a SNP exhibits over-dominance.
Figure 6. Comparison of Inferences using Prior D[1] and D[2] for the BF (Left) and the Posterior Probability Assigned to the Causal Locus Being a QTN (Right)
Results shown are for all datasets for the common variant Scenario (A) and (B) and for both the resequencing design and the tag SNP design. The discrepancy between the larger estimated BFs is caused
by the fact that we used insufficient MCMC iterations to accurately estimate very large BFs (>10^6) under prior D[1].
Allowing for Multiple Causal Variants
When analyzing a candidate region, one would ideally like not only to detect any association, but also to identify the causal variants (QTNs). Since a candidate region could contain multiple QTNs, we
implemented an MCMC scheme (using prior D[1]) to fit multi-QTN models where the number of QTNs is estimated from the data; here, we consider a multi-QTN model with equal prior probabilities on 1, 2,
3, or 4 QTNs. (A similar MCMC scheme could also be implemented for prior D[2], and could exploit the analytical advantages of this prior to reduce computation. Indeed, for regions containing a modest
number of SNPs it would be possible to examine all subsets of SNPs, and entirely avoid MCMC.)
We compare this multi-QTN model with a one-QTN model on a dataset simulated with four QTNs (scenario [D]). The estimated BF for a one-QTN model was ~6,000, while for the multi-QTN model it was >10^5
(we did not perform sufficient iterations to estimate how much bigger than 10^5). Thus, if a region contains multiple causal variants, then allowing for this possibility may provide substantially
higher BFs. Figure 7 shows the marginal posterior probabilities for each SNP being a QTN, under the one-QTN and multi-QTN models, conditional on at least one SNP in the region being a QTN.
(Summarising the more complex information on posterior probabilities for combinations of SNPs is an important future challenge.) Under the one-QTN model, only one of the four causal SNPs has a large
marginal posterior probability, whereas under the multi-QTN model all four are moderately large. Of course, other SNPs correlated with the four QTNs were also associated with the phenotype, and so
have elevated posterior probabilities. This example illustrates the potential for the multi-QTN model to provide fuller explanations for associations.
Figure 7. Illustration of How a Multi-QTN Model Can Provide Fuller Explanations Than a One-QTN Model for Observed Associations
The figure shows, for each SNP in a dataset simulated under Scenario (D), the estimated posterior probability that it is a QTN, conditional on an association being observed. Left: Results from
one-QTN model. Right: Results from multi-QTN model allowing up to four QTNs. The four actual QTNs are indicated with a star. Colors of the vertical lines indicate tag SNP “bins” (i.e., groups of SNPs
tagged by the same variant).
SCN1A Polymorphism and Maximum Dose of Carbamazepine
We applied our method to data from association studies involving the SCN1A gene and the maximum dose of carbamazepine in epileptic patients [21,22]. For this analysis, the “panel” consisted of
parents from 32 trios of European descent from the CEPH Utah collection [21] and the “cohort” consisted of 425 patients of European descent for whom the maximum dose of carbamazepine had been
determined [22]. Genetic data on the trios were available for 15 polymorphisms,comprising 14 SNPs and one indel, which corresponded to snps 1–15 and indel12 in Table 2 of Weale et al. [21]. For
cohort individuals, genotype data are available at four tag SNPs: snp1 (rs590478), snp5 (rs8191987), snp7 (rs3812718), and snp9 (rs2126152). These SNPS were chosen to summarize haplotype diversity at
the 15 panel polymorphisms (for details, see Tate et al. [22]).
We first estimated haplotypes in 64 parents using the trio option in PHASE [23]. Since trio information allows haplotypes to be accurately determined [23] we assumed these estimated panel haplotypes
were correct in subsequent analyses. We then applied our method to compute a BF for overall association between genetic data and the phenotype, and to compute, for each SNP, the posterior probability
that it was a QTN. In applying our method we used PHASE to impute the genotypes in the cohort at non-tag SNPs, and performed analyses under priors D[1] and D[2].
BFs for priors D[1] and D[2] were, respectively, 3.15 and 2.33, and the corresponding p-values (estimated using 1,000 permutations) were 0.006 and 0.019, respectively. We also computed p-values using
single SNP tests at tag SNPs and obtained 0.007 for the allelic test and 0.019 for the genotype test. (These are essentially the two tests performed by Tate et al. [22], who reported the smallest p-
values uncorrected for multiple comparisons.) These BFs represent only modest evidence for an association. If one were initially even somewhat skeptical about SCN1A as a candidate for influencing
this phenotype, one might remain somewhat skeptical after analyzing these data. For example, with a 20% prior probability on variation in SCN1A influencing phenotype, the posterior probability of
association under either prior is <50%. (Prior probability of 0.2 gives prior odds of 0.2:(1–0.2), or 1:4; a BF of 3 then gives posterior odds of 3:4, which translates to a posterior probability of 3
/7.) On the other hand, SCN1A might be considered a relatively good candidate for influencing response to carbamazepine, since it is the drug's direct target. And, depending on follow-up costs and
potential benefits of finding a functional variant, posterior probabilities of very much <50% might be deemed worth following-up.
Among the 15 SNPs analyzed, snp7 was assigned the highest posterior probability of being a QTN (Figure 8). This SNP, which is a tag SNP, was also implicated by the analysis in Tate et al. [22].
However, the posterior probability of this SNP represents only 34 % of the posterior mass. Six additional SNPs are needed to encompass 90% of the posterior mass: snp6 (rs3812719), snp8 (rs490317),
snp9 (rs2126152), snp10 (rs7601520), snp11 (rs2298771) and snp13 (rs7571204). The posterior distributions of the main effect, a, for each of these seven SNPs, conditional on it being a QTN, are very
similar (Figure 8).
Figure 8. Results for the SCN1A Dataset
Left panel shows the posterior probability assigned to each SNP being a QTN, with filled triangles denoting tag SNPs and open circles denoting non-tag SNPs. The right panel shows (in gray) estimated
posterior densities of the additive effect for each of the seven SNPs assigned the highest posterior probabilities of non-zero effect (representing 90% of the posterior mass). The average of these
curves is shown in black.
In summary, these data provide modest evidence of association between SCN1A and maximum dose of carbamazepine, and, among the SNPs analyzed, snp7 (rs3812718) appears to be the best candidate for
being causal. A recent follow-up study appears to confirm this variant as being functionally important [24].
We described a new approach for analysis of association studies, with two important components: (i) it uses imputation of unknown genotypes, based on statistical modeling of patterns of LD, to allow
untyped SNPs to be directly assessed for association with phenotype; (ii) it uses BFs, rather than p-values, to assess genotype–phenotype association.
The idea of trying to find associations between phenotypes and untyped variants is old, and underlies many existing methods for assessing association. In some cases this aim is implicit (e.g.,
testing for association between haplotypes and phenotypes can be thought of as an attempt to indirectly test untyped variants that may lie on a particular haplotype background), and in others it is
explicit (to give just one example, Zöllner and Pritchard [25] place mutations on an estimated tree, and test resulting genotypes for association with phenotype). A key difference between our
approach and these existing methods is that we focus on testing variants about which something is known (i.e., SNPs that are known to exist, and have documented patterns of LD), and exploiting this
information. This idea, which seems in many ways more compelling than testing hypothetical untyped variants about which nothing is known, has been recently developed by several groups [5,26–31].
While there are, no doubt, multiple effective ways to implement the general strategy, attractive key features of our approach include the use of flexible statistical models for multi-locus LD to
estimate missing genotypes, with uncertainty; and the use of Bayesian methods to account for uncertainty in estimated genotypes.
While several papers have suggested Bayesian approaches to association studies (e.g., [15,32,33]), our work includes some distinctive contributions. First, our prior distributions for single-SNP
effects have a number of desirable properties: (i) they scale appropriately with changes in measurement units of the phenotype, (ii) they center on an additive model while allowing for dominance, and
(iii) they facilitate rapid calculations. This last feature means that our work can form the foundation of simple Bayesian analyses in genome-wide association studies, e.g., computing a single-SNP BF
for each SNP, as a Bayesian analogue of single-SNP hypothesis tests. This option is available in our software, but to further facilitate its use by others, and to emphasize the simplicity of the
analytical calculations, we give R code for computing the BF for typed SNPs under prior D[2] (see Protocol S1). A second distinctive contribution is that we compare our Bayesian approach directly
with standard p-value based approaches, providing both qualitative insight and quantitative support for several advantages of single-SNP BFs over single-SNP p-values. These advantages include: (i)
the BF allows for both additive and dominant effects without the additional degree of freedom incurred by the general 2 df hypothesis test; (ii) the BF better reflects the informativeness of each
SNP, in particular, that SNPs with small MAF are typically less informative than SNPs with larger MAF (this advantage presumably being greatest for SNP panels containing many SNPs with small MAF);
(iii) it provides a principled way to take into account prior information on each SNP, e.g., whether it lies in or near a gene whose function is believed likely to influence the trait; and (iv)
averaging single-SNP BFs provides a convenient, and in some ways effective, approach to combining information across multiple SNPs in a region.
Perhaps the most important disadvantage of BFs compared with p-values is that a BF is strictly “valid” only under the assumption that both the prior and the model are “correct.” Since this is never
the case in practice, BFs are never strictly valid, Our hope is to make the prior and model sufficiently accurate that resulting BFs are “useful.” (Note that p-values may be valid but useless: e.g.,
p-values simulated from a uniform distribution independent of phenotype and genotype data are valid, in that they are uniformly distributed under the null hypothesis, but useless.) Here, it is
helpful to distinguish two different uses of BFs: as test statistics to compute permutation-based p-values, as in the power comparisons in this paper, and as direct measures of evidence (e.g., in
“posterior odds = BF × prior odds”). Our limited experience is that p-values obtained from BFs are relatively robust to prior and modeling assumptions, but that the absolute values of BFs are
substantially more sensitive. In particular, BFs tend to be sensitive to both (i) choice of σ[a], σ[d]; and (ii) the normality assumption in the phenotype model. We now discuss each of these issues
in turn.
Choice of σ[a], σ[d] corresponds to quantifying prior beliefs about likely additive and dominance effect sizes. In this paper, we used (in prior D[2]) σ[a] = 0.5 and σ[d] = σ[a]/2. We now believe
these values are likely larger than appropriate for most studies of complex phenotypes, placing too little weight on small, but realistic, effect sizes. Our current suggested “default” procedure is
to average BFs computed with σ[a] = 0.05, 0.1, 0.2, and 0.4, and σ[d] = σ[a]/4, which places more weight on smaller effect sizes, and less weight on overdominance. We would expect to modify these
values in the light of further information about typical effect sizes for particular traits. It could also be argued that, in addition to allowing a continuum of deviations from the additive model,
it may make sense to specify prior probabilities for “pure” recessive or dominant models (i.e., d = a, −a). BFs under these models can be computed easily by simply replacing all heterozygous
genotypes with homozygous genotypes for the major or minor allele.
Regarding the normality assumption, following a suggestion by Mathew Barber (personal communication), in practical applications, we are currently applying a normal quantile transform to phenotypes
(replacing the rth biggest of n observations with the (r − 0.5)/nth quantile of the standard normal distribution) before applying our methods and computing BFs. Imposing normality on our phenotype in
this way is different from the normality assumption in our phenotype model, which states that the residuals are normally distributed. However, in this context, where effect sizes are expected to be
generally rather small, normality of phenotype and normality of residuals are somewhat similar assumptions, suggesting that this transform may be effective.
Throughout this paper, we have assumed a “population” sampling design in which phenotype and genotype data are available on a random sample from a population, and perform analyses conditional on the
observed genotype data. An alternative common design involves collecting genotypes only on individuals whose phenotypes lie in the tails of the distribution [34]. To apply our methods to such
designs, we suggest conditioning on unordered observed phenotypes, denoted {y}, in addition to conditioning on the genotypes G, and to perform inference for the genetic effects parameters, β, based
on the conditional likelihood L(β) =P(y | {y}, G, β). However, this conditional likelihood does not appear to be analytically tractable, and so analysis of this design may require development of
computationally tractable approximations. Similarly, adapting our approach to standard case-control designs will require development of appropriate priors and computational algorithms, and represents
an important area for future work.
Supporting Information
Protocol S1. Analytical Computations for Prior D[2]
(100 KB PDF)
Protocol S2. MCMC Sampling for Prior D[1]
(66 KB PDF)
Accession Numbers
The National Center for Biotechnology Information (NCBI) Entrez (http://www.ncbi.nlm.nih.gov/gquery/gquery.fcgi) Gene ID of the SCN1A gene is 6323.
We thank D. Goldstein for access to the SCN1A data, and M. Weale and S. Tate for providing the data in a convenient electronic form. We thank N. Patterson for pointing us to the I. Good reference, J.
Marchini and P. Donnelly for helpful conversations, and J. Pritchard and two anonymous referees for useful comments on earlier versions of the manuscript. Computing support was provided by the
University of Washington Center for Study of Demography and Ecology, High Performance Computing Cluster Cooperative.
Author Contributions
BS and MS conceived and designed the experiments, performed the experiments, analyzed the data, contributed reagents/materials/analysis tools, and wrote the paper.
|
{"url":"http://www.plosgenetics.org/article/info%3Adoi%2F10.1371%2Fjournal.pgen.0030114&imageURI=info%3Adoi%2F10.1371%2Fjournal.pgen.0030114.g007","timestamp":"2014-04-21T10:29:50Z","content_type":null,"content_length":"189157","record_id":"<urn:uuid:9baa89eb-570d-416a-86fd-3fa86abcd7b5>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00156-ip-10-147-4-33.ec2.internal.warc.gz"}
|
This thread is becoming very machistic. An awful lot more information is required to be able to work it out logically and accurately, for example:
1. Where are the loos siutuated along the route?
2. Where, when and for how long will we stop for morning coffee?
3. Is there a Sainsbury's or Marks and Spencers along the route (will need to pick up a few things for dinner)?
4. If passing a dress shop with a sale on and some fantastic bargains are to be had, how long will the convoy stop?
5. If best friends' house is two miles along a side street, is a detour permitted?
6. Will those nasty helmets which flatten the best of hairdos be mandatory?
7. Will the motorcyle have heated handgrips?
8. Can the convoy go on its own if it's raining?
9. When and where is lunch being served? (Burger King or McDonalds not being options).
You see, you men never, ever give enough important information!!
|
{"url":"http://www.crosswordsolver.org/forum/15502/offset/160","timestamp":"2014-04-16T07:42:18Z","content_type":null,"content_length":"32668","record_id":"<urn:uuid:9e19eb09-0700-4ee7-a592-048734ecf016>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00301-ip-10-147-4-33.ec2.internal.warc.gz"}
|
The Beastie Zone contains detailed information about lots of useful integrated circuits.
Each entry gives you a pin connection diagram for the integrated circuit, or 'beastie', and shows you exactly how to wire it up to achieve basic operation.
In addition, there are circuits showing typical applications and links to other parts of the DOCTRONICS Beastie Zone database and to topics within Design Electronics, the on-line electronics
subsystems components linear ICs CMOS logic TTL logic LINKS
If you can't find out what you need to know, Email
Back to Contents
INPUT SWITCHES Push button switches, keypads and debounced switches
LIGHT/DARK SENSOR LDR sensors
INFRA-RED SENSORS Photodiode, phototransistor and passive infra-red (PIR) sensors
TEMPERATURE SENSOR Thermistor and integrated circuit sensors
MOISTURE SENSOR Is it raining?
FLOAT SENSOR Is the bath full?
MOVEMENT SENSOR Tilt switches, reed switches and pressure pads
SOUND SENSOR That's a microphone...
INPUT VOLTAGE Providing a reference level
NOT GATE Logic gate with applications and circuits
BUFFER Logic gate with applications and circuits
AND GATE Logic gate with applications and circuits
OR GATE Logic gate with applications and circuits
NAND GATE Logic gate with applications and circuits
NOR GATE Logic gate with applications and circuits
EXOR GATE Logic gate with applications and circuits
EXNOR GATE Logic gate with applications and circuits
DATA SELECTOR Solving logic problems
ASTABLE Produces pulses
BISTABLE Two stable states
MONOSTABLE Single pulse when triggered
TIME DELAY Wait a moment...
TRANSDUCER DRIVER Electronic switches and driver circuits
OPTO-ISOLATOR Separating subsystems for safety, or to prevent interference..
VOLTAGE COMPARATOR Which voltage is bigger?
VOLTAGE FOLLOWER Voltage out=voltage in.. Why?
NON-INVERTING AMPLIFIER Op-amp circuit
INVERTING AMPLIFIER Op-amp circuit
AUDIO AMPLIFIER Let's hear it!
RAMP GENERATOR Up and down we go!
ADC Analogue to digital converter
DAC Digital to analogue converter
COUNTER Binary, BCD and decade counters
BCD TO 7-SEGMENT DECODER Converting from BCD to drive a 7-segment display
LED INDICATORS What's happening?
7-SEGMENT DISPLAY Show me the numbers..
MATRIX DISPLAY Making patterns.
LAMPS Let there be light!
LOUDSPEAKER Hear this!
BUZZER Buzz!
PIEZO TRANSDUCER Bleep!
MOTOR Where are you going?
SOLENOID Needing a push?
RELAY Switching a high voltage/high current circuit
FAN Keeping cool...
Linear integrated circuits
This group includes the 555 timer, op-amps, audio amplifiers and other special purpose integrated circuits.
CMOS logic
4000 series CMOS logic integrated circuits have the advantages of wide power supply range (3-15 V) and low power consumption, making them ideal for battery operation.
In all CMOS circuits, you need to provide power supply decoupling, connecting 47 µF or 100 µF capacitors across the power supply close to the power supply pins of the integrated circuits themselves.
All the inputs of a CMOS integrated circuit must be connected either to HIGH or LOW.
The table below lists CMOS integrated circuits in numerical order. A second table lists these devices by function.
4000 Dual 3-input NOR gate + inverter
4001 Quad 2-input NOR gate
4002 Dual 4-input NOR gate
4008 4-bit binary full adder
4011 Quad 2-input NAND gate
4012 Dual 4-input NAND gate
4013 Dual D-type flip-flop
4014 8-stage shift register
4015 Dual 4-stage shift register
4016 Quad bilateral switch
4017 Divide-by-10 counter (5-stage Johnson counter)
4018 Presettable divide-by-n counter
4019 Quad 2-input multiplexer (data selector)
4020 14-stage binary counter
4021 8-bit static shift register
4022 Divide-by-8 counter (4-stage Johnson counter)
4023 Triple 3-input NAND gate
4024 7-stage binary counter
4025 Triple 3-input NOR gate
4026 BCD counter with decoded 7-segment output
4027 Dual JK flip-flop
4028 BCD to decimal (1-of-10) decoder
4029 Binary/BCD up/down counter
4040 12-stage binary ripple counter
4042 Quad D-type latch
4043 Quad NOR R/S latch
4044 Quad NAND R/S latch
4046 Phase-locked loop
4047 Monostable/astable
4049 Hex inverter/buffer (NOT gate)
4050 Hex non-inverting buffer
4051 Analogue multiplexer/demultiplexer (1-of-8 switch)
4052 Analogue multiplexer/demultiplexer (Dual 1-of-4 switch)
4053 Analogue multiplexer/demultiplexer (Triple 1-of-2 switch)
4060 14-stage binary ripple counter and oscillator
4066 Quad bilateral switch
4067 16-channel analogue multiplexer/demultiplexer (1-of-16 switch)
4068 8-input NAND gate
4069 Hex inverter (NOT gate)
4070 Quad 2-input EXOR gate
4071 Quad 2-input OR gate
4072 Dual 4-input OR gate
4073 Triple 3-input AND gate
4075 Triple 3-input OR gate
4076 Quad D-type register with tristate outputs
4077 Quad 2-input EXLUSIVE-NOR gate
4078 8-input NOR gate
4081 Quad 2-input AND gate
4082 Dual 4-input AND gate
4082 4-wide 2-input AND-OR-invert gate
4093 Quad 2-input Schmitt trigger NAND gate
4503 Hex inverter (NOT gate) with tristate outputs
4508 Dual 4-bit latch with tristate outputs
4510 BCD up/down counter
4511 BCD to 7-segment latch/decoder/driver
4512 8-input multiplexer (data selector) with tristate output
4514 1-of-16 decoder/demultiplexer HIGH output
4515 1-of-16 decoder/demultiplexer LOW output
4516 4-bit binary up/down counter
4518 Dual BCD up counter
4519 Quad 2-input multiplexer (data selector)
4520 Dual 4-bit binary up counter
4521 24-stage frequency divider
4526 Programmable 4-bit binary down counter
4532 8-bit priority encoder
4538 Dual precision monostable
4543 BCD to 7-segment latch/decoder/driver
4555 Dual 1-of-4 decoder/demultiplexer HIGH output
4556 Dual 1-of-4 decoder/demultiplexer LOW output
4585 4-bit magnitude comparator
4724 8-bit addressable latch
40106 Hex inverting Schmitt trigger (NOT gate)
This list includes the most popular devices but is not exhaustive. Consult manufacturer's data books and/or internet sites for additional information.
CMOS devices: Functional Index
NAND gates 4011 2-input, 4012 4-input, 4023 3-input, 4068 8-input, 4093 2-input Schmitt trigger
AND gates 4073 3-input, 4081 2-input, 4082 4-input
NOR gates 4000 3-input, 4001 2-input, 4002 4-input, 4025 3-input, 4078 8-input
OR gates 4071 2-input, 4072 4-input, 4075 3-input
Inverters (NOT gates)
TTL logic
Manufacturers data sheets:
From these sources, you will be able to download data sheets for almost any integrated circuit.
Most data sheets are in Adobe Acrobat, Acrobat Reader. You can install this from the cover CD of many computer magazines, or download it direct from Adobe:
Back to Contents
|
{"url":"http://www.doctronics.co.uk/beastie_zone.htm","timestamp":"2014-04-20T21:55:14Z","content_type":null,"content_length":"35120","record_id":"<urn:uuid:a4631500-1234-4d58-9cb4-5367540f5127>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00492-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Wheatstone’s bridge
The Wheatstone bridge was designed by Charles Wheatstone in order to measure the unknown values of resistance and is also used as a means of calibrating measuring instruments like the voltmeters, ammeters, etc, with the help of a long slide wire. The Wheatstone bridge can be used for measuring extremely low values of resistances in the mili-Ohms range.
What exactly is the Wheatstone bridge?
A Wheatstone bridge is basically an electrical circuit used for the measurement of an unfamiliar electrical resistance by balancing the two legs of a bridge circuit, where one of its legs includes the undetermined component.
The Wheatstone bridge is used for the accurate measurement of resistance. The bridge contains two known resistors, one variable and one undetermined resistor which are connected in the form a bridge as shown in the figure below. The variable resistor is adjusted in such a way that the current through the galvanometer is reduced to zero. Hence, when the current through the galvanometer is reduced to zero, the ratio of the two known resistors is precisely the same as the ratio of the variable resistance and the unknown resistance. Hence, in this way it becomes quite simple to calculate the value of the unknown resistance with the help of Wheatstone bridge.
The Wheatstone bridge circuit is in the form of two simple series which are a sort of parallel arrangements of resistors connected between a terminal of voltage supply and the ground which produces zero voltage difference when the parallel resistor legs are balanced. The
Wheatstone bridge circuit is in the form of a diamond and has two input terminals and two output terminals. It consists of four resistors arranged in the from of a diamond.
For a certain adjustment of Q, V[BD] = 0, therefore no current flows through the galvanometer.
V[B] = V[D] or V[AB] = V[AD]
Or l1.P = l2.R
Likewise, V[BC] = V[DC] Þ l[1].Q = l[2].S
Dividing (1) by (2), we get, P / Q = R / S
The following video will further improve your knowledge on Wheatstone bridge
Now we discuss some of the solved problems based on Wheatstone bridge:
What’s the effective resistance of the circuits shown in the adjacent figure?
(a) It is a Wheatstone bridge that is balanced. Hence, the central resistance labeled ‘C’ can be assumed as ineffective.
R[eq] = R.
(b) The resistance R is in parallel with a balanced Wheatstone bridge.
R[eq] = R.R / R + R = R / 2
Wheatstone bridge is of immense importance and can be used in a number of applications.
askIITians offers comprehensive study material which covers all the important topics of Physics of IIT JEE. Various topics like effective resistance of Wheatstone bridge, and various solved problems on Wheatstone bridge have also been included. It is important to master this topic in order to remain competitive in the JEE.
Related resources:
|
{"url":"http://www.askiitians.com/iit-jee-electric-current/wheatstone-bridge/","timestamp":"2014-04-17T01:24:22Z","content_type":null,"content_length":"57450","record_id":"<urn:uuid:38c64e6e-6e4e-4282-bfd7-5530fdf0462b>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00413-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Please check also the OpenModelica Modeling and Simulation Environment.
ObjectMath is a programming and modeling environment for object oriented mathematical modeling and generating efficient C++ or Fortran90 code for use in simulation applications, mostly (numerical,
scientific) computing.
This system partly automates the conventional approach of hand translation from mathematical models to numerical simulation code e.g. in Fortran. The ObjectMath language is an object oriented
extension to the Mathematica computer algebra language, which provides mathematical notation and symbolic transformations. Thus, the ObjectMath programming environment offers the following:
ObjectMath is designed and supported by a team at PELAB (Programming Environment Lab), Linköping University, Sweden. Some of the new developments of ObjectMath are part of the Modelica modeling
language design effort.
There is a collection of papers on various aspects of ObjectMath.
ObjectMath - the Process of Mathematical Modeling and Software Development
How we go from a initial model to the development of simulation applications ?
• The user has physical application oriented knowledge
• Express this as a mathematical, object-oriented specification
• This specification is expressed in ObjectMath
• Possibly perform symbolic transformations of formulas and equations
• Execute it and obtain results (which may cause adjustments of the specification)
• Graphically present the results (which gives understanding of the problem that often causes changes to the specifification)
How is software development of a simulation application supported?
• We specify the simulation problem through mathematical modeling.
• Possibly execute interpretively within Mathematica (for small problems)
• Perform model transformations (using the Mathematica computer algebra system)
• Automatically generate code (C++, Fortran90, parallel code - HPF)
• Specify input data for numerical experiments
• Run the experiment
• Visualize the data
roadmap of software development with ObjectMath.
ObjectMath - An Object-oriented Computer Algebra Language
ObjectMath is an Object-Oriented extension to Mathematica, a computer algebra language from Wolfram Research . Mathematica functions and equations can be grouped into classes, in order to structure
the mathematical model. Equations and functions can be inherited from general classes into more specific classes, which allows reuse within the mathematical model. Below are two example models. You
can see classes and instances in these examples, as well as inheritance relations between them (including single and multiple inheritance). The part-of relation allows structured objects to be
expressed as a composition of parts. The ObjectMath class browser can show both inheritance relations and part-of relations.
Inheritance allows formulae and equations to be reused. (For large models the amount of ObjectMath code is reduced approximately three times through reuse as compared to expressing the model in
standard Mathematica).
This object oriented way of modeling is a natural way to describe physical systems.
ObjectMath - a programming environment
The programming environment include the following components:
• The ObjectMath language - including object-oriented features and type declarations. The language is integrated with Mathematica.
• The code is edited by
□ class browser/editor
□ text editor
• Code generator to produce Optimized C++ and Fortran90 code
• Form based data entry tool to specify numerical experiments
• Visualization tool to draw curves and perform 3D graphics animations based on simulation results
internal components of the environment are connected.
small screendump of the environment
big screendump of the environment
ObjectMath - generating efficient code
• The equations, formulae and functions are symbolically transformed at compile or design time. Part of the computation is eliminated through symbolic transformations, which yield partially reduced
symbolic expressions.The rest is computed numerically at run-time. Remaining ordinary differential equations are solved numerically at run-time.
• Code for numerical computation is generated
• Optimizations:
□ common subexpression elimination
□ constant folding
□ unfolding for some functions
• The code is linked together with library routines, e.g. for i/O and numeric ODE solution.
A two-body example:
• Pure Mathematica code - 6 minutes
• Generated C++ code - 10 sec
• Generated C++ code with common subexpression elimination - 1 sec
Applications to realistic models:
A mathematical model of surface interaction have been designed in cooperation with SKF.
• 50% of implementation is generated from ObjectMath.
• 147 KB ObjectMath
Parallel code generator
This part of the system is more experimental, and has not been used in industrial applications. There are two approaches to Extracting Parallelism from Mathematical Models (so far in ObjectMath
primarily for ODEs):
• Parallelism at the equation system level
□ Analyze dependencies between equations
□ Find partly independent Strongly Connected Components (SCCs)
□ The SCCs form subsystems of equations that can be solved partly in parallel or in pipeline
• Parallelism at the equation level
□ Evaluation of ODE Right-hand sides in parallel
See also papers [1,2,10,11,12,13]
Scientific visualisation using ObjectMath
We generate efficient C++/Fortran90 code from ObjectMath models which include geometry descriptions expressed as parametric surfaces. This code is linked together with a powerful 3D browsing
environment which uses OpenGL with possible hardware support, e.g. Creator 24bit 3D graphics on UltraSparc workstations.
(GIF,color,85K) (PS, color 100K) (PS, B/W 100K)
The ObjectMath team
ObjectMath has been designed at the PELAB laboratory at Dept. of Computer and Information Science , Linköping University, Linköping, Sweden . The team has the following members: Ex-members:
• Niclas Andersson (Parallel code generation, parallel computing platforms)
• Rickard Westman (ObjectMath language and environment, networking)
• Lars Viklund (Programming environment, networking, parallel computing platforms)
• Lars Willför (Sequential code generation)
Questions about availability of the implementation should be directed to Peter Fritzson.
Visit collection of ObjectMath papers and other documents
This page has been designed by Vadim Engelson vaden@ida.liu.se.
Last change 10 April 1997
|
{"url":"http://www.ida.liu.se/labs/pelab/omath/index.html","timestamp":"2014-04-19T15:16:18Z","content_type":null,"content_length":"12927","record_id":"<urn:uuid:1e6350e8-f22e-4b92-884a-c9f4b0469686>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00024-ip-10-147-4-33.ec2.internal.warc.gz"}
|
RE: st: Stata 9.2 versus Limdep
[Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index]
RE: st: Stata 9.2 versus Limdep
From "Nick Cox" <n.j.cox@durham.ac.uk>
To <statalist@hsphsun2.harvard.edu>
Subject RE: st: Stata 9.2 versus Limdep
Date Thu, 26 Mar 2009 18:23:03 -0000
Unfortunately, I think that the spirit of Michael's comments still
A vague but not very helpful answer is that evidently Stata and Limdep
are not using exactly the same data, so some difference in results is
not at all surprising.
Of your three points, #1 clearly has no bearing on any difference
between Stata and Limdep here. I don't understand all of #2 but the same
appears to be true.
#3 raises the further question: How would Limdep not ignore missing
values? What would it do if the SKIP were not specified? But the
important question is still whether using SKIP has the same implications
as Stata's default treatment of missing values.
This can really only be explored properly if other people with access to
both Stata 9.2 and Limdep and their documentation can experiment with
1. Exactly the same dataset
2. Exactly the same commands as you typed, without any omissions
Otherwise, I don't see that experts in this field (not me) have enough
hard to bite on.
As often mentioned on this list, one answer to #1 is to use a standard
dataset downloadable from within Stata, run that and export it to
Limdep; or vice versa, if such a thing is possible.
It can be much easier to explore these questions via Stata technical
support, but I can't answer for whether StataCorp has a copy of the
version of Limdep you have (or indeed any other).
You are also presuming that people competent to answer this question
recognise all the references you give in name (date) form. That is
likely to be true, but Statalist protocol is to expect full references.
Marc Goergen
I would like to provide you with further information about my query (see
below for Michael's comments/queries).
1. I have always used Limdep and have only recently switched to Stata as
Stata seems to be the only package able to estimate the Blundell-Bond
(1998) system GMM estimator for a dynamic panel data model.
2. Bond (2002) explains the typical approach in determining whether the
first-differenced GMM estimator (Arellano and Bond (1991) or the system
GMM estimator is preferable for a particular model and dataset. An AR(1)
regression is run using a pooled OLS regression and a Within Groups
regression. Without going into too much detail, due to the omitted
variable bias, the coefficient on the lagged dependent variable obtained
from the pooled OLS regression tends to be upward biased and serves as
an upper bound. Conversely, Within Groups tends to produce downward
biased estimators. This is the reason why I used REGRESS and estimated a
simple, pooled OLS regression.
3. SKIP is a command used in Limdep to ignore missing values. This is
not done automatically and can create avoc.
>>> "Michael I. Lichter" <MLichter@Buffalo.EDU> 26/03/2009 16:03 >>>
That's not enough information for you to get an answer. What's special
about your dependent variable that led you to use LIMDEP? You're using
regular OLS regression in Stata; were you doing something different
there? What does SKIP mean? You can't assume that anybody here knows
enough about LIMDEP to interpret this. If you have fewer cases in Stata,
it's because something is missing. Were you doing pairwise deletion in
LIMDEP? Also, if this is panel data, why are you using regress instead
of xtreg? Explain this on the list, not to me directly.
Marc Goergen wrote:
> I have been running a pooled OLS regression on the same dataset in
both Limdep and Stata 9.2. However, the results I obtained are
significantly different:
> 1. Limdep seems to include roughly 10% more observations than Stata
does, despite using the skip command in Limdep.
> 2. Some of the variables that were highly significant in Limdep are no
longer significant in Stata. This is especially true for a dummy
variable which was significant at the 0.1% level in Limdep, but at best
has a p-value of 0.39 in Stata.
> 3. While for the case of Limdep the results for OLS without group
dummy variables are significantly different from those with group dummy
variables, in Stata the differences are much less pronounced.
> The command I used in Limdep is:
> REGRESS; LHS= LNAF; RHS= LNAF[-1],LNTA[-1], ...; STR=CODE;
PERIOD=YEAR; PANEL$
> The command I used in Stata is:
> regress lnaf l.lnaf l.lnta ... , robust cluster(code)
> Dropping the "robust" and "cluster" options in Stata does not bring
the results more in line with those obtained from Limdep.
* For searches and help try:
* http://www.stata.com/help.cgi?search
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/
|
{"url":"http://www.stata.com/statalist/archive/2009-03/msg01406.html","timestamp":"2014-04-20T04:11:08Z","content_type":null,"content_length":"10173","record_id":"<urn:uuid:be2d8873-9bd4-4038-96df-401c75133356>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00011-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Project Scheduling and Copresheaves
Posted by Simon Willerton
Obsessed as I am with finding examples of enriched categories, I was struck by the unfamiliar notion of ‘activity network’ when checking a colleague’s course on graph theory. What struck me was how
scheduling actives seemed to be like finding a certain kind of copresheaf. Activity networks are also known as PERT charts to people in project management: an activity network or PERT chart records
all the dependencies between the activities need to be completed during a project, and also records the times the activities will take to complete. Once you have this information you can try to
schedule each of the activities in so that the project is completed as quickly as possible, and you can identify which activities are critical in the sense that any delay in that activity will cause
a delay to the whole project.
I noticed that there was some enriched category theory going on with these PERT charts. Patrons of the Café are probably familiar with the idea that metric spaces can be thought of as categories
enriched over $([0,\infty],\ge,+)$ the category of extended non-negative numbers. The triangle inequality for metric spaces then corresponds to the composition for enriched categories. The category
that is enriched over for PERT charts is $(\{-\infty\}\cup[0,\infty],\le,+)$ so the main difference is that the morphisms go the other way, this means that you get a reversed triangle inequality!
However, that makes sense in the scheduling context. I will explain how this gives categorical interpretations of terms like earliest start time, latest start time, float and critical path.
I have stressed the project management aspects here, but these ideas are also important, it seems, in scheduling processor jobs in parallel computing.
PERT graphs and project management
In project management, a project, for example renovating a house, will consist of various activities which need scheduling subject to various interdependencies between them. For instance, in the
house example, you can’t get a decorator to paper the walls until you’ve had a plasterer plaster them and it takes, say, ten day to plaster the walls. There are standard methods to approach the
scheduling of jobs. According to Wikipedia, the method of Program Evaluation and Review Technique (PERT) – closely associated to the critical path method –was introduced by the US Navy in the
mid-twentieth century during the Polaris missile development and had precursors in the management of the Manhattan Project.
The PERT method begins with you writing down every activity you need to complete, how long each activity will take and which activities need which others to be finished. So, ignorant of anything to
do with house renovation, let’s make up an example.
Label Activity Duration Requires
a Fix electrics 7 days
b Plaster walls 10 days a
c Lay carpets 3 days a
d Paper walls 12 days b
e Re-tile roof 15 days
Note that we’ve only put in the explicit dependencies, so that although papering the walls (activity d) also requires the fixing of electrics (activity a) to be finished we don’t put that in.
We add specific start and finish activities, which we will label S and F. Activity F has no duration and is explicitly dependent on everything which didn’t have anything dependent on it – you can’t
finish before everything is completed. Activity S has no duration, is not dependent on anything and any activity which is not already dependent on something is made dependent on S – nothing can begin
before the start.
Label Activity Duration Requires
S Start 0 days -
a Fix electrics 7 days S
b Plaster walls 10 days a
c Lay carpets 3 days a
d Paper walls 12 days b
e Re-tile roof 15 days S
F Finish 0 days d, c, e
We can now draw a graph of this data. We draw a node for each activity and and arrow for each explicit dependency. Each arrow is labelled by the amount of time after the first activity has started
that the second activity can start – here we are measuring time in days. This is the basis of what is called a PERT chart.
Those of you who have filled in grant applications may have been asked to provide a PERT chart. Now you know, more-or-less, what one is. Often they are also labelled with things such as ‘earliest
start time’, ‘latest start time’ and ‘float’, but we’ll come to those below.
Scheduling and copresheaves?
The goal of drawing up a PERT chart is to schedule the work, for instance book the contractors, so that the project finishes as quickly as possible. This means we wish to assign a start time $T(x)$
to each activity $x$ so that whenever we have activity $y$ with an explicit dependency on $x$ we have the scheduling constraint.
(1)$C(x,y)+T(x)\le T(y)$
where $C(x,y)$ is the label on the arrow in the above PERT chart, so $C(x,y)$ is how long after $x$ starts that $y$ can start.
Actually we can give ourselves such scheduling constraints (1) on the start times $T(x)$ and $T(y)$ for every pair $x$ and $y$, not just those with those with arrows between them, by setting $C(x,y)=
-\infty$ if there is no arrow from $x$ to $y$ in the graph, ie if there is no explicit dependency of $y$ on $x$. If we take $A$ to be our set of activities (including the start and finish activities)
and $V$ to be the set $\{-\infty\}\cup [0,\infty)$, then we have encoded the dependencies and times as a $V$-graph $C$, on the set $A$, this just means that we have a function
$C\colon A\times A\to V.$
I’ve obviously been thinking too much about enriched categories of late as the scheduling constraint (1) looks to me like a copresheaf condition. Remember that a copresheaf on an ordinary category $\
mathcal{A}$ is functor $T\colon \mathcal{A}\to Set$, so that means, of course, that to each object $x$ in $\mathcal{A}$ we associate a set $T(x)$ and to each pair of elements $x$ and $y$ we have a
function between sets $T\colon Hom_{\mathcal{A}}(x,y)\to Hom_{Set}(T(x),T(y))$, or to put it another way, we have a morphism in the category of sets
(2)$Hom_{\mathcal{A}}(x,y)\times T(x)\to T(y),$
where $(f,\alpha)\mapsto T(f)(\alpha)$. Compare this with the scheduling constraint (1).
Closed monoidal structure
To make this similarity into something more formal we can make $\{-\infty\}\cup[0,\infty)$ into a category by using $\le$ as a poset structure, so there is a unique morphism $a\to b$ if and only if
$a\le b$ and using $+$ as a monoidal structure. Note that this is the opposite to the category structure used on $[0,\infty]$ to define metric spaces as enriched categories – there we use $\ge$. This
means we can write the scheduling constraint (1) as the morphism
$C(x,y)\otimes T(x)\to T(y)$
which now looks more like the set copresheaf situation (2).
If we are enriching over a monoidal category $(\mathcal{V},\otimes)$ and want to talk about $\mathcal{V}$-functors into $\mathcal{V}$ then we will need to give $\mathcal{V}$ a $\mathcal{V}$-category
structure, thiat is to say we will need that $\mathcal{V}$ is a closed monoidal category. This means we need a functor $[[{-},{-}]]\colon \mathcal{V}\times \mathcal{V}\to \mathcal{V}$ such for each
$v\in \mathcal{V}$ the functor $v\otimes{-}$ is left adjoint to $[[v,{-}]]$. I’m using double brackets here not to confuse the internal hom $[[a,b]]$ with the closed interval $[a,b]$.
In our case this means we need
$a+b\le c \qquad\text{if and only if} \qquad a\le [[b,c]].$
In order to do this we will have to add a plus infinity, so we will define the category $\mathcal{R}$ to be $\{-\infty\}\cup[0,\infty]$ with $\le$ giving the morphisms and $+$ being the monoidal
structure. It is easy to say what the closed structure is on finite numbers, so for $a,b\in (-\infty,\infty)$ we define
$[[a,b]]=\begin{cases} b-a& a\le b,\\ -\infty& a\gt b\end{cases}$
I will often write $b-a$ instead of $[[a,b]]$ with the understanding that it means $-\infty$ if you get a negative answer.
Exercise: How do the monoidal and closed structures extend to include infinite numbers? Fill in the following tables, where $a$ and $b$ are finite. (I can give a hint if people need help getting
started.) $\begin{matrix} +& \color{red}{-\infty} &\color{red}{b}& \color{red}{+\infty} \\ \color{red} -\infty& -\infty&-\infty&?\\ \color{red} a &-\infty& a+b & +\infty\\ \color{red}+\infty&?&+\
infty&+\infty \end{matrix} \qquad\qquad\qquad\qquad \begin{matrix} [[,]]& \color{red}{-\infty} &\color{red}{b}& \color{red}{+\infty} \\ \color{red} -\infty& ?&?&?\\ \color{red} a &?& b-a & ?\\ \color
{red}+\infty&?&?&? \end{matrix}$
If I’d put negative numbers in as well (which I don’t think are necessary here) then I would have used ordinary subtraction between finite numbers. This is what Lawvere does when he puts this closed
monoidal structure on $[-\infty,\infty]$, the whole of the extended real numbers, in his entropy paper:
State Categories, Closed Categories, and the Existence Semi-Continuous Entropy Functions, (10 MB file!) IMA reprint 86 (1984).
He calls categories enriched over $([0,\infty],\le,+)$ ‘entropic categories’. I haven’t fully digested what use he makes of them in this paper.
Enriching over $\mathcal{R}$
We can now consider categories enriched over the closed monoidal category $\mathcal{R}=(\{-\infty\}\cup[0,\infty],\le,+)$. Let’s just remind ourselves what such a thing is. It consists of a set $A$
and a function $D\colon A\times A\to \{-\infty\}\cup[0,\infty]$ such that
• for all $x\in A$ we have $0\le D(x,x)$,
• for all $x,y,z\in A$ we have $D(x,y) +D(y,z)\le D(x,z)$.
This last condition looks bizarre as it is the triangle inequality the wrong way round! However, we will make sense of it in if we look at our scheduling example. Our PERT chart, illustrating the
explicit dependencies between events, can be thought of as an $\mathcal{R}$-graph on the set $A$, this just means it corresponds a function
$C\colon A\times A \to \{-\infty\}\cup[0,\infty]$
We can form the free $\mathcal{R}$-category on $A$ by defining
$D(x,y)\coloneqq max\{C(x,a_1)+C(a_1,a_2)+\dots +C(a_{n-1},a_n)+C(a_{n},y)\},$
where the maximum is taken over all the tuples $a_1,\dots, a_n\in A$ and over all $n\ge 0$. (Here max is the coproduct in $\mathcal{R}$.) This $\mathcal{R}$-category is now recording the implicit
dependencies. So for instance in the example above we didn’t record any explicit dependency between fixing the electrics (activity a) and finishing the project (activity $F$), so $C(a,F)=-\infty$ but
it is a consequence of the other dependencies that you need to fix the electrics at least twenty nine days before finishing the project, ie $D(a,F)=29$.
I hope that if we interpret $D(x,y)$ as the time lag between starting activity $x$ and starting activity $y$, you can now make sense of the reversed triangle inequality
$D(x,y) +D(y,z)\le D(x,z).$
The minimum time lag after starting $x$ to starting $z$ is at least as big as the minimum time lag from $x$ starting to $y$ starting plus the minimum time lag from $y$ starting to $z$ starting So we
can think of the hom-objects in an $\mathcal{R}$-category as being like minimum possible time lags rather than minimum possible distances.
Scheduling and copresheaves
We can now return to the scheduling question. We want to schedule the activities in our project subject to the dependencies between them. Here by scheduling an activity $x$ we will mean setting a
time $T(x)\in [0,\infty)$ relative to the start of the project, so that $T(S)=0$, subject to the explicit constraints, so for all activities $x,y\in A$
$C(x,y)\le T(y)-T(x).$
This means that $T$ is precisely an $\mathcal{R}$-graph function from out PERT chart to the underlying $\mathcal{R}$-graph of $\mathcal{R}$. This is corresponds exactly to an $\mathcal{R}$-functor
from the free $\mathcal{R}$-category $(A,D)$ to $\mathcal{R}$. This is just expressing the free-forgetful functor between $\mathcal{R}$-graphs and $\mathcal{R}$-categories:
$\mathcal{R}\text{-Grph}(A, U(\mathcal{R}))\cong \mathcal{R}\text{-Cat}(F(A),\mathcal{R}).$
In other words, scheduling the jobs means finding a copresheaf on the $\mathcal{R}$-category $(A,D)$ of implicit dependencies.
We can give some categorical interpretations of standard ideas in critical path analysis now.
What we really want to do is to finish the project as quickly as possible. The shortest amount of time the project will take to finish is the minimum time lag between the start and the finish, this
is just $D(S,F)$. Given that length of time, we want to know how much freedom we have in scheduling individual activities and whether delaying a specific activity will cause the whole project to be
delayed – will it matter if the plasterer doesn’t turn up on time or the walls take longer to dry out than expected. There are three basic things then to know about each activity: the earliest start
time, the latest start time, and the float.
• The earliest start time of an activity is hopefully self-explanatory. It is the minimum time lag between the start of the project and the start of the activity. In categorical terms it is just $D
(S,x)$ – the evaluation of the representable copresheaf. As functions we have that the earliest start time is a copresheaf $\text{earliest start time} =D(S,{-}).$
• The latest start time of an activity is the latest that an activity can start without delaying the whole project. The minimum time lag between the activity $x$ and the finish is $D(x,F)$, and if
the project is not delayed then the finish time is $D(S,F)$, thus the latest start time is $D(S,F)- D(x,F)$ or in internal hom terms $[[D(x,F),D(S,F)]]$. So the latest start time is the
copresheaf $\text{latest start time}=[[D({-},F),D(S,F)]].$
• The float of an activity is the difference between its earliest start time and its latest start time, and so reflects how much leeway you have in scheduling the activity. If the float is zero
then the activity is said to be critical. In our notation the float of activity $x$ is $(D(S,F)- D(x,F))-D(S,x)=D(S,F)-( D(x,F)+D(S,x) )$ or in the internal hom notation $[[D(S,x) + D(x,F) ,D
(S,F)]]$ To put it another way, the float is measuring how much the reverse triangle equality is violated for $x$ between $S$ and $F$.
A viable schedule $T$ will be one which finishes the project as soon as possible and satisfies the constraints so will be a copresheaf which satisfies
$D(S,{-})\le T({-}) \le[[D({-},F),D(S,F)]].$
I don’t know if that is a condition which crops up elsewhere in category theory.
A further important notion is that of critical path, these are the paths in the PERT graph from the start to the finish which consist of activities with float zero, so a delay of any activity on the
path will lead to a delay in the project. The path $S a b d F$ is critical in our example above. Everywhere on a critical path the triangle inequality from $S$ to $F$ is actually an equality, so
critical paths are the analogues of geodesics in metric spaces.
Further questions
All of this came out of me looking at a colleague’s course and thinking randomly “Hmm. That looks categorical.” Here are some random further questions.
1. Do other categorical notions have useful interpretations in this context? Is the notion of $\mathcal{R}$-functor between projects meaningful? How about the notion of $\mathcal{R}$-profunctor?
2. Is it useful to think of these things are ‘generalized metric spaces with negative distances’?
3. Do these ideas extend to other networks such as Petri nets or networks of resistors?
4. Does it help to think in terms of entropy as in Lawvere’s paper?
Posted at March 24, 2013 5:47 PM UTC
Re: Project Scheduling and Copresheaves
[Note: Tim emailed me this straight after a skim reading of the post and before drinking his morning coffee, so I take responsibility for posting this here. Simon.]
I liked the post. I used to teach critical path analysis, and then go on to discrete dynamical systems and (times) Petri nets. These both have strong categorical flavour.
Have you looked at the paper:
Casley, R.T., Crew, R.F., Meseguer, J., and Pratt, V.R., Temporal Structures, Proc. Category Theory and Computer Science 1989, ed. D. Pitt et al, LNCS 389, 21-51, Springer-Verlag, 1989. Revised
journal version in Mathematical Structures in Computer Science, Volume 1:2, 179-213, July 1991. (The Journal version is good preprint pdf.)
(I think that is the right one of Pratt’s work. He has written a lot that is discursive so it is difficult to find which bit it is in.)
There are links between discrete dynamical systems approaches to scheduling and tropical algebra. I used to get students doing projects on using the Max+ / tropical algebra and linear algebra to
attack fairly real life scheduling problems. That was good fun both for the students and for me! I learnt a lot!
The Petri nets aspect was explored by Meseguer, as a Petri net looks like a presentation of an symmetric monoidal category. Your post suggests that there may be an enriched version of his result for
times Petri nets. (I had meant to explore that with students but the occasion never came, and the Bangor department was closed which made things impossible.) Meseguer’s papers include
José Meseguer, Ugo Montanari: Petri Nets Are Monoids: A New Algebraic Foundation for Net Theory. LICS 1988: 155-164
and another one on Petri nets and Linear Logic from about the same time. See Meseguer’s papers at dblp.
Posted by: Tim Porter on March 25, 2013 9:19 AM | Permalink | Reply to this
Re: Project Scheduling and Copresheaves
Tim, thanks for these. I’ll have a look.
Last week I got hold of a copy of
Max Plus at Work: Modeling and Analysis of Synchronized Systems: A Course on Max-Plus Algebra and Its Applications (Princeton Series in Applied Mathematics) Bernd Heidergott, Geert Jan Olsder,
Jacob van der Woude.
They seem to take train scheduling as a key motivating example. I’m looking forward to chugging through it.
Posted by: Simon Willerton on March 26, 2013 5:40 PM | Permalink | Reply to this
Re: Project Scheduling and Copresheaves
As I said before, I found it a fun area to teach. Somehow it seems that you can persuade the students that the calculations and theory are (i) useful (ii) not too difficult as they mirror ordinary
linear algebra, and (iii) the methods reveal a lot about the algebra that they use almost automatically since here if you try to go on `automatic pilot’ you start subtracting yet that is clearly a
silly thing to do.
I would be great to see this introduced into a lot of undergraduate courses. (I usually used Stephan Gaubert’s notes as a basis. I put a link on the n-Lab…. I am also convinced that eventually the
homotopical algebra of related things will become very important.)
Posted by: Tim Porter on March 29, 2013 7:05 AM | Permalink | Reply to this
Re: Project Scheduling and Copresheaves
Here’s a link to the notes of Gaubert on discrete event systems: Introduction aux Systèmes Dynamiques à Événements Discrets.
I’ve realized that the PERT graphs above fall into the timed Petri net net. In fact they fall into the smaller net of timed event graphs.
Tim, you said to me (in an email) that you have been “trying to push the idea of modelling complex systems of various types by enriched categories for some time”. Is there somewhere where I can read
about this, or have you just been doing it verbally?
Posted by: Simon Willerton on April 8, 2013 11:55 AM | Permalink | Reply to this
Re: Project Scheduling and Copresheaves
Mostly verbally!! I did prepare some draft EPSRC proposals but then it became clear that I needed a publication trail in such areas. I had talked about enriching shape theory in something like that
way, way back in the 1980s, and also had published some stuff on using Chu spaces in their more general form (i.e. with a richer set of `truth values’) for handling observations. This was then
further pushed forward with the project work the students had done on Petri nets etc., but then time ran out as it was clear that Bangor was going to be shut down, and our work load was going up. I
talked to the then computer scientists in Bangor at the time, but then they left!
I never felt that the work would get published so left it aside. The ideas were not ripe at that time and there was not the interest in them that there might be now.
Posted by: Tim Porter on April 9, 2013 6:33 AM | Permalink | Reply to this
Re: Project Scheduling and Copresheaves
Of course, as soon as I saw my e-mail I saw a typo, (is that a version of Sod’s law?). It should be `(timed) Petri nets’, not (times)!
Posted by: Tim Porter on March 25, 2013 10:34 AM | Permalink | Reply to this
Re: Project Scheduling and Copresheaves
Posted by: Mike Shulman on March 25, 2013 12:43 PM | Permalink | Reply to this
Re: Project Scheduling and Copresheaves
Good question. I guess one answer is that I have been thinking too much about presheaves and copresheaves recently, so forgot that copresheaf might sound exotic to some (or perhaps, to most).
My emphasis on the word ‘copresheaf’ rather than ‘functor’ is due to thinking of copresheaves as being akin to functionals (ie scalar-valued functions) and functors as being akin to more general
functions – I’m reckoning that functionals are the most basic kind of function. I consider the enriching category $\mathcal{V}$ to be the ‘scalars’, so a copresheaf on $\mathcal{C}$ is just a
scalar-valued functor $\mathcal{C}\to {V}$.
In retrospect ‘functor’ might have been a less scary word to use than ‘copresheaf’, but would miss the specificity.
Posted by: Simon Willerton on March 25, 2013 4:19 PM | Permalink | Reply to this
Re: Project Scheduling and Copresheaves
So can
$D(x,y)\coloneqq max\{C(x,a_1)+C(a_1,a_2)+\dots +C(a_{n-1},a_n)+C(a_{n},y)\}$
be thought of as a path integral? As in matrix mechanics for any rig, it seems there is ‘multiplication’ (here addition) along a path and summation (here max) across paths.
Posted by: David Corfield on March 26, 2013 9:43 AM | Permalink | Reply to this
Re: Project Scheduling and Copresheaves
Yes. You can think of it like that. It is indeed a ‘sum’ over all ‘paths’ of a certain ‘product’.
You should compare it with the metric graph case if you haven’t already. Suppose you have a graph with edges marked with a given length, so if $u$ and $v$ are vertices at either end of an edge then
there is a length $L(u,v)\in [0,\infty)$. We can then get a metric on the vertex set $Vert$ by the ‘shortest path metric’. This is a free $[0,\infty]$-category by the analogous construction to that
which I did for PERT graphs: I will spell that out.
We can extend the edge lengths to a function $L\colon Vert \times Vert \to [0,\infty]$, ie a $[0,\infty]$-grap, by $L(u,v)\coloneqq \infty$ if $u$ and $v$ do not have an edge between them.
We think of $[0,\infty]$ as a monoidal catgory where we have a morphism $a\to b$ if and only if $a\ge b$ and the monoidal product is addition $+$. The coproduct for this is min. So the free $[0,\
infty]$-category $(Vert,\;d\colon Vert\times Vert \to [0,\infty])$ on the $[0,\infty]$-graph $(Vert,\; L\colon Vert\times Vert \to [0,\infty])$ is the shortest path metric:
$d(u,v)\coloneqq min\{L(u,a_1)+L(a_1,a_2)+\dots +L(a_{n-1},a_n)+L(a_{n},v)\}.$
This is again a ‘path integral’ in the sense you describe it.
Posted by: Simon Willerton on March 26, 2013 3:50 PM | Permalink | Reply to this
Re: Project Scheduling and Copresheaves
is very much like your bakery example of optimal transport
$\phi(Hovis)\gt \phi(Warburtons)+d(Warburtons,Hovis).$
Posted by: David Corfield on March 26, 2013 10:34 AM | Permalink | Reply to this
Re: Project Scheduling and Copresheaves
Indeed, it is very much like the bakery example. The condition I want satisfied there is the negation of the inequality you give, so I want
$d(Warburtons, Hovis)+ \phi(Warburtons) \ge \phi(Hovis).$
Thus $\phi$ is a copresheaf on the metric space of bakeries. I seemed to have claimed back then that it was a presheaf, but I think I got the variance wrong. I’ll talk you through this a bit further
Why do I get the opposite inequality here to the scheduling case where I claimed that a copresheaf is something which satisfies
$C(x,y)+ T(x)\le T(y)?$
That’s because PERT graphs are related to categories enriched over a poset of extended non-negative real numbers with $\le$ and metric spaces are categories enriched over a poset of extended
non-negative reals with $\ge$.
Let’s remind ourselves what a copresheaf is. Suppose $\mathcal{V}$ is a closed monoidal category. For $\mathcal{C}$ a $\mathcal{V}$-enriched category a presheaf on $\mathcal{C}$ is a $\mathcal{V}$
-functor $T\colon \mathcal{C}\to \mathcal{V}$. This means that it consists of a function on objects $T\colon ob(\mathcal{C})\to ob(\mathcal{V})$, and for each pair of objects $x,y \in ob({C})$ there
is a morphism in $\mathcal{V}$
$Hom_{\mathcal{C}}(x,y) \to Hom_{\mathcal{V}}(T(x),T(y)).$
Here $Hom_{\mathcal{V}}$ means the internal hom in $\mathcal{V}$. Because $\mathcal{V}$ is closed we have the hom-tensor adjunction in $\mathcal{V}$ so the morphism above is equivalent to a morphism
in $\mathcal{V}$
$Hom_{\mathcal{C}}(x,y)\otimes T(x) \to T(y).$
If we are taking $\mathcal{V}$ a poset of real numbers with addition as the monoidal structure, then the $\otimes$ becomes a $+$ and the existence of a morphism in $\mathcal{V}$ of the form $a\to b$
just means $a \ge b$ or $a\le b$ depending on which partial order on the reals we have taken.
In the $\le$ case the copresheaf condition is
$Hom_{\mathcal{C}}(x,y) + T(x) \le T(y),$
which is what we see for PERT graphs.
On the other hand, in the $\ge$ case (ie the case relevant for metric spaces) the copresheaf condition is
$Hom_{\mathcal{C}}(x,y) + T(x) \ge T(y),$
which is what we saw for the bakeries.
Posted by: Simon Willerton on April 1, 2013 6:41 PM | Permalink | Reply to this
Re: Project Scheduling and Copresheaves
What I always dreamed of was a nice piece of shiny machinery (matrix mechanics) with a dial for you to choose your rig, and off it would go pumping out results on classical, quantum, statistical
mechanics, homotopy theory, and now maybe optimal transport and project scheduling.
There was also the hope that rig morphisms would allow us to relate these areas, e.g., if a classical particle can get from A to B in a space, it means A and B are path connected.
Posted by: David Corfield on April 2, 2013 1:57 PM | Permalink | Reply to this
Re: Project Scheduling and Copresheaves
An interesting question in this regard is to whether or not a given rig $R$ can be categorified in the sense of finding a monoidal category whose set of (isomorphism class) of objects in the
underlying set of $R$, whose monoidal product on objects becomes the product in the rig and whose categorical coproduct becomes the addition in the rig.
By the above we can do that for $(\mathbb{R},max,+)$ and $(\mathbb{R},min,+)$. We can also do that for $(\mathbb{N},+,\times)$ by taking the category of finite sets with the monoidal product being
the cartesian product. But can we categorify $([0,\infty],+,\times)$?? That would allow us to do proper(ish) path integrals in this enriched setting.
Posted by: Simon Willerton on April 2, 2013 2:48 PM | Permalink | Reply to this
Re: Project Scheduling and Copresheaves
We had plenty of discussion over the years of using tame groupoids with obvious sum and product as a categorification of the non-negative reals. Of course, any given real will have many different
representations as a groupoid cardinality.
We even wondered about a ‘distance’ between groupoids.
Posted by: David Corfield on April 2, 2013 3:10 PM | Permalink | Reply to this
Re: Project Scheduling and Copresheaves
David wrote:
What I always dreamed of was a nice piece of shiny machinery (matrix mechanics) with a dial for you to choose your rig, and off it would go pumping out results on classical, quantum, statistical
mechanics, homotopy theory, and now maybe optimal transport and project scheduling.
We basically came up with it; the links you assembled explain it. But I guess we’d have to write things up a bit more formally, preferably all in one place, for people to see that this machinery
really exists.
When I saw Simon in Sheffield last week, he showed me the book Max Plus at Work, a rather down-to-earth introduction to the max-plus rig and its applications to train scheduling. It has a nice
section on ‘timed Petri nets’ which, I realized, are just like the ‘stochastic Petri nets’ I love except that they use the max-plus rig instead of the rig of nonnegative reals with their ordinary
addition and multiplication. So, chemical reaction theory is a deformation of train scheduling!
I wanted to write a paper explaining this then and there, and Simon sounded interested, so maybe we’ll do it. But he seems a bit more interested in enriching categories over different monoidal
categories, while I seem a bit more interested in linear algebra over different rigs (or rig categories). So we may not be seeing eye-to-eye yet. They should be closely related, of course.
Posted by: John Baez on April 5, 2013 2:02 AM | Permalink | Reply to this
Re: Project Scheduling and Copresheaves
…chemical reaction theory is a deformation of train scheduling!
I wanted to write a paper explaining this then and there…
It would be great to revisit the ideas we all had in this area. No doubt they’re quite scattered. I just came across Urs talking about path length and path integral.
Posted by: David Corfield on April 5, 2013 8:34 AM | Permalink | Reply to this
Re: Project Scheduling and Copresheaves
John, I do like your overstated, as opposed to British understated, comments.
I wanted to write a paper […] then and there.
Ah, I didn’t realize.
There’s a link to the Max Plus at Work book above – the link points to the publisher rather than to Amazon as I’m trying to avoid Amazon at the moment due to their anti-social tax avoidance methods.
(Note: there is a certain amount of British understatement in that last sentence.)
Posted by: Simon Willerton on April 9, 2013 11:40 AM | Permalink | Reply to this
|
{"url":"http://golem.ph.utexas.edu/category/2013/03/project_planning_parallel_proc.html","timestamp":"2014-04-20T03:12:07Z","content_type":null,"content_length":"119616","record_id":"<urn:uuid:3ede97c4-952e-408a-b8cb-c56e50e861a7>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00129-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Got Homework?
Connect with other students for help. It's a free community.
• across
MIT Grad Student
Online now
• laura*
Helped 1,000 students
Online now
• Hero
College Math Guru
Online now
Here's the question you clicked on:
Best Response
You've already chosen the best response.
that function is not a odd function, so a_n wont be zero ( nor will a_0) to continue with b_n, use integration by parts
Best Response
You've already chosen the best response.
Isn't 2x an odd function?
Best Response
You've already chosen the best response.
i've got: \[\frac{ 2 }{ \pi }\int\limits_{0}^{\frac{ \pi }{ 4}}2x \sin (2nx)dx\].. are you sure your process?
Best Response
You've already chosen the best response.
as I said, I don't know anything about this. How did you get 2nx? :o
Best Response
You've already chosen the best response.
What you have done seems perfectly ok. Here's the result after summing 500 terms.
Best Response
You've already chosen the best response.
@Jone133 To proceed, start from the last line of your work, \( \large b_n = \frac{\pi}{2}\int_0^\frac{\pi}{4}2x sin(nx) dx \) Integrate to get an algebraic expression of bn. You will need to
integrate by parts. Evaluate between the limits 0 and \(\pi/4\) to get the following: \[ \large b_n =\frac{4\,\mathrm{sin}\left( \frac{\pi \,n}{4}\right) -\pi \,n\,\mathrm{cos}\left( \frac{\pi
\,n}{4}\right) }{\pi \,{n}^{2}}\] which is basically your answer. The required function is then \[ f(x)=\sum_1^\infty b_n sin(nx)\] The previous post shows an image after summing the first 500
Best Response
You've already chosen the best response.
A medal for you, kind sir :) I was thinking, do I have to evaluate between \[0 \to \pi/4\] Or \[0 \to \pi\] ? I did it for 0 to pi 1st, going to 0 to pi/4 now
Best Response
You've already chosen the best response.
From 0 to pi/4 would be on the right track, otherwise it would not result in the right shape.
Best Response
You've already chosen the best response.
I can't thank you enough :D Now I have the question: "Indicate the value of the series found on the previous question, on the points: -pi/4 and 9pi/5" Do I substitute n or x? :o
Best Response
You've already chosen the best response.
The answer should be 0, because the series is supposed to have a cycle of pi. But it is not zero. The series is -pi/2 at t=-pi/4, since we only had the sine components. I believe we need to work
out both the sine and cosine components, because the given pattern is neither even nor odd, as pointed out previously by UncleRhaukus. Also, the standard period of Fourier series is 2pi. Think
there is a transformation to be done here as well. So while your solution gives the right shape for the interval [0,pi], it is not the correct periodic function, thanks to the second part. Can
you see if you can do the following: 1. map the period [0,pi] to [0,2pi] so that we can have the correct periodic function. 2. include a0, an and bn in the calculations. See what you get. If I
get some results, I will post here as well. I won't be able to work on it until later today. So you can get started in the mean time.
Your question is ready. Sign up for free to start getting answers.
is replying to Can someone tell me what button the professor is hitting...
• Teamwork 19 Teammate
• Problem Solving 19 Hero
• Engagement 19 Mad Hatter
• You have blocked this person.
• ✔ You're a fan Checking fan status...
Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy.
This is the testimonial you wrote.
You haven't written a testimonial for Owlfred.
|
{"url":"http://openstudy.com/updates/50f1e0cae4b0abb3d86fb7d4","timestamp":"2014-04-19T02:25:45Z","content_type":null,"content_length":"52043","record_id":"<urn:uuid:f7619648-53ac-4d41-8cc7-53f425207be5>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00106-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Next: Generalized RQ factorization Up: Generalized Orthogonal Factorizations Previous: Generalized Orthogonal Factorizations
The generalized QR (GQR) factorization of an n-by-m matrix A and an n-by-p matrix B is given by the pair of factorizations
where Q and Z are respectively n-by-n and p-by-p orthogonal matrices (or unitary matrices if A and B are complex). R has the form
where T has the form
Note that if B is square and nonsingular, the GQR factorization of A and B implicitly gives the QR factorization of the matrix
without explicitly computing the matrix inverse
The routine PxGGQRF computes the GQR factorization by computing first the QR factorization of A and then the RQ factorization of Q and Z can be formed explicitly or can be used just to multiply
another given matrix in the same way as the orthogonal (or unitary) matrix in the QR factorization (see section 3.3.2).
The GQR factorization was introduced in [73, 100]. The implementation of the GQR factorization here follows that in [5]. Further generalizations of the GQR factorization can be found in [36].
Susan Blackford
Tue May 13 09:21:01 EDT 1997
|
{"url":"http://netlib.org/scalapack/slug/node59.html","timestamp":"2014-04-20T13:18:52Z","content_type":null,"content_length":"4860","record_id":"<urn:uuid:d4e5a2c8-ad63-4558-bbd5-e786384d9b25>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00583-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Literal Equation Word Problems
Submit your word problem on literal equations:
Are you looking for literal equations word problems? TuLyn is the right place. We have tens of word problems on literal equations and hundreds on other math topics.
Below is the list of all word problems we have on literal equations.
Literal Equations Word Problems
Find word problems and solutions on equations.
|
{"url":"http://tulyn.com/wordproblems/literal_equations.htm","timestamp":"2014-04-20T06:01:14Z","content_type":null,"content_length":"12279","record_id":"<urn:uuid:2ad1fadf-04a1-4a13-aae8-7434176952a5>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00246-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Collision Detection
Author Collision Detection
Hi guys. I have recently began to learn Java at university and the below code is a snippet from a weekly task that is soon due. Just looking for some help towards acquiring an answer (no
Joined: spoon feeding). Basically we need to write two methods. One to find the distance between the objects in the game (circles) which I have already done correctly I believe. However I'm not
Mar 12, sure how to implement the 'collision' method. The 'GameObject' is a separate class from the class below. Basically, the game has a playerObject (circle) that moves around with the keys and
2013 if it touches any circle in the GameObject[] barriers the method returns false (or true?). Please any help would really be appreciated.
Posts: 8
Oct 13, Welcome to the Ranch
Posts: I shall move this thread to the GUIs forum.
36478 Use the Math.hypot method for that calculation. One way you can do it is to see whether the separation is less than the radius of the objects, but that may only work for circles. I think
many classes in the 2D package have methods for whether they intersect.
Hi thanks for the reply.
Joined: Could some please check if i have written the two below methods correctly? The playerObject is at least detecting the 'barrier' object now. However, sometimes the two don't quite touch
Mar 12, each-other. The jUnit tests are also failing with the expected result being 6.844 but I'm getting a result of 9.844.
2013 I can't seem to find the error. (Note that the 'playerObject' and the array of objects are all circles).
Posts: 8
subject: Collision Detection
|
{"url":"http://www.coderanch.com/t/607139/GUI/java/Collision-Detection","timestamp":"2014-04-17T21:40:39Z","content_type":null,"content_length":"25617","record_id":"<urn:uuid:ed736790-a412-452d-9917-ad22b08af6fc>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00454-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Largest ever Mersenne prime number discovered, has millions of digits
Prime numbers have always fascinated mathematicians. They pepper the natural numbers with a disconcerting degree of unpredictability, yet are distributed quite evenly. So much of mathematics finds
surprising and beautiful connections between seemingly disparate concepts. There is a reassuring recurrence of patterns. This is not so with the primes.
Number enthusiasts have long sought underlying rules to explain why certain numbers are prime. Two different approaches go hand in hand — proving that a given number is prime, and predicting patterns
to find undiscovered primes.
The Sieve of Eratosthenes exemplifies an early attempt. Picture the natural numbers (any number you can use to count objects) ordered on a line. The first number is one. Recalling the definition of a
prime number (a natural number greater than one with no factors besides itself and one), we see that one itself is off the table. Beginning with two, we’ve already found our first prime! Indeed, it
meets all of the specifications. We also now know a lot of numbers that aren’t prime — that is, all multiples of two.
With hardly any work at all, we can cross out half of the numbers we were considering. The next number that isn’t crossed out is three — another prime! After crossing out every third number, the next
unmarked number is five, again prime. If you haven’t caught the pattern, as you work your way along the number line, any unmarked number you come across will be prime, because if it had any factors,
you would have crossed it out along with them already.
The Sieve of Erastothenes is a fairly efficient algorithm to find all the primes in the range of the low hundreds, but it’s easy to see that without the aid of a computer it is impractical for
anything larger. It also can’t just tell you if any given number is prime without running through the whole process.
Exploiting modular arithmetic, Fermat’s Little (but not his last) Theorem can be used to prove that a given number is composite (not prime) or likely to be prime. Unfortunately, some composite
numbers can pass this and other primality tests. These numbers are called psuedoprime.
Some mathematicians noticed that (2^n)-1 tends to be prime when n is prime. Initially, it was thought this worked for all primes. Hudalricus Regius busted that myth in 1536 when he found that (2^11)
-1 = 2047 = 23*89. Marin Mersenne, a french monk, postulated in 1644 that it was true for n = 2, 3, 5, 7, 13, 17, 19, 31, 67, 127 and 257, and that for n < 257, all the rest result in composites. He
wasn’t completely right (the complete list is n = 2, 3, 5, 7, 13, 17, 19, 31, 61, 89, 107 and 127), but he still got his name on primes of the form (2^n)-1.
With the advent of high-performance electronic computing in the second half of the twentieth century, researchers began using computers to find increasingly large primes. It is no accident that the
largest primes discovered are all Mersenne primes. This is because they are the easiest large numbers to prove prime.
The GIMPS (Great Internet Mersenne Prime Search) is a distributed computing project that has been working to find new primes since 1996. In its 17 years, GIMPS has found all 14 of the largest
mersenne primes. According to mersenne.org, the system can handle up to 150 trillion calculations per second on 360,000 CPUs. Volunteers can contribute to the project by installing the GIMPS software
on their personal computers. GIMPS offers cash rewards of $3,000 to the lucky finders of new primes. In addition, the Electronic Frontier Foundation is offering a $150,000 bounty for the first prime
with at least 100 million digits and $250,000 for at least one billion digits.
Dr. Curtis Cooper, of the University of Central Missouri, recently found his third record-breaking prime. On January 25, Curtis confirmed that (2^57,885,161)-1 was the 48th mersenne prime. It has
17,425,170 digits. The proof took 39 days to compute. He is eligible for a $3,000 award from GIMPS.
|
{"url":"http://www.calvin.edu/chimes/2013/02/15/largest-ever-mersenne-prime-number-discovered-has-millions-of-digits/","timestamp":"2014-04-20T22:29:43Z","content_type":null,"content_length":"32861","record_id":"<urn:uuid:44867439-de54-478a-8810-e9ae8ef135c1>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00138-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Single Variable Subtraction Equations
7.8: Single Variable Subtraction Equations
Difficulty Level:
Created by: CK-12
Practice Single Variable Subtraction Equations
Remember Marc and his lunch?
In the last Concept, you figured out that Marc spent $6.15 on lunch. After Marc had paid for his lunch, he bought an ice cream cone. It cost $3.15. If Marc received $5.85 back as change, how much did
he give the cashier?
You can write a single variable subtraction equation to represent this dilemma.
Solving the equation will give you a solution to the problem presented.
Pay attention and you will learn how to do this in this Concept.
To solve an equation in which a number is subtracted from a variable, we can use the inverse of subtraction––addition. We can add that number to both sides of the equation to solve it.
You can think about this as working backwards from the operation. If we have a problem with addition, we subtract. If we have a problem with subtraction, we add.
We must add the number to both sides of the equation because of the Addition Property of Equality , which states:
if $a=b$$a+c=b+c$
So, if you add a number, $c$$c$
Let's apply this information to a problem.
Solve for $a \ a-15=18$
In the equation, 15 is subtracted from $a$ add 15 to both sides of the equation to solve for $a$
$a-15 &= 18\\a-15+15 &= 18+15\\a+(-15+15) &= 33\\a+0 &= 33\\a &= 33$
Notice how we rewrote the subtraction as adding a negative integer.
The value of $a$
Here is another one.
Solve for $k \ k-\frac{1}{3}=\frac{2}{3}$
In the equation, $\frac{1}{3}$ subtracted from $k$ add $\frac{1}{3}$$k$
$k-\frac{1}{3} &= \frac{2}{3}\\k-\frac{1}{3}+\frac{1}{3} &= \frac{2}{3}+\frac{1}{3}\\k+\left(-\frac{1}{3}+\frac{1}{3}\right) &= \frac{3}{3}\\k+0 &= \frac{3}{3}\\k &= \frac{3}{3}=1$
The value of $k$
Again, we are using a property. The Subtraction Property of Equality states that as long as you subtract the same quantity to both sides of an equation, that the equation will remain equal.
Each of these properties makes use of an inverse operation. If the operation in the equation is addition, then you use the Subtraction Property of Equality. If the operation in the equation is
subtraction, then you use the Addition Property of Equality.
Solve each equation.
Example A
Solution: $66$
Example B
Solution: $6.9$
Example C
Solution: $\frac{3}{4}$
Here is the original problem once again.
In the last Concept, you figured out that Marc spent $6.15 on lunch. After Marc had paid for his lunch, he bought an ice cream cone. It cost $3.15. If Marc received $5.85 back as change, how much did
he give the cashier?
You can write a single variable subtraction equation to represent this dilemma.
Solving the equation will give you a solution to the problem presented.
First, let's write the equation.
Our unknown is the amount of money Marc gave the cashier. Let's call that $x$
Then we know that the ice cream cone cost $3.15.
$x - 3.15$
Marc received $5.85 in change.
$x - 3.15 = 5.85$
Now we can solve this equation.
$x = 5.85 + 3.15$
$x = 9.00$
This is our answer.
Here are the vocabulary words in this Concept.
Isolate the variable
an explanation used to describe the action of getting the variable alone on one side of the equal sign.
Inverse Operation
the opposite operation
Subtraction Property of Equality
states that you can subtract the same quantity from both sides of an equation and have the equation still balance.
Addition Property of Equality
states that you can add the same quantity to both sides of an equation and have the equation still balance.
Guided Practice
Here is one for you to try on your own.
Harry earned $19.50 this week. That is $6.50 less than he earned last week.
a. Write an equation to represent $m$
b. Determine how much money Harry earned last week.
Consider part $a$
Use a number, an operation sign, a variable, or an equal sign to represent each part of the problem.
$& \text{Harry earned} \ \underline{\19.50 \ \text{this week}}. \ \text{That} \ \underline{\text{is}} \ \underline{\6.50} \ \underline{\text{less than}} \ldots \underline{\text{last week}.}\\& \qquad
\qquad \qquad \qquad \quad \downarrow \qquad \qquad \qquad \ \ \downarrow \quad \Box \quad \quad \downarrow \qquad \qquad \qquad \Box\\& \qquad \qquad \qquad \qquad \quad \downarrow \qquad \qquad \
qquad \ \ \downarrow \quad \Box \quad \quad \downarrow \qquad \qquad \qquad \Box\\& \qquad \qquad \qquad \qquad \quad \downarrow \qquad \qquad \qquad \ \ \downarrow \quad \Box \quad \quad \downarrow
\qquad \qquad \qquad \Box\\& \qquad \qquad \qquad \qquad \ 19.50 \qquad \qquad \quad \ \ = \quad m \quad \quad - \qquad \qquad \quad 6.50$
This equation, $19.50=m-6.50$$m$
Next, consider part $b$
Solve the equation to find the number of blue tiles in the bag.
$19.50 &= m-6.50\\19.50+6.50 &= m-6.50+6.50\\26.00 &= m+(-6.50+6.50)\\26 &= m+0\\26 &= m$
Harry earned $26.00 last week.
Video Review
Here is a video for review.
- This is a James Sousa video on solving single variable subtraction equations.
Directions: Solve each single-variable subtraction equation.
1. $x-8=9$
2. $x-18=29$
3. $a-9=29$
4. $a-4=30$
5. $b-14=27$
6. $b-13=50$
7. $y-23=57$
8. $y-15=27$
9. $x-9=32$
10. $c-19=32$
11. $x-1.9=3.2$
12. $y-2.9=4.5$
13. $c-6.7=8.9$
14. $c-1.23=3.54$
15. $c-5.67=8.97$
Files can only be attached to the latest version of Modality
|
{"url":"http://www.ck12.org/book/CK-12-Middle-School-Math-Concepts---Grade-7/r4/section/7.8/anchor-content","timestamp":"2014-04-17T16:58:02Z","content_type":null,"content_length":"140109","record_id":"<urn:uuid:879acc1a-a5b9-4d5a-b8ea-6ee89aedc258>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00613-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Dynamic Fuzzy Logic-Ant Colony System-Based
Applied Computational Intelligence and Soft Computing
Volume 2010 (2010), Article ID 428270, 13 pages
Research Article
Dynamic Fuzzy Logic-Ant Colony System-Based Route Selection System
^1Department of Electrical Engineering, Shahid Bahonar University of Kerman, Kerman 76169-133, Iran
^2International Center for Science, High Technology, and Environmental Sciences, Kerman, Iran
^3Advanced Communications Research Institute, Sharif University of Technology, Tehran, Iran
Received 8 June 2010; Accepted 23 August 2010
Academic Editor: Hani Hagras
Copyright © 2010 Hojjat Salehinejad and Siamak Talebi. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and
reproduction in any medium, provided the original work is properly cited.
Route selection in metropolises based on specific desires is a major problem for city travelers as well as a challenging demand of car navigation systems. This paper introduces a multiparameter route
selection system which employs fuzzy logic (FL) for local pheromone updating of an ant colony system (ACS) in detection of optimum multiparameter direction between two desired points, origin and
destination (O/D). The importance rates of parameters such as path length and traffic are adjustable by the user. In this system, online traffic data are supplied directly by a traffic control center
(TCC) and further minutes traffic data are predicted by employing artificial neural networks (ANNs). The proposed system is simulated on a region of London, United Kingdom, and the results are
1. Introduction
In route selection problems, typically a pair of origin and destination (O/D) is given, while there are many possible routes for selection. The objective is to find a route with the least cost, based
on the costs calculated for different possible directions.
During the past years, many researchers have proposed methods for optimum route selection by considering some important parameters for city travelers. Some recent navigation systems have embedded
algorithms that attempt to minimize journey distance and/or travel time. However, many drivers are now becoming increasingly concerned with rising fuel costs, waste of time in traffic congestions,
and pollutant emissions.
Finding the shortest route between a pair of points is an NP-hard problem that requires enumerating all the possible routes. In addition, most users nowadays not only need routes with the shortest
distance, but also require routes which can satisfy their other desires. Such users mostly need safe, low-traffic, and scenic routes with the fewest number of junctions to avoid traffic lights. The
published research papers in the literature have not addressed a fast, dynamic, low-complicated, and practical system that can be available almost anywhere and satisfy all these desires by comprising
all important parameters.
The proposed fuzzy logic-ant colony system (FLACS) in this paper introduces a new dynamic multiparameter vehicle navigation system that satisfies the above claims. This system uses a combination of
fuzzy logic (FL) and ant colony system (ACS) algorithm in order to find an optimum multiparameter route between a pair of O/D. An optimum route refers to a route that attempts to satisfy all desired
parameters of a user. These parameters are “Distance,” “Traffic,” and “Incident Risk.” The set of parameters can also be extended by adding “Width” (number of lanes), “Quality” (medical treatment
facilities, entertainments, and etc.), and number of “Traffic Lights” parameters. In this system, current traffic data are supplied by a traffic control center (TCC) and artificial neural networks
(ANNs) are employed for traffic data estimation of coming minutes. As user might not like the first selected route, the proposed system is capable of considering previously selected routes by user
and providing a ranking set of feasible routes.
The remainder of this paper is organized as follows. The next section reviews some route selection-related works. A brief review on basic principles of ACS as well as FL and ANN is presented in
Section 3. Section 4 demonstrates details of the proposed system and simulation results are discussed in Section 5. Finally, the paper is concluded in Section 6.
2. Related Route Selection Works
Barth et al. in [1] have developed some environmentally friendly navigation techniques that focus on minimizing energy consumption and pollutant emissions. These methods combine sophisticated mobile
source energy and emission models with route minimization algorithms that are used for navigational purposes. These methods have been applied on several case studies in southern California, USA.
Authors of [2] have emphasized on more important parameters such as number of traffic lights, right turns, and stop signs for route selection. Although route planning has been widely studied, most of
the available applications are primarily targeted at finding the shortest travel time or the shortest path routes, which is insufficient for dynamic route planning in real life scenario [3]. The
approach in [4] provides an optimal route planning by making efficient use of underlying geometrical structure. It combines classical artificial intelligence exploration with computational geometry.
Given a set of global positioning system (GPS) trajectories, the input is refined by geometric filtering and rounding algorithms. This method has some computational complexities as well as being
depended on the GPS to maintain optimal route planning.
In [5], an approach combining offline precomputation of optimal candidate paths with online path retrieval and dynamic adaptation is proposed. Based on a static traffic data file, a partially
disjoint candidate path set is constructed prior to the trip using a heuristic link weight increment method. This method satisfies reasonable path constraints that meet the driver preferences as well
as alternative path constraints that limit the joint failure probability for candidate paths. This algorithm is tested on randomly generated road networks [5]. Routing vehicles based on real-time
traffic conditions has presented significant reduction of travel time and, hence, cost in high volume traffic situations. The authors in [6] model the dynamic route determination problem as a Markov
Decision Process (MDP) and present procedures for identifying traffic data that have no decision making value. These methods are examined based on actual data collected on a route network in
southeast Michigan, USA [6]. In [7], historical data and experiences are used to decide an optimum route based on the analysis of unblocked reliability and the circuitous length. This can provide
reasonable route guidance to the traveler, even without real-time traffic information. In [8], a novel concept of an intelligent navigator that can give a driver timely advice on safe and efficient
driving is presented. From both the current traffic conditions obtained from visual data and the driver goals and preferences in driving, it autonomously generates advice and presents it to the
driver. Two main components of this intelligent navigator are the advice generation system and the road scene recognition system.
The FL system is a popular and powerful tool implemented by researchers for optimum route selection [9–11]. Teodorovic and Kikuchi have applied FL methodology for the first time in route selection [
11]. Their proposed method only considers the travel time parameter and cannot be easily generalized to multiple routes [9, 10]. The proposed hybrid evolutionary algorithm for solving the dynamic
route planning problem (HEADRPP) in [3] comprises a fuzzy logic implementation (FLI) and a graph partitioning algorithm (GPA) incorporated into a genetic algorithm (GA) core, and offers both
optimized shortest path and shortest time routes to the user [3]. Kambayashi et al. have also employed a GA to find a quasioptimal route for the driver. They attempted to integrate uncomfortable
turns into the conditions of their proposed GA based route selection algorithm [12].
By considering the route selection problem as a multicriteria problem, an approach of ant colony optimization (ACO) is presented in [13] to solve a spatial reconfiguration problem of multimodal
transport network. This work is consisted of real-time route planning with three criteria to be optimized: travel time, travel distance, and number of vehicles. However in this work, as an
application of reconfiguration, the execution time is very important. Another ants algorithm-based approach is presented in [14], which is capable of processing a digital image and detecting tracks
left by preceding vehicles on ice and snow. Salehinejad et al. have proposed a series of route selection systems based on ACS in [15–17]. In [17], a combination of A-Star (A*) algorithm and ACS is
employed where the A* algorithm invigorates some paths pheromones in ants algorithm. In this work, A* algorithm is a prologue of the ants algorithm. It invigorates some produced directions by itself
in order to help ants algorithm to recognize best direction with higher reliability and lower cost than pure ants algorithm. This is done by updating (increasing) pheromone amount of directions found
in the A* algorithm. In [15, 16], the ACS employs ANNs to predict traffic data of further minutes in offline and online modes, respectively. Most data prediction techniques rely on accuracy of a
plant model or knowledge of the stochastic processes [18–20]. This is while the ANNs have suggested an alternative approach in the literature. Some benefits of ANNs are relatively insensitive to
erroneous or missing data and handling nonlinear systems, which is an important issue for treating highly dynamic traffic data [21, 22].
3. A Survey on Ant Colony System, Fuzzy Logic, and Artificial Neural Networks
3.1. Ant Colony System
The ACO is a class of algorithms whose first member called ant system (AS) was initially proposed by Dorigo et al. [23]. Although real ants are blind, they are capable of finding shortest path from
food source to their nest by exploiting a liquid substance, called pheromone, which they release on the transit route. The developed AS strategy attempts to simulate behavior of real ants with the
addition of several artificial characteristics: visibility, memory, and discrete time to resolve many complex problems successfully such as the traveling salesman problem (TSP) [24], vehicle routing
problem (VRP) [25], and best path planning [26]. Even though many changes have been applied to the ACO algorithms during the past years, their fundamental ant behavioral mechanism that is positive
feedback process demonstrated by a colony of ants is still the same. Ants algorithm has also plenty of networking applications such as in communication networks [27] and electrical distribution
networks [28]. Different steps of a simple ant colony system algorithm are as follows.
Problem Graph Depiction
Artificial ants move between discrete states in discrete environments. Since the problems solved by ACS algorithm are often discrete, they can be represented by a graph with nodes and routes.
Ants Distribution Initializing
A number of ants are placed on the origin nodes. The number of ants is often defined based on trial and error and number of nodes in the region.
Ants Probability Distribution Rule
Ants probabilistic transition between nodes can also be specified as node transition rule. The transition probability of ant from node to node is given by where and are the pheromone intensity and
the cost of route between nodes and , respectively. Relative importance of and are controlled by parameters and , respectively. The is set of unavailable routes (visited nodes) for ant .
Update Global Trail
When every ant has assembled a solution, at the end of each cycle, the intensity of pheromone is updated by a pheromone trail updating rule. This rule for ACS algorithm is given as where is a
constant parameter named pheromone evaporation and is number of ants. The amount of pheromone laid on the route between nodes and by ant is where is a constant parameter and is the cost value of the
found solution by ant .
Stopping Procedure
This procedure is completed by arriving to a predefined number of cycles, or the maximum number of cycles between two improvements of the global best solutions.
3.2. Fuzzy Logic
The fuzzy set theory was initially introduced by Lotfi Zadeh in 1965. In this theory, the usual set theory is generalized so that an object cannot only be seen as an element of a set (membership
value 1) or not an element of this set (membership value 0), but it can also have a membership value between 0 and 1. Therefore a fuzzy set is defined by its membership functions which are allowed to
assume any value in the interval [,] instead of their characteristic function, which is defined to assume the values 0 or 1 only [29].
Later in 1970, Assilian and Mamdani developed the fuzzy control concept to control complex processes particularly when no strict model of the processes exists [30, 31]. Fuzzy control can be described
as a means of control working with conditional sentences called linguistic “IF-THEN” rules rather than mathematical equations. The deduction of the rule is called inference and requires definition of
a membership function characterizing this inference. This function determines the degree of truth of each proposition [31].
Different stages of a simple fuzzy logic controlling system are as follows.
Each fuzzy system is realized in the form of fuzzy rules such as where and N are variables of the condition part, is variable of the action part, and , , and are fuzzy parameters characterized by
membership functions. The condition parts of control rules make use of measurements which are usually real numbers [30]. Figure 1 presents Mamdani’s approach to the fuzzy inference procedure for two
rules and arbitrarily membership functions. By considering and as value domains, the real valued measurements and are matched to their corresponding fuzzy variables by determining their membership
values as .
Fuzzy Inference Engine
By considering and for all the control rules in the rule base, the truth value of each rule in the premise is derived by building the conjunctions of the matching membership values as where the “”
conjunction represents one of the Mamdani’s implication such as “min” function [30].
The truth degree of rules I and II is represented by and , respectively. These also define the membership values and of the fuzzy subsets and for the measurement m and , respectively. By considering
as the value domain of the output variable, the fuzzy control output is represented by the aggregation of all fuzzy subsets . Its membership values are determined by disjunction of all the membership
values as where the “*” disjunction is the “max” function when used with Mamdani’s implication [30, 31].
The fuzzy result, which is outcome of the inferences, is transformed into a real value that can be used as control input. Since the desired output is a nonfuzzy outcome, a quantitative value of the
control output is determined by defuzzifying . There are two common methods for defuzzification which are the “center of gravity” and “mean of maxima” methods. Interested readers are referred to [30]
for more descriptions.
3.3. Artificial Neural Networks
The brain is an extensive structure consisted of many connected neural cells called “neurons.” In ANNs, it is claimed to imitate biological brain neural networks into mathematical model. A model of
the brain connects many linear or nonlinear neuron models and processes information in a parallel distributed manner. Since neural networks have learning and self-organization capabilities, they can
adapt to changes in data and learning the characteristics of input signal. Such networks can learn a mapping between an input and an output space and synthesize an associative memory that retrieves
the appropriate output when presented with the input and generalizes when presented with new inputs [32]. By considering the above characteristics, neural networks are employed today in many fields
including pattern recognition, identification, speech, vision, and control systems [33].
A neuron with a single -element input vector is shown in Figure 2 where are individual element inputs and are weights of connections. The ANNs can be trained to perform a particular function by
adjusting the values of the weights [33]. The neuron unit has a bias , which is summed with the weighted inputs to form the net input . Output of the neuron is the weighted sum of input signals as
The neuron activation function is often a continuous and nonlinear function which is called sigmoid function and is defined as where is a constant and .
One largely used category of the ANNs is called feed-forward net. This hierarchical structure is consisted of some layers without interconnections between neurons in each layer, and signals flow from
input layer to output layer in one direction as in Figure 3. Readers are encouraged to refer to [32, 33] for more details.
4. Proposed Route Selection System
4.1. System Structure
The proposed system is executed locally for every single vehicle. It finds directions with minimum costs based on the importance rates of user desired parameters. Architecture of the proposed system
is presented in Figure 4. In this system, the traffic signal is provided by a TCC and contains current traffic data which is updated regularly. Memory of the system comprises statistical data,
average speed of the vehicle, current saved traffic data, current time, and so forth,. System information like statistical data is available on smart cards for different cities. Therefore, user can
use the system in other regions supported by a TCC just by purveying the appropriate smart card.
In this system, ANNs are used for traffic estimation of coming minutes. The manner depicted in [22] is followed for employing ANNs as traffic predictor in this work. The employed ANN is consisted of
one hidden layer with hidden neurons, inputs, and one output as in Figure 3. By considering statistical traffic data of last years in a typical time period as input, the ANN structure is trained for
making traffic predictions. Then, it can estimate traffic load of further minutes by considering current traffic data available by TCC as inputs. For a detailed description of the employed ANN,
enthusiastic readers may refer to [22].
In Figure 4, time needed to move between two junctions, before movement of the vehicle is estimated in the “time delay estimation” block. This is due to having different traffic loads in different
hours of day and night. Therefore, the system must have an estimate of arrival time to other junctions to use the appropriate data of the arrival time to that junction. This estimation is done by
considering the vehicle average speed, the distance between junctions, and the current traffic flow. By taking into account the predicted traffic data and estimated journey time delay, correspondent
predicted traffic data are used in the system.
The proposed system has the privilege of evading upcoming congestions. The system is aware of current vehicle location by using GPS. Therefore, if a congestion happens on the suggested direction, the
system immediately recommends nearest direction to the user parameters and current direction based on the new conditions to evade upcoming congestion. The system also has the capability of
considering the previously selected directions. This task is performed by updating pheromone amount of these directions in the ACS. In order to have a more user friendly system, it can provide the
user some candidate directions to choose from. These attributes are optional for users and more features can be developed in future works. The next subsection discusses the proposed FLACS-based
structure in details.
4.2. Fuzzy Logic-Ant Colony System-Based Model
The pseudocode of the proposed FLACS-based system is presented in Algorithm 1. Its different steps are described as follow.
It is consisted of initial values of the algorithm parameters such as number of ants, evaporation coefficient, and average speed of the vehicle.
Locate Ants
Ants are located on the start point in this stage. An active ant refers to the ant, which has not arrived to the destination yet and is not blocked in a junction. Since each ant can traverse each
junction once in each iteration, an ant is blocked in a junction when it has no chance of continuing its transition toward the destination and has no possible route to move backward. A blocked ant at
a junction is depicted in Figure 5.
Construct Probability
In this step, the probability of each possible direct route is calculated based on its cost function for each active ant. The probability of displacing from junction to junction for ant is as where
is the direct route pheromone intensity from junction to . Parameter controls the importance of and is set to 2 [22–24, 27]. The list is the set of direct blocked routes (visited nodes). Parameters
set is a collection of most important parameters for drivers taking journeys in metropolises. For more simplicity, the “Distance,’’ “Traffic Flow,” and “Incident Risk” parameters are considered in
this set. However, it can be developed by adding other parameters such as “Width” of the road (number of lanes), number of “Traffic Lights,” and “Road Quality” (medical treatment facilities, number
of gas stations, and entertainments), [15–17]. Cost function of each parameter is , where is normalized in and significance of each is adjustable by for all parameters. The considered parameters are
described as follows.(i)Distance. where the is distance between junctions and at time . Since some routes are two-way or one-way in specific hours, this parameter is a function of time. Longer
distance increases total cost and therefore, decreases the probability of the longer route selection in (9).(ii)Traffic Flow. where is traffic load of the route between junctions and at time .
Considering this parameter in the route selection systems has many benefits such as less air pollution, less time wastage in the traffic, and less gas usage. As traffic on a route grows, total cost
increases and consequently it decreases the probability of selecting that route by the system in (9).(iii)Incident Risk. where is risk of route between junctions and at time . This parameter is a
measure of incidents happening risk, which might occur on the route based on the statistical data. This parameter has direct relation with total cost, therefore risky routes have less selection
probability in (9).
Select Route
A random parameter with uniform probability is compared with the parameter , where and is usually fixed to 0.9 [34]. The comparison result between and picks up one of the two selection methods for
the active ant to continue its route to the next junction as If is greater than , active ant selects the route with the highest probability, otherwise, Roulette Wheel rule is selected to choose the
next junction through probabilities.
Update Tabu List
In this step, the route (selected node) which ant has been chosen is added to the tabu list in order not to be selected again. Furthermore, its probability will not be calculated anymore.
If ant has arrived at the destination or is blocked in a junction as in Figure 5, it is omitted from the active ant list. In other words, this step deactivates the blocked or arrived ant in the
current iteration.
Update Pheromone
The ACS pheromone system is consisted of two main rules: first is applied whilst constructing solutions (local pheromone update rule) and the other rule is applied after all ants have finished
constructing a solution (global pheromone update rule). The pheromone amount of the route between junctions and is updated for ant as where is the amount of local pheromone updating. The value of is
the output of a FL system. By considering the FL approach in Section 3.2, and Mamdani’s implication as our approach, structure of the employed FL system is illustrated in Figure 6. Inputs of the FL
system are the total amount of “Distance,” “Traffic Flow,” and “Incident Risk” of the direction which ant has selected. By considering computing complexities, only two input fuzzy sets, “Low” and
“High”, are defined for each input. The corresponding Trapezoidal-shaped membership functions of the input variables are presented in Figures 7, 8, and 9. In these functions, it is assumed that if
total distance of selected direction is more than 80% of maximum total visited distance by ants in the same loop, its membership function for “High” is unity whereas when total distance of selected
direction is less than 20% of maximum total visited distance by ants in the same loop, its membership function for “Low” is unity. The similar definitions are considered for the “Traffic Flow” and
“Incident Risk” parameters. Four fuzzy sets are considered for the output variable as in Figure 10. Levels 1–4 represent different levels of pheromone density, which are “Very Weak,” “Weak,”
“Strong,” and “Very Strong,” respectively. At the final stage, with respect to the most defuzzification techniques mentioned in Section 3.2, the “center of gravity” method is employed in this FL
system in order to resolve a single output value from the fuzzy set.Since important rates of parameters are different and are defined by different users, various fuzzy rules are predefined in the
system. Therefore, according to the preferences of each user, appropriate fuzzy rules are loaded into the FL system. A surface-plot of the IF-THEN rules is presented in Figure 11. In this figure,
rates of the considered parameters versus pheromone are presented. As an example, IF-THEN rules set of a specific parameters importance rate set is presented in Table 1. In this example, preferences
of the user for the “Distance,” “Traffic,” and “Incident Risk” parameters are “High,” “Low,” and “Low,” respectively. Therefore, directions with closer costs to the preferences achieve more local
pheromone update. This is while routes with the “Low,” “High,” and “High” preferences are achieving the least local pheromone update.
The traditional last step of each completed loop is global pheromone updating defined as where is the evaporation coefficient and is usually set to 0.9 [15–17, 34].
Select Best Direction
After loops, direction with the lowest cost from origin to destination is recommended by the system.
5. Simulation Results
Performance of the proposed FLACS route selection system versus ACS based and A*-ACS systems presented in [15–17] is evaluated and discussed in this section.
The proposed system is applied on a part of London, United Kingdom, routes network. The selected region is consisted of 42 junctions as in Figure 12. The statistical traffic data used in these
simulations are provided by London traffic control center (LTCC). In all simulations, start time is 4:00 PM and average speed of vehicle is considered 40km/h as default. The system is run in 30
loops with 15 initial ants as default. Number of running cycles, iterations, and ants are defined based on the number of junctions in the search area and therefore complexity of the region. In this
system, the evaporation coefficient is considered 0.9, , and . A desktop computer with Intel Core2Quad Q8300 2.5GHz CPU and 3MB of RAM is employed for simulations in MATLAB 2009b environment.
In the first simulation, 10O/D pairs are randomly selected. Average of their cost averages for different number of cycles and a set of user preferences (Distance = High; Traffic = Low; Incident Risk
= Low) is evaluated. As presented in Figure 13, the considered systems have the most cost averages in the first cycles. By following an alternative descending manner, cost averages of ACS and A*-ACS
systems almost arrive to a stable value by the 14th cycle. This is while the FLACS system has arrived to a stable cost average point, which is totally less than other two systems, by the 10th cycle.
Since the FLACS system converges faster with a less cost average than the other two systems, this fact shows enhanced performance of the proposed system versus A*-ACS and ACS systems.
In general, the comparison between ACS and A*-ACS systems in Figure 13 illustrates that the A*-ACS algorithm has less costs average than ACS algorithm. This difference is due to local pheromone
updating of the ACS by A* algorithm. In addition, comparison between A*-ACS and FLACS algorithms illustrates that the FLACS algorithm has the least costs averages. This fact, demonstrates performance
of FL in ACS local pheromone updating versus A* algorithm.
In another simulation, performance of the ACS, A*-ACS, and FLACS systems are compared for each “Distance,” “Traffic,” and “Incident Risk” parameters separately. For each parameter, average of 10
randomly selected O/D pairs cost averages is compared for different values of parameters importance rates, rated between 0 and 1. As it is illustrated in Figure 14, cost averages of all the three
systems increase by increasing the importance rate of “Distance” parameter. Although the systems have almost similar behavior versus different “Traffic” parameter importance rates, but the ACS and
FLACS algorithms have totally the most and the least average of costs, respectively. This is while it has been demonstrated in [17] and also the previous simulation that the A*-ACS system has less
average of costs than ACS system.
As analysis of cost averages for different importance rates of “Traffic” parameter demonstrates in Figure 15, the systems have different behavior for various values of this parameter. In fact, the
alternations are less smoother than the “Distance” parameter. This is due to estimation of further traffic data by ANNs. Therefore, traffic data used in each system is not exactly same as the other
system. By considering this fact, generally the FLACS algorithm has the least average of costs than the other algorithms. In a similar simulation, performance of the systems is studied for different
values of “Incident Risk” parameter as in Figure 16. Similar to the “Distance” parameter, the systems behavior is smooth in a descending manner.
In Figure 17, average of 10 randomly selected O/D pairs cost averages versus different number of ants is studied. As this figure illustrates, by considering one ant in the systems as the worker,
averages of costs are so high. By activating more ants, the systems averages of costs decrease while converging to a specific value of costs average. The convergence point for the FLACS, A*-ACS, and
ACS systems is at 17, 17, and 18 number of ants, respectively. However, after this point, behavior of the FLACS system costs average is static, while the ACS system has more alternations than A*-ACS
system. This figure clearly demonstrates performance of the proposed FLACS system versus other two systems.
In another study, performances of the systems are compared for three specific O/D pairs in Table 2. As it is indicated, the ACS has the most cost but less running time among other systems. This is
while the A*-ACS, due to computational load of its A* search engine, has more running time but less cost than ACS. Even though the FLACS also has the extra FL system component respect to ACS, the
simulation results demonstrate that the FLACS has less running time than A*-ACS and the least cost among all systems. However, due to its FL system component, its running time is more than the ACS.
In a specific case, the FLACS and A*-ACS have recommended similar direction for the O/D pair 6/13. However, the FLACS method has recommended this direction with less running time as well as less cost
than the ACS method, which demonstrates performance of the FLACS system.
Totally, the systems have demonstrated almost the same growth pattern by parameters increment which is due to the common ACS core of these methods. However, the performance analyze of the proposed
FLACS system for different values of “Distance,” “Traffic,” and “Incident Risk” parameters demonstrates its less cost average versus A*-ACS and pure ACS systems. This also means that employment of
the FL technique in local pheromone updating has not forced or converged the FLACS to a specific result, but could help it to achieve optimum results with fewer costs than pure ACS and A*-ACS
algorithms. The privilege of FLACS versus other methods is its dynamic behavior in different regions with different complexities due to existence of the FL part. Therefore, the system performs with
fewer total costs as well as better performance.
6. Conclusion and Future Works
The proposed system in this paper introduces a dynamic route selection system which employs fuzzy logic (FL) and ant colony system (ACS) for multiparameter route selection in urban areas. This system
considers a set of important parameter for city travelers: “Distance,” “Traffic,” and “Incident Risk.” However, this set can be developed by considering other parameters such as “Width” (number of
lanes), “Quality” (medical treatment facilities, entertainments, etc.), and number of “Traffic Lights.”
In this work, costs of possible routes are calculated based on the adjusted desired parameters of the user. Then direction with the optimum cost is selected by using the proposed fuzzy logic-ant
colony system (FLACS) algorithm. For real-time applications, fuzzy logic is considered as a management mechanism for the proposed ACS local pheromone updating. This technique improves ACS performance
and prepares a real applicable dynamic system for different regions. This work can also be developed for daily life usages by employing some advanced technologies such as vehicle-to-vehicle (V2V)
communication and networking technologies. Another version of this work can also be developed for passengers, being available on their mobile cell phones or personal digital assistances (PDAs).
The proposed system can have lots of real-time applications for emergency services, tourist guides, and generally for anyone who wants to have a low-cost, safe, and comfortable journey in urban
The authors would like to acknowledge the anonymous reviewers for their insightful suggestions. The authors would also like to appreciate customer service advisor of the London Traffic Control
Center, United Kingdom, for providing traffic data of this project.
1. M. Barth, K. Boriboonsomsin, and A. Vu, “Environmentally-Friendly navigation,” in Proceedings of the 10th International IEEE Conference on Intelligent Transportation Systems (ITSC '07), pp.
684–689, October 2007. View at Publisher · View at Google Scholar
2. H. Wang and B. Zhang, “Route planning and navigation system for an autonomous land vehicle,” in Proceedings of the 3rd International Conference on Vehicle Navigation and Information Systems (VNIS
'92), pp. 135–140, Norway, 1992.
3. L. W. Lup and D. Srinivasan, “A hybrid evolutionary algorithm for dynamic route planning,” in Proceedings of the IEEE Congress on Evolutionary Computation (CEC '07), pp. 4743–4749, Singapore,
September 2007. View at Publisher · View at Google Scholar
4. S. Edelkamp, S. Jabbar, and T. Willhalm, “Geometric travel planning,” IEEE Transactions on Intelligent Transportation Systems, vol. 6, no. 1, pp. 5–16, 2005. View at Publisher · View at Google
Scholar · View at Scopus
5. C. Yanyan, M. G. H. Bell, and K. Bogenberger, “Reliable pretrip multipath planning and dynamic adaptation for a centralized road navigation system,” IEEE Transactions on Intelligent
Transportation Systems, vol. 8, no. 1, pp. 14–19, 2007. View at Publisher · View at Google Scholar · View at Scopus
6. S. Kim, M. E. Lewis, and C. C. White, “State space reduction for nonstationary stochastic shortest path problems with real-time traffic information,” IEEE Transactions on Intelligent
Transportation Systems, vol. 6, no. 3, pp. 273–284, 2005. View at Publisher · View at Google Scholar · View at Scopus
7. C. Yanyan, L. Ying, and D. Huabing, “The model of optimum route selection in vehicle automatic navigation system based on unblocked reliability analyses,” IEEE Intelligent Transportation Systems
Proceedings, vol. 2, pp. 975–978, 2003.
8. J. Miura, M. Itoh, and Y. Shirai, “Toward vision-based intelligent navigator: its concept and prototype,” IEEE Transactions on Intelligent Transportation Systems, vol. 3, no. 2, pp. 136–146,
2002. View at Publisher · View at Google Scholar · View at Scopus
9. G. K. H. Pang, K. Takahashi, T. Yokota, and H. Takenaga, “Adaptive route selection for dynamic route guidance system based on fuzzy-neural approaches,” IEEE Transactions on Vehicular Technology,
vol. 48, no. 6, pp. 2028–2041, 1999. View at Publisher · View at Google Scholar · View at Scopus
10. G. Pang and M.-H. Chu, “Route selection for vehicle navigation and control,” in Proceedings of the 5th IEEE International Conference on Industrial Informatics (INDIN '07), pp. 693–698, June 2007.
View at Publisher · View at Google Scholar
11. D. Teodorovic and S. Kikuchi, “Transportation route choice model using fuzzy interface technique,” in Proceedings of the 1st International Conference Uncertainty Modeling and Analysis: Fuzzy
Reasoning, Probabilistic Models, and Risk Management, pp. 140–145, Maryland University, College Park, 1990.
12. Y. Kambayashi, Y. Tsujimura, H. Yamachi, and H. Yamamoto, “Integrating uncomfortable intersection-turns to subjectively optimal route selection using genetic algorithm,” in Proceedings of the 5th
IEEE International Conference on Computational Cybernetics (ICCC '07), pp. 203–208, Tunisia, October 2007. View at Publisher · View at Google Scholar
13. S. Zidi, S. Maouche, and S. Hammadi, “Real-time route planning of the public transportation system,” in Proceedings of the IEEE Intelligent Transportation Systems Conference (ITSC '06), pp.
55–60, Canada, September 2006.
14. A. Broggi, M. Cellario, P. Lombardi, and M. Porta, “An evolutionary approach to visual sensing for vehicle navigation,” IEEE Transactions on Industrial Electronics, vol. 50, no. 1, pp. 18–29,
2003. View at Publisher · View at Google Scholar · View at Scopus
15. H. Salehinejad, F. Pouladi, and S. Talebi, “A new route selection system: multiparameter ant algorithm based vehicle navigation approach,” in Proceedings of the International Conference on
Computational Intelligence for Modeling, Control and Automation, pp. 1089–1094, IEEE Computer Society, Vienna, Austria, 2008.
16. H. Salehinejad and S. Talebi, “A new ant algorithm based vehicle navigation system: a wireless networking approach,” in Proceedings of the International Symposium on Telecommunications (IST '08),
pp. 36–41, Tehran, Iran, August 2008. View at Publisher · View at Google Scholar
17. H. Salehinejad, H. Nezamabadi-Pour, S. Saryazdi, and F. Farrahi-Moghaddam, “Combined A*-ants algorithm: a new multi-parameter vehicle navigation scheme,” in Proceedings of the 16th Iranian
Conference on Electrical Engineering (ICEE '08), pp. 154–159, Tehran, Iran, 2008.
18. S. Abed and C. Swann, “Analysis of freeway traffic times-series data by using Box Jenkins techniques,” Transportation Research, no. 72, pp. 1–9, 1979. View at Scopus
19. H. Nicholson and C. D. Swann, “The prediction of traffic flow volumes based on spectral analysis,” Transportation Research, vol. 8, no. 6, pp. 533–538, 1974. View at Scopus
20. I. Okutani and Y. J. Stephanedes, “Dynamic prediction of traffic volume through Kalman filtering theory,” Transportation Research. Part B, vol. 18, no. 1, pp. 1–11, 1984. View at Scopus
21. I. Ohe, H. Kawashima, M. Kojima, and Y. Kaneko, “A method for automatic detection of traffic incidents using neural networks,” in Proceedings of the Vehicle Navigation and Information Systems
Conference in Conjunction with the Pacific Rim TransTech Conference, A Ride into the Future, pp. 231–235, 1995.
22. C. Taylor and D. Meldrum, “Freeway data prediction using neural networks,” in Proceedings of the Vehicle Navigation and Information Systems Conference in Conjunction with the Pacific Rim
TransTech Conference, A Ride into the Future, pp. 225–230, 1995.
23. M. Dorigo, V. Maniezzo, and A. Colorni, “Ant system: optimization by a colony of cooperating agents,” IEEE Transactions on Systems, Man, and Cybernetics, Part B, vol. 26, no. 1, pp. 29–41, 1996.
View at Scopus
24. M. Dorigo and L. M. Gambardella, “Ant colony system: a cooperative learning approach to the traveling salesman problem,” IEEE Transactions on Evolutionary Computation, vol. 1, no. 1, pp. 1–24,
1997. View at Scopus
25. L. W. Dong and C. T. Xiang, “Ant colony optimization for VRP and mail delivery problems,” in Proceedings of the IEEE International Conference on Industrial Informatics (INDIN '06), pp. 1143–1148,
August 2006. View at Publisher · View at Google Scholar
26. C. L. Liu, “Best path planning for public transportation systems,” in Proceedings of the IEEE 5th International Conference on Intelligence Transportation Systems, pp. 834–839, 2002.
27. M. Dorigo, “Ant foraging behavior, combinatorial optimization, and routing in communication networks,” Swarm Intelligence: From Natural to Artificial Systems, pp. 25–107, 1999, Santa Fe Institute
Studies in the Sciences of Complexity.
28. S. Favuzza, G. Graditi, M. G. Ippolito, and E. R. Sanseverino, “Optimal electrical distribution systems reinforcement planning using gas micro turbines by dynamic ant colony search algorithm,”
IEEE Transactions on Power Systems, vol. 22, no. 2, pp. 580–587, 2007. View at Publisher · View at Google Scholar · View at Scopus
29. CH. Schuh, “Fuzzy sets and their application in medicine,” in Proceedings of the Annual Meeting of the North American Fuzzy Information Processing Society (NAFIPS '05), pp. 86–91, Detroit, Mich,
USA, 2005.
30. E. H. Mamdani, “Application of fuzzy algorithms for control of simple dynamic plant,” Proceedings of the Institution of Electrical Engineers, vol. 121, no. 12, pp. 1585–1588, 1974.
31. P. Chemouil, J. Khalfet, and M. Lebourges, “Fuzzy control approach for adaptive traffic routing,” IEEE Communications Magazine, vol. 33, no. 7, pp. 70–76, 1995. View at Publisher · View at Google
32. T. Fukuda and T. Shibata, “Theory and applications of neural networks for industrial control systems,” IEEE Transactions on Industrial Electronics, vol. 39, no. 6, pp. 472–489, 1992. View at
Publisher · View at Google Scholar
33. H. Demuth, M. Beale, and M. Hagan, Neural Network Toolbox For Use with MATLAB${}^{®}$, The MathWorks, 2006.
34. V. Maniezzo, L. M. Gambardella, and F. De Luigi, Ant Colony Optimization, New Optimization Techniques in Engineering, Springer, Berlin, Germany, 2004.
|
{"url":"http://www.hindawi.com/journals/acisc/2010/428270/","timestamp":"2014-04-21T08:52:02Z","content_type":null,"content_length":"221890","record_id":"<urn:uuid:49dc8c53-3921-44e4-b145-13560a15e382>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00191-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Kashmere Gardens, Houston, TX
Humble, TX 77396
Professor, researcher, businessman wants to help you excel
..., variables and inequality) and manipulations (e.g., solving equations and factoring) you learn in algebra are a kind of language that will be used in geometry,
algebra 2
, trigonometry, calculus, probability and statistics. These subjects use the same notations as...
Offering 10+ subjects including algebra 2
|
{"url":"http://www.wyzant.com/Kashmere_Gardens_Houston_TX_algebra_2_tutors.aspx","timestamp":"2014-04-19T12:50:28Z","content_type":null,"content_length":"62792","record_id":"<urn:uuid:15f77d0e-f4fb-478f-a628-af8bfcd1d107>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00394-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Hulmeville, PA Precalculus Tutor
Find a Hulmeville, PA Precalculus Tutor
...He is a retired Vice-President of an international Aerospace company. His industrial career included technical presentations and workshops, throughout North America and Europe, to multinational
companies, to NATO, and to trade delegations from China and Russia. In particular, he was proud to be...
10 Subjects: including precalculus, calculus, algebra 1, GRE
...I have a B.A. in scientific illustration. I have been drawing all my life and regard drawing skills as one the the most basic and fundamental skills in my "toolbox." Over the years and through
my degree program, I have improved my drawing and experienced many critiques. Drawing is a skill developed from observation and understanding of form, line, shape, and volume.
19 Subjects: including precalculus, calculus, geometry, algebra 1
...I have a BS in Mathematics from Ohio University, and am currently working on my masters in Education. I can give you the attention and dedication you need to become successful in the math areas
you want to become better in. We can work with your school curriculum and on other topics.
14 Subjects: including precalculus, calculus, geometry, ASVAB
...After my graduation, I worked for 1 year as a high-school teacher, teaching geometry to 6th graders and physics to 9th and 10th graders, which I left then to pursue my PhD. During my PhD I
assisted the Applied Statistics class as part of my teaching duties. During my undergraduate as well as my graduate studies I became very well computer skilled, especially programming.
20 Subjects: including precalculus, Spanish, physics, calculus
...The writing section assesses whether you can concisely write two 30-minute organized essays. I am confident that I can help you significantly improve your scores in all three areas. I have been
tutoring students in all areas of test prep ACT (all), SAT (all), PRAXIS, and ASVAB for several years.
62 Subjects: including precalculus, reading, English, calculus
|
{"url":"http://www.purplemath.com/Hulmeville_PA_precalculus_tutors.php","timestamp":"2014-04-19T09:55:00Z","content_type":null,"content_length":"24522","record_id":"<urn:uuid:98cc8ae3-314f-46bf-930c-2a94571d2494>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00435-ip-10-147-4-33.ec2.internal.warc.gz"}
|
How many hp is 5250 watts?
You asked:
How many hp is 5250 watts?
7.04071674657409 horsepower
the amount of power 7.04071674657409 horsepower
Say hello to Evi
Evi is our best selling mobile app that can answer questions about local knowledge, weather, books, music, films, people and places, recipe ideas, shopping and much more. Over the next few months we
will be adding all of Evi's power to this site.
Until then, to experience all of the power of Evi you can download Evi for free on iOS, Android and Kindle Fire.
|
{"url":"http://www.evi.com/q/how_many_hp_is_5250_watts","timestamp":"2014-04-19T04:47:39Z","content_type":null,"content_length":"49047","record_id":"<urn:uuid:add3f54f-e165-404c-b7c6-e03df235b55c>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00455-ip-10-147-4-33.ec2.internal.warc.gz"}
|
graphing problem
May 18th 2010, 11:52 PM #1
Jan 2010
graphing problem
i have this equation:
solve for x:
x=0, x=3
In addition:
graph f(x)=ln(x) and -f(x-3)+2 by using transformations. Explain the relationship between the solutions to the original equation and the points of intersection on your graph.
How do you graph -f(x-3)+2?
please help Idon't understand the problem. thanks!
Last edited by Anemori; May 19th 2010 at 02:55 AM.
Use the laws of logarithms:
Now use the logarithms as exponents to the base e. You'll get a quadratic equation. Keep in mind that this equation is only valid for x > 3
In addition:
graph f(x)=ln(x) and -f(x-3)+2 by using transformations. Explain the relationship between the solutions to the original equation and the points of intersection on your graph.
How do you graph -f(x-3)+2?
please help Idon't understand the problem. thanks!
1. Graph $f(x)=\ln(x)$
2. f(x-3) means the graph of f is translated by 3 units to the right.
3. -f(x-3) means the translated graph is refelcted about the x-axis.
4. -f(x-3) + 2 means the translated and reflected graph is translated by 2 units upwards.
5. The x-coordinate of the point of intersection is the solution to the equation above:
$\ln(x)+\ln(x-3)=2~\implies~\underbrace{\ln(x)}_{f(x)}=\underbra ce{-\ln(x-3)+2}_{-f(x-3)+2}$
May 19th 2010, 05:37 AM #2
|
{"url":"http://mathhelpforum.com/algebra/145486-graphing-problem.html","timestamp":"2014-04-17T23:08:18Z","content_type":null,"content_length":"34647","record_id":"<urn:uuid:8ac4ba05-3768-4919-96ff-f2c4877bb8c1>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00258-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Relative Motion - Introduction
Note: The activities in this module make reference to the computer algebra system (CAS) Maple, and links are provided to download Maple files. Any other CAS can be used instead (e.g., Mathematica,
Mathcad, etc.) as long as the user is familiar with that CAS system. Maple is not required for the use of the ideas in this module, but it is required for opening and executing the downloadable
Larry Gladney is Associate Professor of Physics and Dennis DeTurck is Professor of Mathematics, both at the University of Pennsylvania.
Equations are used to describe the motion of objects. Usually the independent variable in these equations is t, for time. Depending on whether the motion is taking place along a line (one dimension),
in a plane (two dimensions), or in space (three dimensions), we use one, two or three functions to specify the position of the object at any time. This module concerns the process of changing the
point (and direction) of reference from which the motion is viewed -- this is important for solving physics problems, designing automatic pilots and other robotic devices, and video games.
Published July 2001
© 2001 by Larry Gladney and Dennis DeTurck
|
{"url":"http://www.maa.org/publications/periodicals/loci/joma/relative-motion-introduction?device=mobile","timestamp":"2014-04-17T03:11:12Z","content_type":null,"content_length":"22292","record_id":"<urn:uuid:7adba9b0-f6c9-4dde-921c-24a2877a2bb3>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00264-ip-10-147-4-33.ec2.internal.warc.gz"}
|
This page has been
, but needs to be
The part depending on p in the second integral
$=-\frac{\mu ep}{2}\int\int\int\left(B\frac{d}{dz}\frac{1}{R}-\frac{Cd}{dy}\frac{1}{R}\right)dx\ dy\ dz$,
or (see Maxwell's 'Electricity and Magnetism,' § 405)
$=-\frac{\mu ep}{2}F'_{1}$.
Adding this to the term $epF'_{1}$ already obtained, we get $\tfrac{\mu ep}{2}F'_{1}$ as the part of the kinetic energy depending on p. We have evidently similar expressions for the parts of the
kinetic energy depending on q and r. Hence the part of the kinetic energy with which we are concerned will
$=\frac{\mu e}{2}\cdot\left(F'_{1}p+G'_{1}q+H'_{1}r\right)$.
By Lagrange's equations, the force on the sphere parallel to the axis of x
$=\frac{\mu e}{2}\left\{ p\frac{dF'_{1}}{dx}+q\frac{dG'_{1}}{dx}+r\frac{dH'_{1}}{dx}-\frac{dF'_{1}}{dt}\right\}$
$=\frac{\mu e}{2}\left\{ p\frac{dF'_{1}}{dx}+q\frac{dG'_{1}}{dx}+r\frac{dH'_{1}}{dx}-p\frac{dF'_{1}}{dx}-q\frac{dF'_{1}}{dy}-r\frac{dF'_{1}}{dz}\right\}$
$=\frac{\mu e}{2}\left\{ q\left(\frac{dG'_{1}}{dx}-\frac{dF'_{1}}{dy}\right)-r\left(\frac{dF'_{1}}{dz}-\frac{dH'_{1}}{dx}\right)\right\}$
$=\frac{\mu e}{2}\left(qc_{1}-rb_{1}\right)$.
Similarly, the force parallel to the axis of y
$=\frac{\mu e}{2}\left(ra_{1}-pc_{1}\right)$, (5)
the force parallel to the axis of z
$=\frac{\mu e}{2}\left(pb_{1}-qa_{1}\right)$
where a[1], b[1], c[1] are the components of magnetic induction at the centre of the sphere due to the external magnet. These forces are the same as would act on unit length of a conductor at the
centre of the sphere carrying a current whose components are $\tfrac{\mu ep}{2}$, $\tfrac{\mu eq}{2}$, $\tfrac{\mu er}{2}$ . The resultant force is perpendicular
|
{"url":"http://en.wikisource.org/wiki/Page:Thomson1881.djvu/13","timestamp":"2014-04-17T19:07:21Z","content_type":null,"content_length":"25081","record_id":"<urn:uuid:8cd256b7-e57e-4bae-adbb-a10b762399d6>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00072-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Existence Results for the Distributed Order Fractional Hybrid Differential Equations
Abstract and Applied Analysis
Volume 2012 (2012), Article ID 163648, 16 pages
Research Article
Existence Results for the Distributed Order Fractional Hybrid Differential Equations
Faculty of Mathematical Sciences, Shahrekord University, P.O. Box 115, Shahrekord, Iran
Received 22 July 2012; Accepted 7 October 2012
Academic Editor: Yongfu Su
Copyright © 2012 Hossein Noroozi et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in
any medium, provided the original work is properly cited.
We introduce the distributed order fractional hybrid differential equations (DOFHDEs) involving the Riemann-Liouville differential operator of order with respect to a nonnegative density function.
Furthermore, an existence theorem for the fractional hybrid differential equations of distributed order is proved via a fixed point theorem in the Banach algebras under the mixed Lipschitz and
Caratheodory conditions.
1. Introduction
The differential equations involving Riemann-Liouville differential operators of fractional order are very important in the modeling of several physical phenomena [1, 2]. In recent years, quadratic
perturbations of nonlinear differential equations and first-order ordinary functional differential equations in Banach algebras, have attracted much attention to researchers. These type of equations
have been called the hybrid differential equations [3–8]. One of the important first-order hybrid differential equations (HDE) is defined as [4, 9] where is a bounded interval in for some and with .
Also, and , such that is the class of continuous functions and is called the Caratheodory class of functions which are Lebesgue integrable bounded by a Lebesgue integrable function on . Moreover (i)
the map is measurable for each , (ii)the map is continuous for each . For the above hybrid differential equation, Dhage and Lakshmikantham [9] established existence, uniqueness, and some fundamental
differential inequalities. Also, they stated some theoretical approximation results for the extremal solutions between the given lower and upper solutions [10]. Later, Zhao. et al. [11] developed the
following fractional hybrid differential equations involving the Riemann-Liouville differential operators of order , where is bounded in for some and , .
They established the existence, uniqueness, and some fundamental fractional differential inequalities to prove existence of the extremal solutions of (1.2). Also, they considered necessary tools
under the mixed Lipschitz and Caratheodory conditions to prove the comparison principle.
Now, in this article in view of the distributed order fractional derivative [12–14], we develop the distributed order fractional hybrid differential equations (DOFHDEs) with respect to a nonnegative
density function.
In this regard, in Section 2 we introduce the distributed order fractional hybrid differential equation. Section 3 is about some main theorems which are used in this paper. In Section 4, we prove the
existence theorem for this class of equations, and we express some special cases for the density function used in the distributed order fractional hybrid differential equation. Finally, the main
conclusions are set.
2. The Fractional Hybrid Differential Equation of Distributed Order
In this section, we recall some definitions which are used throughout this paper.
Definition 2.1 (see [1, 2]). The fractional integral of order with the lower limit for the function is defined as
Definition 2.2 (see [1, 2]). The Riemann-Liouville derivative of order with the lower limit for the function can be written as
Definition 2.3. The distributed order fractional hybrid differential equation (DOFHDEs), involving the Riemann-Liouville differential operator of order with respect to the nonnegative density
function , is defined as Moreover, the function is continuous for each , where is bounded in for some . Also, and .
3. The Main Theorems
In this section, we state the existence theorem for the DOFHDE (2.3) on . For this purpose, we define a supremum norm of in as and for is a multiplication in this space. We consider is a Banach
algebra with respect to norm and multiplication (3.2). Moreover the norm for is defined by Now, for expressing the existence theorem for the DOFHDE (2.3), we state a fixed point theorem in the Banach
Theorem 3.1 (see [15]). Let be a nonempty, closed convex, and bounded subset of the Banach algebra and let and be two operators such that (a) is Lipschitz constant , (b) is completely continuous, (c)
for all , (d), where .
Then the operator equation has a solution in .
At this point, we consider some hypotheses as follows. The function is increasing in almost everywhere for . There exists a constant such that for all and . There exists a function and a
real nonnegative upper bound such that for all and .
Theorem 3.2 (Titchmarsh theorem [16]). Let be an analytic function which has a branch cut on the real negative semiaxis. Furthermore, has the following properties: for any sector , where . Then, the
Laplace transform inversion can be written as the Laplace transform of the imaginary part of the function as follows:
Definition 3.3. Suppose that be a metric space and let . Then, is equicontinuous if for all there exists such that for all and
Theorem 3.4 (Arzela-Ascoli theorem [17]). Let be a compact metric space and let . Then, is compact if and only if is closed, bounded, and equicontinuous.
Theorem 3.5 (Lebesgue dominated convergence theorem [18]). Let be a sequence of real-valued measurable functions on a measure space . Also, suppose that the sequence converges pointwise to a function
and is dominated by some integrable function in the sense that for all numbers in the index set of the sequence and all points in . Then, is integrable and
4. Existence Theorem for the DOFHDEs
We apply the following lemma to prove the main existence theorem of this section.
Lemma 4.1. Assume that hypothesis () holds in pervious section, then for any and , the function is a solution of the DOFHDE (2.3) if and only if satisfies the following equation such that and
Proof. Applying the Laplace transform on both sides of (2.3) and letting we have Since , we have and hence, such that Now, using the inverse Laplace transform on both sides of (4.6) and applying the
convolution product, we get or equivalently Since is an analytic function which has a branch cut on the real negative semiaxis, according to the Titchmarsh Theorem 3.2 we get which by the Laplace
transform definition, (4.1) is held. Conversely, let satisfies (4.1), therefore, satisfies the equivalent equation (4.9). By in (4.1), we have According to hypothesis , the map is injective in and
hence . Next, with dividing (4.9) by and using the Laplace transform operator on both sides of this equation, (4.6) also holds. Since , we obtain (4.4) and by applying the inverse Laplace transform,
(2.3) also holds.
Theorem 4.2. Suppose that hypothesis ()–() hold. Further, if then, the DOFHDE (2.3) has a solution defined on J.
Proof. We set as a Banach algebra and define a subset of by such that It is obvious that is closed and if , then and , also by properties of the norm, we get Therefore, is a convex and bounded and by
applying Lemma 4.1, DOFHDE (2.3) is equivalent to (4.1).
Define operators and by thus, from (4.1), we obtain the operator equation as follows: If operators and satisfy all the conditions of Theorem 3.1, then the operator equation (4.17) has a solution in .
For this paper, let which by hypothesis we have and if for all take a supremum over , then we have Therefore, is a Lipschitz operator on with the Lipschitz constant , and the condition (a) from
Theorem 3.1 is held. Now, for checking the condition (b) from this theorem, first, we shall show that is continuous on .
Let be a sequence in such that with . By applying the Lebesgue-dominated convergence Theorem 3.5 for all , we get Thus, is a continuous operator on . In next stage, we shall show that is a compact
operator on . For this paper, we shall show that is a uniformly bounded and eqicontinuous set in . Let , then by hypothesis for all we have Let such that . Then by the existence Laplace transform
theorem [19], there exists a constant such that for a constant that , Hence, we find an upper bound for the integral of (4.22) as follows: such that Finally, with respect to the inequality (4.22) we
obtain which by applying supremum over , we get for all Thus, is uniformly bounded on .
In this stage, now we show that is an equicontinuous set in . Let , with . In this respect, we have for all If we set and , then by Laplace transform definition and (4.23), for and we can write
Therefore, we have Also, by (4.24) we have Finally, with respect to (4.28), (4.30), and (4.31) we obtain Hence, for , there exists such that if , then for all and all we have which implies that is an
equicontinuous set in and according to the Arzela-Ascoli Theorem 3.4, is compact. Therefore is continuous and compact operator on into and is a completely continuous operator on and the condition (b)
from the Theorem 3.1 is held.
For checking the condition (c) of Theorem 3.1, let and be arbitrary such that . Then, by hypothesis we get Therefore, which by taking a supremum over , we obtain Thus, the condition (c) of Theorem
3.1 is satisfied. If we consider the hypothesis (d) of Theorem 3.1 is satisfied.
Hence, all the conditions of Theorem 3.1 are satisfied and therefore the operator equation has a solution in . As a result, the DOFHDE (2.3) has a solution defined on and proof is completed.
5. Some Special Cases
In this section, we discuss some special cases of the density function for the DOFHDE (2.3) and we find the operators and which introduce in Theorem 4.2. In proof of Lemma 4.1, the following equation
is equivalent to the DOFHDE (2.3), such that,
(1) Let . Then we have Thus, where is the exponential integral defined by Therefore, for this case, the DOFHDE (2.3) is and it is equivalent to the following equation: such that the operators and in
Theorem 4.2 are
(2) Two-term equation: Let , which also, and are nonnegative constant coefficients and is the Dirac delta function. Then by the following inverse Laplace transform [2]: where is the Mittag-Leffler
function in two parameters we get the DOFHDE (2.3) as It is equivalent to the following equation such that the operators and in Theorem 4.2 are
(3) Three-term equation: Let, which and , , and are nonnegative constant coefficients and is the Dirac delta function. Then, by virtue of [2] where and is the th derivative of the Mittag-Leffler
function in two parameters We get the DOFHDE (2.3) as It is equivalent to the following equation: such that the operators and in Theorem 4.2 are
(4) General Case: -term equation: suppose that which and for are nonnegative constant coefficients. Therefore, by the following inverse Laplace transform [2], we have where Thus, for this case, the
DOFHDE (2.3) is It is equivalent to the following equation: and the operators and in Theorem 4.2 are given by
6. Conclusions
In this paper, we introduced a new class; the fractional hybrid differential equations of distributed order and stated an existence theorem for it. We pointed out a fixed point theorem in the Banach
algebra for the existence of solution. Basis of this theorem is on finding two operator equations which in special cases for multiterms fractional hybrid equations are given with respect to the
derivatives of Mittag-Leffler function.
The authors have been partially supported by the Center of Excellence for Mathematics, Shahrekord University.
1. A. A. Kilbas, H. M. Srivastava, and J. J. Trujillo, Theory and Applications of Fractional Differential Equations, vol. 204 of North-Holland Mathematics Studies, Elsevier, Amsterdam, The
Netherlands, 2006.
2. I. Podlubny, Fractional Differential Equations, vol. 198 of Mathematics in Science and Engineering, Academic Press, San Diego, Calif, USA, 1999.
3. B. C. Dhage, “Nonlinear quadratic first order functional integro-differential equations with periodic boundary conditions,” Dynamic Systems and Applications, vol. 18, no. 2, pp. 303–322, 2009.
View at Zentralblatt MATH
4. B. C. Dhage, “Quadratic perturbations of periodic boundary value problems of second order ordinary differential equations,” Differential Equations & Applications, vol. 2, no. 4, pp. 465–486,
2010. View at Publisher · View at Google Scholar · View at Zentralblatt MATH
5. B. C. Dhage and B. D. Karande, “First order integro-differential equations in Banach algebras involving Caratheodory and discontinuous nonlinearities,” Electronic Journal of Qualitative Theory of
Differential Equations, vol. 21, 16 pages, 2005. View at Zentralblatt MATH
6. B. Dhage and D. O'Regan, “A fixed point theorem in Banach algebras with applications to functional integral equations,” Functional Differential Equations, vol. 7, no. 3-4, pp. 259–267, 2000. View
at Zentralblatt MATH
7. B. C. Dhage, S. N. Salunkhe, R. P. Agarwal, and W. Zhang, “A functional differential equation in Banach algebras,” Mathematical Inequalities & Applications, vol. 8, no. 1, pp. 89–99, 2005. View
at Publisher · View at Google Scholar · View at Zentralblatt MATH
8. P. Omari and F. Zanolin, “Remarks on periodic solutions for first order nonlinear differential systems,” Unione Matematica Italiana. Bollettino. B. Serie VI, vol. 2, no. 1, pp. 207–218, 1983.
View at Zentralblatt MATH
9. B. C. Dhage and V. Lakshmikantham, “Basic results on hybrid differential equations,” Nonlinear Analysis. Hybrid Systems, vol. 4, no. 3, pp. 414–424, 2010. View at Publisher · View at Google
Scholar · View at Zentralblatt MATH
10. B. C. Dhage, “Theoretical approximation methods for hybrid differential equations,” Dynamic Systems and Applications, vol. 20, no. 4, pp. 455–478, 2011. View at Zentralblatt MATH
11. Y. Zhao, S. Sun, Z. Han, and Q. Li, “Theory of fractional hybrid differential equations,” Computers & Mathematics with Applications, vol. 62, no. 3, pp. 1312–1324, 2011. View at Publisher · View
at Google Scholar · View at Zentralblatt MATH
12. M. Caputo, Elasticita e Dissipazione, Zanichelli, Bologna, Italy, 1969.
13. M. Caputo, “Mean fractional-order-derivatives differential equations and filters,” Annali dell'Università di Ferrara. Nuova Serie. Sezione VII. Scienze Matematiche, vol. 41, pp. 73–84, 1995. View
at Zentralblatt MATH
14. M. Caputo, “Distributed order differential equations modelling dielectric induction and diffusion,” Fractional Calculus & Applied Analysis, vol. 4, no. 4, pp. 421–442, 2001. View at Zentralblatt
15. B. C. Dhage, “On a fixed point theorem in Banach algebras with applications,” Applied Mathematics Letters, vol. 18, no. 3, pp. 273–280, 2005. View at Publisher · View at Google Scholar · View at
Zentralblatt MATH
16. A. V. Bobylev and C. Cercignani, “The inverse Laplace transform of some analytic functions with an application to the eternal solutions of the Boltzmann equation,” Applied Mathematics Letters,
vol. 15, no. 7, pp. 807–813, 2002. View at Publisher · View at Google Scholar · View at Zentralblatt MATH
17. W. Rudin, Real and Complex Analysis, McGraw-Hill, New York, NY, USA, 1966.
18. G. B. Folland, Real Analysis: Modern Techniques and Their Applications, Pure and Applied Mathematics, John Wiley & Sons, New York, NY, USA, 2nd edition, 1999.
19. B. Davis, Integral Transforms and Their Applications, Springer, New York, NY, USA, 3rd edition, 2001.
|
{"url":"http://www.hindawi.com/journals/aaa/2012/163648/","timestamp":"2014-04-17T21:39:40Z","content_type":null,"content_length":"669537","record_id":"<urn:uuid:e5fab715-17f9-404d-9508-c6eeb98331c1>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00212-ip-10-147-4-33.ec2.internal.warc.gz"}
|
By Ben Crighton
BBC Radio 4, More Or Less
Early on in his research, Alex was confident he would find the solution
A 20-year-old Birmingham University student has won a $25,000 (£12,500) maths prize for proving that a certain type of very simple computer, given enough time and memory, could solve any problem that
a supercomputer could solve.
Alex Smith, an electrical and computer engineering undergraduate, first heard about the prize in an internet chatroom earlier this year.
He spent his summer holidays trying to crack it, and insists it was the challenge, not the money, that appealed to him.
"It was just something to have a go at, and see how far I could go before getting stuck," he told BBC Radio 4's numbers programme, More Or Less.
Turing machines
The idea of a simple computer that could solve any problem came from the brilliant British mathematician Alan Turing in the 1930s.
Ever since Turing first proposed the idea of a universal machine, mathematicians have been competing to find the simplest one
Turing, who later played a vital role in deciphering secret German codes during World War II, was the first to realise that there could be a single "universal computer" that could be programmed to do
any task.
Instead of having a different machine for each task, you could have just one piece of hardware and simply change the software.
Turing machines are not real computers, but hypothetical ones, arranged by "state" and "colour".
Ever since Turing first proposed the idea of a universal machine, mathematicians have been competing to find the simplest one.
It was already known that a two-state, two-colour machine could not be universal, but in his 2002 book A New Kind of Science, another British mathematician Stephen Wolfram speculated that a Turing
machine with two states and three colours might be.
The problem was that Wolfram could not prove it. So in May this year he offered a cash prize to anyone who could.
'Obvious' solution
Alex Smith submitted his initial solution in just five weeks.
There was no actual moment of surprise because it became more and more obvious as time went on
Alex Smith, Wolfram Turing prize winner
He says: "It was maybe a couple of weeks until the second version and then maybe a few more weeks before all the loose ends were tied up."
Seemingly unfazed by winning Wolfram's prize, he says it became clear quite early on that he would solve the problem.
"There was no actual moment of surprise because it became more and more obvious as time went on.
"By the time it was announced, I already knew I'd won," he adds.
Maybe it is just as well that the two, three Turing machine is an imaginary one. If it actually existed, users might get frustrated. According to Alex, even a simple sum like two plus two could take
a while.
He says: "You wouldn't get it finished in a reasonable time at all. We're talking about millennia, maybe until the end of the Universe. I haven't worked it out exactly."
Practical applications?
So has Stephen Wolfram's quest for the simplest universal Turing machine been a purely academic exercise?
Wolfram has written on his personal blog that he thinks there are practical applications.
He says: "When we think of nanoscale computers, we usually imagine carefully engineering them to mimic the architecture of the computers we know today... [but] we don't have to carefully build things
up with engineering.
"We can just go out and search in the computational universe, and find things like universal computers - that are simple enough that we can imagine making them out of molecules."
Back in Birmingham, Alex is not sure what he will do with his $25,000 prize.
He says: "I've just put it in the bank for the time being, and I'm going to leave most of it there until something comes up."
More Or Less will be broadcast on Monday, 26 November, 2007, on BBC Radio 4 at 1630 GMT and is presented by Tim Harford. You can also listen again to the programme after it has been broadcast from
the More Or Less website. You can also subscribe to the podcast
|
{"url":"http://news.bbc.co.uk/2/hi/programmes/more_or_less/7112637.stm","timestamp":"2014-04-20T22:07:39Z","content_type":null,"content_length":"55956","record_id":"<urn:uuid:6f81e395-1f62-444e-b539-9cd0c60e2599>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00004-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Re: [TowerTalk] Feedpoint Impedance
Ward Silver wrote:
>>>> Radiation resistance should be fixed..
>>> Oh, right...duh.
>>> The current equation will do - but the radiator (I may have confused
>>> things by referring to it as a "dipole") may be anywhere between one-half
>>> and one wavelength long, so the current won't be a simple cosine
>>> function. It *will* be zero at the ends :-)
>> I think Orfanidis's book has the equation you're looking for.. actually
>> several approximations, of varying fidelity..
>> I'm pretty sure there's some standard assumptions of the current
>> distribution shape (ranging from uniform for very short, to triangular to
>> sinusoidal to something else), possibly broken up into segments.
>> What sort of accuracy are you looking for?
> 20% ought to do it. There will be significant variation depending on
> proximity of ground, type of ground, and so forth. At this point, I'm just
> doing a feasibility study.
> 73, Ward N0AX
Here you go...
Current on a thin wire of length 2*h
k is propagation constant (i.e. pi gives you a half wavelength dipole)
z is the distance from the center
I(0) is the current at the center
I(z) = I(0) * sin( k *(h-abs(z)))/sin(k*h)
(Eq 21.4.2 from Orfanidis's book)
Seems to match (by eye) what I got from a series of NEC models..
I haven't looked into the off center fed aspect...This is just the
current distribution on a center fed wire.
Later pages in Chapter 21 of the book talk about King's three term
approximation (which works up to about 1.25 lambda). Figure 21.6.1
shows a comparison between sinusoidal, King and numerical integration.
If you want real pain, he goes on to work out an exact kernel for the
solution of Pocklington's equation using, why yes, elliptic integrals.
(I'd just use his canned Matlab routines....)
Jim, W6RMK
TowerTalk mailing list
|
{"url":"http://lists.contesting.com/_towertalk/2008-07/msg00799.html?contestingsid=1pns4glun795lim7o181373ek6","timestamp":"2014-04-16T10:33:29Z","content_type":null,"content_length":"10106","record_id":"<urn:uuid:f78130c9-bac4-4c81-aea6-64eb01beff60>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00359-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Getting Started with Gravity and Magnetic Data
Using and Understanding Gravity Data
Introduction | Gravity | Corrections | Density | References
The Role of Density
A knowledge of the density of various rock units is essential in gravity studies for several reasons. In fact, a major limitation in the quantitative interpretation of gravity data is the need to
estimate density values and to make simplifying assumptions about the distribution of density with the Earth. The earth is complex and the variations in density in the upper few kilometers of the
crust are large. The use of a single average density in Bouguer and terrain corrections is thus a major source of uncertainty in the calculation of values for these corrections. This fact is often
overlooked as we worry about making very precise measurements of gravity and then calculate anomaly values whose accuracy is limited by our lack of detailed information on density.
A basic step in the reduction of gravity measurements to interpretable anomaly values is calculation of the Bouguer correction that requires an estimation of density. At any specific gravity
station, one can think of the rock mass whose density we seek as being a slab extending from the station to the elevation of lowest gravity reading in the study area (Figure 1). If the lowest
station is above the datum (as is usually the case), each station shares a slab which extends from this lowest elevation down to the datum so this portion of the Bouguer correction is a constant
shared by all of the stations (Figure 1).
No one density value is truly appropriate, but when using the tradition approach it is necessary to use one value when calculating Bouguer anomaly values. When in doubt, the standard density value
for upper crustal rocks is 2.67 gm/cc.
In order to make terrain corrections, a similar density estimation is needed. However in this case, the value sought is the average density of the topography near a particular station. It is normal
to use the same value as was used in the Bouguer correction, but this need not necessarily be the case when the topography and geology is complex.
As mentioned in the discussion of the Bouguer correction, modern digital elevation data are making it possible to construct realistic models of topography that include laterally varying density.
Although preferable, this approach still requires the estimation of density of the column of rock between the Earth's surface and the reduction datum. From a traditional point of view, this
approach represents a merging of the Bouguer and terrain corrections that are then applied to Free Air anomaly values. One can also extend this approach to greater depths and vary the density
laterally, and consider it a geologic model of the upper crust that attempts to predict Free Air anomaly values. The Bouguer and terrain corrections then become unnecessary since the topography
simply becomes part of the geologic model which is being constructed.
When one begins to construct computer models based on gravity anomalies, densities must be assigned to all of the geologic bodies that make up the model. Here one needs to use all of the data at
hand to come up with these density estimates. Geologic mapping, drill hole data, measurements on samples from the field, etc. are examples of information one might use estimate density.
Measurements of Density
Density can be measured (or estimated) in many ways. In general, in situ measurements are better because they produce average values for fairly large bodies of rock that are in place. With
laboratory measurements, one must always worry about the effects of porosity, temperature, saturating fluids, pressure, and small sample size as factors that might make the values measured
unrepresentative of rock in place.
Many tabulations of typical densities for various rock types have been compiled (e.g., Telford et al., 1990). Thus one can simply look up the density value expected for a particular rock type
(Table 1).
Samples can be collected during field work and brought back to the laboratory for measurement. The density of cores and cuttings available from wells in the region of interest can be also be
Most wells that have been drilled during the exploration for petroleum, minerals, and water are surveyed by down hole geophysical logging techniques, and these geophysical logs are a good source of
density values. Density logs are often available and can be used directly to estimate the density of rock units encountered in the subsurface. However in many areas, sonic logs (seismic velocity)
are more common than density logs. In these areas, the Nafe-Drake or a similar relationship between seismic velocity and density (e.g., Barton, 1986) can be used to estimate density values.
The borehole gravity meter is an excellent (but rare) source of density data. This approach is ideal because it infers density from down hole measurements of gravity. These measurements are thus in
situ averages based on a sizable volume of rock not just a small sample.
The Nettleton technique (Nettleton, 1939) involves picking a place where the geology is simple and measuring gravity across a topographic feature. One then calculates the Bouguer gravity anomaly
profile using a series of density values. If the geology is truly simple, the gravity profile will be flat when the right density value is used in the Bouguer and terrain corrections.
One can also use a group of gravity readings in an area and simply find the density value where the correlation between topography and Bouguer anomaly values disappears.
Enhancement of Gravity Anomalies (Filtering)
Gravity and magnetic anomalies whose wavelengths are long relative to the dimensions of the geologic objectives of a particular investigation are called regional anomalies. Because shallow geologic
features can have large lateral dimensions, one has to be careful, but regional anomalies are usually thought to reflect the effects of relatively deep features. Anomalies whose wavelengths are
similar to the dimensions of the geologic objectives of a particular investigation are called local anomalies. In the processing of gravity data, it is usually preferable to attempt to separate the
regional and local anomalies prior to interpretation. The regional anomaly can be estimated employing a variety of analytical techniques. Once this is done, the simple difference between the
observed gravity anomalies and the interpreted regional anomaly is called the residual anomaly.
The techniques used to separate regional and local gravity anomalies take many forms and can all be considered as filtering in a general sense (e.g., Blakley, 1996). Many of these techniques are
the same as those employed in enhancing traditional remote sensing imagery. The process usually begins with a data set consisting of Bouguer gravity anomaly or magnetic anomaly values, and the
first step is to produce an anomaly map such as the one shown in Figure 3.
The initial step in processing gravity and is the creation of a regular grid from the irregularly spaced data points. This step is required to even create a simple contour map, and in general
purpose software, it may not receive the careful attention it deserves since all subsequent results depend on the fidelity of this grid as a representation of the actual data. On land, gravity data
tend to be very irregularly spaced with areas of dense data and areas of sparse data. This irregularity is often due to topography in that mountainous areas generally have more difficult access
than valleys and plains. It may also be due to difficulty in gaining access to private property and sensitive areas. In the case of marine data, the measurements are dense along the tracks that the
ships follow with relatively large gaps between tracks. Airborne and satellite gravity measurements involve complex processing that is beyond the scope of this discussion. However once these data
are processed, the remainder of the analysis is similar to that of land and marine data.
There are a number of software packages that have been designed for the processing of gravity data, and several gridding techniques available in these packages. The minimum curvature technique
works well and is illustrative of the desire to honor individual data points as much as possible while realizing that gravity has an inherent smoothness due to the behavior of the Earth's gravity
field. In this technique, the surface of minimum curvature is fitted to the data points surrounding a particular grid node, and the value on this surface at the node is determined. One can
intuitively conclude that the proper grid interval is approximately the mean spacing between readings in an area. A good gridding routine should honor individual gravity values and not produce
spurious values in areas of sparse data. Once the gridding is complete, the grid interval (usually 100's of meters) can be thought of as being analogous to the pixel interval in remote sensing
The term filtering can be applied to any of the various techniques that attempt to separate anomalies on the basis of their wavelength and/or trend (e.g., Blakely, 1996). The term separate is a
good intuitive one because the idea is construct an image (anomaly map) and then use filtering to separate anomalies of interest to the interpreter from other interfering anomalies (see regional
versus local anomalies above). In fact, fitting a low order polynomial surface (3rd order is used often) to a grid to approximate the regional is a common practice Then subtracting the values
representing this surface from the original grid values creates a residual grid that represents the local anomalies.
In gravity studies, the familiar concepts of high pass, low pass, and bandpass filters are applied in either the frequency or spatial domains. In Figure 4 and Figure 5 for example, successively
longer wavelengths have been removed from the Bouguer anomaly map shown in Figure 3. At least to some extent, these maps enhance anomalies due to features in the upper crust at the expense of
anomalies due to deep-seated features.
Directional filters are also used to select anomalies based on their trend. In addition, a number of specialized techniques have been developed for the enhancement of maps of anomalies based on the
physics of the gravity and magnetic fields and are discussed below. The various approaches to filtering can be sophisticated mathematically, but the choice of filter parameters or design of the
convolution operator always involves a degree of subjectivity. It is useful to remember that the basic steps in enhancing a map of gravity anomalies in order to emphasize features in the Earth's
crust are: 1) First remove a conservative regional trend from the data. The choice of regional is usually not critical but may greatly help in interpretations (e.g., Simpson et al., 1986). Because
the goal is to remove long wavelength anomalies, this step consists of applying a gentle high pass filter. Over most continental areas, Bouguer anomaly values are large negative numbers, thus the
usual practice of padding the edges of a grid with zeros prior to applying a Fourier transform and filtering will create large edge effects. One way to avoid this effect is to first remove the mean
in the data and gridding an area larger than the image to be displayed. However in areas where large regional anomalies are present, it may be best to fit a low order polynomial surface to the
gridded values, and then continue the processing with the residual values with respect to this surface. 2) One can then apply additional filters as needed to remove unwanted wavelengths or trends.
In addition to the usual wavelength filters, a variety of specialized filters have been developed for gravity data that include:
Upward continuation - A process (low pass filter) by which a map simulating the result if the survey had been conducted on a plane at a higher elevation is constructed. This process is based on the
physical fact that the further the observation is from the body causing the anomaly, the broader the anomaly. It is mathematically stable because it involves extracting long wavelength anomalies
from short wavelength ones.
Downward continuation - A high pass filter by which a map simulating the result if the survey had been conducted on a plane at a lower elevation is constructed. In theory, this process enhances
anomalies due to relatively shallow sources. However, care should be taken when applying this process to anything but very clean, densely-sampled data sets, because of the potential for amplifying
noise due to mathematical instability.
Vertical derivatives - In this technique, the vertical rate of change of the gravity or magnetic field is estimated (usually the 1st or 2nd derivative). This is a specialized high pass filter, but
the units of the resulting image are not milligals or nanoteslas and they cannot be modeled without special manipulations of the modeling software. As in the case of downward continuation, care
should be taken when applying this process to anything but very clean data sets because of the potential for amplifying noise. This process has some similarities to non-directional edge enhancement
techniques used in the analysis of remote sensing images.
Strike filtering - This technique is directly analogous to the directional filters used in the analysis of remote sensing images. In gravity processing, the goal is to remove the effects of some
linear trend with a particular azimuth. For example in much of the central U.S., the ancient processes that formed the Earth's crust created a northeast trending structural fabric that is reflected
in gravity and magnetic maps in the area and can obscure other anomalies. Thus, one might want to apply a strike-reject filter which deletes linear anomalies whose trends (azimuths) range from
N30oE to N60oE.
Horizontal gradients - In this technique, faults and other abrupt geologic discontinuities (edges) are detected based on the high horizontal gradients that they produce. Simple difference equations
are usually employed to calculate the gradients along the rows and columns of the grid. A linear maximum in the gradient is interpreted as a discontinuity such as a fault. These features are easy
to extract graphically to be used as an overlay on the original gravity or magnetic map or on products such as Landsat images.
Computer Modeling
In most applications of gravity techniques, the data processing and qualitative interpretation of maps is followed by a quantitative interpretation in which a profile (or grid) of anomaly values is
modeled by constructing an earth model whose calculated gravitational and/or magnetic effect closely approximates the observed profile (or grid). Modeling of profiles of anomaly values has become
common place and should be considered a routine part of any investigation of the subsurface. For example, a model for a profile across Figure 3 is shown in Figure 6. In its simplest form, the
process of constructing an earth model is one of trial and error iteration in which one's knowledge of the local geology, data from drill holes, and other data such as seismic surveys are valuable
constraints in the process. As the modeling proceeds, one must make choices concerning density and geometry of the bodies of rock that make up the model. In the absence of any constraints (which is
rare), the process is subject to considerable ambiguity since there will be many subsurface structural configurations which can fit the observed data. With some constraints, one can usually feel
that the process has yielded a very useful interpretation of the subsurface. However, ambiguities will always remain just as they do in all other geophysical techniques aimed at studying the
structure of the subsurface.
There are countless published articles on possible mathematical approaches to the modeling. However, for the two dimensional case (i.e. the modeling of profiles drawn perpendicular the structural
grain in the area of interest) a very flexible and easy approach is used almost universally. This technique is based on the work of Hubbert (1948), Talwani et al. (1959), and Cady (1980), although
many groups have written their own versions of this software with increasingly effective graphical interfaces and output. The original computer program was published by Talwani et al. (1959), and
Cady (1980) was among the first to introduce an approximation (called 2 1/2 D) that allows for a certain degree of three dimensionality. In the original formulation of Hubbert (1948), the earth
model was composed of bodies of polygonal cross section that extended to infinity in and out of the plane of the profile of gravity readings. In the 2 1/2 D formulation, the bodies can be assigned
finite strike-lengths in both directions. Today, anyone can have a 2 1/2 D modeling running on their PC.
The use of three dimensional approaches is not as common as it should be because of the complexity of constructing and manipulating the earth model. However, there are many 3-D approaches available
(e. g., Blakely, 1996). As discussed above, a full 3-D calculation of the gravitational attraction of the topography using a modern digital terrain model is the ultimate way to calculate Bouguer
and terrain corrections as well as construct earth models. This type of approach will be employed more often in the future as terrain data and the computer software needed becomes more readily
Gravity modeling is an ideal field in which to apply formal inverse techniques. This is a fairly complex subject mathematically. However, the idea is to let the computer automatically make the
changes in a starting earth model that the interpreter constructs. Thus, the interpreter is saved from tedious "tweaking" of the model to make the observed and calculated values match. In addition,
the thinking is that the computer will be unbiased relative to a human. The process can also give some formal estimates of the uncertainties in the interpretation. Inverse modeling packages are
readily available and can also run on PCs.
|
{"url":"http://research.utep.edu/Default.aspx?PageContentID=3949&tabid=38186","timestamp":"2014-04-16T07:14:28Z","content_type":null,"content_length":"35149","record_id":"<urn:uuid:d61b3ab6-5442-4427-a583-47904b02ba37>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00359-ip-10-147-4-33.ec2.internal.warc.gz"}
|
what math skills and other skills do i need to make and program robots?
New lower price for Axon II ($78) and Axon Mote ($58).
Topic: what math skills and other skills do i need to make and program robots? (Read 3294 times)
0 Members and 1 Guest are viewing this topic.
• Beginner
• Posts: 1
• Helpful? 0
I have only read a bit about them so far but I would like to know what else I should know.
• Contest Winner
• Supreme Robot
• Posts: 3,653
• Helpful? 21
• narobo.com
for your first robots barely any math is needed - just multiplication and basic algebra . Line following , obstacle avoiding , even talking robots do not need big complicated math.
For advanced robots you need complicated math - by advanced I mean bipeds, flying robots, like that
What robot do you plan on building?
Check out the Roboduino, Arduino-compatible board!
Link: http://curiousinventor.com/kits/roboduino
• Jr. Member
• Posts: 16
• Helpful? 0
what kind of math is needed in a biped liek a 6 or 4 servo? like when would you need to use your known math concepts and incorporate them into your biped?
• Contest Winner
• Supreme Robot
• Posts: 3,653
• Helpful? 21
• narobo.com
what kind of math is needed in a biped liek a 6 or 4 servo? like when would you need to use your known math concepts and incorporate them into your biped?
what do you mean what kind of math. Think for a second how on earth you can balance a biped , which would ordinarily tip over if not for the perfect synchronization of movements from the servos. Now
how do you calculate those precise movements? or an even better example is the Kalman filter which is almost always necessary with bipeds. Google it , and be careful not to get dizzy from looking at
all the calculations
Check out the Roboduino, Arduino-compatible board!
Link: http://curiousinventor.com/kits/roboduino
• Robot Overlord
• Posts: 255
• Helpful? 0
Simple answer for ya:
Basic Robots (wheeled wandering robots) - Algebra
Hobby Level Advanced (basic bipeds, trackers, etc) - Trig/Precalc
Advanced Robots (true bipeds, flyers, etc) - Calculus
blog: www.iamwhen.com
Saving the world from humanity one robot at a time.
• Jr. Member
• Posts: 38
• Helpful? 0
I did a lot of calculus and it is really of not so much use.
A little bit of statistics and probability theory is useful. A little bit of discrete maths and logic. It is better to really study and take on board the basics of a few maths topics, rather than
get too deep into one topic. You can opt to put together simple maths ideas in a creative way. There are some simple ideas that are really amazing and highly useful like the Chebyshev inequality
that are hardly ever used because people get lost in the complexity of maths!
• Supreme Robot
• Posts: 953
• Helpful? 5
• cooll
assumptions,equations,calculations...are the mathematics knowledge u need to implement... for simpler robots its all abt assumptions and deriving the right threshold..as ur robot gets complicated so
does ur algorithm and ur equation which control it... for image processing...geometry is everything... triangulation,orientation,centre of gravity... etc are needed to be accrurate and efficient..
for UAV ...gps navigation...shortest path,angulr orientation...etc are needed..
the general scope of mathematics that can be used in robotics is as huge as mathemtics itself....but u can build a robot by using some of te most basic mathematical concepts lik addition,subtraction
it all depends on what ur building ad how ur building it...if u can build a robot without any maths then so be it...if u can build a robot using the most complex of equations nd theories then also
so be it... ..
JAYDEEP ...
"IN THE END IT DOESNT EVEN MATTER"
• Supreme Robot
• Posts: 742
• Helpful? 23
• Nuclear Engineer · Roboticist
You probably don't need quantum chromatography
I ♥ ☢
• Supreme Robot
• Posts: 519
• Helpful? 4
• Mmmm... Plasma
I did a lot of calculus and it is really of not so much use.
Oh my. Isaac Newton is rolling over in his grave.
I have only read a bit about them so far but I would like to know what else I should know.
what kind of math is needed in a biped liek a 6 or 4 servo? like when would you need to use your known math concepts and incorporate them into your biped?
I did a lot of calculus and it is really of not so much use.
|
{"url":"http://www.societyofrobots.com/robotforum/index.php?topic=4809.msg37737","timestamp":"2014-04-19T18:21:41Z","content_type":null,"content_length":"64596","record_id":"<urn:uuid:e166d963-afc9-4ecb-86cc-e46440424fd6>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00330-ip-10-147-4-33.ec2.internal.warc.gz"}
|
A rational point in the scheme of pointed degree n rational functions [0912.2227]
up vote 1 down vote favorite
The following question is related to "Remark 2.2" in Christophe Cazanave's paper "Algebraic homotopy classes of algebraic functions". I decided to add the arxiv article-id to the questions title to
invite other people who like to study this article to do the same. My hope is that this will lead to a culture of discussing arxiv articles on the overflow.
Question: Let $F_n$ be the open subscheme of $\mathbb{A}^{2n}=\mathrm{Spec}(k[a_{0},\ldots,a_{n-1},b_{0},\ldots,b_{n-1}])$ complementary to the hypersurface of equation $res_{n,n}(X^{n}+a_{n-1}X^
{n-1}+\ldots+a_{0},b_{n-1}X^{n-1}+\ldots+b_{0})$. Let $R$ be a ring. The claim is that an $R$-point of $F_{n}$ is a pair $(A,B)$ of polynomials of $R[X]$, where $A$ is monic of degree $n$, $B$ is of
degree strictly less than $n$ and the scalar $res_{n,n}(A,B)$ is invertible. How can I see that a morphism $\mathrm{Spec}(R)\rightarrow F_n$ gives (and is the same as) a pair of polynomials in $R[X]
1 Isn't this more or less by definition? To say that F_n is complementary to the described hypersurface is precisely to say that F_n represents the described functor. – Qiaochu Yuan Jan 12 '10 at
Just a suggestion: maybe put something like [Cazanave, 0912.2227] in the title? I have no idea who Cazanave is, but I suspect that if I had read this paper I'd have a better chance of remembering
the author's name than the arXiv number. – Michael Lugo Jan 12 '10 at 21:29
I think that the statement is true, so it should be more or less by definition. I might have the wrong definitions?! Could you make this a little more explicit, please? – user2146 Jan 13 '10 at
Michael, I think [Cazanave, 0912.2227] would be too long. If not for this paper/question, then for other papers with more authors. I wanted to make it possible to search the MO for a certain
article-id and to find all discussions related to this article. – user2146 Jan 13 '10 at 9:29
add comment
1 Answer
active oldest votes
I'm not so good on the scheme-theoretic language, so let me embed $F_n$ as the affine variety $\text{res}\_{n,n}(X^n + ..., b_{n-1} X^{n-1} + ...) y = 1$ one dimension up. Then a
up vote 3 morphism $k[a_0, ... a_{n-1}, b_0, ... b_{n-1}, y]/(\text{stuff}) \to R$ is precisely (assuming that Cazanava means either $k = \mathbb{Z}$ or $R$ a $k$-algebra) a choice, for each
down vote variable $a_i, b_i, y$, of an element of $R$ subject to the condition that the resultant times $y$ is equal to $1$, i.e. the resultant is invertible in $R$.
Let me denote the resultant just by $f$ and let $R$ be a $k$-algebra. A morphism $\mathrm{spec}(R)\rightarrow F_{n}=\mathrm{spec}(k[...]_{f})$ is the same as a $k$-alg morphism $k
[...]\rightarrow R$ such that the image of $f$ is invertible and this morphism gives (and is determined by) the values in $R$ for the variables $a_i,b_i$. I was stupidly fixed to a
particular stupid idea and that made me unable to see this elementary statement works. Your post helped me to correct my idea, thanks. – user2146 Jan 14 '10 at 14:14
add comment
Not the answer you're looking for? Browse other questions tagged ag.algebraic-geometry or ask your own question.
|
{"url":"http://mathoverflow.net/questions/11583/a-rational-point-in-the-scheme-of-pointed-degree-n-rational-functions-0912-2227/11656","timestamp":"2014-04-18T03:17:31Z","content_type":null,"content_length":"57753","record_id":"<urn:uuid:2086d12e-db1f-4e85-b35a-072ea8fa07e7>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00369-ip-10-147-4-33.ec2.internal.warc.gz"}
|
I am trying to find out two unknown values from one equation
May 21st 2008, 08:10 PM
I am trying to find out two unknown values from one equation
Does anyone know how to find two unknown values in the one equation it is for a year twelve maths b assignment. I am trying to find out what c and k are in this equation (by the way c and k are
T=TS + ce^(-kt)
Where TS= Temperature of surroundings which equals 23 degrees
T= Temperature of the body at a time (t)
t= Time
c & k= constants
Thank you for your help.
May 21st 2008, 08:24 PM
Does anyone know how to find two unknown values in the one equation it is for a year twelve maths b assignment. I am trying to find out what c and k are in this equation (by the way c and k are
T=TS + ce^(-kt)
Where TS= Temperature of surroundings which equals 23 degrees
T= Temperature of the body at a time (t)
t= Time
c & k= constants
Thank you for your help.
You need data, say some times $t_1, .. t_n$, and the corresponding tempratures $T(t_1), .. T(t_n)$.
May 21st 2008, 08:33 PM
I have Values for T and t but I don't know how to find the 2 unknowns in the 1 eqn
thanks ronL, in class we got the time and temperature which is:
t= 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14
T= 93 75 62 51 45 41 37 35 33 31 30 29 28 27.8 27
But even if I substitute those numbers in I still have two unknowns left, and I don't know what to do after I substitute T and t in.
Thank you for your help
May 21st 2008, 08:39 PM
Does anyone know how to find two unknown values in the one equation it is for a year twelve maths b assignment. I am trying to find out what c and k are in this equation (by the way c and k are
T=TS + ce^(-kt)
Where TS= Temperature of surroundings which equals 23 degrees
T= Temperature of the body at a time (t)
t= Time
c & k= constants
Thank you for your help.
Check your problem again...these types of problems usually come with initial conditions such as
when t= y=
May 21st 2008, 09:00 PM
The question in full
ok umm.. this is the question in full:
"Newton's Law of Cooling states that the rate of change of temperature of a body is proportional to the difference in temperature between the body and the temperature of its surrounds."
(I don't actually understand the above sentence is that what you mean by t= y=)
"The formula fo Newton's law of cooling is:
T=TS + ce^(-kt)
Where TS= Temperature of surroundings which equals 23 degrees
T= Temperature of the body at a time (t)
t= Time
c & k= constants"
I then have made a graph of time (t) verse temperature (T) when t=x and T=y
It then says to
"Determine how well Newton's Law of Cooling models your experimental data. Your solution should include the use of a spreadsheet and a comparative graph."
So I thought that it meant that I had to come up with values for the constants so that I could model the formula against the data I obtained in a graph.
Thanks for your help.
May 22nd 2008, 06:23 AM
thanks ronL, in class we got the time and temperature which is:
t= 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14
T= 93 75 62 51 45 41 37 35 33 31 30 29 28 27.8 27
But even if I substitute those numbers in I still have two unknowns left, and I don't know what to do after I substitute T and t in.
Thank you for your help
You wish to fit a curve of the form:
to this data.
$(T(t)-T_s)=c e^{-kt}$
Now take logs to get:
so plot $t$ against $u(t)=\log(T(t)-T_s)$ , this should be a straight line and its slope will be $-k$, and the intercept $\log(c)$.
|
{"url":"http://mathhelpforum.com/pre-calculus/39224-i-am-trying-find-out-two-unknown-values-one-equation-print.html","timestamp":"2014-04-21T06:43:07Z","content_type":null,"content_length":"13030","record_id":"<urn:uuid:6a037e10-b01d-48e6-a704-e4a03f073224>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00004-ip-10-147-4-33.ec2.internal.warc.gz"}
|
The International System of Units
3.1: The International System of Units
Created by: CK-12
Lesson Objectives
• Identify the seven base units of the International System of Units.
• Know the commonly used metric prefixes.
• Convert between three temperature scales.
• Understand volume and energy as combinations of SI Units.
• Distinguish between mass and weight.
Lesson Vocabulary
• energy
• International System of Units (SI)
• joule
• kinetic energy
• liter
• measurement
• scientific notation
• temperature
• weight
Check Your Understanding
Recalling Prior Knowledge
• Why is the metric system easier to use than the English system of units?
• How is scientific notation used to represent very large or very small numbers?
• What units are used to measure length, mass, and volume in the metric system?
The temperature outside is 52 degrees Fahrenheit. Your height is 67 inches and your weight is 145 pounds. All are examples of measurements. A measurement is a quantity that includes both a number and
a unit. If someone were to describe the height of a building as 85, that would be meaningless. 85 meters? 85 feet? Without a unit, a measurement does not convey enough information to be useful. In
this lesson, we will begin an exploration of the units that are typically used in chemistry.
SI Base Units
All measurements depend on the use of units that are well known and understood. The English system of measurement units (inches, feet, ounces, etc.) is not used in science because of the difficulty
in converting from one unit to another. The metric system is used because all metric units are based on multiples of 10, making conversions very simple. The metric system was originally established
in France in 1795. The International System of Units is a system of measurement based on the metric system. The acronym SI is commonly used to refer to this system and stands for the French term, Le
Système International d’Unités. The SI was adopted by international agreement in 1960 and is composed of seven base units (see Table below).
SI Base Units of Measurement
│ Quantity │SI Base Unit │Symbol│
│Length │meter │m │
│Mass │kilogram │kg │
│Temperature │kelvin │K │
│Time │second │s │
│Amount of a Substance │mole │mol │
│Electric Current │ampere │A │
│Luminous Intensity │candela │cd │
The first five units are frequently encountered in chemistry. The amount of a substance, the mole, will be discussed in detail in a later chapter. All other measurement quantities, such as volume,
force, and energy, can be derived from these seven base units.
You can learn more about base units at www.nist.gov/pml/wmd/metric/si-units.cfm.
Metric Prefixes and Scientific Notation
As stated earlier, conversions between metric system units are straightforward because the system is based on powers of ten. For example, meters, centimeters, and millimeters are all metric units of
length. There are 10 millimeters in 1 centimeter and 100 centimeters in 1 meter. Prefixes are used to distinguish between units of different size. Table below lists the most common metric prefixes
and their relationship to the central unit, which has no prefix. Length is used as an example to demonstrate the relative size of each prefixed unit.
SI Prefixes
│Prefix│Unit Abbreviation│Exponential Factor│ Meaning │ Example │
│giga │G │10^9 │1,000,000,000 │1 gigameter (Gm) = 10^9 m │
│mega │M │10^6 │1,000,000 │1 megameter (Mm) = 10^6 m │
│kilo │k │10^3 │1000 │1 kilometer (km) = 1000 m │
│hecto │h │10^2 │100 │1 hectometer (hm) = 100 m │
│deka │da │10^1 │10 │1 dekameter (dam) = 10 m │
│ │ │10^0 │1 │1 meter (m) │
│deci │d │10^-1 │1/10 │1 decimeter (dm) = 0.1 m │
│centi │c │10^-2 │1/100 │1 centimeter (cm) = 0.01 m │
│milli │m │10^-3 │1/1000 │1 millimeter (mm) = 0.001 m │
│micro │µ │10^-6 │1/1,000,000 │1 micrometer (µm) = 10^-6 m │
│nano │n │10^-9 │1/1,000,000,000 │1 nanometer (nm) = 10^-9 m │
│pico │p │10^-12 │1/1,000,000,000,000│1 picometer (pm) = 10^-12 m │
There are more prefixes, although some of them rarely used. Have you ever heard of a zeptometer? You can learn more about metric prefixes at www.nist.gov/pml/wmd/metric/prefixes.cfm.
Table above introduces a very useful tool for working with numbers that are either very large or very small. Scientific notation is a way to express numbers as the product of two numbers: a
coefficient and the number 10 raised to a power. As an example, the distance from Earth to the Sun is about 150,000,000,000 meters – a very large distance indeed. In scientific notation, the distance
is written as 1.5 × 10^11 m. The coefficient is 1.5 and must be a number greater than or equal to 1 and less than 10. The power of 10, or exponent, is 11. See Figure below for two more examples of
scientific notation. Scientific notation is sometimes referred to as exponential notation.
The Sun is very large and very distant, so solar data is better expressed in scientific notation. The mass of the Sun is 2.0 × 10^30 kg and its diameter is 1.4 × 10^9 m.
Very small numbers can also be expressed using scientific notation. The mass of an electron in decimal notation is 0.000000000000000000000000000911 grams. In scientific notation, the mass is
expressed as 9.11 × 10^-28 g. Notice that the value of the exponent is chosen so that the coefficient is between 1 and 10.
Typical Units in Chemistry
Length and Volume
The SI basic unit of length, or linear measure, is the meter (m). All measurements of length may be made in meters, though the prefixes listed in Table above will often be more convenient. The width
of a room may be expressed as about 5 meters (m), whereas a large distance such as the distance between New York City and Chicago is better expressed as 1150 kilometers (km). Very small distances can
be expressed in units such as the millimeter or the micrometer. The width of a typical human hair is about 20 micrometers (µm).
Volume is the amount of space occupied by a sample of matter (see examples in Figure below). The volume of a regular object can be calculated by multiplying its length by its width by its height.
Since each of those is a linear measurement, we say that units of volume are derived from units of length. The SI unit of volume is the cubic meter (m^3), which is the volume occupied by a cube that
measures 1 m on each side. This very large volume is not very convenient for typical use in a chemistry laboratory. A liter (L) is the volume of a cube that measures 10 cm (1 dm) on each side. A
liter is thus equal to both 1000 cm^3 (10 cm × 10 cm × 10 cm) and to 1 dm^3. A smaller unit of volume that is commonly used is the milliliter (mL). A milliliter is the volume of a cube that measures
1 cm on each side. Therefore, a milliliter is equal to a cubic centimeter (cm^3). There are 1000 mL in 1 L, which is the same as saying that there are 1000 cm^3 in 1 dm^3.
(A) A typical water bottle is 1 liter in volume. (B) These dice measure 1 cm on each side, so each die has a volume of 1 cm^3 or 1 mL. (C) Volume in the laboratory is often measured with graduated
cylinders, which come in a variety of sizes.
You can watch a video about measuring volume using graduated cylinders at http://www.benchfly.com/video/153/how-to-use-graduated-cylinders/.
Mass and Weight
Mass is a measure of the amount of matter that an object contains. The mass of an object is made in comparison to the standard mass of 1 kilogram. The kilogram was originally defined as the mass of 1
L of liquid water at 4°C (the volume of a liquid changes slightly with temperature). In the laboratory, mass is measured with a balance (Figure below), which must be calibrated with a standard mass
so that its measurements are accurate.
An analytical balance in the laboratory takes very sensitive measurements of mass, usually in grams.
You can watch a short video about using an analytical balance at http://www.benchfly.com/video/54/how-to-weigh-small-amounts/.
Other common units of mass are the gram and the milligram. A gram is 1/1000th of a kilogram, meaning that there are 1000 g in 1 kg. A milligram is 1/1000th of a gram, so there are 1000 mg in 1 g.
Mass is often confused with the term weight. Weight is a measure of force that is equal to the gravitational pull on an object. The weight of an object is dependent on its location. On the moon, the
force due to gravity is about one sixth that of the gravitational force on Earth. Therefore, a given object will weigh six times more on Earth than it does on the moon. Since mass is dependent only
on the amount of matter present in an object, mass does not change with location. Weight measurements are often made with a spring scale by reading the distance that a certain object pulls down and
stretches a spring.
Temperature and Energy
Touch the top of the stove after it has been on and it feels hot. Hold an ice cube in your hand and it feels cold. Why? The particles of matter in a hot object are moving much faster than the
particles of matter in a cold object. An object’s kinetic energy is the energy due to motion. The particles of matter that make up the hot stove have a greater amount of kinetic energy than those in
the ice cube (see Figure below). Temperature is a measure of the average kinetic energy of the particles in matter. In everyday usage, temperature is how hot or cold an object is. Temperature
determines the direction of heat transfer. When two objects at different temperatures are brought into contact with one another, heat flows from the object at the higher temperature to the object at
the lower temperature. This occurs until their temperatures are the same.
The glowing charcoal on the left is composed of particles with a high level of kinetic energy, while the snow and ice on the right are made of particles with much less kinetic energy.
Temperature can be measured with several different scales. The Fahrenheit scale is typically not used for scientific purposes. The Celsius scale of the metric system is named after Swedish astronomer
Anders Celsius (1701-1744). The Celsius scale sets the freezing point and boiling point of water at 0°C and 100°C, respectively. The distance between those two points is divided into 100 equal
intervals, each of which is referred to as one degree.
The Kelvin temperature scale is named after Scottish physicist and mathematician Lord Kelvin (1824-1907). It is based on molecular motion, with the temperature of 0 K, also known as absolute zero,
being the point where all molecular motion ceases. The freezing point of water on the Kelvin scale is 273.15 K, while the boiling point is 373.15 K. As can be seen by the 100 kelvin difference
between the two, a change of one degree on the Celsius scale is equivalent to the change of one kelvin on the Kelvin scale. Converting from one scale to another is easy, as you simply add or subtract
273 (Figure below).
$^\circ\text{C} &= \text{K} - 273 \\\text{K} &= \ ^\circ\text{C} + 273$
A comparison of the Kelvin (left) and Celsius (right) temperature scales. The two scales differ from one another by 273.15 degrees.
Energy is defined as the capacity to do work or to produce heat. As discussed previously, kinetic energy is one type of energy and is associated with motion. Another frequently encountered form of
energy is potential energy, which is a type of energy that is stored in matter. The joule (J) is the SI unit of energy and is named after English physicist James Prescott Joule (1818-1889). In terms
of SI base units, a joule is equal to a kilogram times a meter squared divided by a second squared (kg•m^2/s^2). A common non-SI unit of energy that is often used is the calorie (cal), which is equal
to 4.184 J.
Lesson Summary
• Measurements are critical to any field of science and must consist of a quantity and an appropriate unit. The International System of Units consists of seven base units.
• The metric system utilizes prefixes and powers of 10 to make conversions between units easy.
• Length (m), mass (kg), temperature (K), time (s), and amount (mol) are the base units that are most frequently used in chemistry. Quantities such as volume and energy can be derived from
combinations of the base units.
Lesson Review Questions
Reviewing Concepts
1. Give the SI base unit of measurement for each of the following quantities.
1. mass
2. length
3. time
4. temperature
2. Convert the following numbers into scientific notation.
1. 85,000,000
2. 0.00019
3. Put the following into decimal notation.
1. 8.72 × 10^-8
2. 3 × 10^4
4. Place the following units of mass in order from smallest to largest: g, kg, μg, g, pg, Mg, ng, cg, dg.
5. Explain what is wrong with the following statement: “This rock weighs 8 kilograms.”
6. What is absolute zero on the Celsius temperature scale?
7. Calculate the volume in mL of a cube that is 2.20 cm on each side.
8. A rectangular solid has a volume of 80 cm^3. Its length is 2.0 cm and its width is 8.0 cm. What is the height of the solid?
9. Convert the following Celsius temperatures to Kelvin.
1. 36°C
2. −104°
10. Convert the following Kelvin temperatures to degrees Celsius.
1. 188 K
2. 631 K
11. Temperature in degrees Fahrenheit can be converted to Celsius by first subtracting 32, then dividing by 1.8. What is the Celsius temperature outside on a warm 88°F day?
12. Two samples of water are at different temperatures. A 2 L sample is at 40°C, while a 1 L sample is at 70°C.
13. The particles of which sample have a larger average kinetic energy?
14. The water samples are mixed. Assuming no heat loss, what will be the temperature of the 3 L of water?
Further Reading / Supplemental Links
• E.A. Mechtly, International System of Units: Fundamental Constants and Conversion Factors. Stipes Pub Llc, 1977.
• SI Metric System: (http://www.simetric.co.uk/sibasis.htm)
• You can do an online metric system crossword puzzle at http://education.jlab.org/sciencecrossword/metricsystem_01.html.
• Take a museum tour of weights and measurements at museum.nist.gov/exhibits/ex1/index.html.
• You can view a comparison of the sizes of viruses, DNA, and biological molecules, along with information about DNA-based computers, at http://publications.nigms.nih.gov/chemhealth/cool.htm.
• Time is standardized by an atomic clock. You can view a short video about the Amazing Atomic Clock at http://video.pbs.org/video/2167682634.
• Test your metric skills by playing Metric System Hangman. There are two Hangman games.
Points to Consider
Conversions between units of the metric system are made easy because they are related by powers of ten and because the prefixes are consistent across various types of measurement (length, volume,
mass, etc.).
• What is the mass in grams of a 2.50 kg book?
• What is the length in cm of a field that is 0.65 km?
Files can only be attached to the latest version of None
|
{"url":"http://www.ck12.org/book/CK-12-Chemistry---Intermediate/r6/section/3.1/","timestamp":"2014-04-18T20:01:27Z","content_type":null,"content_length":"130338","record_id":"<urn:uuid:2a9b532a-454a-4b73-afef-0ae490c3e9de>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00254-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Narrow results:
Time Period
Additional Subjects
Type of Item
Unique Algebraic Remainders on the Sibṭ’s Commentary on the Yāsamīnīyya
This work is an elaboration of the commentary written by the Egyptian mathematician Sibṭ al-Māridīnī—i.e., a commentary on another commentary—on the urjūzah (versified introduction) to the science
of algebra, originally composed by the Berber mathematician and man of letters Abū Muḥammad ‘Abd-Allāh al-Ishbīlī al-Marrakushī, also known as Ibn al-Yāsamīn, who died in 1204 (600 AH). Al-Yāsamīn
summarized his mathematical knowledge in a versified treatise known as the Yāsamīnīyya (The treatise by al-Yāsamīn). Around the end of the 15th century, al-Yāsamīn’s verses were the object of a ...
Contributed by
On the Sphere and the Cylinder; On the Measurement of the Circle; On Conoids and Spheroids; On Spirals; On the Equilibrium of Planes; On the Quadrature of the Parabola; The Sand Reckoner
In the middle of the 15th century, a number of manuscripts by the third-century BC Greek mathematician Archimedes began to circulate in the humanistic centers in the courts of Italy. Piero della
Francesca (circa 1416–92), the Renaissance artist best known for the frescos he painted for the Vatican and for the chapels in Arezzo, transcribed a copy of a Latin translation of Archimedes’s
geometry (a compilation of seven surviving treatises) and illustrated it with more than 200 drawings representing the mathematical theorems in the texts. This manuscript, long ...
Contributed by
The Recension of Euclid's "Elements"
This work is a printed edition of Kitāb taḥrīr uṣūl li-Uqlīdus (The recension of Euclid's Elements) by one of the intellectual luminaries of the Islamic world, the Persian polymath Naṣīr al-Dīn
Muḥammad ibn Muḥammad al-Ṭūsī (1201–74). After his death al-Ṭūsī was referred to as al-muʿallim al-thālith (the third teacher, with Aristotle and Fārābī referred to as the first and second
teachers, respectively). An extraordinarily prolific author, al-Ṭūsī made notable contributions to most of the intellectual fields of his era, writing on theology, mysticism, logic ...
Contributed by
The Threefold Lily of Practical Arithmetic
Johannes Huswirth (Sanensis) was a German arithmetician who flourished around 1500. Nothing is known of his life. That he is sometimes referred to as Sanensis suggests that he may have come from
Sayn, Germany. Arithmetice Lilium Triplicis Practice (The threefold lily of practical arithmetic) presents basic arithmetic operations such as addition and multiplication for whole numbers and
fractions. It treats much of the same material that Huswirth had covered in an earlier work, Enchirdion Algorismi (Handbook of algorithms). The work includes two woodcut illustrations; one of God
the Father and ...
Contributed by
Treatise on the Craft of Weight Measurement
This work is a treatise on the construction and use of the weighing balance (qabān, also qapān). It brings together geometric, mechanical, and arithmetic knowledge needed to construct and utilize
measuring devices for weighing heavy and irregularly-shaped objects. The author’s name is unknown, but excerpts from another work by an already-deceased Shaykh ‘Abd al-Majīd al-Shāmulī al-Maḥallī
are quoted in the treatise. The last page of the manuscript contains a sheet of verses that describe the basics of using a weighing balance, in a form that is easy to remember ...
Contributed by
Work on Trigonometry
This work is a treatise on trigonometry by Li Madou, the Chinese name of the Italian Jesuit Matteo Ricci (1552–1610). Ricci left for China in 1581 and arrived in Macao in 1582. Together with Luo
Mingjian (Michele Ruggieri, 1543–1607), he began his mission in Zhaoqing, Guangdong Province, where he published his Wan guo yu tu (Map of 10,000 countries), which was well received by Chinese
scholars. He was expelled from Zhaoqing and went to Jiangxi, where in 1596 he became the superior of the mission. He lived ...
Contributed by
Guide to Operations on Irrational Radicals for Neophytes
This mathematical treatise by Muḥammad b. Abi al-Fatḥ Muḥammad b. al-Sharafī Abi al-Rūḥ ‘Īsā b. Aḥmad al-Ṣūfī al-Shāfi‘ī al-Muqrī, was written in 1491-92 (897 AH). It begins with a "General
Introduction," followed by two main parts, with a concluding section on the study of cubes and cube roots. Part I, "Operations on Simple Irrational Radicals," is divided into four chapters. Chapter
1 covers simplification of radicals. Chapters 2, 3, and 4 deal respectively with the multiplication, addition and subtraction, and division of radicals. Part II, on "Operations with Compound ...
Contributed by
Comprehensive Reference on Algebra and Equations
This manuscript is a didactic work on arithmetic and algebra, composed in versified form, as a qasīda of 59 verses. It was composed by Ibn al-Hā’im al-Fardī in 1402 (804 A.H.). The beginning of the
work also names ‛Alī b. ‛Abd al-Samad al-Muqrī al-Mālikī (died Dhu al-Ḥijja 1381 [782 A.H.]), a scholar and teacher who had come to Egypt and taught at the ‛Amr b. ‛As madrasa for several years.
The main part of the qasīda begins by introducing and defining key terms in arithmetic and algebra ...
Contributed by
Sakhāqī’s Book [of Arithmetic]
This work is a tutorial text on elementary arithmetic, in 20 folios. It is divided into an introduction, 11 chapters, and a conclusion. In the beginning, the sign for zero is introduced, along with
the nine Indian numerals, written in two alternative forms. This is followed by a presentation of the place system. The first four chapters cover, respectively, addition, subtraction,
multiplication, and division. Chapter five introduces operations on non-whole numbers. The remaining six chapters discuss fractions and operations on them.
Contributed by
Easing the Difficulty of Arithmetic and Planar Geometry
This work is a comprehensive tutorial guide on arithmetic and plane geometry, in 197 folio pages. It also discusses monetary conversion. The work is composed in verse form, and is meant as a
commentary on existing textbooks. The author gives the following personal account of the writing of this guide: In Rajab 827 A.H. (May 1424) he traveled from Damascus to Quds al-Sharīf (in
Palestine), where he met two scholars named Ismā‘īl ibn Sharaf and Zayn al-Dīn Māhir. There he took lessons on arithmetic, using an introductory book ...
Contributed by
The Book of Instruction on Deviant Planes and Simple Planes
This manuscript is a work on practical astronomy and the drawing of the circle of projection and related concepts from spherical trigonometry. It is rich with geometric diagrams, tables of
empirical observations, and computations based upon these observations. An interesting feature of the manuscript is the appearance on the margins of the cover, and on several pages in the
manuscript, of edifying verses, proverbs, and witty remarks. One reads, for example, “It is strange to find in the world a jaundiced physician, a dim-eyed ophthalmologist, and a blind astronomer.”
Most ...
Contributed by
A Friendly Gift on the Science of Arithmetic
This treatise deals specifically with basic arithmetic, as needed for computing the division of inheritance according to Islamic law. It contains 48 folios and is divided into an introduction,
three chapters, and a conclusion. The introduction discusses the idea of numbers as an introduction to the science of arithmetic. Chapter I discusses the multiplication of integers. Chapter II is
on the division of integers and the computation of common factors. Chapter III deals extensively with fractions and arithmetic operations on them. The author, an Egyptian jurist and mathematician,
was the ...
Contributed by
Commentary on the Gift of Arithmetic
This work is by Abd-Allāh Ibn Bahā al-Din Muhammad Ibn Abd-Allāh al-Shanshāri al-Shāfīī, an expert in calculating al-Fardī (inheritance portions). The cover page of the manuscript bears a magical
form or talisman for finding a lost object. The main text is a detailed commentary on Tuhfat al-ahbāb fi al-hisāb (The friendly gift of arithmetic) by the renowned Egyptian scholar Badr al-Dīn
Muhammad Ibn Muhammad Ibn Ahmad (1423–1506), known as the Sibt (grandson of) al-Mardini, who taught arithmetic and astronomy in Alazhar for several years. The original work has an ...
Contributed by
The Fathiyya Essay on Using the Mughayyab Quadrant
This treatise by Badruddin al-Maridini (died 1506 [912 AH]), better known as Sibt al-Maridini, includes an introduction, 20 sections, and a conclusion. The treatise discusses a range of issues in
astronomy, surveying, and mathematics. It describes the sine quadrant and parallel circles, and explains how to measure the width of a river, the angle of a star, the depth of a well, or the height
of a mountain. Al-Maridini, whose parents were from Damascus, was born, raised, and educated in Cairo late in the Mamluk Dynasty (1250–1517). The manuscript ...
Contributed by
The Light of the Glitter in Mathematics
This work is a versified treatise on arithmetic (‘ilam al- ḥisāb), and specifically the art of dividing inheritance (farā’iḍ), which has application in Islamic law. After a standard expression of
praise for the Prophet, his companions, and later followers, the text introduces the system of place values and explains multiplication of multi-digit whole numbers and simple and compound
fractions. The text presents multiple examples that are described in verbal terms. As noted at the end of the manuscript, which was completed on Monday, 20 Rabī‘ I of the year ...
Contributed by
Commentary by Islam's Sheikh Zakariyya al-Ansari on Ibn al-Hā’im's Poem on the Science of Algebra and Balancing Called the Creator's Epiphany in Explaining the Cogent
This work is a commentary on a versified, 59-line introduction to algebra, entitled Al-Muqni‘ fī al-jabr wa al-muqābila, by the prolific and influential mathematician, jurist, and man of letters
Abū al-‘Abbās Shihāb al-Dīn Aḥmad ibn Muḥammad ibn ‘Alī al-Maqdisī al-Shāfi‘ī, known as Ibn al-Hā’im (circa 1356-1412 [circa 753-815 AH]). It clarifies the nomenclature and explains the basic
concepts of algebra, and provides succinct examples. The manuscript, completed on Thursday night, 8 Sha‘bān 1305 AH (March 21, 1888), is in the hand of Tāhā ibn Yūsuf.
Contributed by
Glosses of al-Hifnī on the Yāsamīnīyya
This work is an elaboration of the commentary written by the Egyptian mathematician Sibṭ al-Māridīnī (i.e., a commentary on another commentary), on the versified introduction, or urjūzah, to the
science of algebra, originally composed by the Berber mathematician and man of letters Abū Muḥammad ‘Abd-Allāh al-Ishbīlī al-Marrakushī, also known as Ibn al-Yāsamīn (died 1204 [600 AH]). Ibn
al-Yāsamīn’s work has not been examined in detail by scholars, so the apparent inclusion in this treatise of original lines by Ibn Yasamīn is of great importance in studying his contribution ...
Contributed by
The Best of Arithmetic
This treatise on the art of arithmetic, completed in the late 1880s, opens a window into the early interaction between traditional and modern mathematical pedagogy in Egypt. The use of French loan
words, such as million, along with some modern notation, indicates the author’s familiarity with developments in the teaching of arithmetic at the time. The work has an introduction followed by ten
chapters and a conclusion. Following traditional praise for God, the Prophet Muhammad, and virtuous vanguards of learning, the treatise opens by introducing arithmetic as a useful ...
Contributed by
Guidebook for Students on the Use of Arithmetic
This guidebook is a short commentary on a work on arithmetic entitled al-Wasīla (The tool) completed in the 14th century by Shihāb al-Dīn Ahmad ibn Alī ibn Imād. The commentary is by the renowned
Egyptian scholar known as Sibt (grandson of) al-Māridīnī (1423–1506), who taught mathematical sciences at Alazhar for a long time. The body of the work begins with a general discussion on numbers,
and forms a standard introduction to arithmetic. The manuscript, which was completed by Ahmad ibn Yūnus al-Chalabī al-Hanafī in 1496 (AH 903) at the ...
Contributed by
The Introductory Epistle on Sinusoidal Operations
This manuscript is a copy of al-Risāla al-Fatḥīya fī al-a‘māl al-jaybīya (The introductory epistle on sinusoidal operations) by Muḥammad ibn Muḥammad ibn Aḥmad Abu ‘Abd Allāh, Badr al-Dīn
(1423–1506), known as Sibṭ al-Māridīnī or the grandson of al-Māridīnī, in honor of his mother’s father, a famous astronomer. The manuscript consists of 16 pages of 14 lines each, and includes an
introduction and 20 bābs (chapters or articles). They range in length from a few lines to a page, and cover such topics as determination of the cardinal ...
Contributed by
Bulgarian Arithmetic
Arithmetics were a popular genre of textbooks during the era of the Bulgarian National Revival in the 19th century, when it was widely believed that everyone, especially future businessmen, needed
to know basic mathematics. Bulgarian Arithmetic was the fourth such text published in this era, in 1845. The author, Khristodul Kostovich Sichan-Nikolov (1808–89), was a monk, teacher, writer, and
publicist, often assisted in his scholarly pursuits by the writer, educator, and priest Neofit Rilski. Before writing his own text, Sichan-Nikolov had been involved as the editor of the first ...
Contributed by
|
{"url":"http://www.wdl.org/en/search/?ddc=51","timestamp":"2014-04-16T10:12:51Z","content_type":null,"content_length":"68023","record_id":"<urn:uuid:a501a1c9-17b1-47d4-85ba-3bd8f98ffc69>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00026-ip-10-147-4-33.ec2.internal.warc.gz"}
|